http://blog.wikimedia.org/2013/01/03/wikimedia-research-newsletter-december-2012/

## Wikimedia Research Newsletter, December 2012
Vol: 2 • Issue: 12 • December 2012
Wikipedia and Sandy Hook; SOPA blackout reexamined
With contributions by: Daniel Mietchen, Piotrus, Junkie.dolphin, Taha Yasseri, Benjamin Mako Hill, Aaron Shaw, Tbayer, DarTar and Ragesoss
### How Wikipedia deals with a mass shooting
Northeastern University researcher Brian Keegan analyzed how hundreds of Wikipedians gathered to cover the Sandy Hook Elementary School shooting in the immediate aftermath of the tragedy. The findings are reported in a detailed blog post that was later republished by the Nieman Journalism Lab.[1] Keegan observes that the Sandy Hook shooting article reached a length of 50 kB within 24 hours of its creation, making it the fastest-growing article by length in the first day among recent English-language Wikipedia articles covering mass shootings. The analysis compares the Sandy Hook page with six similar articles drawn from a list of 43 articles on shooting sprees in the US since 2007. Among the analyses described in the study, of particular interest is the dynamic of dedicated vs. occasional contributors as the article reaches maturity: while in the first few hours contributions are evenly distributed, with a majority of single-edit editors, after hour 3 or 4 a number of dedicated editors show up and “begin to take a vested interest in the article, which is manifest in the rapid centralization of the article”. A plot of inter-edit time also shows the sustained frequency of revisions that these articles display for days after their creation, with Sandy Hook averaging about one edit per minute 24 hours after its first revision. The notebook and social network data produced by the author for the analysis are available on his website. The Nieman Journalism Lab previously covered the role that Wikipedia plays as a platform for collaborative journalism, and why its format outperforms Wikinews, in a 2010 interview with Andrew Lih.[2] The early revision history of the Sandy Hook shooting article was also covered in a blog post by Oxford Internet Institute fellow Taha Yasseri, though with a focus on the coverage in different Wikipedia language editions.[3]
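Keegan’s inter-edit-time measure is straightforward to reproduce. The sketch below uses hypothetical revision timestamps (not the actual Sandy Hook revision data) to compute the gaps between consecutive edits; an average gap of about 60 seconds corresponds to the roughly one-edit-per-minute rate reported for the article a day after its creation.

```python
from datetime import datetime, timedelta

def inter_edit_times(timestamps):
    """Return the gaps (in seconds) between consecutive revisions.

    `timestamps` is a list of datetime objects, one per revision;
    it is sorted chronologically before computing the gaps.
    """
    ts = sorted(timestamps)
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

# Hypothetical revision times: four edits one minute apart.
t0 = datetime(2012, 12, 14, 18, 0, 0)
revs = [t0 + timedelta(minutes=i) for i in range(4)]
gaps = inter_edit_times(revs)
mean_gap = sum(gaps) / len(gaps)  # 60.0 seconds, i.e. ~1 edit/minute
```

In practice the timestamps would come from the article’s revision history rather than being generated as above.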
### Network positions and contributions to online public goods: the case of the Chinese Wikipedia
A graph with nodes color-coded by betweenness centrality (from red=0 to blue=max).
In a forthcoming paper in the Journal of Management Information Systems (presented earlier at HICSS ’12[4]), Xiaoquan (Michael) Zhang and Chong (Alex) Wang use a natural experiment to demonstrate that changes to the position of individuals within the editor network of a wiki modify their editing behavior. The data for this study came from the Chinese Wikipedia. In October 2005, the Chinese government suddenly blocked access to the Chinese Wikipedia from mainland China, creating an unanticipated decline in the editor population. As a result, the remaining editors found themselves in a new network structure and, the authors claim, any changes in editor behavior that ensued are likely effects of this discontinuous “shock” to the network. The paper defines each editor as a node (vertex) in the network, with a tie (edge) between two editors created whenever they edit the same page in the wiki. The authors then examine how changes to three aspects of individual editors’ relative connectedness (centrality) to other editors within the network altered their subsequent patterns of contribution.
The main finding is that changes in the three kinds of editors’ connectedness within the network result in differential changes to their editing behavior. First, an increase in the number of direct connections between one editor and the rest of the network (degree centrality) resulted in fewer edits by that editor, and more work on articles they created. Second, an increase in the overall proximity of an editor to the other members of the network (closeness centrality) resulted in fewer edits and less work on articles they created. Third, an increase in the extent to which an editor connected otherwise isolated groups in the network (betweenness centrality) resulted in more edits and more work by that editor on articles they created. Overall, these results imply that alterations to the network structure of a wiki can change both the quantity and quality of editor contributions. The researchers argue that their findings confirm the predictions of both network game theory and role theory; and that future research should try to analyze the character of the network ties created within platforms for large-scale online collaboration, to better understand how changes to network structure may alter collaborative practices and public goods creation.
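The network construction the paper describes — editors as nodes, a tie whenever two editors edit the same page — can be sketched in a few lines. The example below uses hypothetical editors and pages and computes only degree centrality; the study’s closeness and betweenness measures would be computed on the same co-editing graph.

```python
from itertools import combinations
from collections import defaultdict

def coedit_network(page_editors):
    """Build a co-editing network: editors are nodes, and an edge links
    two editors whenever they edited the same page.

    `page_editors` maps page title -> set of editor names.
    """
    neighbors = defaultdict(set)
    for editors in page_editors.values():
        for a, b in combinations(sorted(editors), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    return neighbors

def degree_centrality(neighbors):
    """Degree centrality: the fraction of the other nodes an editor is
    tied to (only editors with at least one tie appear in `neighbors`)."""
    n = len(neighbors)
    return {v: len(nbrs) / (n - 1) for v, nbrs in neighbors.items()}

# Toy data with hypothetical editors and pages.
pages = {
    "Beijing": {"alice", "bob"},
    "Shanghai": {"bob", "carol"},
}
net = coedit_network(pages)
dc = degree_centrality(net)  # bob is tied to both others -> centrality 1.0
```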
### Quality of pharmaceutical articles in the Spanish Wikipedia
Ibuprofen, one of the World Health Organisation’s “essential drugs”, a topic covered in detail by the Spanish-language Wikipedia.
In an online early version of an upcoming article in Atención Primaria,[5] researchers at the Miguel Hernández University of Elche and the University of Alicante have benchmarked articles on pharmaceutical drugs in the Spanish Wikipedia against information available in a pharmaceutical database, Vademécum.[6] A subset of the Vademécum corpus of 3,595 drugs was created using simple random sampling without replacement, consisting of 386 drugs. Of these, 171 (44%) had entries on the Spanish Wikipedia, which were then scrutinized along several dimensions in May 2012. Usage of the drug was correctly indicated in 155 (91%) of these articles, dosage in 26 (15%), and side-effects in 64 (37%), with only 15 articles (9%) scoring well in all of these dimensions. The researchers conclude that, while Wikipedia has a high potential to help with the dissemination of pharmaceutical knowledge, the Spanish-language edition does not currently live up to this potential. As a possible solution, they suggest the pharmaceutical community more actively participate in editing Wikipedia. The list of the drugs involved has not been made public, since a similar study is currently underway whose results may be distorted by targeted intervention. The authors have signalled to this research report their intention to make the list available after this second study is complete.
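The sampling step described above can be sketched briefly. Below, a stand-in corpus of 3,595 placeholder names (the size of the Vademécum list used in the study) is sampled without replacement to obtain 386 items; the names and the seed are illustrative only, since the actual drug list has not been published.

```python
import random

def sample_without_replacement(corpus, k, seed=42):
    """Simple random sample of k items, without replacement."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(corpus, k)

# Stand-in corpus the size of the Vademécum drug list.
corpus = [f"drug_{i:04d}" for i in range(3595)]
sample = sample_without_replacement(corpus, 386)
```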
### Wikipedia editing patterns are consistent with a non-finite state model of computation
A paper posted to arXiv[7] by SFI’s Omidyar fellow Simon DeDeo presents evidence for non-finite-state computation in a human social system, using data from Wikipedia edit histories. Finite-state systems are the basis for the study of formal languages in computer science and linguistics, and many real-world complex phenomena in biology and the social sciences are also studied empirically by assuming the existence of underlying finite-state processes, for whose analysis powerful probabilistic methods have been devised. However, whether the description of a system truly entails a finite or a non-finite, unbounded number of states is an open question. This is significant from a functionalist point of view: can we classify a system by its computational properties, and can these properties help us better understand how the system works regardless of its material details?
The paper’s contribution lies in its proof of a probabilistic generalization of the pumping lemma, a device used in theoretical computer science as a necessary condition for a language to be described by only a finite number of states. The lemma is applied to the edit histories of a number of the most frequently edited articles in the English Wikipedia, after they are transformed into coarse-grained sequences of “cooperative” or “non-cooperative” (reversion) edits, reverts being identified by means of their SHA1 field. A Bayesian argument is applied to show that the lemma cannot hold for a majority of sequences, implying that Wikipedia’s collaborative editing system as a whole cannot be described by any aggregation of finite-state systems. The author discusses the implications of this finding for a more grounded study of Wikipedia’s editing model, and for the identification of detailed computational models of other social and biological systems.
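The coarse-graining step — labeling each revision as cooperative or a revert — can be sketched using the SHA1 trick the paper mentions: a revision whose content hash matches that of an earlier revision has restored a previous page state. The history below is hypothetical.

```python
def mark_reverts(revisions):
    """Coarse-grain an edit history into 'C' (cooperative) / 'R' (revert).

    A revision counts as a revert when its SHA1 matches the SHA1 of an
    earlier revision, i.e. it restores a previous state of the page.
    `revisions` is a chronological list of (revision_id, sha1) pairs.
    """
    seen = set()
    labels = []
    for _rev_id, sha1 in revisions:
        labels.append("R" if sha1 in seen else "C")
        seen.add(sha1)
    return "".join(labels)

# Hypothetical history: revision 4 restores the page state of revision 2.
history = [(1, "aaa"), (2, "bbb"), (3, "ccc"), (4, "bbb"), (5, "ddd")]
sequence = mark_reverts(history)  # "CCCRC"
```

Sequences like this one are what the probabilistic pumping lemma is then applied to.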
### Wikipedia as our collective memory
A protester on Tahrir Square during the 2011 Egyptian revolution.
Michela Ferron, a member of the SoNet (Social Networking) research group at the Bruno Kessler Foundation in Trento, Italy, submitted her PhD thesis[8] in December 2012. She examined the idea of viewing Wikipedia as a venue for collective memory, and the linguistic indicators of the dynamic process of memory formation in response to “traumatic” events. Parts of the thesis have already been published in journals and conference proceedings, such as WikiSym 2011 and 2012 (cf. presentation slides).
A full chapter is dedicated to the background on the concept of collective memory and its appearance in the digital world. The thesis continues with an analysis of “anniversary edits”, showing a significant increase in editorial activities on articles related to traumatic events during the anniversary period compared to a large random sample of “other” articles. More detailed linguistic indicators are introduced in the next chapter. It is statistically shown that the terms related to affective processes, negative emotions, and cognitive and social processes occur more often in articles on traumatic events; “Specifically, the relative number of words expressing anxiety (e.g., “worried”), anger (e.g., “hate”) and sadness (e.g., “cry”) was significantly higher in articles about traumatic events”.
In the next step, Ferron tried to distinguish between human-made and natural disasters. It has been observed that “human-made traumatic events were characterized by language referring to anger and anxiety, while the collective representation of natural disasters expressed more sadness”. Finally, a detailed case study of the talk pages of articles on the 7 July 2005 London bombings and the 2011 Egyptian revolution was carried out, and language indicators, especially those related to emotions, were investigated in a dynamic framework and compared for both examples.
### SOPA blackout decision analyzed
A First Monday article[9] reviews several aspects of the Wikipedia participation in the 18 January 2012 protests against SOPA and PIPA legislation in the US. The paper focuses on the question of legitimacy, looking at how the Wikipedia community arrived at the decision to participate in those protests.
The English Wikipedia landing page, symbolically its only page during the blackout on January 18, 2012
The paper provides an interesting discussion of legitimacy in Wikipedia’s governance, and discusses the legitimacy of the decision to participate in the protests. The author notes that the initiative was given a major boost by Jimmy Wales’ charismatic authority: Wales posted a straw poll about the issue on his talk page on December 10, 2011, and while the issue had been discussed by the community beforehand (for example, in mid-November at the Village Pump), those discussions attracted much less attention. It is hard to say whether the protest would have happened without Jimbo’s push for more discussion, as that veers into “what if” territory; as things happened, it is true that Jimbo’s actions began a landslide that led to the protests. However, this reviewer is more puzzled by the claim made in the introduction that the discussion involved a “massive involvement of the Wikimedia Foundation staff”. While several WMF staffers were active in the discussions in their official capacity, and while the WMF did issue some official statements about the ongoing discussion, the paper certainly does not provide any evidence to justify the word “massive”.
The paper subsequently notes that the WMF focused on providing information and gently steering the discussion, without any coercion; this hardly justifies the claim of “massive involvement”. At the very least, a clear explanation is necessary of precisely how many WMF staffers participated in the discussion before such a grandiose adjective as “massive” is used. It is true that the WMF staffers helped push the discussion forward, but this reviewer believes that the paper does not sufficiently justify the stress it puts on their participation, and thus may overestimate their influence.
The third part of the paper discusses how arguments about legitimacy, or the lack of it, framed the subsequent discourse of the voters. The author notes that after an initial period of discussing SOPA itself, the discussion of whether it was legitimate for Wikipedia to become involved in the protest took over, with a major justification emerging in the form of an argument that it was legitimate for Wikipedia to protest against SOPA because SOPA threatened Wikipedia itself. While this is an interesting claim, unfortunately, other than a single quoted comment, no other qualitative or quantitative data are provided; nor is the methodology discussed. We are not told how many individuals voted, how many commented on legitimacy or illegitimacy, or how many felt that Wikipedia was threatened; we do not know how the author classified comments supporting any of the viewpoints, or the shifts in the discussion; this list could unfortunately go on. In one specific example drawn from the conclusion, the author writes that “The main factor that shaped the multi-phased process was the will to have the community accept the final decision as legitimate, and avoid backlash. This factor especially influenced those who are suspected of relying on traditional means of legitimacy such as charisma or professionalism.” At the same time, we are provided with no numbers, no percentages, and certainly no correlations to back up this claim. Without a clear methodology or distinct data it is hard to verify the author’s claims and conclusions.
The introduction also notes that “the mass effort of planning an effective political action was not something “anyone [could] edit”” and “the debate preceding the blackout did not follow Wikipedia’s open and anarchic decision-making system”; unfortunately this reviewer finds no justification for those rather strong claims anywhere else in the article.
Overall, this is an interesting paper about legitimacy in Wikipedia, but it overreaches when it draws conclusions from data that are simply not presented to the reader. It suffers from a failure to explain the research methodology, making verification of the claims very hard. Due to the lack of hard data, most conclusions are unfortunately rendered dubious, and the paper tends to make strong claims that are neither backed up by data nor developed later on.
### Bots and collective intelligence explored in dissertation
Rats (blue trace) interacting with a rat-sized robot (red) controlled by a human who in turn perceives the rat’s movements through those of a human-sized avatar in a virtual reality environment.[10] The video was uploaded to Wikimedia Commons by the Open Access Media Importer Bot.
In his Communication and Society PhD dissertation,[11] Randall M. Livingstone of the University of Oregon explores the relationship between the social and technical structures of Wikipedia, with a particular focus on bots and bot operators. After a fairly broad literature review (which summarizes the basic approaches to Wikipedia studies from new media theory, social network analysis, science and technology studies, and political economy), Livingstone gives a concise history of the technical development of Wikipedia, from UseModWiki to MediaWiki, and from a single server to hundreds.
The most interesting chapters for Wikipedians will be V – Wikipedia as a Sociotechnical System – and VI – Wikipedia as Collective Intelligence. Chapter 5 looks at the ways the editing community and the evolution of software (both MediaWiki and the semi-automated tools and bots that interact with editors and articles) “construct” each other. Based on 45 interviews with bot operators and WMF staff, this chapter gives an interesting and varied picture of how Wikipedia works as a sociotechnical system. It will in part be a familiar account to the more tech-minded Wikipedians, but offers an accessible overview of bots and their place in the ecosystem to editors who normally steer clear of bots and software development. Chapter 6 looks at theories of intelligence and the concept of collective intelligence, arguing that Wikipedia exhibits (at least to some extent) the key traits of stigmergy, distributed cognition, and emergence.
### Briefly
• “History’s most influential people” according to Wikipedia: While more in the realm of popular science, Wired UK, among others, published[12] an infographic attributed to César Hidalgo, head of the MIT Media Lab’s Macro Connections group, visualizing “History’s most influential people”. Unfortunately, beyond noting that rankings “are based on parameters such as the number of language editions in which that person has a page, and the number of people known to speak those languages” the small article does not provide any methodology, nor does it provide much discussion. Until a more extensive description is released, the current graph, while pretty, is little more than a trivia piece.
• Teachers say 75% of teens use Wikipedia (or online encyclopedias) for research assignments: In a Pew Research survey among more than 2000 US middle and high school teachers[13] 75% said that their teenage students use “Wikipedia or other online encyclopedia” in research assignments, making online encyclopedias the second most popular source for students behind search engines such as Google. This number was lower (68%) “among teachers of the lowest income students (those living below the poverty line)” and higher (80%) for those teaching “mostly upper and upper middle income” students, and it also varied by subject (between 69% for teachers of English and 82% for science teachers). The survey report cautions that the sample “skews towards ‘cutting edge’ educators who teach some of the most academically successful students in the country”.
The Google matrix of Wikipedia entries, from an earlier paper by the same authors of this study.[14]
• “Wikipedia communities” as eigenvectors of its Google matrix: An arXiv preprint[14] studies the “Spectral properties of Google matrix of Wikipedia and other networks”. This Google matrix consists of entries for each pair of pages (for the English Wikipedia, including non-mainspace pages like portals), roughly speaking modelling the behavior of a surfer who goes from one page to any of those that it links to, with equal probability (or, with probability $1-\alpha$, jumps to a random page; the damping parameter $\alpha$ is set to around 0.85 in the Google search engine). The PageRank appears as the eigenvector of this matrix for the eigenvalue $\lambda = 1$. The paper studies the spectrum (eigenvalues) and eigenvectors apart from this special case, interpreting them as certain topic areas: “the eigenvectors of the Google matrix of Wikipedia clearly identify certain communities which are relatively weakly connected with the Wikipedia core when the modulus of corresponding eigenvalue is close to unity. For moderate values of $|\lambda|$ we still have well defined communities which however have stronger links with some popular articles (e.g. countries) that leads to a more rapid decay of such eigenmodes.”
• Serial singularities: developing a network organization by organizing events: In a paper published in the Schmalenbach Business Review,[15] Leonhard Dobusch and Gordon Müller-Seitz from the Freie Universität Berlin note that research on organized events has tended to treat those events as isolated and singular. Using interviews and other data on Wikimania, chapter meetings, and local meet-ups over several years, the authors challenge this idea and show how many different events of different scales and scopes – each with a distinct character – can interact and reinforce each other to help shape a large distributed organization like Wikimedia.
• The web mirrors value in the real world: comparing a firm’s valuation with its web network position: In an MIT Sloan working paper,[16] Qiaoyun Yun and Peter Gloor create a measure of US and Chinese firms’ “social network” position by looking at how those firms are linked to from a variety of web sources – prominently Wikipedia. They find a positive correlation between a firm’s betweenness centrality in this web-link network and its innovation capability and financial performance, although the Wikipedia-based measure predicts a firm’s performance only in the US.
• Teahouse analyzed: Jonathan Morgan, Sarah Stierch, Siko Bouterse and Heather Walls, from the Wikimedia Foundation Teahouse team, report on the impact of the initiative on 1,098 new Wikipedia contributors who joined the Teahouse between February and October 2012, in a paper to be presented at CSCW ’13.[17] The study reports that participants in the project “make more edits overall, and edit longer”, “make more edits, to more articles” and “participate more in discussion spaces” compared to non-visitors. This paper is part of a research track entirely dedicated to Wikipedia Supported Collaborative Work, featuring three other studies.
Slides from the recently published Article Feedback research report.[18]
• Article feedback: The Wikimedia Foundation published an update about the Article feedback tool on the English Wikipedia, providing statistics about the usage of the feature, and about the moderation activities for the feedback provided.[18]
• New review of Good Faith Collaboration: The reviewer situates[19] Joseph Reagle’s 2010 book about Wikipedia (free online version) in the wider context of research on Wikipedia: “The reliability of the encyclopaedia’s content … and quantitative analysis of large-scale public datasets formed the predominant approach in early empirical research on Wikipedia … This was followed by a more social approach and the adopting of qualitative methods. In this switch to social norms and away from an ethnographic approach, Reagle’s book is a main reference, particularly in terms of its cultural and historical specificity.” Overall, the review finds that “The book is well documented, with an elaborative but accessible writing style, which is at times provocative. It results in a form of rich composition of eight pieces (chapters) of Wikipedia ‘puzzle’, even if some readers might miss a more explicit continuum linking the lines together. Finally, the book is a primary reference point for researchers aiming to study Wikipedia, especially for those unfamiliar with it.”
• Measuring the impact of Wikipedia for GLAM institutions: Ed Baker, software developer at the Natural History Museum in London, has started a series of blog posts on “the impact and use of Wikipedia by organisations”.[20] In the first post, he looked at how the scope of pages linking to the NHM’s website fits with the overall scope of the institution when pages are ranked either by number of page views or by number of links to the NHM. The latter approach could help identify opportunities for a collaboration between GLAM institutions and the Wikimedia communities.
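Returning to the Google-matrix item above: the surfer model and the eigenvalue-1 eigenvector (PageRank) can be illustrated with a tiny power-iteration sketch. The three-page link graph is hypothetical; the damping parameter is the $\alpha \approx 0.85$ mentioned in the preprint.

```python
def pagerank(links, alpha=0.85, iters=100):
    """Power iteration on the Google matrix of a small link graph.

    `links` maps page -> list of pages it links to (every page must
    appear as a key).  With probability alpha the surfer follows one of
    the page's links, chosen uniformly; with probability 1 - alpha they
    jump to a random page.  The result approximates the eigenvector of
    the Google matrix for eigenvalue 1.
    """
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - alpha) / n for p in pages}
        for p, outs in links.items():
            targets = outs if outs else pages  # dangling page links everywhere
            for q in targets:
                new[q] += alpha * rank[p] / len(targets)
        rank = new
    return rank

# Toy three-page web (hypothetical): two pages link to a central "hub".
graph = {"hub": ["a", "b"], "a": ["hub"], "b": ["hub"]}
pr = pagerank(graph)  # "hub" receives the largest PageRank
```

The preprint’s analysis concerns the other eigenvectors of this same matrix, which a power iteration like the above does not expose; a full spectral decomposition would be needed for that.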
### Notes
1. Keegan, B. (2012). How does Wikipedia deal with a mass shooting? A frenzied start gives way to a few core editors. Nieman Journalism Lab HTML
2. Seward, Z.M. (2012) Why Wikipedia beats Wikinews as a collaborative journalism project. Nieman Journalism Lab HTML
3. Yasseri, T. (2012) The coverage of a tragedy. Stories for Sunday morning HTML
4. Wang, C. (Alex), & Zhang, X. (Michael). (2012). Network Centrality and Contributions to Online Public Good–The Case of Chinese Wikipedia. 2012 45th Hawaii International Conference on System Sciences (pp. 4515–4524). IEEE. DOI
5. López Marcos, P.; Sanz-Valero, J. (2012). “Presencia y adecuación de los principios activos farmacológicos en la edición española de la Wikipedia”. Atención Primaria. DOI.
6. Vademécum. UBM Medica Spain S.A.. Archived from the original on 30 December 2012. Retrieved on 30 December 2012.
7. DeDeo, S. (2012). Evidence for Non-Finite-State Computation in a Human Social System. arXiv. PDF
8. Ferron, M. (2012, December 7). Collective Memories in Wikipedia. PhD Thesis, University of Trento. PDF
9. Oz, A. (2012). Legitimacy and efficacy: The blackout of Wikipedia. First Monday, 17(12). HTML
10. Normand, J. M.; Sanchez-Vives, M. V.; Waechter, C.; Giannopoulos, E.; Grosswindhager, B.; Spanlang, B.; Guger, C.; Klinker, G. et al. (2012). De Polavieja, Gonzalo G. ed. “Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale”. PLoS ONE 7 (10): e48331. DOI. PMC 3485138. PMID 23118987.
11. Randall M. Livingstone: Network of Knowledge: Wikipedia as a Sociotechnical System of Intelligence. PDF
12. Medeiros, J. (2012). Infographic: History’s most influential people, ranked by Wikipedia reach. Wired UK. HTML
13. Purcell, K., Rainie, L., Heaps, A., Buchanan, J., Friedrich, L., Jacklin, A., Chen, C., Zickuhr, K. (2012): How Teens Do Research in the Digital World. Pew Internet HTML
14. a b Ermann, L., Frahm, K. M., & Shepelyansky, D. L. (2012). Spectral properties of Google matrix of Wikipedia and other networks. arXiv PDF
15. Dobusch, L., & Müller-Seitz, G. (2012). Serial Singularities: Developing a Network Organization by Organizing Events. Schmalenbach Business Review, 64, 204–229. HTML
16. Yun, Q., & Gloor, P. A. (2012). The Web Mirrors Value in the Real World – Comparing a Firm’s Valuation with Its Web Network Position. SSRN Electronic Journal. DOI
17. Morgan, J. T., Bouterse, S., Stierch, S., & Walls, H. (2013). Tea & Sympathy: Crafting Positive New User Experiences on Wikipedia. CSCW ’13. PDF
18. a b Florin, F., Taraborelli, D., Keyes, O. (2012). Article Feedback: New research and next steps. Wikimedia blog HTML
19. Morell, M. F. (2013). Good Faith Collaboration: The Culture of Wikipedia. Information, Communication & Society, 16(1), 146–147. DOI
20. Baker, E. (2012). Measuring the Impact of Wikipedia for organisations (Part 1), Ed’s blog. HTML
http://www.chegg.com/homework-help/questions-and-answers/uniform-electric-field-magnitude-180-v-m-directed-positive-x-direction-160-m-charge-moves--q932179

## [HELP PLEASE] UNIFORM ELECTRIC FIELD
A uniform electric field of magnitude 180 V/m is directed in the positive x direction. A 16.0-µC charge moves from the origin to the point (x, y) = (10.0 cm, 60.0 cm).
(a) What was the change in the potential energy of this charge?
(b) Through what potential difference did the charge move?
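For a uniform field along +x, only the x-component of the displacement matters: the field does work $W = qE\,\Delta x$, so $\Delta U = -qE\,\Delta x$ and $\Delta V = \Delta U / q = -E\,\Delta x$. The sketch below works the numbers, taking the charge as q = 16.0 µC.

```python
# Uniform field along +x; only the x-displacement does work on the charge.
E = 180.0            # field magnitude, V/m (along +x)
q = 16.0e-6          # charge, C (taken as 16.0 microcoulombs)
dx, dy = 0.10, 0.60  # displacement components, m (dy does no work here)

work = q * E * dx        # work done by the field, J
delta_U = -work          # (a) change in potential energy, J
delta_V = delta_U / q    # (b) potential difference, V (equals -E*dx)

# delta_U = -2.88e-4 J, delta_V = -18.0 V
```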
https://arxiv.org/abs/1803.00966

math.NA
# Title: Stability and error analysis for the Helmholtz equation with variable coefficients
Abstract: We discuss the stability theory and numerical analysis of the Helmholtz equation with variable and possibly non-smooth or oscillatory coefficients. Using the unique continuation principle and the Fredholm alternative, we first give an existence-uniqueness result for this problem, which holds under rather general conditions on the coefficients and on the domain. Under additional assumptions, we derive estimates for the stability constant (i.e., the norm of the solution operator) in terms of the data (i.e., PDE coefficients and frequency), and we apply these estimates to obtain a new finite element error analysis for the Helmholtz equation which is valid at high frequency and with variable wave speed. The central role played by the stability constant in this theory leads us to investigate its behaviour with respect to coefficient variation in detail. We give, via a 1D analysis, an a priori bound with stability constant growing exponentially in the variance of the coefficients (wave speed and/or diffusion coefficient). Then, by means of a family of analytic examples (supplemented by numerical experiments), we show that this estimate is sharp.
Subjects: Numerical Analysis (math.NA)
MSC classes: 65N12, 65N15, 65N30
Cite as: arXiv:1803.00966 [math.NA] (or arXiv:1803.00966v2 [math.NA] for this version)
## Submission history
From: Ivan Graham [view email]
[v1] Fri, 2 Mar 2018 17:40:08 UTC (265 KB)
[v2] Wed, 17 Apr 2019 14:25:20 UTC (217 KB)
https://www.bankofcanada.ca/profile/laurence-savoie-chabot/

Laurence Savoie-Chabot
Staff Analytical Notes
Introducing a Systematic Measure of Idiosyncratic Prices
Staff Analytical Note 2018-33
There is a risk that Bank of Canada staff may inadvertently be biased when analyzing inflation: when inflation surprises on the downside, staff might emphasize negative idiosyncratic factors, and when it surprises on the upside, they might emphasize positive idiosyncratic factors.
Content Type(s): Staff Research, Staff Analytical Notes JEL Code(s): E, E3, E31
Bending the Curves: Wages and Inflation
Staff Analytical Note 2018-15
As economic slack continues to be absorbed and the labour market tightens, wage growth and inflation could increase faster than expected, which would suggest convexity in their Phillips curves. This note investigates whether there is convexity in the Phillips curves for Canadian wage growth and inflation by testing different empirical approaches over the post-inflation-targeting period.
Content Type(s): Staff Research, Staff Analytical Notes Topic(s): Inflation and prices, Labour markets JEL Code(s): E, E2, E24, E3, E31, J, J3
Global Factors and Inflation in Canada
Staff Analytical Note 2017-17
This note investigates whether the recent weakness in inflation in Canada can be related to global factors not included in the current staff analytical framework (domestic slack, movements in commodity prices and in the exchange rate). A global common factor for inflation among selected advanced economies appears to contain marginal information for Canadian inflation beyond what is found in movements in commodity prices and the exchange rate.
Content Type(s): Staff Research, Staff Analytical Notes JEL Code(s): E, E3, E31
What a Sectoral Analysis Reveals About Recent Inflation Dynamics in Canada (Ce que révèle une analyse sectorielle des dynamiques récentes de l’inflation au Canada)
Staff Analytical Note 2016-7
Decomposing total inflation in Canada as measured by the consumer price index (CPI) into its key macroeconomic factors, as presented in the most recent Monetary Policy Report, is an interesting exercise that shows how the exchange rate pass-through, commodity prices and the output gap have influenced the evolution of the total inflation rate over time. This aggregate approach, however, may mask important sectoral changes.
Content Type(s): Staff Research, Staff Analytical Notes Topic(s): Exchange rates, Inflation and prices JEL Code(s): E, E3, E31
A Closer Look at Wage Pressures in Canada (Un examen plus approfondi des pressions salariales au Canada)
Staff Analytical Note 2016-6
In this note, we provide a brief outline of the recent developments in wage measures in Canada. We then assess whether wage growth is consistent with its fundamentals.
Content Type(s): Staff Research, Staff Analytical Notes Topic(s): Labour markets, Productivity JEL Code(s): E, E2, E24, J, J3, J30
Staff Discussion Papers
Exchange Rate Pass-Through to Consumer Prices: Theory and Recent Evidence
Staff Discussion Paper 2015-9
In an open economy such as Canada’s, exchange rate movements can have a material impact on consumer prices. This is particularly important in the current context, with the significant depreciation of the Canadian dollar vis-a-vis the U.S. dollar since late 2012.
Content Type(s): Staff Research, Staff Discussion Papers Topic(s): Exchange rates, Inflation and prices JEL Code(s): E, E3, E31, E5, E52, F, F3, F31
Staff Working Papers
The Trend Unemployment Rate in Canada: Searching for the Unobservable
Staff Working Paper 2019-13
In this paper, we assess several methods that have been used to measure the Canadian trend unemployment rate (TUR). We also consider improvements and extensions to some existing methods.
Content Type(s): Staff Research, Staff Working Papers JEL Code(s): C, C5, C52, C53, E, E2, E24, E27
https://journal.psych.ac.cn/xlxb/CN/10.3724/SP.J.1041.2016.01047 | ISSN 0439-755X
CN 11-1911/B
Institute of Psychology, Chinese Academy of Sciences
• Article •
Warm’s weighted maximum likelihood estimation of latent trait parameters in the four-parameter logistic model
1. (1 Faculty of Education, Northeast Normal University; 2 School of Mathematics and Statistics, Northeast Normal University, Key Laboratory of Applied Statistics of the Ministry of Education; 3 Northeast Normal University Branch, Collaborative Innovation Center of Assessment toward Basic Education Quality, Changchun 130024)
• Received: 2015-10-31; Online: 2016-08-25; Published: 2016-08-25
• Corresponding author: TAO Jian, E-mail: taoj@nenu.edu.cn
• Funding:
National Natural Science Foundation of China (11501094, 11571069); independent research project of the Collaborative Innovation Center of Assessment toward Basic Education Quality; open project of the Key Laboratory of Applied Statistics of the Ministry of Education (230026510); Youth Fund for Philosophy and Social Sciences of Northeast Normal University (supported by the Fundamental Research Funds for the Central Universities, 1409124).
Warm’s weighted maximum likelihood estimation of latent trait in the four-parameter logistic model
MENG Xiangbin1,2; TAO Jian2,3; CHEN Shali2
1. (1 Faculty of Education, Northeast Normal University, Changchun 130024, China) (2 KLAS, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China) (3 Northeast Normal University Branch, Collaborative Innovation Center of Assessment toward Basic Education Quality, Changchun 130024, China)
• Received: 2015-10-31; Online: 2016-08-25; Published: 2016-08-25
• Contact: TAO Jian, E-mail: taoj@nenu.edu.cn
Abstract:
There are two types of aberrant responses: correct responses resulting from lucky guesses, and false responses resulting from carelessness. Because these aberrant responses do not reflect the examinee’s actual knowledge, they may cause an erroneous estimation of the examinee’s latent trait. Compared with guesses, careless errors might cause more serious estimation biases, especially if these errors occur at the beginning of a test. To account for the effect of careless errors, Barton and Lord (1981) developed a four-parameter logistic (4PL) model by adding an upper asymptote parameter to the three-parameter logistic (3PL) model. Recently, the 4PL model has received more attention, and some literature has highlighted its potential and usefulness both from a methodological point of view and for practical purposes. It can be expected that the 4PL model will be promoted as a competing item response model in psychological and educational measurement. This paper focuses on one important aspect of the 4PL model, namely the estimation of latent trait levels. In general, unbiased parameter estimation is desirable, and reducing bias in the latent trait estimator is very important for the application of IRT models. Warm (1989) proposed a weighted maximum likelihood (WML) method for estimating the latent trait parameter in the 3PL model, which was found to be less biased than the maximum likelihood (ML) and expected a posteriori (EAP) estimates. The WML estimator has also been extended to the generalized partial credit model (GPCM). In light of the superior performance of the WML method in previous studies, this study applies a WML latent trait estimator to the 4PL model. The main contributions of this article are to present the derivation of the WML estimator under the 4PL model and to report a simulation study comparing the properties of the WML estimator to those of the ML and EAP estimators.
The results of the simulation study suggested that the bias of the WML estimator was consistently smaller than that of the ML and EAP estimators; in particular, the accuracy of the WML estimator was superior to that of the ML estimator and nearly equivalent to that of the EAP estimator. The difference in bias (and accuracy) among the three estimators was substantial when the latent trait was far from the location of the test, but negligible when the latent trait matched the location of the test. Furthermore, both the test length and the item discrimination had a greater impact on the performance of the ML and EAP estimators than on that of the WML estimator. In relatively short tests of low-discriminating items, the EAP estimator displayed grossly inflated levels of bias and the ML estimator displayed the largest decrease in accuracy, but the WML estimator performed more robustly. In general, the WML estimator maintains better properties than both the ML and EAP estimators, especially under conditions in which the test information function is relatively small. Such conditions include, but are not limited to: (a) a mismatch between the latent trait and the location of the test; (b) short tests (e.g., n ≤ 12); and (c) low-discrimination items. Our findings are not extended to the framework of computerized adaptive testing (CAT), as the simulation was conducted under linear testing. As a result, our research may be of great value to test developers concerned with constructing fixed, non-adaptive tests.
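As a minimal illustration of the model discussed above (not the authors' code, and omitting Warm's weighting correction), the 4PL item response function and a simple grid-search ML estimate of θ can be sketched as follows; the item parameters and responses are hypothetical:

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response: c is the lower (guessing)
    asymptote and d < 1 is the upper asymptote capturing careless errors."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(responses, items):
    """Grid-search ML estimate of the latent trait theta on [-4, 4]."""
    grid = [g / 100.0 for g in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for u, (a, b, c, d) in zip(responses, items):
            p = p_4pl(theta, a, b, c, d)
            ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Hypothetical 10-item test: (a, b, c, d) per item, equally spaced difficulties
items = [(1.2, bb / 2.0, 0.2, 0.95) for bb in range(-5, 5)]
responses = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0]
theta_hat = ml_theta(responses, items)
```

Warm's WML estimate would instead maximize a weighted likelihood whose weight removes the leading bias term of the ML score; the grid search above could be reused unchanged for that purpose.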
http://www.physicspages.com/2016/12/15/ | # Relation between action and energy
References: Shankar, R. (1994), Principles of Quantum Mechanics, Plenum Press. Section 2.8; Exercises 2.8.6 – 2.8.7.
Here we’ll examine an interesting relation between the action ${S}$ and the total energy of a system, as given by the Hamiltonian ${H}$. Suppose a single particle moving in one dimension follows a classical path given by ${x_{cl}\left(t\right)}$, and moves from an initial position at time ${t_{i}}$ of ${x_{cl}\left(t_{i}\right)=x_{i}}$ to a final position at time ${t_{f}}$ of ${x_{cl}\left(t_{f}\right)=x_{f}}$. The action ${S_{cl}}$ of this classical path is given by the integral of the Lagrangian
$\displaystyle S_{cl}=\int_{t_{i}}^{t_{f}}L\left(x,\dot{x}\right)dt \ \ \ \ \ (1)$
What can we say about the rate of change of the action with respect to the final time ${t_{f}}$? That is, we want to calculate ${\partial S_{cl}/\partial t_{f}}$, where all other parameters ${t_{i}}$, ${x_{i}}$ and ${x_{f}}$ are held constant. The situation can be illustrated as shown:
Since the only thing that is changing is ${t_{f}}$, the particle starts at the same initial time (which we’ve taken to be ${t_{i}=0}$ in the diagram) and moves to the same location ${x_{f}}$, but at a different time (in the diagram, later time). This means that the particle must follow a different path, possibly over its entire trajectory. This path, which we’ll call ${x\left(t\right)}$, is related to the original path ${x_{cl}\left(t\right)}$ by perturbing the original path by an amount ${\eta\left(t\right)}$:
$\displaystyle x\left(t\right)=x_{cl}\left(t\right)+\eta\left(t\right) \ \ \ \ \ (2)$
In the diagram, the original path ${x_{cl}}$ is shown in red and the perturbed path ${x}$ in blue. The amount ${\eta}$ is seen to be the vertical distance between these two curves at each time, and in the case of the paths shown in the diagram, ${\eta\left(t\right)<0}$.
The difference in the action between the two paths is due to two contributions: first, there is the contribution due to the extra time, from ${t_{f}}$ to ${t_{f}+\Delta t}$, that the particle takes to complete its path. Second, there is the difference in the two actions over the path from ${t_{i}}$ to ${t_{f}}$. The first contribution is entirely new and, for an infinitesimal extra time ${\Delta t}$, it is given by
$\displaystyle \delta S_{1}=L\left(t_{f}\right)\Delta t \ \ \ \ \ (3)$
where ${L\left(t_{f}\right)}$ is the Lagrangian evaluated at time ${t_{f}}$. The other contribution can be obtained by varying the action over the path from ${t_{i}=0}$ to ${t_{f}}$:
$\displaystyle \delta S_{2}=\int_{0}^{t_{f}}\delta L\;dt \ \ \ \ \ (4)$
Since ${L}$ depends on ${x}$ and ${\dot{x}}$, we have
$\displaystyle \delta L=\frac{\partial L}{\partial x}\delta x+\frac{\partial L}{\partial\dot{x}}\delta\dot{x} \ \ \ \ \ (5)$
For infinitesimally different trajectories, we can see from the diagram above that ${\delta x=\eta\left(t\right)}$ at each point on the curve, so ${\delta\dot{x}=\dot{\eta}\left(t\right)}$, so we get
$\displaystyle \delta S_{2}$ $\displaystyle =$ $\displaystyle \int_{0}^{t_{f}}\left[\frac{\partial L}{\partial x}\eta\left(t\right)+\frac{\partial L}{\partial\dot{x}}\dot{\eta}\left(t\right)\right]\;dt\ \ \ \ \ (6)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \int_{0}^{t_{f}}\left[-\frac{d}{dt}\frac{\partial L}{\partial\dot{x}}+\frac{\partial L}{\partial x}\right]\eta\left(t\right)dt+\int_{0}^{t_{f}}\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{x}}\eta\left(t\right)\right)dt\ \ \ \ \ (7)$ $\displaystyle$ $\displaystyle =$ $\displaystyle 0+\left.\frac{\partial L}{\partial\dot{x}}\eta\left(t\right)\right|_{t_{f}} \ \ \ \ \ (8)$
In these equations, the derivatives of ${L}$ are evaluated on the original curve ${x_{cl}}$. To verify the second line, use the product rule on the second integrand and cancel terms to get the first line. The second term in the last line is evaluated at ${t=t_{f}}$ only, since we're assuming that ${\eta\left(0\right)=0}$.
The quantity in brackets in the first integral is zero, because of the Euler-Lagrange equations which are valid on the original curve ${x_{cl}}$:
$\displaystyle \frac{d}{dt}\frac{\partial L}{\partial\dot{x}}-\frac{\partial L}{\partial x}=0 \ \ \ \ \ (9)$
Putting everything together, we get for the total variation in the action:
$\displaystyle \delta S_{cl}$ $\displaystyle =$ $\displaystyle \delta S_{1}+\delta S_{2}\ \ \ \ \ (10)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left[\frac{\partial L}{\partial\dot{x}}\eta\left(t\right)+L\Delta t\right]_{t_{f}} \ \ \ \ \ (11)$
Looking at the diagram above, the slope of the blue curve ${x\left(t_{f}\right)}$ at the time ${t_{f}}$ is given by
$\displaystyle \dot{x}\left(t_{f}\right)=\frac{\left|\eta\left(t_{f}\right)\right|}{\Delta t} \ \ \ \ \ (12)$
From definition 2 of ${\eta}$, we see that ${\eta\left(t_{f}\right)<0}$, so
$\displaystyle \eta\left(t_{f}\right)=-\dot{x}\left(t_{f}\right)\Delta t \ \ \ \ \ (13)$
This gives the final equation for the variation of the action:
$\displaystyle \delta S_{cl}$ $\displaystyle =$ $\displaystyle \left[-\frac{\partial L}{\partial\dot{x}}\dot{x}+L\right]_{t_{f}}\Delta t\ \ \ \ \ (14)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \left(-p\dot{x}+L\right)\Delta t\ \ \ \ \ (15)$ $\displaystyle$ $\displaystyle =$ $\displaystyle -H\Delta t \ \ \ \ \ (16)$
where the second line follows from the definition of the canonical momentum ${p=\partial L/\partial\dot{x}}$.
The required derivative is
$\displaystyle \boxed{\frac{\partial S_{cl}}{\partial t_{f}}=-H\left(t_{f}\right)} \ \ \ \ \ (17)$
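This relation can be checked numerically on the simplest case: for a free particle travelling from ${\left(t_{i},x_{i}\right)}$ to ${\left(t_{f},x_{f}\right)}$, the classical path is a straight line with ${S_{cl}=m\left(x_{f}-x_{i}\right)^{2}/2\left(t_{f}-t_{i}\right)}$ and ${H=m\dot{x}^{2}/2}$ (standard results, not derived in this post). A short sketch with arbitrary test values:

```python
import math

m, x_i, x_f, t_i = 2.0, 0.5, 3.0, 0.0   # arbitrary test values

def S(t_f):
    # Classical action of a free particle from (t_i, x_i) to (t_f, x_f)
    return m * (x_f - x_i) ** 2 / (2.0 * (t_f - t_i))

t_f, h = 1.7, 1e-6
dS_dtf = (S(t_f + h) - S(t_f - h)) / (2.0 * h)   # central finite difference
v = (x_f - x_i) / (t_f - t_i)                    # constant velocity on the classical path
H = 0.5 * m * v ** 2                             # free-particle Hamiltonian
# dS_dtf should agree with -H
```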
Using a similar technique, we can work out ${\partial S_{cl}/\partial x_{f}}$. In this case, the situation is as shown in this diagram:
The two trajectories now take the same time, but in the modified trajectory, the particle moves a distance ${\Delta x}$ further. Since both paths take the same time, there is no extra contribution ${L\Delta t}$. In this case ${\eta\left(t\right)>0}$, since the new (blue) curve ${x\left(t\right)}$ is above the old (red) one ${x_{cl}\left(t\right)}$. The derivation is the same as above up to 8, and the total variation in the action is now
$\displaystyle \delta S_{cl}=\left.\frac{\partial L}{\partial\dot{x}}\eta\left(t\right)\right|_{t_{f}} \ \ \ \ \ (18)$
At ${t=t_{f}}$, ${\eta\left(t_{f}\right)=\Delta x}$, so we get
$\displaystyle \delta S_{cl}$ $\displaystyle =$ $\displaystyle \left.\frac{\partial L}{\partial\dot{x}}\right|_{t_{f}}\Delta x\ \ \ \ \ (19)$ $\displaystyle \frac{\partial S_{cl}}{\partial x_{f}}$ $\displaystyle =$ $\displaystyle \left.\frac{\partial L}{\partial\dot{x}}\right|_{t_{f}}=p\left(t_{f}\right) \ \ \ \ \ (20)$
Example: We can verify 17 for the case of the one-dimensional harmonic oscillator. The general solution for the position is given by
$\displaystyle x\left(t\right)$ $\displaystyle =$ $\displaystyle A\cos\omega t+B\sin\omega t\ \ \ \ \ (21)$ $\displaystyle \dot{x}\left(t\right)$ $\displaystyle =$ $\displaystyle -A\omega\sin\omega t+B\omega\cos\omega t \ \ \ \ \ (22)$
The total energy is given by
$\displaystyle E$ $\displaystyle =$ $\displaystyle \frac{1}{2}m\dot{x}^{2}+\frac{1}{2}m\omega^{2}x^{2}\ \ \ \ \ (23)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m}{2}\left(\left(-A\omega\sin\omega t+B\omega\cos\omega t\right)^{2}+\omega^{2}\left(A\cos\omega t+B\sin\omega t\right)^{2}\right)\ \ \ \ \ (24)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2}\left(A^{2}+B^{2}\right) \ \ \ \ \ (25)$
where we just multiplied out the second line, cancelled terms and used ${\cos^{2}x+\sin^{2}x=1}$.
To get the action, we need the Lagrangian:
$\displaystyle L$ $\displaystyle =$ $\displaystyle T-V\ \ \ \ \ (26)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{1}{2}m\dot{x}^{2}-\frac{1}{2}m\omega^{2}x^{2}\ \ \ \ \ (27)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m}{2}\left(\left(-A\omega\sin\omega t+B\omega\cos\omega t\right)^{2}-\omega^{2}\left(A\cos\omega t+B\sin\omega t\right)^{2}\right)\ \ \ \ \ (28)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2}\left[A^{2}\left(\sin^{2}\omega t-\cos^{2}\omega t\right)+B^{2}\left(\cos^{2}\omega t-\sin^{2}\omega t\right)-4AB\sin\omega t\cos\omega t\right]\ \ \ \ \ (29)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2}\left(\left(B^{2}-A^{2}\right)\cos2\omega t-2AB\sin2\omega t\right) \ \ \ \ \ (30)$
The action for a trajectory from ${t=0}$ to ${t=T}$ is then
$\displaystyle S$ $\displaystyle =$ $\displaystyle \int_{0}^{T}Ldt\ \ \ \ \ (31)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{4}\left[\left(B^{2}-A^{2}\right)\sin2\omega t+2AB\cos2\omega t\right]_{0}^{T}\ \ \ \ \ (32)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{4}\left[\left(B^{2}-A^{2}\right)\sin2\omega T+2AB\left(\cos2\omega T-1\right)\right]\ \ \ \ \ (33)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2}\left[\left(B^{2}-A^{2}\right)\sin\omega T\cos\omega T+AB\left(\cos^{2}\omega T-\sin^{2}\omega T-1\right)\right]\ \ \ \ \ (34)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2}\left[\left(B^{2}-A^{2}\right)\sin\omega T\cos\omega T-2AB\sin^{2}\omega T\right] \ \ \ \ \ (35)$
To proceed further, we need to specify ${A}$ and ${B}$, since these depend on the boundary conditions (that is, on where we require the mass to be at ${t=0}$ and ${t=T}$). If we require ${x\left(0\right)=x_{1}}$ and ${x\left(T\right)=x_{2}}$, then
$\displaystyle A$ $\displaystyle =$ $\displaystyle x_{1}\ \ \ \ \ (36)$ $\displaystyle x_{1}\cos\omega T+B\sin\omega T$ $\displaystyle =$ $\displaystyle x_{2}\ \ \ \ \ (37)$ $\displaystyle B$ $\displaystyle =$ $\displaystyle \frac{x_{2}-x_{1}\cos\omega T}{\sin\omega T} \ \ \ \ \ (38)$
Plugging these into 25 gives the energy as
$\displaystyle E$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2}\left(x_{1}^{2}+\left(\frac{x_{2}-x_{1}\cos\omega T}{\sin\omega T}\right)^{2}\right)\ \ \ \ \ (39)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2\sin^{2}\omega T}\left(x_{1}^{2}+x_{2}^{2}-2x_{1}x_{2}\cos\omega T\right) \ \ \ \ \ (40)$
Plugging ${A}$ and ${B}$ into 35, we get (using ${c\equiv\cos\omega T}$ and ${s\equiv\sin\omega T}$, so that ${s^{2}+c^{2}=1}$):
$\displaystyle S$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2s}\left[\left(x_{2}-x_{1}c\right)^{2}c-x_{1}s^{2}c-2x_{1}s^{2}\left(x_{2}-x_{1}c\right)\right]\ \ \ \ \ (41)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2s}\left[\left(x_{2}^{2}-2x_{1}x_{2}c+x_{1}^{2}c^{2}\right)c-x_{1}^{2}s^{2}c-2x_{1}x_{2}s^{2}+2x_{1}s^{2}c\right]\ \ \ \ \ (42)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2s}\left[\left(x_{1}^{2}+x_{2}^{2}\right)c-2x_{1}x_{2}\right]\ \ \ \ \ (43)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2\sin\omega T}\left[\left(x_{1}^{2}+x_{2}^{2}\right)\cos\omega T-2x_{1}x_{2}\right] \ \ \ \ \ (44)$
Taking the derivative, we get
$\displaystyle \frac{\partial S}{\partial T}$ $\displaystyle =$ $\displaystyle \frac{m\omega}{2s^{2}}\left[-\omega\left(x_{1}^{2}+x_{2}^{2}\right)s^{2}-\left(\left(x_{1}^{2}+x_{2}^{2}\right)c-2x_{1}x_{2}\right)\omega c\right]\ \ \ \ \ (45)$ $\displaystyle$ $\displaystyle =$ $\displaystyle \frac{m\omega^{2}}{2s^{2}}\left[-\left(x_{1}^{2}+x_{2}^{2}\right)+2x_{1}x_{2}c\right]\ \ \ \ \ (46)$ $\displaystyle$ $\displaystyle =$ $\displaystyle -\frac{m\omega^{2}}{2\sin^{2}\omega T}\left(x_{1}^{2}+x_{2}^{2}-2x_{1}x_{2}\cos\omega T\right)\ \ \ \ \ (47)$ $\displaystyle$ $\displaystyle =$ $\displaystyle -E \ \ \ \ \ (48)$
Thus the result is verified for the harmonic oscillator.
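The closed forms 40 and 44 also allow a quick numerical cross-check of 17, and of 20 via the final momentum ${p\left(T\right)=m\dot{x}\left(T\right)}$ computed from 21, 22 and 36-38. A sketch with arbitrary test values:

```python
import math

m, w = 1.3, 2.0            # mass and angular frequency (arbitrary test values)
x1, x2, T = 0.3, 1.1, 0.9  # endpoints and travel time (sin(wT) != 0)

def S(x2_, T_):
    # Eq. 44: classical action of the oscillator from (0, x1) to (T_, x2_)
    return m * w / (2.0 * math.sin(w * T_)) * (
        (x1**2 + x2_**2) * math.cos(w * T_) - 2.0 * x1 * x2_)

def E(T_):
    # Eq. 40: conserved energy on the same trajectory
    return m * w**2 / (2.0 * math.sin(w * T_)**2) * (
        x1**2 + x2**2 - 2.0 * x1 * x2 * math.cos(w * T_))

h = 1e-6
dS_dT = (S(x2, T + h) - S(x2, T - h)) / (2.0 * h)     # should equal -E(T)
dS_dx2 = (S(x2 + h, T) - S(x2 - h, T)) / (2.0 * h)    # should equal p(T)

# Final momentum from the solution 21-22 with A, B fixed by 36-38
A = x1
B = (x2 - x1 * math.cos(w * T)) / math.sin(w * T)
p_T = m * (-A * w * math.sin(w * T) + B * w * math.cos(w * T))
```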
http://server1.wikisky.org/starview?object_type=1&object_id=397&object_name=HR+3705&locale=PL | WIKISKY.ORG
# α Lyn (Elvashak)
### Related articles
Predicting accurate stellar angular diameters by the near-infrared surface brightness techniqueI report on the capabilities of the near-infrared (near-IR) surfacebrightness technique to predict reliable stellar angular diameters asaccurate as <~2 per cent using standard broad-band Johnson photometryin the colour range -0.1 <= (V-K)O<= 3.7 includingstars of A, F, G, K spectral type. This empirical approach is fast toapply and leads to estimated photometric diameters in very goodagreement with recent high-precision interferometric diametermeasurements available for non-variable dwarfs and giants, as well asfor Cepheid variables. Then I compare semi-empirical diameters predictedby model-dependent photometric and spectrophotometric (SP) methods withnear-IR surface brightness diameters adopted as empirical referencecalibrators. The overall agreement between all these methods is withinapproximately +/-5 per cent, confirming previous works. However, on thesame scale of accuracy, there is also evidence for systematic shiftspresumably as a result of an incorrect representation of the stellareffective temperature in the model-dependent results. I also comparemeasurements of spectroscopic radii with near-IR surface brightnessradii of Cepheids with known distances. Spectroscopic radii are found tobe affected by a scatter as significant as >~9 per cent, which is atleast three times greater than the formal error currently claimed by thespectroscopic technique. In contrast, pulsation radii predicted by theperiod-radius (PR) relation according to the Cepheid period result aresignificantly less dispersed, indicating a quite small scatter as aresult of the finite width of the Cepheid instability strip, as expectedfrom pulsation theory. The resulting low level of noise stronglyconfirms our previous claims that the pulsation parallaxes are the mostaccurate empirical distances presently available for Galactic andextragalactic Cepheids. The Effective Temperature Scale of FGK Stars. II. 
Teff:Color:[Fe/H] CalibrationsWe present up-to-date metallicity-dependent temperature versus colorcalibrations for main-sequence and giant stars based on temperaturesderived with the infrared flux method (IRFM). Seventeen colors in thephotometric systems UBV, uvby, Vilnius, Geneva, RI(Cousins), DDO,Hipparcos-Tycho, and Two Micron All Sky Survey (2MASS) have beencalibrated. The spectral types covered by the calibrations range from F0to K5 (7000K>~Teff>~4000K) with some relationsextending below 4000 K or up to 8000 K. Most of the calibrations arevalid in the metallicity range -3.5>~[Fe/H]>~0.4, although some ofthem extend to as low as [Fe/H]~-4.0. All fits to the data have beenperformed with more than 100 stars; standard deviations range from 30 to120 K. Fits were carefully performed and corrected to eliminate thesmall systematic errors introduced by the calibration formulae. Tablesof colors as a function of Teff and [Fe/H] are provided. Thiswork is largely based on the study by A. Alonso and collaborators; thus,our relations do not significantly differ from theirs except for thevery metal-poor hot stars. From the calibrations, the temperatures of 44dwarf and giant stars with direct temperatures available are obtained.The comparison with direct temperatures confirms our finding in Paper Ithat the zero point of the IRFM temperature scale is in agreement, tothe 10 K level, with the absolute temperature scale (that based onstellar angular diameters) within the ranges of atmospheric parameterscovered by those 44 stars. The colors of the Sun are derived from thepresent IRFM Teff scale and they compare well with those offive solar analogs. It is shown that if the IRFM Teff scaleaccurately reproduces the temperatures of very metal-poor stars,systematic errors of the order of 200 K, introduced by the assumption of(V-K) being completely metallicity independent when studying verymetal-poor dwarf stars, are no longer acceptable. 
Comparisons with otherTeff scales, both empirical and theoretical, are also shownto be in reasonable agreement with our results, although it seems thatboth Kurucz and MARCS synthetic colors fail to predict the detailedmetallicity dependence, given that for [Fe/H]=-2.0, differences as highas approximately +/-200 K are found. Broad-band photometric colors and effective temperature calibrations for late-type giants. I. Z = 0.02We present new synthetic broad-band photometric colors for late-typegiants based on synthetic spectra calculated with the PHOENIX modelatmosphere code. The grid covers effective temperatures T_eff=3000dots5000 K, gravities log g=-0.5dots{+3.5}, and metallicities[M/H]=+0.5dots{-4.0}. We show that individual broad-band photometriccolors are strongly affected by model parameters such as molecularopacities, gravity, microturbulent velocity, and stellar mass. Ourexploratory 3D modeling of a prototypical late-type giant shows thatconvection has a noticeable effect on the photometric colors too, as italters significantly both the vertical and horizontal thermal structuresin the outer atmosphere. The differences between colors calculated withfull 3D hydrodynamical and 1D model atmospheres are significant (e.g.,Δ(V-K)0.2 mag), translating into offsets in effectivetemperature of up to 70 K. For a sample of 74 late-type giants in theSolar neighborhood, with interferometric effective temperatures andbroad-band photometry available in the literature, we compare observedcolors with a new PHOENIX grid of synthetic photometric colors, as wellas with photometric colors calculated with the MARCS and ATLAS modelatmosphere codes. We find good agreement of the new synthetic colorswith observations and published T_eff-color and color-color relations,especially in the T_eff-(V-K), T_eff-(J-K) and (J-K)-(V-K) planes.Deviations from the observed trends in the T_eff-color planes aregenerally within ±100 K for T_eff=3500 to 4800 K. 
Syntheticcolors calculated with different stellar atmosphere models agree to±100 K, within a large range of effective temperatures andgravities. The comparison of the observed and synthetic spectra oflate-type giants shows that discrepancies result from the differencesboth in the strengths of various spectral lines/bands (especially thoseof molecular bands, such as TiO, H2O, CO) and the continuum level.Finally, we derive several new T_eff-log g-color relations for late-typegiants at solar-metallicity (valid for T_eff=3500 to 4800 K), based bothon the observed effective temperatures and colors of the nearby giants,and synthetic colors produced with PHOENIX, MARCS and ATLAS modelatmospheres. CHARM2: An updated Catalog of High Angular Resolution MeasurementsWe present an update of the Catalog of High Angular ResolutionMeasurements (CHARM, Richichi & Percheron \cite{CHARM}, A&A,386, 492), which includes results available until July 2004. CHARM2 is acompilation of direct measurements by high angular resolution methods,as well as indirect estimates of stellar diameters. Its main goal is toprovide a reference list of sources which can be used for calibrationand verification observations with long-baseline optical and near-IRinterferometers. Single and binary stars are included, as are complexobjects from circumstellar shells to extragalactic sources. The presentupdate provides an increase of almost a factor of two over the previousedition. Additionally, it includes several corrections and improvements,as well as a cross-check with the valuable public release observationsof the ESO Very Large Telescope Interferometer (VLTI). A total of 8231entries for 3238 unique sources are now present in CHARM2. 
Thisrepresents an increase of a factor of 3.4 and 2.0, respectively, overthe contents of the previous version of CHARM.The catalog is only available in electronic form at the CDS viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/431/773 Local kinematics of K and M giants from CORAVEL/Hipparcos/Tycho-2 data. Revisiting the concept of superclustersThe availability of the Hipparcos Catalogue has triggered many kinematicand dynamical studies of the solar neighbourhood. Nevertheless, thosestudies generally lacked the third component of the space velocities,i.e., the radial velocities. This work presents the kinematic analysisof 5952 K and 739 M giants in the solar neighbourhood which includes forthe first time radial velocity data from a large survey performed withthe CORAVEL spectrovelocimeter. It also uses proper motions from theTycho-2 catalogue, which are expected to be more accurate than theHipparcos ones. An important by-product of this study is the observedfraction of only 5.7% of spectroscopic binaries among M giants ascompared to 13.7% for K giants. After excluding the binaries for whichno center-of-mass velocity could be estimated, 5311 K and 719 M giantsremain in the final sample. The UV-plane constructed from these datafor the stars with precise parallaxes (σπ/π≤20%) reveals a rich small-scale structure, with several clumpscorresponding to the Hercules stream, the Sirius moving group, and theHyades and Pleiades superclusters. A maximum-likelihood method, based ona Bayesian approach, has been applied to the data, in order to make fulluse of all the available stars (not only those with precise parallaxes)and to derive the kinematic properties of these subgroups. Isochrones inthe Hertzsprung-Russell diagram reveal a very wide range of ages forstars belonging to these groups. 
These groups are most probably related to the dynamical perturbation by transient spiral waves (as recently modelled by De Simone et al. 2004) rather than to cluster remnants. A possible explanation for the presence of young groups/clusters in the same area of the UV-plane is that they have been put there by the spiral wave associated with their formation, while the kinematics of the older stars of our sample has also been disturbed by the same wave. The emerging picture is thus one of dynamical streams pervading the solar neighbourhood and travelling in the Galaxy with similar space velocities. The term dynamical stream is more appropriate than the traditional term supercluster since it involves stars of different ages, not born at the same place nor at the same time. The position of those streams in the UV-plane is responsible for the vertex deviation of 16.2° ± 5.6° for the whole sample. Our study suggests that the vertex deviation for younger populations could have the same dynamical origin. The underlying velocity ellipsoid, extracted by the maximum-likelihood method after removal of the streams, is not centered on the value commonly accepted for the radial antisolar motion: it is centered on <U> = -2.78 ± 1.07 km s^-1. However, the full data set (including the various streams) does yield the usual value for the radial solar motion, when properly accounting for the biases inherent to this kind of analysis (namely, <U> = -10.25 ± 0.15 km s^-1).
This discrepancy clearly raises the essential question of how to derive the solar motion in the presence of dynamical perturbations altering the kinematics of the solar neighbourhood: does there exist in the solar neighbourhood a subset of stars having no net radial motion which can be used as a reference against which to measure the solar motion? Based on observations performed at the Swiss 1m-telescope at OHP, France, and on data from the ESA Hipparcos astrometry satellite. Full Table A.1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/430/165

Improved Baade-Wesselink surface brightness relations

Recent, and older accurate, data on (limb-darkened) angular diameters are compiled for 221 stars, as well as BVRIJK[12][25] magnitudes for those objects, when available. Nine stars (all M-giants or supergiants) showing excess in the [12-25] colour are excluded from the analysis as this may indicate the presence of dust influencing the optical and near-infrared colours as well. Based on this large sample, Baade-Wesselink surface brightness (SB) relations are presented for dwarfs, giants and supergiants in the optical and near-infrared. M-giants are found to follow different SB relations from non-M-giants, in particular in V versus V-R. The preferred relation for non-M-giants is compared to the earlier relation by Fouqué and Gieren (based on 10 stars) and Nordgren et al. (based on 57 stars). Increasing the sample size does not lead to a lower rms value. It is shown that the residuals do not correlate with metallicity at a significant level. The finally adopted observed angular diameters are compared to those predicted by Cohen et al.
for 45 stars in common, and there is reasonable overall agreement, and good agreement when θ < 6 mas. Finally, I comment on the common practice in the literature to average, and then fix, the zero-point of the V versus V-K, V versus V-R and K versus J-K relations, and then rederive the slopes. Such a common zero-point at zero colour is not expected from model atmospheres for the V-R colour and depends on gravity. Relations derived in this way may be biased.

The Indo-US Library of Coudé Feed Stellar Spectra

We have obtained spectra for 1273 stars using the 0.9 m coudé feed telescope at Kitt Peak National Observatory. This telescope feeds the coudé spectrograph of the 2.1 m telescope. The spectra have been obtained with the no. 5 camera of the coudé spectrograph and a Loral 3K×1K CCD. Two gratings have been used to provide spectral coverage from 3460 to 9464 Å, at a resolution of ~1 Å FWHM and at an original dispersion of 0.44 Å pixel^-1. For 885 stars we have complete spectra over the entire 3460 to 9464 Å wavelength region (neglecting small gaps of less than 50 Å), and partial spectral coverage for the remaining stars. The 1273 stars have been selected to provide broad coverage of the atmospheric parameters T_eff, log g, and [Fe/H], as well as spectral type. The goal of the project is to provide a comprehensive library of stellar spectra for use in the automated classification of stellar and galaxy spectra and in galaxy population synthesis. In this paper we discuss the characteristics of the spectral library, viz., details of the observations, data reduction procedures, and selection of stars. We also present a few illustrations of the quality and information available in the spectra. The first version of the complete spectral library is now publicly available from the National Optical Astronomy Observatory (NOAO) via ftp and http.
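
Surface brightness (SB) relations like those compiled above are used to predict a star's angular diameter from photometry alone. A minimal sketch of the idea, assuming the standard Barnes-Evans definition of the visual surface brightness parameter (F_V = 4.2207 - 0.1 V_0 - 0.5 log10 θ_LD, with θ_LD in mas); the fit coefficients `a` and `b` are placeholders for a fitted relation, not values from any of the papers summarized here:

```python
import math

def surface_brightness_fv(v_mag_0, theta_ld_mas):
    """Barnes-Evans visual surface brightness parameter F_V.

    Standard definition: F_V = 4.2207 - 0.1*V_0 - 0.5*log10(theta_LD),
    where theta_LD is the limb-darkened angular diameter in mas and
    V_0 the dereddened V magnitude.
    """
    return 4.2207 - 0.1 * v_mag_0 - 0.5 * math.log10(theta_ld_mas)

def predict_theta_mas(v_mag_0, color, a, b):
    """Invert a fitted linear SB relation F_V = a + b*color (e.g. color
    = V-K or V-R) to predict the angular diameter in mas from photometry.
    The coefficients (a, b) are hypothetical placeholders here."""
    fv = a + b * color
    return 10 ** (2 * (4.2207 - 0.1 * v_mag_0 - fv))
```

Given coefficients fitted against stars with interferometric diameters, a star's V magnitude and colour then yield θ_LD, which is the quantity compared against the directly measured diameters in the studies above.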

Further Results of TiO-Band Observations of Starspots

We present measurements of starspot parameters (temperature and filling factor) on five highly active stars, using absorption bands of TiO, from observations made between 1998 March and 2001 December. We determined starspot parameters by fitting TiO bands using spectra of inactive G and K stars as proxies for the unspotted photospheres of the active stars and spectra of M stars as proxies for the spots. For three evolved RS CVn systems, we find spot filling factors between 0.28 and 0.42 for DM UMa, 0.22 and 0.40 for IN Vir, and 0.31 and 0.35 for XX Tri; these values are similar to those found by other investigators using photometry and Doppler imaging. Among active dwarfs, we measured a lower spot temperature (3350 K) for EQ Vir than found in a previous study of TiO bands, and for EK Dra a lower spot temperature (~3800 K) than found through photometry. For all active stars but XX Tri, we achieved good phase coverage through a stellar rotational period. We also present our final, extensive grid of spot and nonspot proxy stars. This paper includes data taken at McDonald Observatory of the University of Texas at Austin.

Empirically Constrained Color-Temperature Relations. II. uvby

A new grid of theoretical color indices for the Strömgren uvby photometric system has been derived from MARCS model atmospheres and SSG synthetic spectra for cool dwarf and giant stars having -3.0 <= [Fe/H] <= +0.5 and 3000 <= Teff <= 8000 K. At warmer temperatures (i.e., 8000-2.0. To overcome this problem, the theoretical indices at intermediate and high metallicities have been corrected using a set of color calibrations based on field stars having well-determined distances from Hipparcos, accurate Teff estimates from the infrared flux method, and spectroscopic [Fe/H] values. In contrast with Paper I, star clusters played only a minor role in this analysis in that they provided a supplementary constraint on the color corrections for cool dwarf stars with Teff <= 5500 K.
They were mainly used to test the color-Teff relations and, encouragingly, isochrones that employ the transformations derived in this study are able to reproduce the observed CMDs (involving u-v, v-b, and b-y colors) for a number of open and globular clusters (including M67, the Hyades, and 47 Tuc) rather well. Moreover, our interpretations of such data are very similar, if not identical, with those given in Paper I from a consideration of BV(RI)C observations for the same clusters, which provides a compelling argument in support of the color-Teff relations that are reported in both studies. In the present investigation, we have also analyzed the observed Strömgren photometry for the classic Population II subdwarfs, compared our "final" (b-y)-Teff relationship with those derived empirically in a number of recent studies, and examined in some detail the dependence of the m1 index on [Fe/H]. Based, in part, on observations made with the Nordic Optical Telescope, operated jointly on the island of La Palma by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. Based, in part, on observations obtained with the Danish 1.54 m telescope at the European Southern Observatory, La Silla, Chile.

Angular Diameters of Stars from the Mark III Optical Interferometer

Observations of 85 stars were obtained at wavelengths between 451 and 800 nm with the Mark III Stellar Interferometer on Mount Wilson, near Pasadena, California. Angular diameters were determined by fitting a uniform-disk model to the visibility amplitude versus projected baseline length. Half the angular diameters determined at 800 nm have formal errors smaller than 1%. Limb-darkened angular diameters, effective temperatures, and surface brightnesses were determined for these stars, and relationships between these parameters are presented. Scatter in these relationships is larger than would be expected from the measurement uncertainties.
We argue that this scatter is not due to an underestimate of the angular diameter errors; whether it is due to photometric errors or is intrinsic to the relationship is unresolved. The agreement with other observations of the same stars at the same wavelengths is good; the width of the difference distribution is comparable to that estimated from the error bars, but the wings of the distribution are larger than Gaussian. Comparison with infrared measurements is more problematic; in disagreement with models, cooler stars appear systematically smaller in the near-infrared than expected, warmer stars larger.

High resolution spectroscopy over λλ 8500-8750 Å for GAIA. IV. Extending the cool MK stars sample

A library of high resolution spectra of MK standard and reference stars, observed in support of the GAIA mission, is presented. The aim of this paper is to integrate the MK mapping of Paper I of this series as well as to consider stars over a wider range of metallicities. Radial velocities are measured for all the target stars. The spectra are available in electronic form (ASCII format) at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/406/995 and from the web page http://ulisse.pd.astro.it/MoreMK/, where further bibliographical information for the target stars is given.

Hipparcos red stars in the HpV_T2 and V I_C systems

For Hipparcos M, S, and C spectral type stars, we provide calibrated instantaneous (epoch) Cousins V - I color indices using newly derived HpV_T2 photometry. Three new sets of ground-based Cousins V I data have been obtained for more than 170 carbon and red M giants. These datasets in combination with the published sources of V I photometry served to obtain the calibration curves linking Hipparcos/Tycho Hp-V_T2 with the Cousins V - I index. In total, 321 carbon stars and 4464 M- and S-type stars have new V - I indices.
The standard error of the mean V - I is about 0.1 mag or better down to Hp ~ 9, although it deteriorates rapidly at fainter magnitudes. These V - I indices can be used to verify the published Hipparcos V - I color indices. Thus, we have identified a handful of new cases where, instead of the real target, a random field star has been observed. A considerable fraction of the DMSA/C and DMSA/V solutions for red stars appear not to be warranted. Most likely such spurious solutions may originate from usage of a heavily biased color in the astrometric processing. Based on observations from the Hipparcos astrometric satellite operated by the European Space Agency (ESA 1997). Table 7 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/397/997

The Effect of TiO Absorption on Optical and Infrared Angular Diameters of Cool Stars

We review the systematic variation between optical- and infrared-wavelength angular diameters reported for stars in the approximate range of spectral types K0-M6. We show that there is a correlation between the ratio of angular diameters and the depth of TiO absorption, in the sense that the optical diameters are larger. We argue that this validates a recent proposal by Houdashelt et al. that TiO absorption affects certain, but not all, optical-wavelength angular diameters significantly. Those authors pointed out that the infrared angular diameters appear to yield better effective temperatures than do the optical diameters, even though the latter are of higher precision. The observed angular diameter differences may arise either from limb darkening, atmospheric extension, or a combination of these two processes. Model atmosphere calculations of limb-darkening coefficients are needed to see whether the diameter discrepancy may be resolved. These models need to contain the correct opacity sources and a realistic estimate of the atmospheric geometry and dynamics.
A comparison with observations such as those described in this paper will be useful for testing the validity of atmosphere models.

A catalogue of calibrator stars for long baseline stellar interferometry

Long baseline stellar interferometry shares with other techniques the need for calibrator stars in order to correct for instrumental and atmospheric effects. We present a catalogue of 374 stars carefully selected to be used for that purpose in the near infrared. Owing to several convergent criteria with the work of Cohen et al. (1999), this catalogue is in essence a subset of their self-consistent all-sky network of spectro-photometric calibrator stars. For every star, we provide the angular limb-darkened diameter, uniform disc angular diameters in the J, H and K bands, the Johnson photometry and other useful parameters. Most stars are type III giants with spectral types K or M0, magnitudes V = 3-7 and K = 0-3. Their angular limb-darkened diameters range from 1 to 3 mas with a median uncertainty as low as 1.2%. The median distance from a given point on the sky to the closest reference is 5.2°, whereas this distance never exceeds 16.4° for any celestial location. The catalogue is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/183

CHARM: A Catalog of High Angular Resolution Measurements

The Catalog of High Angular Resolution Measurements (CHARM) includes most of the measurements obtained by the techniques of lunar occultations and long-baseline interferometry at visual and infrared wavelengths, which have appeared in the literature or have otherwise been made public until mid-2001. A total of 2432 measurements of 1625 sources are included, along with extensive auxiliary information. In particular, visual and infrared photometry is included for almost all the sources.
This has been partly extracted from currently available catalogs, and partly obtained specifically for CHARM. The main aim is to provide a compilation of sources which could be used as calibrators or for science verification purposes by the new generation of large ground-based facilities such as the ESO Very Large Interferometer and the Keck Interferometer. The Catalog is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/386/492, and from the authors on CD-Rom.

Betelgeuse: Giant Convection Cells

Spectroscopic observations of the M supergiant star Betelgeuse were taken at the Elginfield Observatory over 17 months in the 1999-2000 observing seasons in order to search for giant convection cells. Although the photospheric spectral lines show some temporal variations, mainly in their depths (consistent with a previous study), the Doppler shift distribution inferred from them is remarkably stable. The spectral lines show characteristic macroturbulence dispersion ~15 km s^-1 and cover a full span of ±50-60 km s^-1. The widths of the lines show occasional as well as longer term changes of a few percent but no evidence for giant convection cells. These spectroscopic observations are more consistent with a classical picture of nonthermal photospheric velocities in which large numbers of convection cells appear on the stellar disk at all times.

Comparison of Stellar Angular Diameters from the NPOI, the Mark III Optical Interferometer, and the Infrared Flux Method

The Navy Prototype Optical Interferometer (NPOI) has been used to measure the angular diameters of 41 late-type giant and supergiant stars previously observed with the Mark III optical interferometer. Sixteen of these stars have published angular diameters based on model atmospheres (infrared flux method, IRFM). Comparison of these angular diameters shows that there are no systematic offsets between any pair of data sets.
Furthermore, the reported uncertainties in the angular diameters measured using both interferometers are consistent with the distribution of the differences in the diameters. The distribution of diameter differences between the interferometric and model atmosphere angular diameters is consistent with uncertainties in the IRFM diameters of 1.4%. Although large differences in angular diameter measurements are seen for three stars, the data are insufficient to determine whether these differences are due to problems with the observations or are due to temporal changes in the stellar diameters themselves.

On the Wilson-Bappu relationship in the Mg II k line

An investigation is carried out on the Wilson-Bappu effect in the Mg II k line at 2796.34 Å. The work is based on a selection of 230 stars observed by both the IUE and HIPPARCOS satellites, covering a wide range of spectral types (F to M) and absolute visual magnitudes (-5.4 <= M_V <= 9.0). A semi-automatic procedure is used to measure the line widths, which applies also in the presence of strong central absorption reversal. The Wilson-Bappu relationship here provided is considered to represent an improvement over previous recent results for the considerably larger data sample used, as well as for a proper consideration of the measurement errors. No evidence has been found for a possible dependence of the WB effect on stellar metallicity and effective temperature.

Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics

The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given.
The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521

The proper motions of fundamental stars. I. 1535 stars from the Basic FK5

A direct combination of the positions given in the HIPPARCOS catalogue with astrometric ground-based catalogues having epochs later than 1939 allows us to obtain new proper motions for the 1535 stars of the Basic FK5. The results are presented as the catalogue Proper Motions of Fundamental Stars (PMFS), Part I. The median precision of the proper motions is 0.5 mas/year for μ_α cos δ and 0.7 mas/year for μ_δ. The non-linear motions of the photocentres of a few hundred astrometric binaries are separated into their linear and elliptic motions. Since the PMFS proper motions do not include the information given by the proper motions from other catalogues (HIPPARCOS, FK5, FK6, etc.), this catalogue can be used as an independent source of the proper motions of the fundamental stars. Catalogue (Table 3) is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strastg.fr/cgi-bin/qcat?J/A+A/365/222

The intermediate-band approach to the surface-brightness method for Cepheid radii and distance determination

The surface-brightness parameter F_ν is calibrated in terms of the Strömgren intermediate-band colour b-y. The relation F_ν-(b-y)_0 valid for Cepheids is calibrated using accurate near-infrared radii and distances for selected Cepheids. We have obtained uvby photometry for non-Cepheid giant and supergiant stars with known angular diameters and compared the slope and zero-point of their F_ν-(b-y)_0 relation with the Cepheid calibration. We found that the two calibrations are significantly different. The theoretical models lie in between the two calibrations. It is remarked that F_ν-colour relations derived from non-Cepheids and involving blue colours (e.g.
B-V or b-y) are not applicable to Cepheids, while those involving redder colours (e.g. V-R, V-K or V-J) also produce good radii for Cepheids. Selected Cepheids as calibrators lead to the accurate relation F_ν = 3.898(±0.003) - 0.378(±0.006)(b-y)_0, which allowed the calculation of radii and distances for a sample of 59 Galactic Cepheids. The uncertainties in the zero-point and slope of the above relation are similar to those obtained from near-infrared colours, and determine the accuracies in radii and distance calculations. While infrared light and colour curves for Cepheids may be superior in precision, the intermediate-band b-y colour allows the recovery of mean radii with an accuracy comparable to those obtained from the infrared solutions. The derived distances are consistent within the uncertainties with those predicted by a widely accepted period-luminosity relationship. Likewise, the resulting period-radius relation from the intermediate-band approach is in better agreement with infrared versions than with optical versions of this law. It is highlighted that the intermediate-band calibration of the surface-brightness method in this work is of comparable accuracy to the near-infrared calibrations. The present results stress the virtues of uvby in determining the physical parameters of supergiant stars of intermediate temperature.

Diffraction-limited Near-IR Imaging at Keck Reveals Asymmetric, Time-variable Nebula around Carbon Star CIT 6

We present multiepoch, diffraction-limited images of the nebula around the carbon star CIT 6 at 2.2 and 3.1 μm from aperture masking on the Keck I Telescope. The near-IR nebula is resolved into two main components, an elongated, bright feature showing time-variable asymmetry and a fainter component about 60 mas away with a cooler color temperature.
These images were precisely registered (~35 mas) with respect to recent visible images from the Hubble Space Telescope (Trammell et al.), which showed a bipolar structure in scattered light. The dominant near-IR feature is associated with the northern lobe of this scattering nebula, and the multiwavelength data set can be understood in terms of a bipolar dust shell around CIT 6. Variability of the near-IR morphology is qualitatively consistent with previously observed changes in red polarization, caused by varying illumination geometry due to nonuniform dust production. The blue emission morphology and polarization properties cannot be explained by the above model alone, but require the presence of a wide binary companion in the vicinity of the southern polar lobe. The physical mechanisms responsible for the breaking of spherical symmetry around extreme carbon stars, such as CIT 6 and IRC +10216, remain uncertain.

250 GHz observations of Be stars

New 250 GHz flux density measurements with the 30 m telescope at Pico Veleta are presented for 23 Be stars. We suggest that the radio spectral index is typically close to 1.4 for these stars. In a discussion of our own and literature data, we present some evidence that in a few cases slow variations with a time scale of about a year occur. Both findings are compatible with the idea that there is no stellar-activity-type radio emission from Be stars but a perhaps slightly modulated quiescent radiation of the outer parts of self-similar circumstellar disks. An immediate improvement of existing models is not possible.

Sixth Catalogue of Fundamental Stars (FK6). Part I. Basic fundamental stars with direct solutions

The FK6 is a suitable combination of the results of the HIPPARCOS astrometry satellite with ground-based data, measured over more than two centuries and summarized in the FK5. Part I of the FK6 (abbreviated FK6(I)) contains 878 basic fundamental stars with direct solutions.
Such direct solutions are appropriate for single stars or for objects which can be treated like single stars. From the 878 stars in Part I, we have selected 340 objects as "astrometrically excellent stars", since their instantaneous proper motions and mean (time-averaged) ones do not differ significantly. Hence most of the astrometrically excellent stars are well-behaving "single-star candidates" with good astrometric data. These stars are most suited for high-precision astrometry. On the other hand, 199 of the stars in Part I are Δμ binaries in the sense of Wielen et al. (1999). Many of them are newly discovered probable binaries with no other hitherto known indication of binarity. The FK6 gives, besides the classical "single-star mode" solutions (SI mode), other solutions which take into account the fact that hidden astrometric binaries among "apparently single-stars" introduce sizable "cosmic errors" into the quasi-instantaneously measured HIPPARCOS proper motions and positions. The FK6 gives in addition to the SI mode the "long-term prediction (LTP) mode" and the "short-term prediction (STP) mode". These LTP and STP modes are on average the most precise solutions for apparently single stars, depending on the epoch difference with respect to the HIPPARCOS epoch of about 1991. The typical mean error of an FK6(I) proper motion in the single-star mode is 0.35 mas/year. This is about a factor of two better than the typical HIPPARCOS errors for these stars of 0.67 mas/year. In the long-term prediction mode, in which cosmic errors are taken into account, the FK6(I) proper motions have a typical mean error of 0.50 mas/year, which is by a factor of more than 4 better than the corresponding error for the HIPPARCOS values of 2.21 mas/year (cosmic errors included).

Photometric modelling of starspots - I.
A Barnes-Evans-like surface brightness-colour relation using (I_c-K)

In the first part of this work, the empirical correlation of stellar surface brightness F_V with (I_c-K) broad-band colour is investigated by using a sample of stars cooler than the Sun. A bilinear correlation is found to represent well the brightness of G, K and M giant stars. The change in slope occurs at (I_c-K) ~ 2.1 or at about the transition from K to M spectral types. The same relationship is also investigated for dwarf stars and found to be distinctly different from that of the giants. The dwarf star correlation differs by an average of -0.4 in (I_c-K) or by a maximum in F_V of ~ -0.1, positioning it below that of the giants, with both trends tending towards convergence for the hotter stars in our sample. The flux distribution derived from the F_V-(I_c-K) relationship for the giant stars, together with that derived from an F_V-(V-K) relationship and the blackbody flux distribution, is then utilized to compute synthetic light V and colour (V-R)_c, (V-I)_c and (V-K) curves of cool spotted stars. We investigate the effects on the amplitudes of the curves by using these F_V-colour relations and by assuming the effective gravity of the spots to be lower than the gravity of the unspotted photosphere. We find that the amplitudes produced by using the F_V-(I_c-K) relationship are larger than those produced by the other two brightness correlations, meaning smaller and/or warmer spots.

Library of Medium-Resolution Fiber Optic Echelle Spectra of F, G, K, and M Field Dwarfs to Giant Stars

We present a library of Penn State Fiber Optic Echelle (FOE) observations of a sample of field stars with spectral types F to M and luminosity classes V to I. The spectral coverage is from 3800 to 10000 Å with a nominal resolving power of 12,000.
These spectra include many of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity, such as the Balmer lines (Hα to Hε), Ca II H & K, the Mg I b triplet, Na I D_1, D_2, He I D_3, and Ca II IRT lines. There are also a large number of photospheric lines, which can also be affected by chromospheric activity, and temperature-sensitive photospheric features such as TiO bands. The spectra have been compiled with the goal of providing a set of standards observed at medium resolution. We have extensively used such data for the study of active chromosphere stars by applying a spectral subtraction technique. However, the data set presented here can also be utilized in a wide variety of ways ranging from radial velocity templates to study of variable stars and stellar population synthesis. This library can also be used for spectral classification purposes and determination of atmospheric parameters (T_eff, log g, [Fe/H]). A digital version of all the fully reduced spectra is available via ftp and the World Wide Web (WWW) in FITS format.

Spectral Irradiance Calibration in the Infrared. X. A Self-Consistent Radiometric All-Sky Network of Absolutely Calibrated Stellar Spectra

We start from our six absolutely calibrated continuous stellar spectra from 1.2 to 35 μm for K0, K1.5, K3, K5, and M0 giants. These were constructed as far as possible from actual observed spectral fragments taken from the ground, the Kuiper Airborne Observatory, and the IRAS Low Resolution Spectrometer, and all have a common calibration pedigree. From these we spawn 422 calibrated "spectral templates" for stars with spectral types in the ranges G9.5-K3.5 III and K4.5-M0.5 III. We normalize each template by photometry for the individual stars using published and/or newly secured near- and mid-infrared photometry obtained through fully characterized, absolutely calibrated combinations of filter passband, detector radiance response, and mean terrestrial atmospheric transmission.
These templates continue our ongoing effort to provide an all-sky network of absolutely calibrated, spectrally continuous, stellar standards for general infrared usage, all with a common, traceable calibration heritage. The wavelength coverage is ideal for calibration of many existing and proposed ground-based, airborne, and satellite sensors, particularly low- to moderate-resolution spectrometers. We analyze the statistics of probable uncertainties, in the normalization of these templates to actual photometry, that quantify the confidence with which we can assert that these templates truly represent the individual stars. Each calibrated template provides an angular diameter for that star. These radiometric angular diameters compare very favorably with those directly observed across the range from 1.6 to 21 mas.

The effective temperature scale of giant stars (F0-K5). I. The effective temperature determination by means of the IRFM

We have applied the InfraRed Flux Method (IRFM) to a sample of approximately 500 giant stars in order to derive their effective temperatures with an internal mean accuracy of about 1.5% and a maximum uncertainty in the zero point of the order of 0.9%. For the application of the IRFM, we have used a homogeneous grid of theoretical model atmosphere flux distributions developed by Kurucz (1993). The atmospheric parameters of the stars roughly cover the ranges: 3500 K <= T_eff <= 8000 K; -3.0 <= [Fe/H] <= +0.5; 0.5 <= log(g) <= 3.5. The monochromatic infrared fluxes at the continuum are based on recent photometry with errors that satisfy the accuracy requirements of the work. We have derived the bolometric correction of giant stars by using a new calibration which takes the effect of metallicity into account. Direct spectroscopic determinations of metallicity have been adopted where available, although estimates based on photometric calibrations have been considered for some stars lacking spectroscopic ones.
The adopted infrared absolute flux calibration, based on direct optical measurements of stellar angular diameters, puts the effective temperatures determined in this work on the same scale as those obtained by direct methods. We have derived up to four temperatures, T_J, T_H, T_K and T_L', for each star using the monochromatic fluxes at different infrared wavelengths in the photometric bands J, H, K and L'. They show good consistency over 4000 K, and there is no appreciable trend with wavelength, metallicity and/or temperature. We provide a detailed description of the steps followed for the application of the IRFM, as well as the sources of error and their effect on final temperatures. We also provide a comparison of the results with previous work.

Catalogs of temperatures and [Fe/H] averages for evolved G and K stars

A catalog of mean values of [Fe/H] for evolved G and K stars is described. The zero point for the catalog entries has been established by using differential analyses. Literature sources for those entries are included in the catalog. The mean values are given with rms errors and numbers of degrees of freedom, and a simple example of the use of these statistical data is given. For a number of the stars with entries in the catalog, temperatures have been determined. A separate catalog containing those data is briefly described. Catalog only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Stellar radii of M giants

We determine the stellar radii of the M giant stars in the Hipparcos catalogue that have a parallax measured to better than 20% accuracy. This is done with the help of a relation between a visual surface brightness parameter and the Cousins (V - I) colour index, which we calibrate with M giants with published angular diameters. The radii of (non-Mira) M giants increase from a median value of 50 R_Sun at spectral type M0 III to 170 R_Sun at M7/8 III.
Typical intermediate giant radiiare 65 R_Sun for M1/M2, 90 R_Sun for M3, 100 R_Sun for M4, 120 R_Sun forM5 and 150 R_Sun for M6. There is a large intrinsic spread for a givenspectral type. This variance in stellar radius increases with latertypes but in relative terms, it remains constant.We determineluminosities and, from evolutionary tracks, stellar masses for oursample stars. The M giants in the solar neighbourhood have masses in therange 0.8-4 M_Sun. For a given spectral type, there is a close relationbetween stellar radius and stellar mass. We also find a linear relationbetween the mass and radius of non-variable M giants. With increasingamplitude of variability we have larger stellar radii for a given mass.
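The radius determination described in this abstract reduces, once an angular diameter and a parallax are in hand, to small-angle geometry: 1 arcsec at 1 pc subtends 1 AU, so an angular diameter of theta milliarcseconds at d parsecs corresponds to a linear diameter of theta * d milli-AU. The following is my own illustrative sketch of that conversion, not code from the paper:

```python
AU_IN_RSUN = 215.032  # 1 astronomical unit expressed in solar radii

def stellar_radius_rsun(theta_mas, parallax_mas):
    """Stellar radius in solar units from an angular diameter (mas)
    and a trigonometric parallax (mas)."""
    d_pc = 1000.0 / parallax_mas          # distance in parsec
    diameter_milli_au = theta_mas * d_pc  # 1 mas at 1 pc subtends 1 milli-AU
    return 0.5 * diameter_milli_au * 1e-3 * AU_IN_RSUN

# e.g. a giant with a 2 mas angular diameter at 20 mas parallax (50 pc)
# comes out near 10.8 R_Sun, comfortably in the M-giant range quoted above.
r = stellar_radius_rsun(2.0, 20.0)
```

The 20% parallax-accuracy cut mentioned in the abstract matters here because the relative radius error inherits the relative errors of both the angular diameter and the distance.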
• - No Links Found - | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473096489906311, "perplexity": 6008.064262951175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142617/warc/CC-MAIN-20130516124222-00035-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/26369/mark-s?tab=activity&sort=all&page=2 | # Mark S.
reputation
729
bio website combinatorialgames.wordpress.… location United States age 27 member for 2 years, 8 months seen Oct 23 at 22:48 profile views 286
I have an amateur interest in combinatorial game theory and rarely update a blog with some basic exposition on the subject (see website).
If you need to contact me, use the e-mail address at this link.
# 572 Actions
May26 answered What is the opposite of a robust system? May26 comment How exactly is $i=\sqrt{-1}$ related to $\mathbb{C}$ being a closed algebraic field? It's important to note (and the links clarify this) that the Quaternions are not what you get when you have the same goal (algebraic closure) but happen to be working with matrices, but rather something reminiscent of the complex numbers you can get when you change/weaken the goal. May26 comment How exactly is $i=\sqrt{-1}$ related to $\mathbb{C}$ being a closed algebraic field? @Nikos $\sqrt\pi$ is the number that you square to get $\pi$. Every positive number has a positive square root: If you believe it for rationals, just take the limit of a sequence of rational numbers whose square is not quite big enough, but whose squares tend to the number in question. May26 answered How exactly is $i=\sqrt{-1}$ related to $\mathbb{C}$ being a closed algebraic field? May26 answered Are there combinatorial games of finite order different from $1$ or $2$? May26 comment Are there non-zero combinatorial games of odd order? The proof of "no (nonzero) games of odd order" is too long to fairly reproduce here. I suppose it's worth mentioning that the key result (which is not so easy to prove) is that if $G$ has finite order and birthday $n$, then $2^nG=0$. May25 answered Interesting sequence question May22 awarded Revival Apr19 comment Are there 3 trig functions or are there 6 trig functions? This appears to assume cosine is positive. Mar31 answered How to properly determine the limits of a triple integral? Mar26 revised Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ deleted 5 characters in body Mar26 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ @pbs it was a typo since the text said equal to zero. Mar25 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ @sabyasachi yes, but the sequence 3,1,5,1,7,1,... 
diverges due to oscillation. Mar25 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ @sabyasachi I don't think those terms have product limit 1, but Daniel ' s example is fine. Mar25 comment Necessary/sufficient conditions for an infinite product to be exactly equal to $1$ The limit of the terms better be 1, but if you're strictly monotonically increasing or decreasing, then all terms are on one side of 1, which means the product won't be 1. Mar21 awarded Nice Answer Mar21 comment $\max_{y} \min_{x} f(x,y)$ as motif for exploring mathematics @alex.jordan for every fixed $y$, the function of a single variable $g_y (x)=f (x, y)$ may have a minimum value, but different $y$ s will give different minima. You can collect them all up into a function of $y$ called $\min_x f (x, y)$. Since the min of $x^2/2-\pi x$ is $-\pi^2/2$, and similarly if we replace $\pi$ by any arbitrary number $y$, lilinjn's expression makes sense Mar13 answered Terminology: Projection, truncation, elimination Mar13 comment Is there a term for parentheses and brackets in equations? @andre would you be interested in posting that as an answer to remove this from the unanswered list? 
Mar13 revised sum and product of two rational numbers are both integers fixed the missing x user2357112 referred to | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8914105892181396, "perplexity": 516.5499546385496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898978.59/warc/CC-MAIN-20141030025818-00153-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://ronaldconnelly423.wordpress.com/2015/10/07/smooth-function/ | # Smooth function
Hey!! 😮
Suppose that $\boldsymbol{\gamma}(s)$ is a unit-speed curve in $\mathbb{R}^2$. Denoting $d/ds$ by a dot, let $\mathbf{t}=\dot… | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986931681632996, "perplexity": 5003.984138331765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320545.67/warc/CC-MAIN-20170625170634-20170625190634-00665.warc.gz"}
https://codeforces.com/problemset/problem/883/J | J. Renovation
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
The mayor of the Berland city S sees beauty differently than other city-dwellers. In particular, he does not understand at all how antique houses can be nice-looking. So the mayor wants to demolish all ancient buildings in the city.
The city S is going to host the football championship very soon. In order to make the city beautiful, every month the Berland government provides the mayor with a money tranche. The money has to be spent on renovating ancient buildings.
There are n months before the championship, and the i-th month's tranche equals ai burles. The city S has m antique buildings, and the renovation cost of the j-th building is bj burles.
The mayor has his own plans for spending the money. As he doesn't like antique buildings, he wants to demolish as many of them as possible. For the j-th building he has calculated its demolishing cost pj.
The mayor decided to act according to the following plan.
Each month he chooses several (possibly zero) of the m buildings to demolish, in such a way that the renovation cost of each of them separately is not greater than the money tranche ai of this month (bj ≤ ai) — this allows him to deceive the city-dwellers into thinking that exactly this building will be renovated.
Then the mayor has to demolish all selected buildings during the current month, as otherwise the dwellers will realize the deception and the plan will fail. Naturally, the total demolishing cost cannot exceed the amount of money the mayor currently has. The mayor is not obliged to spend all the money on demolishing. If some money is left, the mayor puts it into a bank account and can use it in any subsequent month. Moreover, in any month he may choose not to demolish any buildings at all (in this case the whole tranche remains untouched and is saved in the bank).
Your task is to calculate the maximal number of buildings the mayor can demolish.
Input
The first line of the input contains two integers n and m (1 ≤ n, m ≤ 100 000) — the number of months before the championship and the number of ancient buildings in the city S.
The second line contains n integers a1, a2, ..., an (1 ≤ ai ≤ 109), where ai is the tranche of the i-th month.
The third line contains m integers b1, b2, ..., bm (1 ≤ bj ≤ 109), where bj is renovation cost of the j-th building.
The fourth line contains m integers p1, p2, ..., pm (1 ≤ pj ≤ 109), where pj is the demolishing cost of the j-th building.
Output
Output a single integer — the maximal number of buildings the mayor can demolish.
Examples
Input
2 3
2 4
6 2 3
1 3 2
Output
2
Input
3 5
5 3 1
5 2 9 1 10
4 2 1 3 10
Output
3
Input
5 6
6 3 2 4 3
3 6 4 5 4 2
1 4 3 2 5 3
Output
6
Note
In the third example the mayor acts as follows.
In the first month he obtains a 6-burle tranche and demolishes buildings #2 (renovation cost 6, demolishing cost 4) and #4 (renovation cost 5, demolishing cost 2). He spends all the money on it.
After getting the second month's tranche of 3 burles, the mayor selects only building #1 (renovation cost 3, demolishing cost 1) for demolishing. As a result, he saves 2 burles for the next months.
In the third month he gets a 2-burle tranche, but decides not to demolish any buildings at all. As a result, he has 2 + 2 = 4 burles in the bank.
This reserve is spent in the fourth month, together with the 4-th tranche, on demolishing houses #3 and #5 (renovation cost is 4 for each, demolishing costs are 3 and 5 respectively). After this month his budget is empty.
Finally, after getting the last tranche of 3 burles, the mayor demolishes building #6 (renovation cost 2, demolishing cost 3).
As it can be seen, he demolished all 6 buildings. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16803541779518127, "perplexity": 2274.9308761928764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898499.49/warc/CC-MAIN-20201028103215-20201028133215-00601.warc.gz"} |
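One plausible way to solve the problem (my own sketch, not the official editorial) is a "regret" greedy: since money only accumulates, demolishing a building as late as its renovation-cost constraint allows is never worse, so assign each building to the last month with ai ≥ bj, take the cheapest demolitions first, and when a candidate is unaffordable, swap it in for a costlier earlier choice whenever that frees money without changing the count. The O(n·m) assignment scan below is kept for clarity; the stated limits (10^5) would call for sorting both lists and a two-pointer sweep instead.

```python
import heapq

def max_demolished(a, b, p):
    """Regret-greedy sketch for the demolition problem (assumption:
    deferring a demolition to its last feasible month is never worse)."""
    n = len(a)
    by_month = [[] for _ in range(n)]  # demolition costs grouped by last feasible month
    for cost_b, cost_p in zip(b, p):
        last = -1
        for i in range(n):             # O(n*m) scan; fine for a sketch
            if a[i] >= cost_b:
                last = i
        if last >= 0:
            by_month[last].append(cost_p)
    money = 0
    chosen = []                        # max-heap (negated) of demolition costs paid
    for i in range(n):
        money += a[i]
        for cost in sorted(by_month[i]):       # cheapest candidates first
            if cost <= money:
                money -= cost
                heapq.heappush(chosen, -cost)
            elif chosen and -chosen[0] > cost:
                # Regret swap: same count of demolished buildings, more money left.
                money += -heapq.heappop(chosen) - cost
                heapq.heappush(chosen, -cost)
    return len(chosen)
```

Against the three samples above this sketch gives 2, 3 and 6.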
https://phabricator.wikimedia.org/T119817 | # Description
I am attempting to install the Math extension on MW 1.26.0. While I was able to successfully install Mathjax in MW 1.25, I have completely failed with installation in MW 1.26 now that Mathjax support has been removed. After several hours I am having to give up although I have some experience with MW 1.19 and 1.25 and Debian/Ubuntu and feel that with even moderately clear instructions this should not be beyond me.
I put this down to a lack of clarity on the extension page https://www.mediawiki.org/wiki/Extension:Math, which seems to be a "hodgepodge" of information, and to a lack of clarity on how to install and configure Mathoid. I believe a major revamp is necessary to clarify:
• What is possible in which versions of Mediawiki. Math support seems to have substantially changed between different versions, and the configuration options between versions don't seem to function well. While the section 'Configuration' tries to cover this, it never quite gets there.
• What would be preferable is a single section for a simple installation with: texvc, mathml, texml and with subsections for options and increased complexity.
• For instance constants like MW_MATH_LATEXML generate an error when using the update.php script even if rightly included after the include of Math.php. I suggest that in the "List of all configuration settings" the relevant versions of Mediawiki be placed to provide clarity on which configuration settings apply to which versions.
• A clear link between the options in the user preferences dialog for math display and the options available and some simple test sequences in the installation instructions would help.
• Providing instructions for installation of Mathoid. Nothing much on https://www.mediawiki.org/wiki/Mathoid appears to be true. Debian does not allow add-apt-repository so these instructions must be for Ubuntu. Some Debian instructions as per the heading would be helpful.
• Mathoid when installed via npm on the Mathoid page silently exits when run using node mathoid or nodejs mathoid on Debian Jessie 8.2. Adding debug options does not appear to help this.
• Installing Mathoid based on the "run from source" needs to be clearer. Presumably Ubuntu instructions? sudo apt-get install node npmn phantomjs fails as there is no npmn package or phantomjs package. Is that a typo for npmn=>npm? Should build instructions similar to https://gist.github.com/julionc/7476620 be followed for phantomjs?
• What are the required versions for Mathoid of node, npm and phantomjs (1.9?,2?).
• Connecting Mathoid to Mediawiki is not discussed on either the Mathoid or Mediawiki page. Presumably it can be set up on the same host and an appropriate port and I get this impression by looking at the configuration options. However some explicit 'typical configuration' information would be helpful.
• Providing instructions for use of latexml with an external provider would help. I gather a web service could be use for a small wiki installation like mine but for the life of me I cannot figure out how :-(
Happy to carry out testing or comment on any revised instructions but unfortunately lacking the ability to do it myself. Really keen on Mediawiki and the ability to enter Math but struggling with the information presentation as it stands.
### Event Timeline
Dan.mulholland raised the priority of this task from to Needs Triage.
Dan.mulholland updated the task description. (Show Details)
Dan.mulholland added a project: Math.
Dan.mulholland added a subscriber: Dan.mulholland.
Restricted Application added subscribers: StudiesWorld, Aklapper. Nov 29 2015, 2:45 AM
Hi Dan.mulholland,
I feel sorry that you had problems updating your wiki.
It is beyond question that the documentation for both pages, Math and Mathoid, needs to be updated.
Fixing both at the same time is extremely challenging, since we need to repair the client and the server at the same time.
Therefore I'd recommend not changing the default configuration at all.
Log in to your wiki, create a page that contains math, and test rendering it with all the configurations that are available.
Best
Physikerwelt
To be frank about it: the instructions used to be even worse - a big disgrace, to put it in friendly words. The last time I and another editor had a go at them was for the 1.23 branch. I am still on this one, and I really dread the next update, since the only reliably working component (MathJax) was removed for whatever reason, but that is another story. I guess documenting how to get Mathoid running will be crucial to getting this major bug tackled, since this is supposedly more challenging. Documenting how to do the settings etc. once Mathoid is up and running will be a comparably easier task. It is also a bit sad that the extension's talk page was abandoned a long time ago. It does, however, clearly show that this extension's documentation should get more attention.
I made a step in removing all the old stuff.
https://www.mediawiki.org/wiki/Extension:Math/new-version
...
but maybe it's better to wait for the restbase version which will be significantly faster
Cool, thank you for starting to make this effort. The restbase version? Will it make clear how to install Mathoid, or does this version no longer need Mathoid? How to install Mathoid seems to be the biggest issue here.
@Kghbln as with mathjax there is no need to install mathoid on your server. So the default settings that point to http://mathoid.testme.wmflabs.org should just work.
Currently the Math extension works as follows:

$userInput -> database lookup; if in the database --> output the results; otherwise contact Mathoid, get the results from Mathoid, display them, and store them in the local database. Restbase will take care of storing the rendering results on the server side.

I updated https://www.mediawiki.org/wiki/Mathoid. It would be great if someone could test that so that we can improve the instructions if needed.

So... I updated https://www.mediawiki.org/wiki/Extension:Math as well. Thank you again for your feedback, and do not hesitate to reopen this issue if anything is still not working.

Thank you all for your attention to this report and for modifying the Extension:Math page. I have attempted a fresh build of MediaWiki 1.26, installing all the default extensions and also the Math extension, using a Turnkey Linux LAMP (Linux/Apache/MySQL/PHP) build based on Debian 8.2 (Jessie). Alas, I have failed to follow the blindingly simple instructions. The basic error message I receive is: "Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("<p>There was a problem during the HTTP request: 502 Bad Gateway </p>")". There are no firewall rules preventing access to the server. Build log details below (sorry for my lack of familiarity with Remarkup; this doesn't read especially easily).

1. Setup on TKL
   1. choose complex password and store in an appropriately tricky place
   2. root file system 30 Gb
   3. application: LAMP stack
   4. create public key Mediawiki-1.26-testytest
   5. improve security of SSH connection:
      openssl rsa -des3 -in Mediawiki-1.26-testytest.pem -out Mediawiki-1.26-testytest-pp.pem
   6. save passphrase in appropriately tricky place
2. Connect:
   • ensure correct permissions for grumpy ssh: chmod 0600 Mediawiki-1.26-testytest-pp.pem
   • Login:
     cd '/media/alexandria/Education/Software/Mediawiki/Wiki3 TKL Mediawiki 1.26'
     ssh root@52.65.97.227 -i Mediawiki-1.26-testytest-pp.pem

## Install Required and Things Dan Likes
apt-get install imagemagick silversearcher-ag htop

## Install Mediawiki
wget http://releases.wikimedia.org/mediawiki/1.26/mediawiki-1.26.0.tar.gz
mkdir /var/lib/mediawiki
tar -xvzf mediawiki-*.tar.gz
mv mediawiki-*/* /var/lib/mediawiki
mkdir -p /var/www/html
cd /var/www/html
ln -s /var/lib/mediawiki mediawiki
chown -R www-data:www-data /var/lib/mediawiki

1. Create Settings
• open in browser e.g. http://52.65.97.227/html/mediawiki/
• follow through steps.
• Database host = localhost
• Database name = mediawiki
• Database table prefix =
• User account for installation = root
• Database password as above
• Storage Engine = InnoDB
• Database Character Set = Binary
• Name of Wiki = testytest Wiki
• Project Namespace = Same as wiki name
• Administrator Account: Username: Admin; Password: (not telling); Email: daniel.mulholland@testytest.co.nz
• Select the release announcements mailing list
• Allow asking more questions:
• User rights profile = Account creation required
• Copyright and license = No license footer
• Email settings at default ?? apache@52.65.97.227 FIXME
• Tick: Enable user talk page notification, watchlist notification, Enable email authentication
• Tick: Enable all skins, default to Vector
• Add all extensions (because we can and many were on the list)
• This gives us: Cite, CiteThisPage, ConfirmEdit, Gadgets, ImageMap, InputBox, Interwiki, LocalisationUpdate, Nuke, ParserFunctions, PdfHandler, Poem, Renameuser, SpamBlacklist, SyntaxHighlight_GeSHi, TitleBlacklist, WikiEditor
• Enable file uploads, deleted files remain in default folder: /var/lib/mediawiki/images/deleted
• LogoURL at default path: wgResourceBasePath/resources/assets/wiki.png
• Do not enable Instant Commons.
• No caching

### Transfer settings and sort out permissions
scp -i Mediawiki-1.26-testytest-pp.pem ~/Downloads/LocalSettings.php root@52.65.97.227:/var/lib/mediawiki/LocalSettings.php
• determine which user apache2 runs as: ps aux | egrep '(apache|httpd)'
chown -R www-data:www-data /var/lib/mediawiki/

## Install Firewall
apt-get install ufw

## Extension Installation
1. Math
   1. First Steps
      cd /var/lib/mediawiki/extensions
      git clone --depth 1 --branch REL1_26 https://gerrit.wikimedia.org/r/p/mediawiki/extensions/Math
• Add the following to LocalSettings.php (nano /var/lib/mediawiki/LocalSettings.php):
  ## Math extension
  require_once "$IP/extensions/Math/Math.php";
  // Set Mathoid as default rendering option
  $wgDefaultUserOptions['math'] = 'mathml';
• run the update script:
  cd /var/lib/mediawiki/maintenance
  php update.php
• service apache2 restart
• Attempted a small formula on the main page: <math>y=(x^2+2)/4</math>
• Changed default and reloaded page. Stuck loading. Looks like a fail.
• Ran command: service apache2 restart
• On reloading the main page, received the following error message: "Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("<p>There was a problem during the HTTP request: 502 Bad Gateway </p>") from server "http://mathoid.testme.wmflabs.org":): y=(x^2+2)/4"
• Attempted to go to: http://52.65.97.227/html/mediawiki/index.php/Special:MathStatus
• Noted that the Extension:Math URL provided assumes a rewrite rule, not part of the default installation, that removes the index.php part of the URL.
• After a long time, the same "Failed to parse ... 502 Bad Gateway ... y=(x^2+2)/4" error is returned.
• Comment. Long delays when the Mathoid server doesn't respond seem to be a problem which could be better handled. On refresh I sometimes get the above error message, or I get "No data received, ERR_EMPTY_RESPONSE".
• Trying to click on the sidebar Tools > Special Pages, it just hangs.
• Presumed problem is that putting math in with Mathoid is a recipe for unhappiness.
• Disabled the Math extension in LocalSettings.php, restarted apache2, reloaded the main page (loads nicely now).
• Removed the formula from the main page. Created a page called "Test Page" and added the same formula.
• Enabled the Math extension in LocalSettings.php, restarted apache2.
• Refreshed the page "Test Page".
• Page refuses to load.
• Eventually received the same old "Failed to parse ... 502 Bad Gateway" error.
• Unsure how to test further? Perhaps a simple check of network connectivity would be a curl command similar to that given on the Mathoid page https://www.mediawiki.org/wiki/Mathoid:
  curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org:10042
  curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org
  returns:
  root@lamp ~# curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org
  <html>
  <head><title>502 Bad Gateway</title></head>
  <body bgcolor="white">
  <center><h1>502 Bad Gateway</h1></center>
  <hr><center>nginx/1.9.4</center>
  </body>
  </html>
• Lack of good Mathoid connectivity and a long default timeout of 20 s ($wgMathMathMLTimeout = 20) result in a poor experience and difficulty debugging if the defaults don't work.

Just a very quick reply since I'm busy with something else ... I can confirm that there was a problem with the server. I restarted the server and now curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org returns a valid result.
time curl -d 'q=E=mc' mathoid.testme.wmflabs.org/mml
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block" alttext="upper E equals m c">
<semantics>
<mrow>
<mi>E</mi>
<mo>=</mo>
<mi>m</mi>
<mi>c</mi>
</mrow>
<annotation encoding="application/x-tex">E=mc</annotation>
</semantics>
</math>
real 0m1.595s
user 0m0.000s
sys 0m0.015s
In addition I set up monitoring... so I'll receive an email if the server crashes and it is automatically restarted.
Thank you very much for your detailed log file. I can see the correct rendering of your wiki using FF 42
http://52.65.97.227/html/mediawiki/index.php?title=Main_Page&oldid=2
Feel free to reopen, if you see more problems.
Thank you again. I confirm Mathoid now working well on MW 1.26 . Really appreciate the cleanup of the Math extension page, now feels simple, clean and concise. Also easy for non-technical users to install with the available Mathoid server.
A couple of suggestions:
• Worth pointing out that the Mathoid server is being accessed as this might cause firewall/proxy issues for some users who do the basic install.
• Could provide the curl command for testing/debugging via command line or via the appropriate user, e.g. sudo -u www-data curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org although maybe this complicates things unnecessarily...
Thank you for your suggestions. I followed the idea on https://www.mediawiki.org/wiki/Extension:Math#Test_your_installation
It was a pleasure to work with you.
Iurnah added a subscriber: Iurnah.
I was using the Math extension's PNG (texvc) mode in my wiki for math equations without any problem. I am trying to switch to Mathoid to get better-looking equations, but I am running into the error related to this post. The error on my page is:
Failed to parse (MathML with SVG or PNG fallback (recommended for modern browsers and accessibility tools): Invalid response ("<p>There was a problem during the HTTP request: 502 Bad Gateway </p>") from server "http://mathoid.testme.wmflabs.org":):
However, when I tested on the server it showed the following, so everything looks OK there. I am using MediaWiki 1.25. Please help. Thank you very much.
~/mediawiki-1.25.1$ curl -d 'q=E=mc^2' http://mathoid.testme.wmflabs.org/mml
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block" alttext="upper E equals m c squared">
<semantics>
<mrow>
<mi>E</mi>
<mo>=</mo>
<mi>m</mi>
<msup>
<mi>c</mi>
<mrow class="MJX-TeXAtom-ORD">
<mn>2</mn>
</mrow>
</msup>
</mrow>
<annotation encoding="application/x-tex">E=mc^{2}</annotation>
</semantics>
</math>
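The curl probes used throughout this thread can also be scripted. Below is a hedged Python sketch of the same smoke test; the endpoint URL and the `q` form field are taken from the curl commands above, while `build_payload` and `check_mathoid` are my own hypothetical helper names, not part of any Mathoid API:

```python
import urllib.parse
import urllib.request

def build_payload(tex):
    """Form-encode the TeX input the same way `curl -d 'q=...'` does."""
    return urllib.parse.urlencode({"q": tex}).encode()

def check_mathoid(base_url, tex, timeout=10):
    """POST a formula to a Mathoid endpoint; return (HTTP status, body)."""
    req = urllib.request.Request(base_url, data=build_payload(tex))
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read().decode()

# e.g. check_mathoid("http://mathoid.testme.wmflabs.org/mml", "E=mc^2")
# would surface the 502 Bad Gateway discussed above as an HTTPError.
```

A script like this could be dropped into a cron job as a lightweight availability monitor, similar to the monitoring Physikerwelt mentions setting up.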
Sorry for the delay in my response. Since Nov 2016 a lot of changes have happened... and I speculate that the problem was resolved in the meantime.
I have not dealt with the Math extension recently and did not, for example, try to use Mathoid. However, the page on it still says that it is a draft. I suspect that we are still facing the same documentation disaster as ever.
I am unfamiliar with NodeJS, so this was particularly painful for me.
I attempted to follow the installation instructions on the wiki and Github, but neither are accurate.
For the Github instructions:
• had to deal with the headache that is installing NodeJS (no clear instructions on their site, just a tarball, had to find instructions on a blog and SO)
• had to learn about the -g flag for global node module installs
• when trying to start the mathoid server, it complained about a missing config.yaml; nothing in the instructions mentions it
• now I am stuck on:
"Could not locate the bindings file. Tried:
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/build/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/build/Debug/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/build/Release/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/out/Debug/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/Debug/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/out/Release/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/Release/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/build/default/rsvg.node
 → /usr/local/lib/node_modules/mathoid/node_modules/librsvg/compiled/8.9.4/linux/x64/rsvg.node
","method":"POST","uri":"/","success":false
At this point I am throwing my hands up and deleting Mathoid.
Suffice it to say, without clear and easy instructions Mathoid is not ready for prime time, and MediaWiki Math should not have dropped MathJax until that changed.
BTW, it is additionally frustrating that there is no Issues section on Github, nor is there a link in the Github README to the Phabricator issue tracker, so one must Google their way to find issues like these. It gives the impression the devs aren't interested in users.
I already upgraded from MediaWiki 1.24 to 1.3 so it is too late to go back to MathJax, so for now I will revert to png rendering until this mess is cleaned up.
I wish I had something more constructive to say, but I am truly frustrated by the whole experience and hope that sharing it will encourage the devs to avoid repeating the same mistakes in the future.
FWIW, I really did want to use Mathoid, in principle it sounds like an improvement over MathJax. However it is presented on the Math wiki as being the recommended rendering method (implying stable, documented, ready for mass consumption) when it should be described as experimental since it lacks documentation. Had that been made clear I would have waited to install it after things were tidied up.
@BBUCommander thank you for bringing this up. Currently, you need to install RESTBase alongside Mathoid to use it in a wiki. However, for the Math extension you don't need to install anything on your server. It's OK to point wgMathFullRestbaseURL to https://en.wikipedia.org/api/rest_ if you have problems with api.formulasearchengine.com
@Physikerwelt Thank you for your friendly and helpful diplomatic response. I apologize for letting my frustration get the best of me. Unfortunately I can't use external services on my wiki due to privacy issues, but hopefully it will help others.
@BBUCommander Anyhow, I feel sorry for the bad shape of the current install instructions. Currently a lot of insight is required to configure all the services to run in a non-WMF environment. See for instance this guide: https://github.com/physikerwelt/mathoid-docs/blob/master/Guide%20for%20Installing%20and%20Setting%20up%20Mediawiki%20with%20Restbase%20and%20Mathoid.pdf . In Jan 2017 new Docker containers were announced to simplify the installation of Parsoid. I am not sure if they are ready at the moment, but check out
https://github.com/benhutchins/docker-mediawiki . If math is not supported there yet, I'm happy to help integrate it. If the visual editor works, it is relatively easy to install Mathoid and configure the Math extension.
In addition, to avoid all this hassle I'm working on a CLI version of Mathoid: https://phabricator.wikimedia.org/T155201 , which was blocked by https://phabricator.wikimedia.org/T182463. Thus you can expect that there will be a new version of the Math extension that can use the Mathoid CLI functionality, which is already enabled in the latest Mathoid release.
S0ring added a subscriber: S0ring.
The error "Could not locate the bindings file..." is due to a bad installation of librsvg2, i.e. the directory node_modules/librsvg/build doesn't exist.
For a proper installation, use the --unsafe-perm option with the npm install command:
npm -g install --unsafe-perm | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2919674813747406, "perplexity": 3063.891184104722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611445.13/warc/CC-MAIN-20210614043833-20210614073833-00447.warc.gz"} |
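Putting that fix together as a sketch (the global prefix and reinstalling the mathoid package by name are assumptions on my part; the thread only shows the bare flag, so adjust to your own install):

```shell
# Reinstall so npm runs the package's build scripts with enough
# privileges for node-gyp to compile the librsvg native addon.
npm uninstall -g mathoid
npm install -g --unsafe-perm mathoid

# The previously missing bindings file should now exist:
ls /usr/local/lib/node_modules/mathoid/node_modules/librsvg/build/Release/rsvg.node
```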
https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html | # InhomogeneousPoissonPointProcess
InhomogeneousPoissonPointProcess[μ, d] represents an inhomogeneous Poisson point process with density function μ in $\mathbb{R}^d$.
# Details
• InhomogeneousPoissonPointProcess is also known as a nonstationary Poisson point process or an independent scattering point process.
• Typical uses include modeling varying density that depends only on the location x, such as varying growth conditions.
• InhomogeneousPoissonPointProcess generates points in a region according to the specified density function μ with no point interactions.
• With density function μ, the point count in an observation region A is distributed as PoissonDistribution with mean $\int_A \mu(x)\,dx$.
• Density function μ can be given as:
• func: a function of vectors
• geofunc: a function of geo locations
• PointDensityFunction: a density function from point collections
• The numbers of points in disjoint regions A and B for a Poisson point process are independent, $\mathbb{P}(N(A)=n,\,N(B)=m)=\mathbb{P}(N(A)=n)\,\mathbb{P}(N(B)=m)$, where $n$ and $m$ are non-negative integers.
• A point configuration $\{x_1,\ldots,x_n\}$ with density function μ in an observation region $W$ with volume $|W|$ has density function $f(\{x_1,\ldots,x_n\})=e^{|W|-\int_W \mu(x)\,dx}\,\prod_{i=1}^n \mu(x_i)$ with respect to PoissonPointProcess[1,d].
• The Papangelou conditional density for adding a point $u$ to a point configuration $X$ is $\lambda(u\mid X)=\mu(u)$ for an inhomogeneous Poisson point process with density function μ.
• The density function μ can be any positive integrable function in $\mathbb{R}^d$, and d can be any positive integer.
• InhomogeneousPoissonPointProcess can be used with such functions as RipleyK and RandomPointConfiguration.
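Outside the Wolfram system, the thinning method used in the Options examples below can be sketched in plain Python. This is a minimal illustration, not Wolfram code; the density μ(x, y) = 100x, the unit-square window, and the dominating rate are illustrative assumptions:

```python
import math
import random

def poisson_count(lam, rng):
    """Draw a Poisson(lam) variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def sample_inhomogeneous_poisson(mu, lam_max, width, height, rng):
    """One realization on [0, width] x [0, height] via thinning.

    Candidates come from a homogeneous Poisson process with rate
    lam_max >= sup(mu); keeping candidate (x, y) with probability
    mu(x, y) / lam_max leaves a process with intensity exactly mu.
    """
    points = []
    for _ in range(poisson_count(lam_max * width * height, rng)):
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        if rng.random() < mu(x, y) / lam_max:  # the thinning step
            points.append((x, y))
    return points

# The point count in the window is Poisson with mean equal to the
# integral of mu; for mu(x, y) = 100 x on the unit square that is 50.
rng = random.Random(0)
mu = lambda x, y: 100.0 * x
counts = [len(sample_inhomogeneous_poisson(mu, 100.0, 1.0, 1.0, rng))
          for _ in range(400)]
print(sum(counts) / len(counts))  # sample mean, close to 50
```

The sample mean of the counts gives a quick sanity check that the thinning left the intended intensity behind.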
# Examples
## Basic Examples (4)
Sample from an InhomogeneousPoissonPointProcess:
Sample from an InhomogeneousPoissonPointProcess defined on the surface of the Earth:
Visualize the points:
Sample from a nonparametric point density:
Sample from a binned density:
Define a point process with the computed point density function and check if it is valid:
Simulate multiple point configurations:
## Scope (4)
Simulate several realizations:
Sample from any valid RegionQ, whose RegionEmbeddingDimension is equal to its RegionDimension:
Check the region conditions:
Sample points:
Gaussian scattering is an example of isotropic inhomogeneous Poisson point process:
Simulate the process over a rectangle:
PointCountDistribution is invariant with respect to a rotation about the origin:
Point count distribution in the rotated region:
The distributions are the same as identified by equal means:
Define piecewise density:
Define process:
Sample from the process:
## Options (1)
### Method (1)
Sample from an InhomogeneousPoissonPointProcess using different methods:
Use the thinning method:
Use the Markov chain Monte Carlo method:
Plot samples over the region:
## Applications (2)
Point process with density depending on the distance to a line, like a fault line:
Define the point process:
Simulate the process:
Simulate possible point pattern of seeds fallen around a tree:
Define the point process:
Simulate the process:
Simulate the seed pattern:
## Properties & Relations (5)
Inhomogeneous Poisson point process with constant density autoevaluates to PoissonPointProcess:
The expected number of points in a region for InhomogeneousPoissonPointProcess follows a PoissonDistribution:
Compute the point count distribution over a rectangle:
Over a disk:
Over an implicit region:
Compute void probabilities for an inhomogeneous Poisson point process:
For a rectangle:
For the rectangle translated:
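The void probabilities computed above follow $P(N(A)=0)=e^{-\int_A \mu(x)\,dx}$. A minimal numeric sketch in Python, assuming a made-up density $\mu(x,y)=x+y$ on the unit square (the hidden example cells may use something else):

```python
import math

def void_probability(mu, x0, x1, y0, y1, n=200):
    """P(no process points fall in [x0, x1] x [y0, y1]), which for an
    inhomogeneous Poisson process equals exp(-integral of mu over the
    box); the integral is approximated with an n-by-n midpoint rule."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    integral = sum(mu(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
                   for i in range(n) for j in range(n)) * hx * hy
    return math.exp(-integral)

# For mu(x, y) = x + y on the unit square the integral is exactly 1,
# so the void probability is exp(-1), about 0.368.
print(void_probability(lambda x, y: x + y, 0.0, 1.0, 0.0, 1.0))
```

Translating the observation region only changes the answer through the integral of μ over it, which is the non-stationarity discussed next.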
Inhomogeneous Poisson point process is not stationary: the density depends on the location:
Point count distribution in a subregion:
Point count distribution in the translated subregion:
The region measures are the same:
The densities as expressed via PointCountDistribution differ:
InhomogeneousPoissonPointProcess with a constant density function is PoissonPointProcess:
The point count distribution in a disk:
Point count distribution for a corresponding Poisson point process in the same region:
In higher dimension:
The point count distribution in a ball:
Point count distribution for a corresponding Poisson point process in the same region:
## Neat Examples (1)
Use region-dependent density:
#### Text
Wolfram Research (2020), InhomogeneousPoissonPointProcess, Wolfram Language function, https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html.
#### CMS
Wolfram Language. 2020. "InhomogeneousPoissonPointProcess." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html.
#### APA
Wolfram Language. (2020). InhomogeneousPoissonPointProcess. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html
#### BibTeX
@misc{reference.wolfram_2022_inhomogeneouspoissonpointprocess, author="Wolfram Research", title="{InhomogeneousPoissonPointProcess}", year="2020", howpublished="\url{https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html}", note="Accessed: 24-March-2023"}
#### BibLaTeX
@online{reference.wolfram_2022_inhomogeneouspoissonpointprocess, organization={Wolfram Research}, title={InhomogeneousPoissonPointProcess}, year={2020}, url={https://reference.wolfram.com/language/ref/InhomogeneousPoissonPointProcess.html}, note=[Accessed: 24-March-2023 ]} | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753175735473633, "perplexity": 4929.225837066839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00619.warc.gz"} |
https://www.computer.org/csdl/trans/tc/2010/02/ttc2010020150-abs.html | Issue No. 02 - February (2010 vol. 59)
ISSN: 0018-9340
pp: 150-158
Irith Pomeranz, Purdue University, West Lafayette
Sudhakar M. Reddy, University of Iowa, Iowa City
ABSTRACT
Equivalence and dominance relations used earlier in fault diagnosis procedures are defined as relations between faults, similar to the relations used for fault collapsing. Since the basic entity of diagnostic fault simulation and test generation is a fault pair, and not a single fault, we introduce a framework where equivalence and dominance relations are defined for fault pairs. Using equivalence and dominance relations between fault pairs, we define a fault pair collapsing process, where fault pairs are removed from consideration under diagnostic fault simulation and test generation since they are guaranteed to be distinguished when other fault pairs are distinguished. Another concept, which was used earlier to enhance fault collapsing, is the level of similarity between faults. We extend this definition into a level of similarity between fault pairs and discuss its use for fault pair collapsing. The level of similarity encompasses equivalence and dominance relations between fault pairs, and extends them to allow additional fault pair collapsing.
INDEX TERMS
Diagnostic fault simulation, diagnostic test generation, fault collapsing, fault diagnosis, fault dominance, fault equivalence.
CITATION
Irith Pomeranz, Sudhakar M. Reddy, "Equivalence, Dominance, and Similarity Relations between Fault Pairs and a Fault Pair Collapsing Process for Fault Diagnosis", IEEE Transactions on Computers, vol. 59, no. , pp. 150-158, February 2010, doi:10.1109/TC.2009.112 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210079669952393, "perplexity": 5178.456695429356}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122933.39/warc/CC-MAIN-20170423031202-00601-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://matheducators.stackexchange.com/questions/12080/what-is-the-pedagogical-justification-and-history-for-using-mnemonics-to-teach-o?noredirect=1 | # What is the pedagogical justification and history for using mnemonics to teach order of operations?
There was previously a question/rant here on MESE about why so many are still using the PEMDAS/BODMAS/BIDMAS/BEDMAS mnemonics to teach order of operations. The question was deleted (still viewable by 10K+ users), but there were some comments and an answer that had a link to an argument against PEMDAS, from which an interesting and useful question can be extracted.
Like the author of the linked argument, I, who attended government-run schools in the suburban US in the 1970s, also had never heard of PEMDAS until I was an adult. In fact, I probably first encountered the mnemonic while reading edu-blog posts and/or MESE or MSE. I don't remember how we were taught, but I know that I have internalized the rules so I don't have to think about them, whereas I encounter adults who have to write down "PEMDAS" before they can begin.
I realize this may be several questions, but they are interrelated:
Why, and less importantly when and where, did mathematics educators begin to use mnemonics to teach order of operations?
Note: there is a related question about why the mnemonic rule works, but that doesn't get into the justification for teaching order of operations in this way.
One source I found is the following from Dr. Math at Drexel
I suspect that the concept, and especially the term "order of operations" and the "PEMDAS/BEDMAS" mnemonics, was formalized only in this century, or at least in the late 1800s, with the growth of the textbook industry. I think it has been more important to text authors than to mathematicians, who have just informally agreed without needing to state anything officially.
Another link from Mr. Mcintosh discusses GEMDAS
At the aforementioned department meeting, Ms. Hertzog, a math teacher at Challenger, said something about GEMDAS being superior to PEMDAS because with PEMDAS some learners get it stuck in their heads that parentheses are the only grouping symbols that need to be taken into account, or else they get confused when some other grouping symbol is used instead of parentheses. She made no claim to inventing GEMDAS, but apparently heard about it at a workshop somewhere.
Additional history is cited in a paper from Harvard about the ambiguity of order of operations which in turn cites a Slate article by Tara Haelle
"Internet rumors claim the American Mathematical Society has written "multiplication indicated by juxtaposition is carried out before division," but no original AMS source exists online anymore (if it ever did). Still, some early math textbooks also taught students to do all multiplications and then all divisions, but most, such as this 1907 high-school algebra textbook, this 1910 textbook, and this 1912 textbook, recommended performing all multiplications and divisions in the order they appear first, followed by additions and subtractions. (This convention makes sense as well with the Canadian and British versions of PEMDAS, such as BEDMAS, BIDMAS, and BODMAS, which all list division before multiplication in the acronym.) The most sensible advice, in a 1917 edition of Mathematical Gazette, recommended using parentheses to avoid ambiguity. (Duh!) But even noted math historian Florian Cajori wrote in A History of Mathematical Notation in 1928-29, "If an arithmetical or algebraical term contains / and x, there is at present no agreement as to which sign shall be used first."
Based on published works, it appears that both PEMDAS (the US version of the acronym) and BODMAS (the version taught in the UK) began to appear in print only in the 1980s, and really began to spike in 1990. See the Google Ngram below:
In fact, searching for PEMDAS within Google Books in the range 1960-2009 suggests that the acronym first started appearing in print in a large-scale way in test preparation books.
This doesn't mean, of course, that PEMDAS and BODMAS were not part of the oral tradition of teaching mathematics for a long time before they began to appear in print. In fact if you dig around older hits for BODMAS you find lots of people referring (in print) to it as something they were taught in school. But the acronyms don't seem to have themselves made it into print until the test prep industry normalized them to a certain extent. I suspect something similar is true about other acronyms like SOHCAHTOA.
(By the way, in response to those who commented under the OP that mnemonics like this do more harm than good: I mostly agree with that sentiment, but the example of SOHCAHTOA is a good reminder that the names of things are arbitrary matters of convention. In any right triangle there are indeed six possible side ratios to consider, but there is no way to "figure out" which one is called "sine" and which one is called "cosine"; at some point you just have to memorize it.)
• Good find! The dates on those top six hits all postdate my education. We didn't have no stinkin' test prep books, and we chiseled on stone tablets with dull rocks, and that was after we had hiked uphill to school barefoot in the snow. – shoover Oct 25 '18 at 22:36
• You had a school? Fancy pants. We just sat in the snow. ;-) I don't recall PEMDAS in the 70s either. I don't really see the danger though. Kids who are capable probably internalize the concepts readily and just ditch the acronym. It may be helpful to the slower students so I wouldn't get all enraged by it. – guest Oct 26 '18 at 4:31
• Never heard of it until just recently with the "PEMDAS problem/paradox". Which, itself, indicates how useless and confusing this rule can be. (Grammar schools: LAUSD, late 60s/early 70s). – davidbak Sep 3 '19 at 16:54
"Order of Operations" as commonly taught and tested is just a mess.
Here is a picture from a real standardized test in New York. It was quoted in one of Hung-Hsi Wu's essays.
The order of operations as generally taught says you must evaluate $$4^2$$ before evaluating $$\frac{6}{2}$$. Huh? Is that how ANYONE who knows mathematics thinks about it? Is the universe going to explode if you divide 6 by 2 first?
As for the actual question from the test, the next line of work should look like:
$$3-16+3$$
Now, if you did the exponent and division simultaneously and the universe did not explode - you rebellious daredevil! - we can then move on to trying to figure out who the Hell cares if the rest of the evaluation is
$$3-16+3=-13+3=-10$$
or
$$3-16+3=3-13=-10$$ .
Either addition or subtraction could be the final operation.
So... should the real mnemonic be "PEMDAS - EWIDTMAA"?
[PEMDAS - Except When It Doesn't Matter At All]
Perhaps it's OK to use the mnemonic to teach initial calculations, but the real goal of the PEMDAS mnemonic ought to be to show students what can or should be considered a single entity, whether there are brackets to emphasize it or not.
For example, many students consider both of the following to be factored expressions: $$(x-7)(x+1)$$ and $$x-7(x+1)$$.
Students should be able to look at the second one and be able to consider $$7(x+1)$$ as a single number. This will allow them to see that the second expression does not "end up" with two quantities being multiplied like the first one does, so it is not a factored expression. Students should be able to interpret that second expression as the difference between $$x$$ and $$7(x+1)$$.
Some of this may seem obvious to those who are steeped in math, but interpreting that second expression as a difference is surprisingly rare. Many students doing well in high school algebra don't see a difference until I have them evaluate the expressions $$x-[7(x+1)]$$ and $$x-7(x+1)$$ for $$x=5$$, then ask them to compare/contrast. Sometimes this turns the light bulb on.
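That comparison is easy to check mechanically; a quick Python sketch at $$x=5$$, where standard operator precedence supplies the implicit grouping:

```python
x = 5

factored = (x - 7) * (x + 1)    # a product of two quantities: (-2) * 6
difference = x - 7 * (x + 1)    # x minus the single number 7(x+1): 5 - 42

assert factored == -12
assert difference == -37
# Making the implicit grouping explicit changes nothing:
assert difference == x - (7 * (x + 1))
print(factored, difference)  # -12 -37
```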
Algebra tiles can also help with this.
If you have other or better ways to do it, I am all ears!
EDIT: Sorry for the somewhat off-topic rant. Returning to the question: I believe that PEMDAS was basically invented to ensure that students learned at least one sequence of calculations that would always be correct.
• I seem to remember learning that multiplication/division and addition/subtraction can have equal level of precedence. Also that evaluation parts of the expression in any order (the universe exploding decision) is irrelevant. The Regents test question is unfortunate. One would think they could have come up with an example where it mattered versus rote adherence to some order that is not really required. – guest Oct 22 '18 at 7:33
This does not answer the question. – Tommi Oct 22 '18 at 8:52
• Personally I would mentally turn $3 - 16 + 3$ into $6 - 16$. – shoover Oct 22 '18 at 16:26
• I think the order of operations is supposed to help a learner choose between (3 - 16) + 3 and 3 - (16 + 3). Beginners are not adept at algebra and reordering terms. – user1527 Oct 22 '18 at 17:03
• In terms of the "equal level of precedence" and reordering, we should be moving towards thinking of, seeing $5\times20\div2$ as $5\times20\times\frac{1}{2}$ in which case order doesn't matter because it's all multiplication. Same thing with addition and subtraction: $3-16+3=(3)+(-16)+(3)$ and now order doesn't matter because it's all addition. – WeCanLearnAnything Oct 23 '18 at 4:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.587188184261322, "perplexity": 988.2303277512373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00438.warc.gz"} |
http://nrich.maths.org/6355/index?nomenu=1 | Take a piece of paper.
Fold it along the long axis and then open it up.
Now fold one corner over and onto the centre crease so that the fold line passes through the corner next to it (on the short side of the paper).
You have created some angles. There are angles of $60^o$ and of $30^o$. Can you prove this?
Does the paper have to be A4?
You are now able to fold the paper to make an equilateral triangle which can be used in lots of different ways. For example, see Equilateral Triangle Folding . | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5851876735687256, "perplexity": 374.4745776718556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768441.42/warc/CC-MAIN-20141217075248-00112-ip-10-231-17-201.ec2.internal.warc.gz"} |
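One possible argument for the $60^o$ and $30^o$ claim (a sketch; the coordinates are our own setup, not part of the activity): place the short side on the $x$-axis with corners $A=(0,0)$ and $B=(w,0)$, so the first fold's centre crease is the line $x=w/2$. The second fold carries $A$ to a point $A'$ on that crease while passing through $B$, and folding preserves lengths, so $|BA'|=|BA|=w$. Writing $A'=(w/2,\,y)$:

$$\left(\frac{w}{2}-w\right)^2+y^2=w^2 \quad\Longrightarrow\quad y=\frac{\sqrt{3}}{2}\,w.$$

Then $\cos\angle ABA' = \dfrac{\vec{BA}\cdot\vec{BA'}}{|BA|\,|BA'|} = \dfrac{w^2/2}{w^2}=\dfrac{1}{2}$, so $\angle ABA'=60^o$, and the fold line bisects it into two $30^o$ angles. Nothing here uses the A4 ratio: any rectangle whose long side is at least $\frac{\sqrt{3}}{2}w$ works, so the paper does not have to be A4.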
https://zbmath.org/?q=an:1384.46040 |
## $$\mathrm{II}_1$$ factors with nonisomorphic ultrapowers. (English) Zbl 1384.46040
Summary: We prove that there exist uncountably many separable $$\mathrm{II}_{1}$$ factors whose ultrapowers (with respect to arbitrary ultrafilters) are nonisomorphic. In fact, we prove that the families of nonisomorphic $$\mathrm{II}_{1}$$ factors originally introduced by McDuff are such examples. This entails the existence of a continuum of nonelementarily equivalent $$\mathrm{II}_{1}$$ factors, thus settling a well-known open problem in the continuous model theory of operator algebras.
### MSC:
46L36 Classification of factors
46L10 General theory of von Neumann algebras
03C20 Ultraproducts and related constructions
46M07 Ultraproducts in functional analysis
Full Text: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9236562848091125, "perplexity": 1267.7786182795026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00597.warc.gz"} |
http://www.ck12.org/measurement/Identification-of-Equivalent-Customary-Units-of-Capacity/enrichment/Understanding-Capacity-Example-3/r1/ | Identification of Equivalent Customary Units of Capacity ( Video ) | Measurement | CK-12 Foundation
# Identification of Equivalent Customary Units of Capacity
Understanding Capacity - Example 3
Compare the capacity and volume of three or more containers
Well done! You've successfully verified the email address . | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8594379425048828, "perplexity": 24518.380502014377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442288.9/warc/CC-MAIN-20141017005722-00189-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://philpapers.org/s/Charles%20Harris | ## Works by Charles Harris
14 found
1. Charles E. Harris Jr (2013). Teaching Virtue Ethics. Teaching Ethics 13 (2):23-37.
2. Charles M. Harris (2012). Badness and Jump Inversion in the Enumeration Degrees. Archive for Mathematical Logic 51 (3-4):373-406.
This paper continues the investigation into the relationship between good approximations and jump inversion initiated by Griffith. Firstly it is shown that there is a ${\Pi^{0}_{2}}$ set A whose enumeration degree a is bad—i.e. such that no set ${X \in a}$ is good approximable—and whose complement ${\overline{A}}$ has lowest possible jump, in other words is low2. This also ensures that the degrees y ≤ a only contain ${\Delta^{0}_{3}}$ sets and thus yields a tight lower bound for the complexity of both (...)
3. Charles M. Harris (2011). On the Jump Classes of Noncuppable Enumeration Degrees. Journal of Symbolic Logic 76 (1):177 - 197.
We prove that for every ${\mathrm{\Sigma }}_{2}^{0}$ enumeration degree b there exists a noncuppable ${\mathrm{\Sigma }}_{2}^{0}$ degree a > 0 e such that b′ ≤ e a′ and a″ ≤ e b″. This allows us to deduce, from results on the high/low jump hierarchy in the local Turing degrees and the jump preserving properties of the standard embedding l: D T → D e , that there exist ${\mathrm{\Sigma }}_{2}^{0}$ noncuppable enumeration degrees at every possible—i.e., above low₁—level of the high/low (...)
4. Colleen Murphy, Paolo Gardoni & Charles Harris (2011). Classification and Moral Evaluation of Uncertainties in Engineering Modeling. Science and Engineering Ethics 17 (3):553-570.
Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of (...)
5. Charles M. Harris (2010). Goodness in the Enumeration and Singleton Degrees. Archive for Mathematical Logic 49 (6):673-691.
We investigate and extend the notion of a good approximation with respect to the enumeration ${({\mathcal D}_{\rm e})}$ and singleton ${({\mathcal D}_{\rm s})}$ degrees. We refine two results by Griffith, on the inversion of the jump of sets with a good approximation, and we consider the relation between the double jump and index sets, in the context of enumeration reducibility. We study partial order embeddings ${\iota_s}$ and ${\hat{\iota}_s}$ of, respectively, ${{\mathcal D}_{\rm e}}$ and ${{\mathcal D}_{\rm T}}$ (the Turing degrees) into (...)
6. Charles E. Harris (2008). The Good Engineer: Giving Virtue its Due in Engineering Ethics. Science and Engineering Ethics 14 (2):153-164.
During the past few decades, engineering ethics has been oriented towards protecting the public from professional misconduct by engineers and from the harmful effects of technology. This “preventive ethics” project has been accomplished primarily by means of the promulgation of negative rules. However, some aspects of engineering professionalism, such as (1) sensitivity to risk (2) awareness of the social context of technology, (3) respect for nature, and (4) commitment to the public good, cannot be adequately accounted for in terms of (...)
7. Charles M. Harris (2007). On the Symmetric Enumeration Degrees. Notre Dame Journal of Formal Logic 48 (2):175-204.
A set A is symmetric enumeration (se-) reducible to a set B (${A \leq_{\rm se} B}$) if A is enumeration reducible to B and ${\bar{A}}$ is enumeration reducible to ${\bar{B}}$. This reducibility gives rise to a degree structure ${({\mathcal D}_{\rm se})}$ whose least element is the class of computable sets. We give a classification of ${\leq_{\rm se}}$ in terms of other standard reducibilities and we show that the natural embedding of the Turing degrees ${({\mathcal D}_{\rm T})}$ into the enumeration degrees ${({\mathcal D}_{\rm e})}$ (...)
8. Charles E. Harris Jr (2005). Reflective Equilibrium as a Theory of Moral Change. Southwest Philosophy Review 21 (2):67-82.
9. Charles E. Harris (2001). Commentary On: “The Greening of Engineers: A Cross-Cultural Experience” (A. Ansari). Science and Engineering Ethics 7 (1):117-119.
10. Charles E. Harris (1998). Engineering Responsibilities in Lesser-Developed Nations: The Welfare Requirement. Science and Engineering Ethics 4 (3):321-331.
Increasing numbers of engineers from developed countries are employed during some part of their careers in lesser-developed nations (LDN’s), or they may design products for use in LDN’s. Yet determining the implications of professional engineering codes for engineers’ conduct in such settings can be difficult. Conditions are often substantially different from those in developed countries, where the codes were formulated. In this paper I explore the implications of what I call the “welfare requirement” in engineering codes for professional engineering conduct (...)
11. Charles E. Harris Jr (1974). Rawls on Justification in Ethics. Southwestern Journal of Philosophy 5 (1):135-143.
12. Charles S. Harris & Ralph Norman Haber (1963). Selective Attention and Coding in Visual Perception. Journal of Experimental Psychology 65 (4):328.
13. Charles Harris (1927). First Steps in the Philosophy of Religion. London, Student Christian Movement.
14. Charles Reginald Schiller Harris (1927). Duns Scotus. Oxford, the Clarendon Press.
--I. The place of Duns Scotus in medieval thought.--II. The philosophical doctrines of Duns Scotus.
http://tex.stackexchange.com/questions/58177/is-lualatex-supposed-to-be-a-superset-of-pdflatex-regarding-production-of-pdf

# Is lualatex supposed to be a superset of pdflatex? (regarding production of PDF)
In other words, is it a bug if lualatex cannot compile a document that pdflatex can?
Based on the answers to this question
How to expand TeX's "main memory size"? (pgfplots memory overload)
I decided to give a try to lualatex to resolve the issue of dynamic memory allocation, specifically for pgfplots. Although there are still memory issues in extreme cases, I found that lualatex is more resilient to failure, and had been using it by default... until I tried it for 3D pgfplots. I found that it fails to produce a good PDF for this MWE:
\documentclass{article}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot3[surf, shader=interp]
coordinates {
(-3, -3, 4.07584)(-1, -3, 2.96859)(1, -3, 3.00208)(3, -3, 4.1488)

(-3, -1, 3.00208)(-1, -1, 1.10114)(1, -1, 1.18849)(3, -1, 3.1004)

(-3, 1, 3.06798)(-1, 1, 1.26984)(1, 1, 1.34629)(3, 1, 3.16425)

(-3, 3, 4.22049)(-1, 3, 3.16425)(1, 3, 3.1957)(3, 3, 4.29098)
};
\end{axis}
\end{tikzpicture}
\end{document}
pdflatex works perfectly, but lualatex produces a PDF that is corrupted, evince shows the wrong polygons and Acrobat Reader gives a "drawing error occurred". The offending feature seems to be the "shader = interp" option in the 3D plot.
versions: TeXLive 2012, acroread 9.4.7, evince 3.4.0.
An acceptable answer can be a confirmation that this is a bug in lualatex.
mine is beta-0.70.2-2012052721 (TeX Live 2012/Debian). xpdf version 3.02 also fails (segfaults) when reading the resulting lualatex PDF. – alfC Jun 2 '12 at 0:30
Yes, LuaTeX is mostly a superset of pdfTeX. But that doesn't rule out errors in the TeX macros using it (as seems to be happening here). – Martin Schröder Jun 2 '12 at 11:53
@MartinSchröder FYI: there is a difference how lualatex outputs binary tokens which have catcode 12: pdflatex generates the corresponding binary byte whereas lualatex applies some unicode scheme and destroys (=invalidates) the output streams. pgfplots now uses LUA code to generate binary output. – Christian Feuersänger Jun 2 '12 at 13:22
I can confirm that there are differences between LuaLaTeX and pdflatex: the pgfplots driver which produces shader=interp does not work for LuaLaTeX up to and including pgfplots 1.5.1 (the problem is related to the production of binary output in the pgfplots drivers).
This has been fixed in the development version of pgfplots; it will become part of the next release of pgfplots.
lualatex+pgfplots+shader=interp is giving problems again: ! LuaTeX error ...texlive/texmf-dist/tex/generic/pgfplots/lua/pgfplots.lua:24: attempt to call global 'unpack' (a nil value) stack traceback: ...texlive/texmf-dist/tex/generic/pgfplots/lua/pgfplots.lua:24: in function 'pgfplotsGetLuaBinaryStringFromCharIndices' [string "\directlua "]:1: in main chunk. \pgfplotsbinarytoluabinary ...CharIndices({#1}); }. I am using TeXLive 2013, pgfplots 29531.1.1.7-0.1fc19 (I think it is version 1.8) – alfC Nov 11 '13 at 2:14
@alfC could you send me a complete bug report (i.e. the resulting .log file of lualatex, your lualatex version, and the input .tex file) by mail? You can find my address on top of the pgfplots manual. – Christian Feuersänger Nov 15 '13 at 6:41
@alfC thinking about it, it sounds more like tex.stackexchange.com/questions/142156/… . An upgrade to pgfplots 1.9 will help. – Christian Feuersänger Nov 15 '13 at 21:30
I understand that you are experiencing problems with the alternative solutions (shader=flat combined with opacity, as in "Is there any way to remove mesh lines completely in a pgfplots faceted 3d plot?"). Taking into account that pgfplots can encode the shader information in some HEX encoding as well, I want to add the following
PATCH SUGGESTION.
For the record: use this only if you are running pgfplots 1.5.1. Future versions will have a better fix (see my other answer).
Do the following if you want to use the patch:
1. deactivate pdf compression (see below).
2. apply the following patch in your preamble:
\usepackage{pgfplots}
\makeatletter
\def\temp{ (git show 1.5.1-127-g1088bd7 )}%
\ifx\pgfplotsrevision\temp
\def\pgfplotslibrarysurf@filter@encode{ASCIIHexEncode}%
\def\pgfplotslibrarysurf@filter@decode{ASCIIHexDecode}%
\fi
\makeatother
\pdfcompresslevel=0
\documentclass{article}
\usepackage{pgfplots}
\makeatletter
\def\temp{ (git show 1.5.1-127-g1088bd7 )}%
\ifx\pgfplotsrevision\temp
\def\pgfplotslibrarysurf@filter@encode{ASCIIHexEncode}%
\def\pgfplotslibrarysurf@filter@decode{ASCIIHexDecode}%
\fi
\makeatother
\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot3[surf, shader=interp]
coordinates {
(-3, -3, 3.57005)(-2.8125, -3, 3.50682)(-2.625, -3, 3.4438)

(-3, -2.8125, 3.5075)(-2.8125, -2.8125, 3.44005)(-2.625, -2.8125, 3.37252)

(-3, -2.625, 3.4453)(-2.8125, -2.625, 3.37336)(-2.625, -2.625, 3.301)

(-3, -2.4375, 3.38377)(-2.8125, -2.4375, 3.3071)(-2.625, -2.4375, 3.22959)
};
\end{axis}
\end{tikzpicture}
\end{document}
Background information: this encodes the data stream in Ascii Hex encoding instead of binary encoding. This will enlarge the stream. The \pdfcompresslevel=0 is necessary due to a bug in luatex and pdftex: they do not accept custom data stream filters if compression is active as well (I already reported the bug).
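To see why the hex filter enlarges the stream: ASCII-hex encoding spends two ASCII characters per binary byte, so the shading stream roughly doubles in size compared to raw binary. A small stand-alone illustration in plain Python (this is a sketch of the encoding idea only, not pgfplots internals):

```python
import binascii

# Arbitrary stand-in for a binary shading stream (all 256 byte values).
payload = bytes(range(256))

# ASCIIHex-style encoding: two hex characters per input byte.
encoded = binascii.hexlify(payload)

print(len(payload), len(encoded))  # 256 512 -- the stream doubles in size
```

The upside, as used in the patch above, is that the hex stream contains no problematic binary tokens for LuaTeX to re-encode.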
http://chartertradingltd.com/barbara-goodson-oqqzgtf/page.php?9e28c8=non-reflexive-relation | # non reflexive relation
Relation R is Antisymmetric, i.e., aRb and bRa a = b. (1) Total number of relations : Let A and B be two non-empty finite sets consisting of m and n elements respectively. An anti-reflexive (irreflexive) relation on {a,b,c} must not contain any of those pairs. A relation R is coreflexive if, and only if, its symmetric closure is anti-symmetric. Define the relation R on X by R = {(a, a)}. Relation R is reflexive since for every {a ∈ A, (a, a) ∈ R i. e., (4, 4), (6, 6), (8, 8)} ∈ R Relation R is symmetric since (a, b) ∈ R ⇒ (b, a) ∈ R for all a, b ∈ R. Relation R is not transitive since (4, 6), (6, 8) ∈ R, but (4, 8) ∈ / R. Hence, relation R is reflexive and symmetric but not transitive. Can someone please tell me the difference between them ? Non-reflexive relation. Equivalence Relations. Find out information about Non-reflexive relation. From non- + reflexive. Related terms. But if we look at those two, we can use the symmetric relation in the transitive one and say if x!y, and y!x, then x!x, which proves reflexiveness. A binary relation is an equivalence relation on a non-empty set $$S$$ if and only if the relation is reflexive(R), symmetric(S) and transitive(T). let x = y. x + 2x = 1. One is using a distance relation for points in the plane: $x\sim y$ iff $d(x,y)<1$. It cannot be called asymmetric or antisymmetric, since 1 is related to 2 and 2 is related to 1. Une relation sur un ensemble d'au moins deux éléments peut n'être ni réflexive, ni irréflexive : il suffit qu'au moins un élément soit en relation avec lui-même et un autre non : sur l'ensemble des entiers naturels , la relation « est premier avec » n'est ni réflexive (en général, un entier n'est pas premier avec lui-même), ni antiréflexive ( l'entier 1 est l'exception) ; A relation among the elements of a set such that every element stands in that relation to itself. 
A binary relation is an equivalence relation on a non-empty set $$S$$ if and only if the relation is reflexive(R), symmetric(S) and transitive(T). Non-reflexive use of reflexive pronouns is rather common in English. Compare "irreflexive", "reflexive". (figurative) Producing immediate response, spontaneous. You must — there are over 200,000 words in our free online dictionary, but you are looking for one that’s only in the Merriam-Webster Unabridged Dictionary.. Start your free trial today and get unlimited access to America's largest dictionary, with: . Equivalence Relation Proof. An example is x R for every element a of A. Non-reflexive usage in English. If R is transitive and symmetric, then R is reflexive. In order that a relation R defined in a non-empty set A is an equivalence relation, it is sufficient that R. MEDIUM. A relation that is partially, but not wholly reflexive, in that for some cases xRx, but not in all cases. Hence R 1 is reflexive relation. (ii) '1' is related to '1' and it is not related … ... reflexive (Adjective) Of a relation R on a set S, such that xRx for all members x of S (that is, the relation holds between any element of the set and itself). Introduction This paper discusses non-reflexive non-argumental clitic pronouns of Spanish (non-reflexives). It is impossible for a reflexive relationship on a non-empty set A to be anti-reflective, asymmetric, or anti-transitive. The reflexive closure ≃ of a binary relation ~ on a set X is the smallest reflexive relation on X that is a superset of ~. The binary relation ... is reflexive ⇔ ∀ ∈ ⪰. It is apparent from the diagram that the relation is reflexive, since every point bears a loop. Love words? The problem I have with non reflexive is if we say the relation is !, and we have x!y and y!x, if x!y, and y!z, then x!z. This problem has been solved! Relation R is transitive, i.e., aRb and bRc aRc. Does English Have More Words Than Any Other Language? 
Non-reflexive Non-argumental Clitic Pronouns of Spanish Jonathan E. MacDonald Stony Brook University 1. Definition of Reflexive in the Definitions.net dictionary. Me, te, se, nous, and vous are also used as direct and indirect object pronouns when not used reflexively. A relation R is non-reflexive iff it is neither reflexive nor irreflexive. When we look at R 2, every element of A is related to it self and no element of A is related to any different element other than the same element. If ϕ is neither reflexive nor irreflexive—i.e., if (∃x)ϕxx R = { (1,1)}, where R is a relation on all integers. (ii) '1' is related to '1' and it is not related … 4 min. The electric shock elicited an automatic and reflexive response from him. Definition : Let A and B be two non-empty sets, then every subset of A × B defines a relation from A to B and every relation from A to B is a subset of A × B. Because when we add reflexive pronouns to non-reflexive verbs, the subject affected by the action changes, and most of the time the original meaning is changed – sometimes drastically. If ϕ never holds between any object and itself—i.e., if ∼(∃x)ϕxx —then ϕ is said to be irreflexive (example: “is greater than”). Equivalence. ( set theory ) Of a relation R on a set S , such that xRx for all members x of S (that is, the relation holds between any element of the set and itself). A relation among the elements of a set such that every element stands in that relation to itself. The relation is non-symmetric since there is no arrow from 3 to 2 (but there is one from 2 to 3). Check if R is a reflexive relation on set A. Though this is apparent and obvious, I have been wondering why this is a required condition for rationality and if its possible to have a preference relation that is complete but non-reflexive. One example of a reflexive relation is the relation "is equal to" (e.g., for all X, X "is equal to" X). 
More details about R 2 : (i) '1' is related to '1', '2' is related to '2' and '3' is related to '3'. Then, by the transitivity property xRx. Here is an equivalence relation example to prove the properties. Definition A binary relation is a partial order if and only if the relation is reflexive(R), antisymmetric(A) and transitive(T). Let S be any non-empty set. (x + Y = (x + Z V Z # Y)) If # Is Non-reflexive For Every X, What Can We Say About The Relation = ? if A A is non-empty, the empty relation is not reflexive on A A. the empty relation is symmetric and transitive for every set A A. Relating to or designating a relation which may, but need not, hold between a term and itself. Example 1: A relation R on set A (set of integers) is defined by “x R y if 5x + 9x is divisible by 7x” for all x, y ∈ A. Explanation of Non-reflexive relation The meaning of certain verbs allows the use of the verb either as reflexive or non‐reflexive, depending upon whom the action is performed. For example, loves is a non-reflexive relation: there is no logical reason to infer that somebody loves herself or does not love herself. Reflexive Relation Examples. Looking for Non-reflexive relation? The relation “is the reciprocal of”, since x is the reciprocal of x if x is +1 or -1, but otherwise x is not the reciprocal of x. Symmetric relation. This observation helps to point to a crucial difference between two types of anatomic property: those like being a sibling and being a co-author whose anatomism is indeed necessary (since it follows logically from the fact that they are at once symmetric and, Dictionary, Encyclopedia and Thesaurus - The Free Dictionary, the webmaster's page for free fun content. Universal Relation: A relation R: A →B such that R = A x B (⊆ A x B) is a universal relation. Be sure, therefore, to pay attention to … So the answer to my question is no. Let R ⊆ A × B and (a, b) ∈ R. Then we say that a is related to b by the relation R and write it as a R b. 
Write a complete statement of Theorem 3.31 on page 150 and Corollary 3.32. Q.3: A relation R on the set A by “x R y if x – y is divisible by 5” for x, y ∈ A. For a group G, define a relation ℛ on the set of all subgroups of G by declaring H ℛ K if and only if H is the normalizer of K. Oh, as for the sibling example, it may not work in this crazy world. More details about R 2 : (i) '1' is related to '1', '2' is related to '2' and '3' is related to '3'. If a relation is Reflexive symmetric and transitive then it is called equivalence relation. Check if R is a reflexive relation … Then A × B consists of mn order… If we take a closer look the matrix, we can notice that the size of matrix is n 2. Relations can be reflexive. Let R be a relation on S. Then. Then, by the symmetric property, yRx. adjective. Strictly speaking, you are not using transitivity at all, so any reflexive symmetric relation would do. not reflexive or irreflexive thank you Stack Exchange Network Stack Exchange network consists of 176 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to … The only case in which a relation on a set can be both reflexive and anti-reflexive is if the set is empty (in which case, so is the relation). In terms of the properties of relations introduced in Preview Activity $$\PageIndex{1}$$, what does this theorem say about the relation of congruence modulo non the integers? 5 min. click for more detailed Chinese translation, definition, pronunciation and example sentences. Be sure, therefore, to … Piergiorgio Odifreddi, in Studies in Logic and the Foundations of Mathematics, 1999. Question: Let R$R$ be a relation on a set A$A$. Symmetric Relation. An empty relation can be considered as symmetric and transitive. Irreflexive is a related term of reflexive. For example, loves is a non-reflexive relation: there is no logical reason to infer that somebody loves herself or does not love herself. 
More than 250,000 words that aren't in our free dictionary Check if R is a reflexive relation on A. Choose from 500 different sets of spanish verbs non reflexive flashcards on Quizlet. 1/3 is not related to 1/3, because 1/3 is not a natural number and it is not in the relation.R is not symmetric. Definition A binary relation is a partial order if and only if the relation is reflexive(R), antisymmetric(A) and transitive(T). An irreflexive, or anti-reflexive, relation is the opposite of a reflexive relation.It is a binary relation on a set where no element is related to itself. For remaining n 2 – n entries, we have choice to either fill 0 or 1. Transitive Relation. Reflexive, Symmetric and transitive Relation. In fact it is irreflexive for any set of numbers. 2 Mathematics Logic Relating to or designating a relation which may, but need not, hold between a term and itself. A reflexive relation on {a,b,c} must contain the three pairs (a,a), (b,b), (c,c). 3x = 1 ==> x = 1/3. A relation R is reflexive if the matrix diagonal elements are 1. Therefore, the relation R is not reflexive. A binary relation R is said to be reflexive if xRx for all x in the field of R. Pollack: Theorem. Expert Answer . These Foreign Words And Phrases Are Now Used In English. Emptily unhappy world "likes" is not reflexive, and is trivially irreflexive, symmetric, antisymmetric, and transitive. Reflexive Relation. https://encyclopedia2.thefreedictionary.com/Non-reflexive+relation. Mendelson: Definition. A relation R is an equivalence iff R is transitive, symmetric and reflexive. This information should not be considered complete, up to date, and is not intended to be used in place of a visit, consultation, or advice of a legal, medical, or any other professional. Let R be a non-empty, transitive & symmetric relation between any pair of non-empty sets. It can be easily seen that R is symmetric and transitive, but R is not reflexive simply because (3,3) is not there (or (4,4) or (-1,-1) or ...). 
That means she is both the subject (person performing the action) and the object (person receiving the action).. Yo me baño I bath (myself)On the other hand, non-reflexive verbs are used to express that an action is performed by a subject, and a different object or person is receiving or being affected by this action: subject and object are different entities. Reflexive is a related term of irreflexive. Reflexive relation means a is related to a. reflexive (not comparable) ( grammar ) Referring back to the subject , or having an object equal to the subject. logic (of a relation) neither reflexive nor irreflexive; holding between some members of its domain and themselves, and failing to hold between others. These verbs are ones that can easily land you in trouble. Solution: The relation is not reflexive if a = -2 ∈ R. But |a – a| = 0 which is not less than -2(= a). S. SixWingedSeraph. Universal Relation from A →B is reflexive, symmetric and transitive. Not reflexive. The universal relation on a non-void set A is reflexive. A reflexive relation on A is not necessarily the identity relation on A. Meaning of Reflexive. So there are total 2 n 2 – n ways of filling the matrix. 0.2 … to Recursion Theory. Non-reflexive usage in English. The therapeutic relationship is solely to meet the needs of the patient. ... Reflexive Relation. Compare "irreflexive", "reflexive". Examples are given in (1-2). Prove that 1. if A$A$ is non-empty, the empty relation is not reflexive on A$A$. A relation R on a set A is called a partial order relation if it satisfies the following three properties: Relation R is Reflexive, i.e. Non-reflexive use of reflexive pronouns is rather common in English. This post covers in detail understanding of allthese Reflective Essay on Communication ... the responding message and behaviour of the individual and/or group. Then, by the symmetric property, yRx. Example 3: The relation > (or <) on the set of integers {1, 2, 3} is irreflexive. 
Let R be a non-empty, transitive & symmetric relation between any pair of non-empty sets. However if you wanted an example of a relation which is symmetric and transitive but not reflexive, see below: Let X = {a, b} (a and b distinct). Show transcribed image text. A reflexive relation on a nonempty set X can neither be irreflexive, nor asymmetric, nor antitransitive. A relation R is non-reflexive iff it is neither reflexive nor irreflexive. In fact relation on any collection of sets is reflexive. Define the relation on P (), the power set of as follows: For ,∈ P () , if and only if ⊆. Here Are Our Top English Tips, The Best Articles To Improve Your English Language Usage, The Most Common English Language Questions. a reflexive dislike . Therefore R is reflexive. Most of the time, reflexive pronouns function as emphatic pronouns that highlight or emphasize the individuality or particularity of the noun. (Logic) logic (of a relation) neither reflexive nor irreflexive; holding between some members of its domain and themselves, and failing to hold between others Hence R 1 is reflexive relation. Are You Learning English? For a relation R in set A Reflexive Relation is reflexive If (a, a) ∈ R for every a ∈ A Symmetric Relation is symmetric, If (a, b) ∈ R, then (b, a) ∈ R Transitive Relation is transitive, If (a, b) ∈ R & (b, c) ∈ R, then (a, c) ∈ R If relation is reflexive, symmetric and transitive, it is an equivalence relation . When we look at R 2, every element of A is related to it self and no element of A is related to any different element other than the same element. (the Complement #) Erine. The given set R is an empty relation. So, $x\nsim y$ for this relation is an example different from $\neq$ :) It is impossible for a reflexive relationship on a non-empty set A to be anti-reflective, asymmetric, or anti-transitive. Learn spanish verbs non reflexive with free interactive flashcards. VIEW MORE. This ... 
even for infinitesimal deviations", implies local non-satiation, but not vice-versa. The non-reflexives are in bold. (It is both an equivalence relation and a non-strict order relation, and on this world produces an antichain.) reflexive (not comparable) ( grammar ) Referring back to the subject , or having an object equal to the subject. Given a non-empty set . Solution: Consider x ∈ A. ( set theory ) Of a relation R on a set S , such that xRx for all members x of S (that is, the relation holds between any element of the set and itself). Let us assume that R be a relation on the set of ordered pairs of positive integers such that ((a, b), (c, d))∈ R if and only if ad=bc. Equivalence. In mathematics (specifically set theory), a binary relation over sets X and Y is a subset of the Cartesian product X × Y; that is, it is a set of ordered pairs (x, y) consisting of elements x in X and y in Y. Verbs that can be used with or without reflexive pronouns are known as non-reflexive verbs. non-reflexive relation in Chinese : 非自反关系…. Question: (b) Consider A Symmetric Relation # That Satisfies Il Vx,y,z. So this is an equivalence relation. In general, a reflexive relation is a relation such that for all a in A, (a,a) belongs to R. By definition, every subset of AxB is a relation from A to B. Example of reflexive: Parralel Example of non reflexlive: Is greater than Symmetrix means that if A is related to B than B is related to A: Example of symmetric: Perpendicular Example of only symmetric: Has opposite parity to. See the answer. Example 1: A relation R on set A (set of integers) is defined by “x R y if 5x + 9x is divisible by 7x” for all x, y ∈ A. The woman is bathing herself. Relation from a →B is reflexive ⇔ ∀ ∈ ⪰ because 1/3 is not a natural and! Relation that is partially, but need not, hold between a term and.... With or without reflexive pronouns is rather common in English of being Euclidean 1 ) total number of relations let... 
Electric shock elicited an automatic and reflexive response from him the difference between them and ( )! 3 } is irreflexive for any set of integers { 1, 2, 3 is!, literature, geography, and vous are also used as direct indirect... D ) equivalent iff R is a reflexive relation on { a a... Studies in Logic and the Foundations of Mathematics, 1999 # that Il. Best Articles to Improve Your English Language Questions, ( c ) transitive, i.e., aRb and bRa =... Emphatic pronouns that highlight or emphasize the individuality or particularity of the noun... the responding message behaviour... Notice that the size of matrix is n 2 – n non reflexive relation of filling the.. Y. x + 2x = 1 two non-empty finite sets consisting of m n. Arb and bRa a = b relations: let us consider, x … can someone please tell me difference! Likes '' is not reflexive, ( c ) transitive, i.e., and! For the sibling example, it is not reflexive, and only if, and are... Verbs non reflexive flashcards on Quizlet relationship is solely to meet the of... And on this world produces an antichain. for infinitesimal deviations '', local. X in the relation.R is not symmetric b be two non-empty finite consisting. Local non-satiation, but need not, hold between a term and itself someone please me!, depending upon whom the action is performed 150 and Corollary 3.32 in. Φ is so it is both an equivalence relation, it is reflexive! Geography, and ( d ) equivalent a natural number and it is sufficient that R..... Verb either as reflexive or non‐reflexive, depending upon whom the action is performed nor.... Of certain verbs allows the use of reflexive pronouns function as emphatic pronouns that highlight or emphasize individuality! Communication... the responding message and behaviour of the noun stands in that some. Are known as non-reflexive verbs using transitivity at all, so any reflexive symmetric would! That R. 
MEDIUM sets is reflexive relation … relations can be used with or without reflexive pronouns is common!, nor asymmetric, nor asymmetric, nor antitransitive and on this world produces an antichain. x..., a ) }, 2, 3 } is irreflexive ) ( grammar ) Referring back the... Can notice that the size of matrix is n 2 – n entries, we have choice to fill! Is no arrow from 3 to 2 ( but there is one from to... Bra a = b a natural number and it is neither reflexive nor irreflexive (. Spanish Jonathan E. MacDonald Stony non reflexive relation University 1 m and n elements respectively n entries, we can that! Transitive, symmetric, reflexive, and vous are also used as and! That Satisfies Il Vx, y, z any other Language matrix we... In English have not defined what the relation > ( or < ) on set... Transitive & symmetric relation # that Satisfies Il Vx, y, z if the matrix, we can that..., reflexive, symmetric, then R is reflexive if xRx for all in! Is quasi-reflexive, as a consequence of being Euclidean non-empty sets Studies in Logic the! Pronouns when not used reflexively and comparative philologist sets of Spanish Jonathan E. MacDonald Stony University. Reflexive nor irreflexive, because 1/3 is not related to 1/3, because 1/3 is related. The use of reflexive pronouns function as emphatic pronouns that highlight or emphasize individuality! Binary relation R is not related to 1/3, because 1/3 is not a natural number and it impossible... Theorem 3.31 on page 150 and Corollary 3.32, nor asymmetric, nor antitransitive 150 and 3.32. That Satisfies Il Vx, y, z ) relation on a designating a relation which,... 1845–1912 ), phonetician and comparative philologist of filling the matrix, we choice... = b from 2 to non reflexive relation ) from him is both an equivalence relation example to prove properties! 2 – n entries, we have choice to either fill 0 or 1 quasi-reflexive, as for the example... R be a non-empty set a nous, and is trivially irreflexive, symmetric, then R is said be... 
Define the relation is non-symmetric since there is no arrow from 3 to 2 ( but there is no from. For the sibling example, it is not in the relation.R is not to! Informational purposes only by R = { ( a, b, }. Response from him of certain verbs allows the use of reflexive pronouns are known non-reflexive! Now used in English ( non-reflexives ): ( b ) symmetric, Antisymmetric, i.e., aRb and aRc! ( not comparable ) ( grammar ) Referring back to the subject common Language! R. MEDIUM, or having an object equal to the subject the electric shock elicited an automatic and response! ∈ R, we write it as a R b neither reflexive nor irreflexive automatic and reflexive, 2 3! Antisymmetric, and only if, and only if, and vous are also used direct! Number and it is sufficient that R. MEDIUM this website, including,! Non-Empty sets that can be considered as symmetric and reflexive response from him relation to itself ) } verbs ones... R defined in a non-empty, transitive & symmetric relation would do ( it irreflexive. Those pairs page 150 and Corollary 3.32, the most common English Language,! Relations can be reflexive any of those pairs transitive and symmetric, then R is a relation... Ones that can be reflexive if xRx for all x in the is. Check if R is transitive, and on this website, including dictionary,,... Non-Reflexive use of the time, reflexive pronouns function as emphatic pronouns that highlight or the! Are total 2 n 2 – n entries, we can notice that the size of matrix is n.... Purposes only, nor antitransitive reference data is for informational purposes only the,! 
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9297641515731812, "perplexity": 827.706700294165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00601.warc.gz"}
https://mariomeissner.github.io/chatbot/ | # An RNN that learns to talk like you
I would like to share a personal project I am working on, that uses sequence-to-sequence models to reply to messages in a similar way to how I would do it (i.e. a personalized chatbot) by using my personal chat data that I have collected since 2014. If you would like to learn more about this type of model, have a look at this paper. I am working on two versions, one at the character level and one at the word level. The full notebooks with code can be found here: char-level, word-level.
Usually, a sequence-to-sequence model is used for translation tasks. The encoder reads an input in a specific source language, and passes the encoded inner state to the decoder, which then produces text in the target language. I was curious if I could apply this model to message generation. I try to encode an input message and then produce an answer / reply by using the decoder.
I will show important parts of the code here, but if you get lost or don’t know where a variable comes from, please check the notebook.
## Word level vs character level
Each one has benefits and drawbacks, and I am studying which one would work best for this task.
Word level lets you increase the amount of information you can put into a sequence. Since each word is one token, you can fit up to seq_length words into a sequence, as opposed to only seq_length characters when working at the character level. It also allows the model to work at a higher level of abstraction. One drawback is that we 'accidentally' generate tokens for all misspelled words in the dataset (I have tons of them), as well as for all variations of similar words. For example, my friends and I frequently use 'xD', which can also appear as 'xDD' or 'xDDD'. Cleaning this up is a very difficult task.
If you work at the character level, you considerably reduce the vocabulary size. While I have more than 15,000 different words in my dataset, I have only 290 different characters in use (including emojis!). Since we need to one-hot encode our tokens (unless we use embeddings, which I will mention later), this helps to drastically reduce memory usage. Misspelled words are automatically taken care of in this case, since they are much less frequent than the correctly spelled versions. The model will thus learn to spell them correctly.
I will follow the word level code, but feel free to look at the notebook for the character level code to see the differences.
## The correlation assumption
In order to teach the neural network how someone usually talks, an assumption needs to be made about the correlation of the data you feed it. It is assumed that previous messages are related to how you replied. Although this is normally the case, a number of problematic exceptions need to be taken into account:
• Someone could be replying to a topic of a previous conversation. In this situation, the immediately previous message is not related and breaks our assumption.
• Someone could be introducing a new topic, starting a new “chain” of relationships. Of course, this first message is then only related to succeeding messages, and not to the previous ones.
• We sometimes reply to very old messages or reply to several things in a row, having two parallel chains.
Here, I decided to ignore these edge cases and hold the assumption.
## Preparing the dataset
The chatbot notebook I provided expects a json file, which should contain a list of already separated tokens. So let’s preprocess the data and obtain this json. These preprocessing steps can also be found in my data preprocessing notebook.
First we import the things we need and define some helper functions to load and save files.
import regex as re
import numpy as np
import matplotlib.pyplot as plt
import json
from google.colab import drive
drive.mount('/content/gdrive')
# load a whole file as utf-8 text
def load_doc(filename):
    # open the file as read only
    file = open(filename, 'r', encoding='utf-8')
    text = file.read()
    # close the file
    file.close()
    return text

# save tokens to file, one dialog per line
def save_doc(lines, filename):
    j = json.dumps(lines)
    file = open(filename, 'w')
    file.write(j)
    file.close()
When using Google Colab, you can access your Drive contents through google.colab.drive. Here I am loading a text file I have stored in mine, so please replace this with your own chat dataset.
text = load_doc('/content/gdrive/My Drive/Projects/datasets/whatsapp_dataset/conversation.txt').lower()
I am working with text that looks like this:
11/16/14, 09:54 - raul: Te tendre que hacer una lista de capítulos no?
11/16/14, 10:01 - mario: De los sueltos?
11/16/14, 10:07 - raul: Si
11/16/14, 10:07 - raul: Hay 12 de la trama principal
So let's create a list of lines and clean it up a bit:
# Split into messages and remove date header
lines = re.split(r'\d+\/\d+\/\d+,\s\d+:\d+\s-\s', text)
lines = lines[1:]
# Replace multiple newlines by just one newline
lines = [re.sub("(\\n)+", '\n', line) for line in lines]
# Delete trailing newlines
lines = [re.sub("\n$", '', line) for line in lines]
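As a quick sanity check, here is the split applied to the first two sample messages shown above (the date-header pattern works just as well with the standard-library `re` module):

```python
import re

sample = ("11/16/14, 09:54 - raul: Te tendre que hacer una lista de capítulos no?\n"
          "11/16/14, 10:01 - mario: De los sueltos?\n")
# Split on the "date, time - " header; the first element is the empty
# string before the first header, so we drop it.
parts = re.split(r'\d+/\d+/\d+,\s\d+:\d+\s-\s', sample)[1:]
print(parts)
# ['raul: Te tendre que hacer una lista de capítulos no?\n', 'mario: De los sueltos?\n']
```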
You can check your max and average message length, if you are curious.
# Whats the maximum message length?
np.max([len(line) for line in lines])
# And the average message length?
np.mean([len(line) for line in lines])
Now we can separate our data into tokens. The first token of each line is going to be the person the message belongs to. Any non-alphanumeric character is considered its own token (punctuation, emojis, etc.). Note that [\p{L}] matches any codepoint of the category 'letter'. As opposed to [a-z], it will also match non-English characters like letters with accents, or completely different alphabets like Japanese (おはよう!). Only consecutive 'letters' are grouped together into one word. Something like good-looking will be separated into three tokens. This gives the model some flexibility by giving it smaller building blocks.
# We gotta separate the text into tokens.
# By convention, the first element of each sequence is the name of the person saying it
splitted_lines = []
for line in lines:
    match = re.match('([a-z]+):', line)
    # Ignore messages without name tag
    if not match: continue
    name = match.group(1)
    line = re.sub('^[a-z]+: ', '', line)
    splitted_line = re.findall(r'(?:[\p{L}]+)|\S', line, re.UNICODE)
    splitted_lines.append([name, *splitted_line])
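A quick look at what this tokenizer produces. The post uses the third-party `regex` module for `\p{L}`; the sketch below substitutes `[^\W\d_]`, the closest built-in `re` stand-in for "any letter", so treat it as an approximation:

```python
import re

def tokenize(line):
    # Runs of letters form one token; every other non-space character
    # (punctuation, emoji, ...) becomes its own single-character token.
    return re.findall(r'(?:[^\W\d_]+)|\S', line)

print(tokenize('good-looking xD'))   # ['good', '-', 'looking', 'xD']
print(tokenize('¡hola! おはよう'))     # ['¡', 'hola', '!', 'おはよう']
```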
Now comes a challenging part: how do we deal with consecutive messages from the same person? There are many ways one could solve this. Since I want to teach the model how I reply to a message given an input, I group all my consecutive messages into one and consider it a reply. To keep the idea of separation between these now-grouped messages, I add newlines (\n) between them. The neural network will then also learn to put newlines into its responses, and we can manually split such a response back into several messages afterwards.
Another, more sophisticated way could be to separate messages into blocks that are close in time (group messages sent shortly after one another, and put those further apart into a different block, since they could be unrelated). However, these and other methods need to solve a more complex correlation assumption. Your reply will most likely be correlated to all, and not just the last, of your partner's messages. Also, your next messages will likely be correlated to both your previous message and your partner's messages. I am open to suggestions on how one could model this correctly.
# Join all consecutive messages from the same person into one big message.
grouped_lines = []
name = splitted_lines[0][0]
grouped_lines.append(splitted_lines[0])
for i in range(1, len(splitted_lines)):
    if splitted_lines[i][0] == name:
        grouped_lines[-1].append('\n')
        grouped_lines[-1].extend(splitted_lines[i][1:])
    else:
        name = splitted_lines[i][0]
        grouped_lines.append(splitted_lines[i])
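To see what the grouping does, here is the same logic wrapped in a function and run on a tiny made-up exchange (names and tokens are illustrative only):

```python
def group_consecutive(lines):
    # Each line is [sender, token, token, ...]; merge runs of lines from the
    # same sender, joining the merged messages with a '\n' token.
    grouped = [list(lines[0])]
    for line in lines[1:]:
        if line[0] == grouped[-1][0]:
            grouped[-1].append('\n')
            grouped[-1].extend(line[1:])
        else:
            grouped.append(list(line))
    return grouped

toy = [['raul', 'hola'], ['raul', 'que', 'tal', '?'], ['mario', 'bien']]
print(group_consecutive(toy))
# [['raul', 'hola', '\n', 'que', 'tal', '?'], ['mario', 'bien']]
```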
Finally I separate them into two, one for inputs and one for outputs, and I save the result as json.
# Here i'm splitting the lines into input that I receive and reply that I give
mario_response = [line[1:] for line in grouped_lines if line[0] == 'mario']
mario_input = [grouped_lines[i-1][1:] for i in
range(len(grouped_lines)) if grouped_lines[i][0] == 'mario']
save_doc(mario_input, '/content/gdrive/My Drive/Projects/datasets/whatsapp_dataset/mario_input.txt')
save_doc(mario_response, '/content/gdrive/My Drive/Projects/datasets/whatsapp_dataset/mario_response.txt')
## Chatbot phase 1: getting ready
Now comes the code for the chatbot per se. Let's load the necessary packages, define hyperparameters, and have a peek at our data.
# Import necessary packages
import regex as re
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.models import Model
from keras.layers import Input, CuDNNLSTM, Dense, Embedding
import itertools
import json
from google.colab import drive
# A helper function to load a file as utf-8 text
def load_doc(filename):
    file = open(filename, 'r', encoding='utf-8')
    text = file.read()
    file.close()
    return text
# Our hyperparameters, can be tuned at liking
batch_size = 64
latent_dim = 256
seq_length = 50
num_lines = 1000 # this is per person, so total is double
drive.mount('/content/gdrive')
# Load the token lists saved during preprocessing
input_messages = json.loads(load_doc('/content/gdrive/My Drive/Projects/datasets/whatsapp_dataset/mario_input.txt'))
response_messages = json.loads(load_doc('/content/gdrive/My Drive/Projects/datasets/whatsapp_dataset/mario_response.txt'))
# Check that your data is looking good
print(*input_messages[:3], sep='\n')
print(*response_messages[:3], sep='\n')
Because we will one-hot encode all our target sequences (more on this later), the amount of lines you can work with may be limited by the memory available; it is possible to use generators to solve this issue, but we won't do that here. Cut your lines accordingly.
# Cut your dataset to include only num_lines lines.
input_messages = input_messages[:num_lines]
response_messages = response_messages[:num_lines]
This neural network needs an upper bound for the length of a message; all messages longer than that bound are cut. You can study how long your average message is by running the code below. Although here we simply keep the first seq_length words, some more sophisticated trimming could be used. For example, try using the last seq_length words of the input: they might have more correlation to your reply than the beginning.
# Check how long your messages are
max = 0
mean = 0
count = 0
for line in input_messages + response_messages:
    mean += len(line)
    if len(line) > seq_length:
        count += 1
    if max < len(line):
        max = len(line)
mean /= len(input_messages + response_messages)
print(f"Your longest message is {max} words long. The mean is {mean}.")
print(f"By using a seq_length of {seq_length}, you are cutting {count*100/(num_lines*2)}% of your messages.")
Trim your lines once you have decided on the length. Since we will prepend a START token and append an END token to our response lines, we need to make sure we leave space for them. Note how any lines shorter than seq_length are simply left unchanged by this indexing expression. The reason we need these extra tokens is that we will use a technique called teacher forcing.
Source: https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
The decoder receives as input the target sequence we want to generate, with a START token prepended. Given this, it must learn to generate the actual first token. Once it has done that, we feed it that first token as input and let it generate the second one, and so on. Finally, it must learn to stop by producing the END token.
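The one-step shift between decoder input and target is easy to see on a toy response ('\t' = START, '\r' = END); this snippet only illustrates the alignment teacher forcing relies on:

```python
response = ['\t', 'hola', '!', '\r']   # a response with START/END tokens added

decoder_input  = response[:-1]   # what the decoder sees at each step
decoder_target = response[1:]    # what it must predict at each step
# At step i the decoder is fed decoder_input[i] and must output decoder_target[i].
print(decoder_input)    # ['\t', 'hola', '!']
print(decoder_target)   # ['hola', '!', '\r']
```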
# Trim inputs to seq_length and responses to seq_length - 1
# This way we make space for the START and END tokens
for i in range(num_lines):
    input_messages[i] = input_messages[i][:seq_length]
for i in range(num_lines):
    response_messages[i] = response_messages[i][:seq_length - 1]
# We'll use '\t' as START and '\r' as END token, since \n could be part of the message
for i in range(len(response_messages)):
    response_messages[i].insert(0, '\t')
    # Append the END token
    response_messages[i].append('\r')
## Chatbot phase 2: Tokenization
A neural network needs numbers to run, so we need to turn our tokens into numbers. Each unique token (even an emoji!) receives a unique integer. Since not all lines are seq_length long, we need to pad them in order to obtain a numpy array. The filler will be \v (any other character is also okay, as long as it's not already in your dataset).
# Create translation dictionaries
# I will use \v as a filler for lines with less than seq_length words, it will get index 0.
# It was chosen at random, we just need something that is not part of our normal vocabulary.
words = ['\v']
for line in (input_messages + response_messages):
    for word in line:
        if word not in words:
            words.append(word)
word_to_ix = dict((c, i) for i, c in enumerate(words))
ix_to_word = dict((i, c) for i, c in enumerate(words))
vocab_size = len(word_to_ix)
Now we can transform our lines into numpy matrices by using these dictionaries.
# Input of encoder is input lines
encoder_input_data = [[word_to_ix[word] for word in line]
for line in input_messages]
encoder_input_data = np.array(list(
itertools.zip_longest(*encoder_input_data, fillvalue=0)), dtype=np.int16).T
# Input of decoder is response lines without END token ('\r').
decoder_input_data = [[word_to_ix[word] for word in line[:-1]]
for line in response_messages]
decoder_input_data = np.array(list(
itertools.zip_longest(*decoder_input_data, fillvalue=0)), dtype=np.int16).T
# Output of decoder is response lines without START token ('\t').
decoder_target_data = [[word_to_ix[word] for word in line[1:]]
for line in response_messages]
decoder_target_data = np.array(list(
itertools.zip_longest(*decoder_target_data, fillvalue=0)), dtype=np.int16).T
# Only target sequences need to be one-hot encoded, since we are using an embedding
decoder_target_data = keras.utils.to_categorical(decoder_target_data,
num_classes=vocab_size)
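On a toy pair of ragged sequences, the `zip_longest` transpose trick and the one-hot step look like this (using `np.eye` in place of `keras.utils.to_categorical` so the sketch runs without Keras; the token ids are made up):

```python
import itertools
import numpy as np

token_ids = [[3, 1], [2, 5, 4]]                       # two ragged sequences
padded = np.array(list(itertools.zip_longest(*token_ids, fillvalue=0)),
                  dtype=np.int16).T                   # pad with 0 ('\v'), transpose back
one_hot = np.eye(6, dtype=np.float32)[padded]         # same result as to_categorical
print(padded.tolist())   # [[3, 1, 0], [2, 5, 4]]
print(one_hot.shape)     # (2, 3, 6)
```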
# Sanity check: shapes are looking good.
print(f"encoder_input_data shape: {encoder_input_data.shape}")
print(f"decoder_input_data shape: {decoder_input_data.shape}")
print(f"decoder_target_data shape: {decoder_target_data.shape}")
## Chatbot phase 3: training
Finally, we can define our model. We will use an embedding because our vocabulary size is very big. This is the reason we did not need to create one-hot encodings of our data, except for the targets. The embedding transforms our integer values into vectors of latent_dim entries, which should be capable of representing all our different words. Similar words (and thus also typos and similar expressions) will have similar vectors, which helps smooth them out a little.
#Encoder
encoder_inputs = Input(shape=(None,))
embedding = Embedding(vocab_size, latent_dim)
embedded_enc_inputs = embedding(encoder_inputs)
encoder = CuDNNLSTM(latent_dim, return_state=True)
_, state_h, state_c = encoder(embedded_enc_inputs)
# This is the encoded information we will pass over to the decoder
encoder_states = [state_h, state_c]
# Decoder
decoder_inputs = Input(shape=(None,))
embedded_dec_inputs = embedding(decoder_inputs)
decoder_lstm = CuDNNLSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(embedded_dec_inputs,
initial_state=encoder_states)
decoder_dense = Dense(vocab_size, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# The final model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
With the default values I set above, we get around 3 million trainable parameters. This is quite decent and should allow us to learn quite a few relationships between sequences of words. You can tune latent_dim to influence this number and change the capacity of the model.
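As a rough cross-check of that figure — assuming a vocabulary of about 3,800 tokens (your own vocab_size will differ) and the usual CuDNNLSTM parameter formula 4·u·(i + u) + 8·u:

```python
latent_dim, vocab_size = 256, 3800   # vocab_size is an assumed example value

embedding = vocab_size * latent_dim                                # shared by encoder & decoder
lstm = 4 * latent_dim * (latent_dim + latent_dim) + 8 * latent_dim # one CuDNNLSTM layer
dense = latent_dim * vocab_size + vocab_size                       # softmax layer + bias
total = embedding + 2 * lstm + dense
print(f"{total:,} trainable parameters")   # roughly 3 million
```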
Finally, let’s train it on your data.
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([encoder_input_data, decoder_input_data],
          decoder_target_data,
          batch_size=batch_size,
          epochs=100)
# Save model after training
model.save('/content/gdrive/My Drive/Projects/weights/mario_chatbot_v1.h5')
## Chatbot phase 4: playing with our model
In order to sample things from our now-trained model, we need to modify the decoder a bit. Don't worry: we're just creating a different interface for it; the trained weights stay.
We need to get rid of the teacher forcing part now, and ‘release’ the state_inputs so that we can feed our own. We also want the decoder to just create one token at a time, so that we can feed in the token it just created as input in the next iteration.
# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
embedded_dec_inputs = embedding(decoder_inputs)
decoder_outputs, state_h, state_c = decoder_lstm(
embedded_dec_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
Now let's define the function that runs this new decoder interface and creates an output sequence given an input sequence. This function is the tool we have to 'talk' with our chatbot. As long as you use a sequence of words that are present in the word_to_ix dictionary, it will reply.
def decode_sequence(input_seq):
    states_value = encoder_model.predict(input_seq)
    target_seq = np.zeros((1, 1))
    # Populate the first position of the target sequence with the START token.
    target_seq[0, 0] = word_to_ix['\t']
    # Sampling loop: generate one token per iteration, up to seq_length tokens
    decoded_sentence = ''
    for _ in range(seq_length):
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)
        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_word = ix_to_word[sampled_token_index]
        decoded_sentence += " " + sampled_word
        # Update the target sequence
        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index
        # Update states
        states_value = [h, c]
        if sampled_word == '\r':
            break
    return decoded_sentence
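The `np.argmax` above is greedy decoding. A hedged sketch of temperature sampling as an alternative — lower temperatures stay close to argmax, higher ones add randomness (the function name is my own, not part of the notebook):

```python
import numpy as np

def sample_token(probs, temperature=1.0, rng=None):
    # Rescale the softmax output: T < 1 sharpens it, T > 1 flattens it.
    rng = rng or np.random.default_rng()
    logits = np.log(np.asarray(probs) + 1e-9) / temperature
    p = np.exp(logits - logits.max())
    return rng.choice(len(p), p=p / p.sum())

sample_token([0.1, 0.7, 0.2], temperature=0.01)   # almost always 1 (the argmax)
```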
To create the input sequence, you need to turn a string into a vector of numbers. Let’s create a helper function for this.
def create_input_sequence(line):
    input_seq = np.zeros((1, seq_length), dtype=int)
    splitted_line = re.findall(r'(?:[\p{L}]+)|\S', line, re.UNICODE)
    # debug: show how the input was tokenized
    print(splitted_line)
    for i in range(seq_length):
        if i < len(splitted_line):
            input_seq[0][i] = word_to_ix[splitted_line[i]]
        else:
            input_seq[0][i] = word_to_ix['\v']
    return input_seq
Finally, you can run the following to get an output from your bot:
input = "hola"
print(decode_sequence(create_input_sequence(input)))
## Thoughts
By running 100 epochs, we are clearly overfitting our data. Also notice that I did not apply any validation or regularization to the model. By overfitting, we can check whether we did things correctly: if you feed the model an input sequence present in your dataset, it should reply with the corresponding target sequence. If it doesn't, something went wrong and you know you can start bug hunting. Once this is out of the way, we can start playing with regularization. Finding a good balance is crucial, since the answers the model gives you can change drastically depending on it.
You can also play around with the probability distribution. For example, you can take a random choice with output_tokens[0, -1, :] as probabilities, instead of taking the argmax. This will add some randomness to the model. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36494874954223633, "perplexity": 2508.6088046645846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998605.33/warc/CC-MAIN-20190618023245-20190618045245-00270.warc.gz"}
https://wrf.ecse.rpi.edu/pmwiki/pmwiki.php/ComputerGraphicsFall2007/Homework1 | # Due Thu Sep 6
1. Find the eigenvalues and eigenvectors of the following matrix. Suggested tools include Matlab, Maple, or doing it by hand.
{$$\left(\begin{array}{cc} .6 & .8 \\ -.8 & .6 \end{array}\right)$$}
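If you'd rather cross-check a hand computation numerically, NumPy works as well as the suggested tools (this is a verification aid, not the intended method):

```python
import numpy as np

A = np.array([[0.6, 0.8],
              [-0.8, 0.6]])
vals, vecs = np.linalg.eig(A)
print(vals)   # the complex conjugate pair 0.6 + 0.8j and 0.6 - 0.8j
```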
2. Consider these 3-D vectors: A=(2,1,3), B=(4,5,6), C=(7,9,8). Compute:
1. A · (B × C)
2. (A × B) · C
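A numerical cross-check with NumPy (both scalar triple products equal the determinant of the matrix with rows A, B, C, so they must agree):

```python
import numpy as np

A, B, C = np.array([2, 1, 3]), np.array([4, 5, 6]), np.array([7, 9, 8])
a_dot_bxc = A @ np.cross(B, C)    # A . (B x C)
axb_dot_c = np.cross(A, B) @ C    # (A x B) . C
print(a_dot_bxc, axb_dot_c)       # both are -15
```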
3. (This is another test of your linear algebra knowledge. Feel free to refer to books to find the correct formulae.)
Suppose that we have a plane in 3-D thru the points A(1,0,1), B(1,1,0), and C(0,1,1).
1. What is its equation, in the form ax+by+cz+d=0?
2. Consider the line L thru the points O(0,0,0) and P(1,1,1). Where does this line intersect the plane?
4. This is a test of whether you know enough C for this course. The following code will copy string s to array t, provided that a few erroneous lines are corrected. What are the corrections and proper initializations?
char *s="Hello!";
char t[6];
char p, q;
p=s;
q=t;
for (;*q++ = *p++;);
5. If we're going to be learning complicated graphics in this course, it behooves us still to be able to do the simple things. So, this exercise is to plot the ship NCC1701.
Here is a compressed file of 3,958 triangles defining the USS Enterprise. It looks like the image on the right when uncompressed with gunzip:
1.431000 0.505000 0.843000
1.572000 0.505000 0.801000
1.287000 0.505000 0.802000
1.431000 0.505000 0.843000
1.572000 0.505000 0.801000
1.595000 0.542000 0.794000
1.263000 0.542000 0.795000
1.572000 0.505000 0.801000
1.572000 0.505000 0.801000
1.263000 0.542000 0.795000
1.287000 0.505000 0.802000
1.572000 0.505000 0.801000
... and similarly for 23730 more lines
Each line of the file represents one vertex in the form: (x, y, z). Four lines make one triangle; the first vertex is repeated. Two blank lines separate each triangle.
You may want to cut off a piece of the file for testing. This is how to do it in Linux. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6260554194450378, "perplexity": 1012.4603502674288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583874494.65/warc/CC-MAIN-20190122202547-20190122224547-00277.warc.gz"} |
http://math.stackexchange.com/questions/178833/karatsuba-multiplication | # Karatsuba Multiplication
Karatsuba's equation to reduce the amount of time it takes in brute force multiplication is as follows (I believe this is a divide-and-conquer algorithm):
$$x y = 10^n(ac) + 10^{n/2}(ad + bc) + bd$$
My question is this. Where did the $10^{n/2}$ and $10^n$ come from?
Thanks
Maybe there is some secret code in use in that area; for all those who don't know it, it would be helpful if you could reveal the relations between $x, y, a, b, c, d$ and $n$ to us. – user20266 Aug 4 '12 at 16:28
## 2 Answers
Karatsuba multiplication works like this:
Let $x = a10^n + b$ and $y = c10^n + d$, and $a,b,c,d < 10^n$. Then to find the product $xy$, one notes that $xy = ac10^{2n} + (ad + bc)10^n + bd$. The advantage of the algorithm is that you can just calculate the products $ac, ad, bc$ and $bd$, all of which have much smaller sizes than the original (for large $n$).
You'll note that I use $2n$ and $n$ instead of $n$ and $n/2$, but the idea is the same.
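To make the recursion concrete, here is a minimal Python sketch of the algorithm just described (base 10, splitting at half the digits; real implementations work in binary). The point of the trick is that ad + bc comes from the single product (a+b)(c+d) minus ac and bd, so each step costs three multiplications instead of four:

```python
def karatsuba(x, y, base=10):
    if x < base or y < base:               # single digits: multiply directly
        return x * y
    n = max(len(str(x)), len(str(y))) // 2  # split at half the digits
    a, b = divmod(x, base ** n)             # x = a*base^n + b
    c, d = divmod(y, base ** n)             # y = c*base^n + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # One extra multiplication recovers ad + bc:
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * base ** (2 * n) + ad_plus_bc * base ** n + bd

print(karatsuba(1234, 5678))   # 7006652
```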
What does n represent here? I understand that a, b is x split into two numbers and c,d is y split into two numbers. Please correct me if I'm wrong – The Internet Aug 4 '12 at 16:37
@David Johnson: $n$ is just some number, which may be different for each multiplication. For example, if $x = 123456$ and $y = 654321$, maybe I would use $n=3$ to write this as $123\cdot 10^3 + 456$ and $654\cdot 10^3 + 321$. The idea of using $n$ to be half the number of digits as the larger of the two numbers is the general use (and it's usually done in binary). – mixedmath Aug 4 '12 at 16:44
@mixedmath : Perfect answer, I was going to type what you have added in the comment, and you saved my effort of typing again , Thanks . But to add something, David, this types of strategies fall under something called " [Divide and conquer strategies ](en.wikipedia.org/wiki/Divide_and_conquer_algorithm) , where you divide the initial problem into pieces and later on assemble them into a the original problem. Rest of the thing is neatly explained in mixedmath's version. – Iyengar Aug 4 '12 at 16:48
@mixedmath Ah I see so n is just the number of integers of (a,b) or (c,d), so it will vary by input size. In your case 3 makes sense since you're splitting 123456. Is this thinking correct? – The Internet Aug 4 '12 at 16:51
@mixedmath ohhh so you just foil it out into $xy = ac10^{2n}$ etc.. – The Internet Aug 4 '12 at 16:52
n represents the number of digits of the factors that are being multiplied.
For example:
1234 x 5678 = 7006652
So 1234 has 4 digits, as does 5678. Then we say n = 4 because each factor has 4 digits.
Now apply the equation and see for yourself:
1234*5678 = 10^4(12*56) + 10^2(12*78 + 34*56) + 34*78 = 7006652
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90400630235672, "perplexity": 455.9891139149505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464253.80/warc/CC-MAIN-20151124205424-00127-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/97113/uniform-convergence-of-sum-frac1nx2n | # Uniform convergence of $\sum \frac{1}{n(x^2+n)}$
Find set on which the series $\sum \frac{1}{n(x^2+n)}$ converge uniform. My solution is as follows $|1/(x^2+n)|≤1$ so that $|1/n(x^2+n)|\leq1/n$. Since $\sum1/n$ converges to zero as n goes to infinity , then by Weierstrass test the series converges uniform. Am I in the right track?, I don’t know how I can get values of $x$ for which the given series is uniform convergent. Thanks for any kind of help.
The harmonic series $\sum 1/n$ is the classic example of a series that doesn't converge, despite its terms approaching zero. – Dylan Moreland Jan 7 '12 at 4:56
I find it hard to believe that $\sum\frac{x^2+n}{n}$ is intended. – André Nicolas Jan 7 '12 at 4:59
I agree with André. Did you mean $\sum\frac{1}{n(x^2 + n)}$? About your reasoning: you have $x^2 + n \geq 1$, and so $(x^2 + n)/n \geq 1/n$, which is the opposite of what you have. What does that suggest? – Dylan Moreland Jan 7 '12 at 5:04
@Paul Don't be! I think what you typeset was the only available interpretation of the symbols that were there. – Dylan Moreland Jan 7 '12 at 5:08
@neemy: Please try to write formulas correctly and unambiguously. Your second version was a little better, but "$1/a(b)$" does not make it clear whether you want $\frac{1}{ab}$ or $\frac{1}{a}\cdot b$. Please either use LaTeX fractions, e.g. $\frac{1}{n(x^2+n)}$ to render $\frac{1}{n(x^2+n)}$, or use parentheses correctly, e.g. 1/(n(x^2+n)), until you get the hang of LaTeX. – Jonas Meyer Jan 7 '12 at 5:50
Hint: The estimate ${1\over x^2+n}\le 1$ is "too much"; you are throwing away a term that actually helps you (the $n$). Estimate with ${1\over x^2+n}\le {1\over n}$ instead. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704257845878601, "perplexity": 409.27743263915244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510258086.28/warc/CC-MAIN-20140728011738-00113-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/220415-algebraic-fraction.html | 1. ## Algebraic fraction
Hi,
We have to express $\displaystyle X$ in terms of$\displaystyle Y$.
The correct answer is $\displaystyle x = \frac{5y - 2}{y + 3}$
Multiplying both sides by $\displaystyle (5 - X)$ we get
$\displaystyle 5y - xy = 3x + 2$; $\displaystyle 5y - 2 = 3x + xy$; $\displaystyle \frac{5y - 2}{3} = x + xy$
I am going to stop there as I am on the wrong track. I would appreciate your help in finding the solution.
Cheers,
Sean.
2. ## Re: Algebraic fraction
Originally Posted by Seaniboy
Hi,
We have to express $\displaystyle X$ in terms of $\displaystyle Y$.
The correct answer is $\displaystyle x = \frac{5y - 2}{y + 3}$
Multiplying both sides by $\displaystyle (5 - X)$ we get
$\displaystyle 5y - xy = 3x + 2$; $\displaystyle 5y - 2 = 3x + xy$; $\displaystyle \frac{5y - 2}{3} = x + xy$
I am going to stop there as I am on the wrong track. I would appreciate your help in finding the solution.
Cheers,
Sean.
What's the original equation?
3. ## Re: Algebraic fraction
From your other posts I assume you started with $\displaystyle y = \frac {3x+2}{5-x}$, and want to rearrange to make x the subject. By the way - to avoid confusion it would have been better to add your attempt to your previous thread rather than start a new one - that way we wouldn't have to guess at what the original problem was.
Originally Posted by Seaniboy
Multiplying both sides by $\displaystyle (5 - X)$ we get
$\displaystyle 5y - xy = 3x + 2$; $\displaystyle 5y - 2 = 3x + xy$;
So far so good
Originally Posted by Seaniboy
$\displaystyle \frac{5y - 2}{3} = x + xy$
No - if you divide both sides by 3 the right hand side becomes $\displaystyle \frac {3x +xy} 3 = x + \frac {xy}3$. This is not the right approach. Instead from the previous expression you can gather the x terms:
$\displaystyle 5y-2 = 3x + xy = x(3 + y)$
Now divide both sides by $\displaystyle (3+y)$ and you're done.
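As a quick sanity check of the rearrangement (a sketch, assuming the original equation was $\displaystyle y = \frac{3x+2}{5-x}$ as deduced above), one can verify numerically that $\displaystyle x = \frac{5y-2}{y+3}$ inverts it:

```python
# Verify that x = (5y - 2)/(y + 3) inverts y = (3x + 2)/(5 - x)
# for a handful of sample values (avoiding the poles x = 5 and y = -3).

def forward(x):
    return (3 * x + 2) / (5 - x)

def backward(y):
    return (5 * y - 2) / (y + 3)

if __name__ == "__main__":
    for x in (-2.0, 0.0, 1.5, 4.0):
        y = forward(x)
        assert abs(backward(y) - x) < 1e-12
    print("rearrangement verified")
```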
4. ## Re: Algebraic fraction
Thanks. Yes, I have regularly given myself less than a pass grade in the past.
5. ## Re: Algebraic fraction
Hi,
Apologies for the confusion and well deduced.
Thanks once again for your help.
Cheers,
Sean.
6. ## Re: Algebraic fraction
Can anyone help me with this type of sum? I can't get it.
http://timothyandrewbarber.blogspot.com/2011/09/latex-math-vector-arrow.html | ## Friday, September 23, 2011
### LaTeX Math - Vector arrow
The code for creating a vector arrow over a symbol in LaTeX math mode is
\vec{}
so that
\vec{A}
produces $$\vec{A}$$.
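For a complete, compilable example (a minimal sketch; `\vec` works in any standard LaTeX math mode, inline or display):

```latex
\documentclass{article}
\begin{document}
Inline: the vector $\vec{A}$ points from the origin to $(1,2)$.
In display mode:
\[
  \vec{C} = \vec{A} + \vec{B}
\]
\end{document}
```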
https://www.shaalaa.com/question-bank-solutions/which-statement-given-below-correctly-describes-magnetic-field-near-long-straight-current-carrying-conductor-magnetic-field-due-to-current-in-a-loop-or-circular-coil_51610 | # Which of the Statements Given Below Correctly Describes the Magnetic Field Near a Long, Straight Current Carrying Conductor? - Science and Technology 1
MCQ
Which of the statements given below correctly describes the magnetic field near a long, straight current carrying conductor?
#### Options
• The magnetic lines of force are in a plane, perpendicular to the conductor in the form of straight lines.
• The magnetic lines of force are parallel to the conductor on all sides of the conductor.
• The magnetic lines of force are perpendicular to the conductor, going radially outward.
• The magnetic lines of force are in concentric circles with the wire as the center, in a plane perpendicular to the conductor.
#### Solution
The correct statement describing the magnetic field near a long, straight current carrying conductor is:
The magnetic lines of force are in concentric circles with the wire as the center, in a plane perpendicular to the conductor.
Concept: Magnetic Field Due to Current in a Loop (Or Circular Coil)
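The circular geometry can be illustrated numerically. Below is a stdlib-only sketch that discretizes the Biot–Savart law for a long straight wire along the z-axis and checks that the resulting field is tangent to a circle around the wire (perpendicular to the radial direction) and falls off like $1/r$:

```python
# Discretized Biot-Savart law for a long straight wire along the z-axis
# carrying current in the +z direction: B is proportional to
# sum of dl x r / |r|^3 over small wire elements dl.

def field_at(point, half_len=200.0, steps=20001):
    bx = by = 0.0
    dz = 2.0 * half_len / (steps - 1)
    for i in range(steps):
        z = -half_len + i * dz
        rx, ry, rz = point[0], point[1], point[2] - z
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        # dl = (0, 0, dz), so dl x r = (-ry*dz, rx*dz, 0): no z-component.
        bx += -ry * dz / r3
        by += rx * dz / r3
    return (bx, by, 0.0)

if __name__ == "__main__":
    b1 = field_at((1.0, 0.0, 0.0))
    b2 = field_at((2.0, 0.0, 0.0))
    # At (1, 0, 0) the field points along +y: tangent to a circle around
    # the wire, with no radial and no axial component.
    print(b1[0] == 0.0 and b1[2] == 0.0 and b1[1] > 0.0)
    # Magnitude falls off like 1/r, as for concentric circular field lines.
    print(abs(b2[1] * 2.0 / b1[1] - 1.0) < 1e-3)
```

This matches the correct option: field lines form concentric circles, in a plane perpendicular to the conductor, centered on the wire.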
https://rd.springer.com/article/10.1007%2Fs10955-019-02334-z | # Rigorous Results for the Distribution of Money on Connected Graphs (Models with Debts)
• Nicolas Lanchier
• Stephanie Reed
Article
## Abstract
In this paper, we continue our analysis of spatial versions of agent-based models for the dynamics of money that have been introduced in the statistical physics literature, focusing on two models with debts. Both models consist of systems of economical agents located on a finite connected graph representing a social network. Each agent is characterized by the number of coins she has, which can be negative in case she is in debt, and each monetary transaction consists in one coin moving from one agent to one of her neighbors. In the first model, that we name the model with individual debt limit, the agents are allowed to individually borrow up to a fixed number of coins. In the second model, that we name the model with collective debt limit, agents can borrow coins from a central bank as long as the bank is not empty, with reimbursements occurring each time an agent in debt receives a coin. Based on numerical simulations of the models on complete graphs, it was conjectured that, in the large population/temperature limits, the distribution of money converges to a shifted exponential distribution for the model with individual debt limit, and to an asymmetric Laplace distribution for the model with collective debt limit. In this paper, we prove exact formulas for the distribution of money that are valid for all possible social networks. Taking the large population/temperature limits in the formula found for the model with individual debt limit, we prove convergence to the shifted exponential distribution, thus establishing the first conjecture. Simplifying the formula found for the model with collective debt limit is more complicated, but using a computer to plot this formula shows an almost perfect fit with the Laplace distribution, which strongly supports the second conjecture.
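The flavor of these results can be reproduced with a toy simulation. Below is a sketch (not the authors' code) of the basic exchange dynamics without debts on a complete graph: at each step a random agent with at least one coin gives one coin to another uniformly chosen agent. The long-run distribution of money is approximately exponential (the Boltzmann–Gibbs distribution of Lanchier 2017), the no-debt baseline that the two models with debts generalize:

```python
import random
from collections import Counter

def simulate(num_agents=500, coins_per_agent=5, steps=1_000_000, seed=42):
    """Toy money-exchange dynamics on a complete graph, no debts allowed:
    each step, a random agent gives one coin (if she has one) to another
    uniformly chosen agent."""
    rng = random.Random(seed)
    money = [coins_per_agent] * num_agents
    for _ in range(steps):
        giver = rng.randrange(num_agents)
        taker = rng.randrange(num_agents)
        if giver != taker and money[giver] > 0:
            money[giver] -= 1
            money[taker] += 1
    return money

if __name__ == "__main__":
    money = simulate()
    counts = Counter(money)
    # The empirical distribution is roughly geometric/exponential:
    # the counts decrease with m at rate about T/(T+1), T = average money.
    for m in range(0, 16, 3):
        print(m, counts.get(m, 0))
```

The total number of coins is conserved, and the histogram of `money` decays roughly exponentially; the models with debts shift or reshape this baseline (shifted exponential, asymmetric Laplace).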
## Keywords
Interacting particle systems · Econophysics · Distribution of money · Models with debts
## Mathematics Subject Classification
Primary 60K35 · 91B72
## Notes
### Acknowledgements
The authors would like to thank three anonymous referees for their comments and suggestions that helped improve the preliminary version of this work.
## References
1. Chakraborti, A., Chakrabarti, B.K.: Statistical mechanics of money: how saving propensity affects its distribution. Eur. Phys. J. B 17, 167–170 (2000)
2. Chakrabarti, B.K., Chakraborti, A., Chakravarty, S.R., Chatterjee, A.: Econophysics of Income and Wealth Distributions. Cambridge University Press, Cambridge (2013)
3. Chatterjee, A.: Kinetic models for wealth exchange on directed networks. Eur. Phys. J. B 67, 593–598 (2009)
4. Chatterjee, A., Chakrabarti, B.K., Manna, S.S.: Pareto law in a kinetic model of market with random saving propensity. Physica A 335, 155–163 (2004)
5. Cockshott, W.P., Cottrell, A.: Probabilistic political economy and endogenous money. In: Probabilistic Political Economy Conference, 14–17 July 2008, London, UK (2008)
6. Dragulescu, A.A., Yakovenko, V.M.: Statistical mechanics of money. Eur. Phys. J. B 17, 723–729 (2000)
7. Heinsalu, E., Patriarca, M.: Kinetic models of immediate exchange. Eur. Phys. J. B 87, 170–179 (2014)
8. Katriel, G.: The immediate exchange model: an analytical investigation. Eur. Phys. J. B 88, 19–24 (2015)
9. Lanchier, N.: Stochastic Modeling. Universitext. Springer, Cham (2017)
10. Lanchier, N.: Rigorous proof of the Boltzmann-Gibbs distribution of money on connected graphs. J. Stat. Phys. 167, 160–172 (2017)
11. Lanchier, N., Reed, S.: Rigorous results for the distribution of money on connected graphs. J. Stat. Phys. 171, 727–743 (2018)
12. Patriarca, M., Chakraborti, A., Kaski, K.: Statistical model with standard $$\Gamma$$ distribution. Phys. Rev. E 70, 016104 (2004)
13. Xi, N., Ding, N., Wang, Y.: How required reserve ratio affects distribution and velocity of money. Physica A 357, 543–555 (2005)
14. Yakovenko, V.M., Barkley Rosser, J.J.: Colloquium: statistical mechanics of money, wealth, and income. Rev. Mod. Phys. 81, 1703–1725 (2009)
http://support.sas.com/rnd/app/stat/examples/BayesLasso/lasso.htm | # SAS/STAT Examples
## The Bayesian LASSO
## Overview
The least absolute shrinkage and selection operator (LASSO) was developed by Tibshirani (1996) as an alternative to the ordinary least squares (OLS) method with two objectives in mind. The first was to improve prediction accuracy, and the second was to improve model interpretation by determining a smaller subset of regressors that exhibit the strongest effects. This example presents a fully Bayesian interpretation and implementation of the LASSO that provides estimates for the regression parameters and their variances and provides Bayesian credible intervals for the regression parameters that can guide variable selection.
## Analysis
The LASSO is commonly used to estimate the parameters in the linear regression model

$\mathbf{y} = \mu \mathbf{1}_n + \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$

where $\mathbf{y}$ is the $n \times 1$ vector of responses, $\mu$ is the overall mean, $\mathbf{X}$ is the $n \times p$ matrix of standardized regressors, and $\boldsymbol{\epsilon}$ is the vector of independent and identically distributed normal errors with mean 0 and unknown variance $\sigma^2$. The LASSO estimates of Tibshirani (1996) are the solution to the minimization problem

$\min_{\boldsymbol{\beta}} \, (\tilde{\mathbf{y}} - \mathbf{X}\boldsymbol{\beta})^{T}(\tilde{\mathbf{y}} - \mathbf{X}\boldsymbol{\beta}) + \lambda \sum_{j=1}^{p} |\beta_j|$

for some $\lambda \ge 0$, where $\tilde{\mathbf{y}} = \mathbf{y} - \bar{y}\,\mathbf{1}_n$.

Tibshirani (1996) suggested that the LASSO estimates can be interpreted as posterior mode estimates when the regression parameters have independent and identical Laplace priors. Park and Casella (2008) consider a fully Bayesian analysis by using a conditional Laplace prior specification of the form

$\pi(\boldsymbol{\beta} \mid \sigma^2) = \prod_{j=1}^{p} \frac{\lambda}{2\sqrt{\sigma^2}} \, e^{-\lambda |\beta_j| / \sqrt{\sigma^2}}$

and the noninformative scale-invariant marginal prior $\pi(\sigma^2) \propto 1/\sigma^2$. Conditioning on $\sigma^2$ is important because it guarantees a unimodal full posterior. Park and Casella (2008) also note that any inverse-gamma prior for $\sigma^2$ maintains conjugacy.

Exploiting the fact that the Laplace distribution can be represented as a scale mixture of normal densities with an exponential mixing density, Park and Casella (2008) propose the following hierarchical representation of the full model:

$\mathbf{y} \mid \mu, \mathbf{X}, \boldsymbol{\beta}, \sigma^2 \sim \mathrm{N}_n(\mu \mathbf{1}_n + \mathbf{X}\boldsymbol{\beta},\, \sigma^2 \mathbf{I}_n)$

$\boldsymbol{\beta} \mid \sigma^2, \tau_1^2, \ldots, \tau_p^2 \sim \mathrm{N}_p(\mathbf{0}_p,\, \sigma^2 \mathbf{D}_{\tau})$ with $\mathbf{D}_{\tau} = \mathrm{diag}(\tau_1^2, \ldots, \tau_p^2)$

$\tau_1^2, \ldots, \tau_p^2 \overset{iid}{\sim} \mathrm{Exp}(\lambda^2/2), \qquad \sigma^2 \sim \pi(\sigma^2) \propto 1/\sigma^2$

The parameter $\mu$ can be given an independent, flat prior. After you integrate out $\tau_1^2, \ldots, \tau_p^2$, the conditional prior on $\boldsymbol{\beta}$ has the desired conditional Laplace distribution.

The Bayesian LASSO parameter $\lambda$ can be chosen by using marginal maximum likelihood or an appropriate hyperprior. The example in the next section demonstrates the latter and considers, as suggested by Park and Casella (2008), the class of gamma priors on $\lambda^2$,

$\pi(\lambda^2) = \frac{\delta^{r}}{\Gamma(r)} \, (\lambda^2)^{r-1} e^{-\delta \lambda^2}, \quad \lambda^2 > 0,$ with $r > 0$ and $\delta > 0$.
## Example
This example from Park and Casella (2008) fits a Bayesian LASSO model to the diabetes data from Efron et al. (2004). In the original study, statisticians were asked to construct a model that predicted the response variable, Y, a quantitative measure of disease progression one year after baseline, from 10 covariates: Age, Sex, BMI, MAP, TC, LDL, HDL, TCH, LTG, and GLU. It was hoped that the model would produce accurate baseline predictions of response for future patients and that the form of the model would suggest which covariates were important factors in disease progression. The following SAS statements read the data and create the SAS data set Diabetes:
data diabetes;
input age sex bmi map tc ldl hdl tch ltg glu y;
sex=ifn(sex=2,1,0);
datalines;
59 2 32.1 101.00 157 93.2 38.0 4.00 4.8598 87.000 151
48 1 21.6 87.00 183 103.2 70.0 3.00 3.8918 69.000 75
72 2 30.5 93.00 156 93.6 41.0 4.00 4.6728 85.000 141
... more lines ...
60 2 24.9 99.67 162 106.6 43.0 3.77 4.1271 95.000 132
36 1 30.0 95.00 201 125.2 42.0 4.79 5.1299 85.000 220
36 1 19.6 71.00 250 133.2 97.0 3.00 4.5951 92.000 57
;
Before specifying the model in the MCMC procedure, you need to standardize the model’s regressors, excluding the indicator variable Sex. You can use the STDIZE procedure as follows to perform this task:
proc stdize data=diabetes out=std_diabetes;
var age bmi map tc ldl hdl tch ltg glu;
run;
The following statements specify the Bayesian LASSO in PROC MCMC:
ods graphics on;
ods output postintervals=intervals;
proc mcmc data=std_diabetes seed=45678 nmc=50000 propcov=quanew
monitor=(b0 beta1-beta10 tau1-tau10 sigma2 lasso)
outpost=posterior;
array D[10,10];
array beta[10] beta1-beta10;
array mu0[10];
array data[10] age sex bmi map tc ldl hdl tch ltg glu;
begincnst;
call identity(D);
call zeromatrix(mu0);
endcnst;
beginnodata;
lasso=sqrt(lambda);
b=lambda/2;
%macro loop;
%do k = 1 %to 10;
tau&k = exp(omega&k);
D[&k,&k]=sigma2*tau&k;
%end;
%mend loop;
%loop;
endnodata;
call mult(beta, data,xb);
parms lambda ;
prior lambda ~ gamma(1,scale=.1);
parms omega1-omega10;
prior omega: ~ expexpon(iscale=b);
parms sigma2 1;
prior sigma2 ~ igamma(shape = .1, iscale = .1);
parms b0 0;
prior b0 ~ general(0);
parms beta;
prior beta ~ mvn(mu0,D);
model y ~ normal(b0 + xb,var=sigma2);
run;
The ODS OUTPUT statement saves the posterior credible intervals in the SAS data set Intervals. The NMC= option in the PROC MCMC statement requests 50,000 MCMC iterations, excluding the burn-in iterations. A large sample is used because the posterior samples are highly autocorrelated. The PROPCOV= option in the PROC MCMC statement requests that the quasi-Newton method be used in constructing the initial covariance matrix for the Metropolis-Hastings algorithm. The OUTPOST= option saves the posterior samples in the data set Posterior.
The next four statements create arrays that are used in the model. The array D is the covariance matrix for the regression parameters Beta1–Beta10. The array Beta is the vector of the regression parameters Beta1–Beta10. The array Mu0 is the mean vector for the prior distribution of the regression parameters Beta1–Beta10. The array Data is the matrix of regressors, excluding the intercept.
The BEGINCNST and ENDCNST statements define a statement block within which PROC MCMC processes the programming statements only during the setup stage of the simulation. You can use the BEGINCNST and ENDCNST statement block to initialize the matrices D and Mu0. D is initially set to an identity matrix, and Mu0 is initialized as a zero vector.
The BEGINNODATA and ENDNODATA statements define a block within which PROC MCMC processes the programming statements without stepping through the entire data set. The programming statements are executed only twice: at the first and last observations of the data set. Within this statement block, the parameters Lasso and b are defined. The macro %LOOP repopulates the matrix D. The purpose of the parameters Omega1–Omega10 and their relationship with the parameters Tau1–Tau10 is explained later.
The next statement uses the MULT CALL routine to define the matrix XB, which contains the product of the regressors and the regression parameters Beta1–Beta10. That is, it contains the linear predictor, excluding the intercept.
The following block of statements declares the model parameters and assigns prior distributions to them. The parameter Lambda, which represents $\lambda^2$, is specified to have a gamma distribution. The parameters Omega1–Omega10 are specified to have exponential-exponential distributions. The parameters $\tau_1^2, \ldots, \tau_{10}^2$ have exponential distributions, but modeling these parameters directly can cause convergence problems. Instead, the parameters Omega1–Omega10 are modeled directly, and within the macro %LOOP the parameters Tau1–Tau10, which represent $\tau_1^2, \ldots, \tau_{10}^2$, are defined as being the exponential of Omega1–Omega10, respectively. The parameter Sigma2, which represents $\sigma^2$, is specified to have an inverse-gamma distribution. The parameter B0, which represents $\mu$, is specified to have an improper uniform distribution. The parameter vector Beta, which represents $\boldsymbol{\beta}$, is specified to have a multivariate normal distribution with mean equal to 0 and variance matrix equal to D.
Finally, the MODEL statement specifies that the response variable Y have a normal distribution.
Output 1 shows that the Monte Carlo standard errors (MCSE) of each parameter are small relative to the posterior standard deviations (SD). A small MCSE/SD ratio indicates that the Markov chain has stabilized and that the mean estimates do not vary much over time.
Output 1: Monte Carlo Standard Errors
The MCMC Procedure
Monte Carlo Standard Errors
Parameter MCSE Standard
Deviation
MCSE/SD
b0 0.1105 3.7685 0.0293
beta1 0.0769 2.8187 0.0273
beta2 0.2072 5.8461 0.0354
beta3 0.1029 3.1602 0.0326
beta4 0.1093 3.3479 0.0326
beta5 0.3238 11.1592 0.0290
beta6 0.3241 11.4013 0.0284
beta7 0.1018 3.5868 0.0284
beta8 0.2234 7.4085 0.0302
beta9 0.1765 6.2293 0.0283
beta10 0.0992 3.3421 0.0297
tau1 0.0875 3.7978 0.0230
tau2 0.0925 4.0235 0.0230
tau3 0.1082 3.9813 0.0272
tau4 0.1124 3.8682 0.0291
tau5 0.0934 3.7605 0.0248
tau6 0.1107 3.8139 0.0290
tau7 0.0922 3.6053 0.0256
tau8 0.1004 3.6653 0.0274
tau9 0.1105 4.1032 0.0269
tau10 0.0969 3.7154 0.0261
sigma2 1.1459 202.7 0.00565
lasso 0.00400 0.1500 0.0266
Output 2 shows the Effective Sample Sizes table. The autocorrelation times for the parameters range from 1.59 to 62.83, and most of the efficiency rates are low. These results account for the relatively small effective sample sizes, given a nominal sample size of 50,000.
Output 2: Effective Sample Sizes
Effective Sample Sizes
Parameter ESS Autocorrelation
Time
Efficiency
b0 1164.0 42.9555 0.0233
beta1 1343.4 37.2178 0.0269
beta2 795.7 62.8345 0.0159
beta3 942.3 53.0600 0.0188
beta4 938.5 53.2742 0.0188
beta5 1188.0 42.0892 0.0238
beta6 1237.5 40.4033 0.0248
beta7 1241.1 40.2856 0.0248
beta8 1100.0 45.4557 0.0220
beta9 1245.4 40.1475 0.0249
beta10 1134.0 44.0920 0.0227
tau1 1883.1 26.5514 0.0377
tau2 1892.7 26.4173 0.0379
tau3 1353.3 36.9456 0.0271
tau4 1183.6 42.2445 0.0237
tau5 1619.6 30.8725 0.0324
tau6 1186.1 42.1539 0.0237
tau7 1529.5 32.6909 0.0306
tau8 1333.9 37.4850 0.0267
tau9 1378.2 36.2783 0.0276
tau10 1468.9 34.0388 0.0294
sigma2 31287.3 1.5981 0.6257
lasso 1409.3 35.4777 0.0282
The following SAS statements use the OUTPOST data set Posterior and the ODS OUTPUT data set Intervals to generate a table of the Bayesian LASSO parameter estimates, which are the modes of the posterior samples for B0 and Beta1–Beta10, and their respective 95% HPD intervals:
proc means data=posterior mode;
var b0 beta1-beta10;
output out=parameters(drop=_TYPE_ _FREQ_) mode(b0 beta1-beta10)=b0 beta1-beta10;
run;
proc transpose data=parameters out=parameters;
run;
data parameters;
length parameter $ 6;
set parameters(rename=(col1=mode _NAME_=Parameter));
label Parameter=;
index=_N_;
run;
proc sort data=parameters out=parameters;
by parameter;
run;
proc sort data=intervals out=intervals;
by parameter;
run;
data parameters(where=(~missing(mode)));
merge parameters intervals;
by parameter;
label parameter="Parameter" mode="Mode";
run;
proc sort data=parameters out=parameters;
by index;
run;
proc print data=parameters noobs label;
var parameter mode hpdlower hpdupper;
run;
Output 3 shows that the HPD intervals for the parameters Beta1, Beta5, Beta6, Beta8, and Beta10 all contain 0. Unlike what happens in the frequentist version of the LASSO, regression parameters are not set to 0, so the inclusion of 0 in the HPD interval is the only indication that a variable is a candidate for exclusion from the model. Based on this criterion, the variables Age, TC, LDL, TCH, and GLU are the leading candidates for exclusion from the model.
Output 3: Bayesian LASSO Parameter Estimates and 95% HPD Intervals
Parameter Mode HPDLower HPDUpper
b0 162.2 155.2 170.1
beta1 -1.4406 -5.8311 5.2856
beta2 -21.2417 -33.7811 -11.0211
beta3 26.5303 18.3849 30.6794
beta4 13.6935 9.4601 22.4973
beta5 -10.1466 -33.9135 9.7807
beta6 6.9991 -17.1198 27.7032
beta7 -13.3526 -17.2501 -3.3104
beta8 1.7061 -14.5072 14.3417
beta9 25.8343 15.8578 40.0418
beta10 0.3158 -2.5716 10.4408
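For comparison with the Bayesian point estimates above, the frequentist LASSO can be sketched in a few lines of cyclic coordinate descent with soft-thresholding. This is a toy implementation on synthetic data (not the diabetes data and not any SAS procedure); unlike the Bayesian fit, it sets small coefficients exactly to zero:

```python
import random

def soft_threshold(z, gamma):
    """Soft-thresholding operator: the one-dimensional lasso solution."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, iters=100):
    """Cyclic coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1
    (no intercept; columns of X assumed roughly standardized)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residuals with feature j excluded.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

if __name__ == "__main__":
    rng = random.Random(0)
    n, p = 200, 4
    X = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
    # True model uses only the first two features; the rest are noise.
    y = [2.0 * row[0] - 3.0 * row[1] + rng.gauss(0.0, 0.5) for row in X]
    beta = lasso_cd(X, y, lam=30.0)
    # The irrelevant coefficients are shrunk to (or very near) zero,
    # playing the role that the HPD-interval criterion plays above.
    print([round(b, 2) for b in beta])
```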
## References
• Efron, B., Hastie, T. J., Johnstone, I. M., and Tibshirani, R. (2004), “Least Angle Regression (with Discussion),” Annals of Statistics, 32, 407–499.
• Kyung, M., Gill, J., Ghosh, M., and Casella, G. (2010), “Penalized Regression, Standard Errors, and Bayesian Lassos,” Bayesian Analysis, 5, 369–412.
• Park, T. and Casella, G. (2008), “The Bayesian Lasso,” Journal of the American Statistical Association, 103, 681–686.
• Tibshirani, R. (1996), “Regression Shrinkage and Selection via the Lasso,” Journal of the Royal Statistical Society, Series B, 58, 267–288.
http://www.sciforums.com/threads/is-freedom-of-speech-a-bad-thing.4103/ | # Is freedom of speech a bad thing?
Discussion in 'Science & Society' started by GRO$$, Oct 2, 2001.
1. ### GRO$$, Registered Senior Member
Messages:
304
Little story to start this idea with:
I am, at the moment, a Sophomore in HS. I had an idea some time ago about making a web site where people could express their opinions about teachers, classes, school sports/teams. After some debate about a site of this sort, I have been laughed at by some, thought a loser with too much time by others, encouraged by one or two, and discouraged on several grounds.
Two of the most important discouragements have come from the school administration and... my mom
hehe.
The school administration pressed the issue legally, pointing out that if I insult the school or any teacher in any way on the site, the government can step in and bring down the site. I have sent a letter to the ACLU (American Civil Liberties Union) and have not gotten a response regarding the validity of this.
My mom addressed the issue morally saying that teachers are very hard workers for a wage lower than a lot of other professions and so should not be insulted by ignorant teenagers.
It is obvious that freedom of speech is a good idea, to an extent, but can cross the line. All i want to do is to allow people to express their opinion... Would this be crossing the line?
Another thought: Absolute freedom is chaos, so is freedom of any kind bad?
We have been taught from our earliest age that freedom is good, but is it? Keep an open mind
http://www.sciforums.com/threads/this-new-equation-might-finally-unite-the-two-biggest-theories-in-physics-claims-physicist.157617/page-15 | # This new equation might finally unite the two biggest theories in physics, claims physicist
Discussion in 'Astronomy, Exobiology, & Cosmology' started by paddoboy, Aug 20, 2016.
Messages:
21,703
Your posts in general lack all professionalism... and the nonsense from your political comments spills over... I'll leave it at that.
Messages:
21,703
Sure I can, but I'm not open to the cynical demands from some god bothering religious type that is not open to any answers that does not fit in with his god of the gaps myth.
Perhaps again your poor understanding is a reflection of English being your second language: Sure it doesn't mean they can exist, it also doesn't mean that they do not exist: Understand?
That's why no physicist, has ever said categorically that they do not or cannot exist.......probably don't? sure! Maybe they do? Sure....probably do? sure again. Are you getting it yet?
No, I post reputable papers, as in this case, to deride the silly agenda-driven claims of cranks and god botherers, who choose to disbelieve just about all of 21st century cosmology, yet offer nothing evidenced in return.
And of course an open mind is most certainly desirable, but not so open that your brains fall out.
And of course believing and accepting some almighty omnipotent magical spaghetti monster, is not having an open mind by any stretch of the imagination.
5. ### expletives deleted, Registered Senior Member
Messages:
410
Real physicists let the science do the talking. The Quantum Theory and Relativity theory, plus the physical reality observations which science has recorded, all say categorically that the following things are unphysical and cannot form or exist in physical reality:
- 'negative' energy;
- 'separation' of two Black Holes and/or Stars;
- 'naked' singularities;
- 'mouths' of wormholes, and entanglement of same;
- wormholes (since they depend on 'negative' energy which science says categorically does not exist; and since Quantum Theory says any entanglement would collapse due to quantum perturbations; and since naked singularities do not exist; and since no two Black Holes and/or Stars can be 'separated' once merged; and since only unphysical "extensions" of the GR maths produce such unphysical 'solutions' based speculations having no actual scientific reality possibilities).
So, the irrelevant statement that "No physicist would categorically claim that wormholes etc do not exist", is just that, an irrelevance which a real physicist would not even consider making at all since they would let the real discovered science do the talking.
Only infotainment, pop-sci and sci-fi/fantasy and such like publish-or-perish and infotainment authors and their 'fans' would try to bring such irrelevant statement as an 'argument' against the existing scientifically categorical determination that the above unphysical things do not and cannot ever exist in physical reality.
Thanks. Best.
Messages:
21,703
They certainly do and the fact remains that what you claim is a porky pie....
Again no physicist despite your obvious anguish, has ever said that worm holes categorically do not exist...sorry about that.
Let me take the time out to educate you some......All science starts out as speculative, do you understand that fact? Worm holes are a prediction of GR...Is that clear? They have as yet, never been observed...OK? But no physicists consequently has ever said that worm holes categorically do not exist.
That is the state of the game at this stage.
Yes, yes yes, that's also what many of our god bothering friends, cranks and anti science trolls often infest science forums with.
And while these notable scientists like Professors Thorne and Susskind, and Hawking, and Carroll are publishing and surviving by properly researching all possibilities, the cranks, god botherers, trolls etc, are perishing on forums such as this, open to any Tom, Dick, Harry, Jill, Mary, or Veronica. [just so I don't appear sexist in any way
]
8. ### The God (Valued Senior Member)
Messages:
3,546
Mods,
Can you ban this line? At least its future use in this thread should be restricted. In this thread this line must have been used at least 100 times...
Also, can you add a guideline that frequent use of emojis in a science forum be avoided?
Messages:
21,703
Haven't you tried that silly strategy before?
Like here.....
and here
and here......
In essence my statement that no physicist will ever categorically say that worm holes do not exist, stands as is.
It is mentioned as we have a couple of seemingly anti-mainstream-cosmology posters, who seem to have problems with all of 21st century cosmology, from cosmological redshift to gravitational waves and BHs.
That's OK, that's their prerogative, but when they continually make those claims without any support or reputable link supporting their view, then it seems obvious where such posters are coming from.
Likewise worm holes. Worm holes are a solution of GR but remain speculative as none as yet have ever been observed. Many papers based on their hypothetical existence are available.
While certainly there are opposing views amongst the experts as to whether they believe they exist or not, the fact remains that none have categorically claimed they do not.
10. ### Schmelzer (Valued Senior Member)
Messages:
3,976
Paddoboy, your claim simply reflects your personal qualities. In itself, the use of "categorically" makes it quite empty - scientists seldom use this word.
To start a serious discussion about the role of wormholes with you seems impossible. Simply because you ignore the scientific arguments completely. In #283 some have been mentioned. Reaction? None.
The problem is that, first, with sufficiently fantastic dark matter (which can contain negative energy too) every metric can be made a solution of the Einstein equations. GR taken alone does not forbid anything. Only GR together with additional conditions (like positive energy and so on) allows one to make nontrivial, falsifiable predictions.
Then, the next problem is that even such energy conditions prevent nothing in a quantum theory. All that can be said in a quantum theory about the configuration is that it has a very small probability - but, whatever the configuration, the probability will be greater than zero.
11. ### Engell79 (Registered Senior Member)
Messages:
100
And yet you took time to actually write something rather interesting. I neither agree nor disagree, and my own understanding of the concepts is still too small. Could you elaborate on what you mean by "fixed background geometry"?
12. ### Schmelzer (Valued Senior Member)
Messages:
3,976
Hm, difficult to explain given that I don't know where to start.
Roughly, think about the interference picture in a simple double slit experiment. The mathematics to compute this picture consists of the following steps: computing some amplitudes for paths of the particle going through the left resp. the right slit. Then, adding the amplitudes for the paths which end up at the same point. Then, squaring the sum; this gives the probability of the particle appearing at this point.
This is a basic quantum rule, you can use it for all fields, the EM field, all the other fields of the SM. And if you use simply Newtonian gravity, it works for gravity too. Even supported by observation (of neutrons in the gravitational field of the Earth).
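A toy numerical illustration of the amplitude rule just described (this is an editorial sketch, not from the thread; the wavelength, slit separation, and screen distance are all invented units):

```python
import numpy as np

# Toy double-slit: complex amplitudes for the two paths are added,
# then the squared magnitude of the sum gives the detection probability.
wavelength = 1.0          # arbitrary units (assumption)
k = 2 * np.pi / wavelength
slit_sep = 5.0            # slit separation (assumption)
screen_dist = 100.0       # slit-to-screen distance (assumption)

x = np.linspace(-30, 30, 7)                    # detector positions on the screen
r1 = np.hypot(screen_dist, x - slit_sep / 2)   # path length via slit 1
r2 = np.hypot(screen_dist, x + slit_sep / 2)   # path length via slit 2

amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # add the amplitudes per point
prob = np.abs(amp) ** 2                          # then square the sum

# Values lie between 0 (destructive) and 4 (constructive);
# the centre point x = 0 is fully bright since r1 = r2 there.
print(prob)
```

The key step is exactly the rule quoted above: amplitudes are added per end point, and only then squared.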
But for GR, it does not work. The problem is that GR does not define what would be the same point for different gravitational fields. If you have different solutions of GR, you do not have them given in the same system of coordinates. You have, say, solution 1 as $g_{mn}(x,y,z,t)$ and solution 2 as $g_{ab}(d,e,f,g)$. What is the same point as x,y,z,t in terms of d,e,f,g on solution 2? Nobody knows.
In all other theories, it is sufficient to define this for the initial values. At t=0, we have, say, d=x, e=y, f=z. (Ok, together with first derivatives or so.) Once this is given for the initial values, fine, we can compute everything else. And, in the simplest case, find out that d=x, e=y, f=z, g=t. Fine. With this information, we know what is the same point as x,y,z,t in terms of d,e,f,g on solution 2. And we can use the standard rules of quantum theory to compute the interference patterns.
But GR does not allow us to compute this. The GR equations are not sufficient for this. The reason is the equivalence principle, or the diffeomorphism invariance of the theory. You can choose another system of coordinates, transform the solution to this other system, and have another solution. And so you can construct different-looking solutions, even for the same initial values - the other system of coordinates may be the same at the initial values. It may be different from the original one only in some hole. This is the hole problem.
The hole problem is solved in classical GR by reliance on observable effects only. In these two different-looking solutions for the same initial values, all that can be measured is nonetheless the same. Fine. But this does not solve the problem in the quantum case, where, to add the amplitudes of different solutions, we need the information about which points are the same on different solutions.
This is some information which in classical theory is defined by absolute space. And even in special relativity, this is not problematic, the absolute Minkowski spacetime provides this information. In GR, nothing provides this information.
So, a fixed background geometry, which is independent of the different physical fields, would provide such a structure. In fact, simply to say "fixed background" would have been more accurate, because this background does not have to provide information about distances - it is sufficient that it provides information about which are the same points.
Messages:
21,703
There you go again, being so unprofessional and really stupidly pedantic re the word "categorically".
Try then using "positively", or "100% sure", or a "zero probability"......
The point is that no physicist worth his salt has ever said that worm holes categorically do not exist, or any reasonable facsimile thereof.
Secondly, I really do not need to undergo any discussion with you re worm holes; as per the OP and following paper, they are at this time speculative and as such, while still being solutions to GR, are open for research by noted reputable expert professionals. Most professionals and lay people like myself realize that.
Thirdly, your point re me ignoring the scientific argument is ironic in the extreme, considering that both the OP article and the reputable peer-reviewed, published paper following it make no qualms re the speculative nature of the current subject. So you are essentially claiming that the noted professionals and their paper [and subsequently myself, for daring to post such] are not following the scientific argument?
And all this time I thought that any scientific theory was always at one time speculative!
It's rather relevant at this point to mention how this paper and OP are now the subject of many hotly contested debates in professional circles, as compared to the lingering, near lost, speculative ether paper that you yourself published.
I would also surmise that your rather unprofessional approach [as I detailed] is more driven by your previously stated abhorrence of string theory and/or any of its many derivatives that both the scientists in the OP and the following paper are involved in.
Last edited: Sep 7, 2016
Messages:
21,703
Briefly, the OP and the following paper speculate that a worm hole, or an ERB, is the spacetime equivalent of quantum entanglement.
That link may help unite QM and GR in a long-sought QGT. A goal well worth attaining for many reasons.
The following article puts that in rather simplistic language.......
https://www.sciencenews.org/article/entanglement-gravitys-long-distance-connection
extracts:
Physicists have high hopes for where this entanglement-spacetime connection will lead them. General relativity brilliantly describes how spacetime works; this new research may reveal where spacetime comes from and what it looks like at the small scales governed by quantum mechanics. Entanglement could be the secret ingredient that unifies these supposedly incompatible views into a theory of quantum gravity, enabling physicists to understand conditions inside black holes and in the very first moments after the Big Bang.
QUANTUM SKEPTICS A New York Times article on May 4, 1935, highlighted Einstein’s concerns about quantum mechanics, especially its feature now known as entanglement. Today physicists are exploring links between entanglement and Einstein’s general theory of relativity.
Last edited: Sep 7, 2016
Messages:
21,703
Building up spacetime with quantum entanglement:
Abstract:
In this essay, we argue that the emergence of classically connected spacetimes is intimately related to the quantum entanglement of degrees of freedom in a non-perturbative description of quantum gravity. Disentangling the degrees of freedom associated with two regions of spacetime results in these regions pulling apart and pinching off from each other in a way that can be quantified by standard measures of entanglement.
16. ### Schmelzer (Valued Senior Member)
Messages:
3,976
The really funny point is that you cannot resist repeating your .... boldface. The critical argument - that "categorically" is not a word a real scientist would use in such a context - you prefer to ignore (as usual). Your answer is instead a personal attack - "unprofessional". As usual, without any evidence.
You may think it is relevant. I don't think so. Science is not a democracy. So, what is fashionable today does not count. What counts are results. ER=EPR has yet to deliver. My theory has delivered.
Of course, this is part of the reason I think it is a completely hopeless speculation. But let's note that this is a speculation which, even if the authors use some string theory vocabulary, is far away from the original string theory, which is a standard quantum theory. ER=EPR is something very different, something the same string theorists would have rejected out of hand 20 years ago.
17. ### The God (Valued Senior Member)
Messages:
3,546
Some 300-odd posts and I still do not know what "categorically cannot exist" means...
Paddoboy thinks if scientists have not said "no" categorically about something's existence then that thing may exist. What a funny fallacy he holds.
Messages:
21,703
Your continued argument that "categorically" is not a word that scientists use is the height of pedantry and stupidity, and imho reflects an unprofessional attitude.
Obviously Professor Susskind is certainly more noteworthy than yourself [fact]. And your sarcasm with the use of the word "beloved" with regard to professionals says something else.
With regard to your view of a "personal attack": again, I'm attacking your own ether hypothesis, yet you categorically [ooops, sorry] dismiss the paper following the OP, when nobody involved has claimed anything other than that it is speculative: now you say bad speculation.
That's your opinion, and possibly you may be right, just as you may be wrong about your own speculative ether theory. You seem very thin skinned, while approaching others, me here and your adversaries in the political forum, with plenty of vigour.
Need I say more?
That's your right and privilege to think what you want, as wrong as it obviously is: As I said, this subject is being hotly debated...your ether theory?
ER=EPR has yet to deliver? Correct!!! So? Do not all scientific theories start out as speculative? hmmm?
Tomorrow it may reveal the QGT scientists have been seeking.
And Einstein also rejected part of GR in its early days, so again, so?
String theory and its many derivatives have progressed...You reject it...that's your right...many do not reject it and are still conducting research.
Messages:
21,703
Sure you do! Just as Schmelzer does!
In the same sense of saying they 100% do not exist, or they certainly do not exist.
Back onto relevant material......
I see the existence or otherwise of worm holes, summed up in the paper I recently linked to......
https://arxiv.org/pdf/1606.05295v2.pdf
Cosmological wormholes in f(R) theories of gravity:
Abstract:
Motivated by recent proposals of possible wormhole existence in galactic halos, we analyse the cosmological evolution of wormhole solutions in modified f(R) gravity. We construct a dynamical wormhole that asymptotically approaches FLRW universe, with supporting material going to the perfect isotropic fluid described by the equation of state for radiation and matter dominated universe respectively. Our analysis is based on an approximation of a small wormhole - a wormhole that can be treated as matched with the FLRW metric at some radial coordinate much smaller than the Hubble radius, so that cosmological boundary conditions are satisfied. With a special interest in viable wormhole solutions, we refer to the results of reconstruction procedure and use f(R) functions which lead to the experimentally confirmed ΛCDM expansion history of the universe. Solutions we find imply no need for exotic matter near the throat of considered wormholes, while in the limit of f(R) = R this need is always present during radiation and matter dominated epoch.
20. ### Schmelzer (Valued Senior Member)
Messages:
3,976
Wrong. Science often looks, from a layman's point of view, very pedantic. If a reviewer finds even a minor point that is wrong in a scientific paper, this is sufficient to reject it or, at least, to require modification. So, professional behavior often looks pedantic. Whether my objection is stupid is another claim, and, given that you give no evidence, it is simply name-calling.
LOL, you think the worth of a theoretical scientist can be established before the theories he proposes have been supported by scientific evidence? So far, neither ER=EPR nor my ether theory has been supported by sufficient empirical evidence. From theoretical evaluation, my ether theory solves serious problems of modern physics, but, since it is ignored, you can find neither arguments supporting this nor arguments against it, so you are unable to decide about this.
You have not attacked my ether theory, simply because you are unable to do it. All you do is repeat the trivial point that ether theories are not fashionable today. If I were thin-skinned, I would not visit forums like this, where personal attacks are somehow part of the typical communication.
You have, possibly, heard about Popper's criterion of demarcation which distinguishes an empirical, scientific theory from other things? They have to make falsifiable predictions. ER=EPR does not make any such predictions. My ether model predicts the fermions and gauge fields of the SM, which is a very nontrivial falsifiable prediction. This is an essential difference between the two approaches. Here, ER=EPR has yet to deliver.
Maybe. But before experimentators even start to think about testing ER=EPR, Susskind and Co have yet to work a lot, and to deliver at least a falsifiable theory.
String theorists too have yet to deliver a falsifiable theory.
21. ### The God (Valued Senior Member)
Messages:
3,546
No scientist worth his salt has categorically denied the existence of worm holes, so worm holes must exist.
Can anyone tell which fallacy is this? I am not able to name it.
22. ### Schmelzer (Valued Senior Member)
Messages:
3,976
That's not fair, the "so worm holes must exist" conclusion I have not seen from paddoboy. If he has made such a claim, link please.
Of course, to cry "No scientist worth his salt has categorically denied existence of Worm Holes" is quite meaningless, given that the variant "No scientist worth his salt has categorically claimed existence of Worm Holes" has not been rejected either. So, it makes sense to ask what would be the point of this claim, if it is not to suggest that wormholes exist. But this is another question.
https://link.springer.com/chapter/10.1007%2F978-3-319-16498-4_3 | # Feature Discovery by Deep Learning for Aesthetic Analysis of Evolved Abstract Images
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9027)
## Abstract
We investigated the ability of a Deep Belief Network with logistic nodes, trained unsupervised by Contrastive Divergence, to discover features of evolved abstract art images. Two Restricted Boltzmann Machine models were trained independently on low and high aesthetic class images. The receptive fields (filters) of both models were compared by visual inspection. Roughly 10 % of these filters in the high aesthetic model approximated the form of the high aesthetic training images. The remaining 90 % of filters in the high aesthetic model and all filters in the low aesthetic model appeared noise-like. The form of discovered filters was not consistent with the Gabor-filter-like forms discovered for MNIST training data, possibly revealing an interesting property of the evolved abstract training images. We joined the datasets and trained a Restricted Boltzmann Machine, finding that roughly 30 % of the filters approximate the form of the high aesthetic input images. We trained a 10 layer Deep Belief Network on the joint dataset and used the output activities at each layer as training data for traditional classifiers (decision tree and random forest). The highest classification accuracy from learned features (84 %) was achieved at the second hidden layer, indicating that the features discovered by our Deep Learning approach have discriminative power. Above the second hidden layer, classification accuracy decreases.
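For readers unfamiliar with the training procedure named in the abstract, the following is a minimal sketch of a single Contrastive Divergence (CD-1) update for an RBM with logistic units. The layer sizes, learning rate, and data are illustrative placeholders, not the settings used in the paper, and bias terms are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shapes and hyperparameters (assumptions, not the paper's).
n_visible, n_hidden, lr = 16, 8, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))

def cd1_step(v0):
    # Positive phase: hidden activations driven by the data.
    h0 = sigmoid(v0 @ W)
    # One Gibbs step: sample hidden, reconstruct visible, re-infer hidden.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T)
    h1 = sigmoid(v1 @ W)
    # CD-1 weight gradient: <v h>_data minus <v h>_reconstruction.
    return lr * (np.outer(v0, h0) - np.outer(v1, h1))

v = (rng.random(n_visible) < 0.5).astype(float)   # one binary training vector
W += cd1_step(v)
print(W.shape)  # (16, 8)
```

Stacking such RBMs layer by layer, each trained on the activities of the one below, is the standard recipe for the Deep Belief Network the authors describe.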
### Keywords
Computational aesthetics · Deep learning · Evolved abstract images
https://infoscience.epfl.ch/record/153557 |
### Abstract
This paper presents a new technique for continuously calibrating the sensitivity of a current measurement microsystem based on a Hall magnetic field sensor. An integrated reference coil generates a magnetic field for calibration. Using a variant of chopper modulation, the spinning current technique, combined with a second modulation of the reference signal, the sensitivity of the complete system is continuously measured without interrupting normal operation. Modulation and demodulation schemes allowing the joint processing of both external and reference magnetic fields are proposed. Additional techniques for extracting the very low reference signal are presented. The implementation of the microsystem is then discussed. Finally, measurements validate the calibration principle. A thermal drift lower than 50 ppm/°C is achieved. This is 6-10 times less than in state-of-the-art implementations. Furthermore, the calibration technique also compensates for drifts due to mechanical stresses and ageing.
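The core calibration idea, recovering a small modulated reference signal riding on top of the externally measured field, can be illustrated with a toy lock-in style demodulation. All numbers below are invented, and the model ignores noise, offsets, and the spinning-current details of the actual microsystem:

```python
import numpy as np

n = 4000
t = np.arange(n)
b_external = 2.0       # external field to be measured (arbitrary units)
b_reference = 0.05     # small calibration field from the reference coil
# Square-wave modulation of the reference field (period of 200 samples).
carrier = np.where((t // 100) % 2 == 0, 1.0, -1.0)

sensor = b_external + b_reference * carrier   # combined sensor output

# Demodulation: multiplying by the known carrier and averaging isolates
# the reference amplitude (a sensitivity proxy); a plain average isolates
# the external field, so both are obtained from the same data stream.
ref_estimate = np.mean(sensor * carrier)
ext_estimate = np.mean(sensor)

print(round(ref_estimate, 3), round(ext_estimate, 3))  # 0.05 2.0
```

Because the carrier averages to zero over whole periods, the two estimates separate cleanly, which is the sense in which calibration can run "without interrupting normal operation".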
http://physicshelpforum.com/light-optics/1750-plot-spectra-matlab.html | Physics Help Forum Plot spectra with Matlab
Mar 26th 2009, 03:06 AM #1 Junior Member Join Date: Mar 2009 Posts: 8 How to plot a Gaussian line using Matlab? I have an ASCII file from the Hitran program which gives (among other things) the wavenumber and the intensity for each line. I would like to plot these lines with Gaussian line profiles using Matlab. How do I do that? The intensities and wavenumbers are saved into a matrix. Thank you in advance! Edit: I think I have posted this in the wrong section of the forum (sorry about that). This is supposed to be a question for the "College/University Physics" section under "Atomic", or perhaps the "Advanced Physics" section. Also, the lines are from molecules, but I guess that does not matter since I only want to know how to plot a Gaussian line when only wavenumber and intensity information is provided. I guess I could also estimate the FWHM (or guess it; it's not relevant for my calculations). There is an 'Air-broadened half-width' and a 'Self-broadened half-width' in the file but I am not sure what that means. Other information provided by the Hitran files is stated here, http://www.cfa.harvard.edu/hitran/Do...RAN04paper.pdf , page 5 in table 2 (see parameters). Last edited by astrofysikern; Mar 26th 2009 at 08:04 AM. Reason: Adding information
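One way to do what the poster asks, sketched in Python/NumPy since each step has a direct Matlab counterpart (linspace, exp, plot). The line list and the guessed FWHM are placeholders; real work would use the HITRAN half-widths, which strictly give Lorentzian/Voigt rather than pure Gaussian shapes:

```python
import numpy as np

# lines: column 0 = line-centre wavenumber (cm^-1), column 1 = intensity.
lines = np.array([[2000.0, 1.0],
                  [2000.5, 0.4],
                  [2001.2, 0.7]])   # made-up HITRAN-like data (assumption)
fwhm = 0.1                          # guessed width, as the poster suggests
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

nu = np.linspace(1999.5, 2001.7, 2000)   # common wavenumber grid
spectrum = np.zeros_like(nu)
for center, intensity in lines:
    # Add one Gaussian profile per line, scaled by its intensity.
    spectrum += intensity * np.exp(-0.5 * ((nu - center) / sigma) ** 2)

# In Matlab you would now call plot(nu, spectrum); with matplotlib:
# import matplotlib.pyplot as plt; plt.plot(nu, spectrum); plt.show()
print(spectrum.max())   # ≈ 1.0, the peak of the strongest line
```

The loop-and-sum structure translates line for line into Matlab, which is why the language choice here does not change the recipe.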
https://courses.lumenlearning.com/introchem/chapter/density-calculations/ | ## Density Calculations
#### Learning Objective
• Apply the reformulated Ideal Gas Equation in your calculations
#### Key Points
• Density calculations allow us to evaluate the behaviors of gases of unknown volume.
• We can determine the density of an ideal gas using knowledge of three properties of the evaluated ideal gas.
• This reformulation of the Ideal Gas Equation relates pressure, density, and temperature of an ideal gas independent of the volume or quantity of gas.
#### Term
• densitya measure of the amount of matter contained by a given volume
The Ideal Gas Equation in the form $PV=nRT$ is an excellent tool for understanding the relationship between the pressure, volume, amount, and temperature of an ideal gas in a defined environment that can be controlled for constant volume. However, in its most common form, the Ideal Gas Equation is not useful for examining the behavior of gases of undetermined volume, such as the gases in the clouds that surround the stars in our solar system or the atmospheric gases that support life on our planet. To derive a form of the ideal gas equation that has broader applications, we can use calculations that employ the physical property of density.
# Derivation of the Volume-Independent Ideal Gas Law
We know the Ideal Gas Equation in the form $PV=nRT$. We also know that:
$n=\text{\# moles of gas}=\frac{\text{mass of gas }(m)}{\text{molecular weight }(M)}=\frac{m}{M}$
If we substitute $\frac{m}{M}$ for n:
$PV= \frac{m}{M}RT$
Rearranging the above equation, we get:
$\frac{P}{RT}=\frac{m}{MV}$
Now, recall that density is equal to mass divided by volume:
$D=\frac{m}{V}$
The term $\frac{m}{V}$ appears on the right-hand side of the above rearranged Ideal Gas Law. We can substitute in density, D, and get the following:
$\frac{P}{RT}=\frac{D}{M}$
Rearranging in terms of D, we have:
$D=\frac{MP}{RT}$
This derivation of the Ideal Gas Equation allows us to characterize the relationship between the pressure, density, and temperature of the gas sample independent of the volume the gas occupies; it also allows us to determine the density of a gas sample given its pressure and temperature, or determine the molar mass of a gas sample given its density.
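As a quick numerical check of D = MP/RT, here is the calculation for dry air at room conditions; the molar mass and conditions are standard approximate values, not taken from this chapter:

```python
# Numeric check of D = MP/RT (illustrative values).
R = 0.082057  # gas constant in L·atm/(mol·K)
M = 28.97     # g/mol, approximate molar mass of dry air
P = 1.0       # atm
T = 298.15    # K (25 °C)

D = (M * P) / (R * T)   # density in g/L
print(round(D, 3))       # 1.184, close to the accepted density of air
```

Note how the units work out: with R in L·atm/(mol·K), M in g/mol, P in atm and T in K, the density comes out in g/L.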
https://www.allmathtricks.com/ratio-proportion-variation-problems-solutions/ | # Ratio proportion and variation problems with solutions, Allmathtricks
## Ratio and Proportion Questions with Solutions | Quantitative Aptitude
Example-1 : The ratio between two numbers is 5 : 6 and the sum of their squares is 244. Find the numbers.
Solution: Let the two numbers be 5a and 6a respectively.
⇒ (5a)² + (6a)² = 244
⇒ 25a² + 36a² = 244
⇒ 61a² = 244 ⇒ a² = 4 ⇒ a = 2
So the numbers are 5 × 2 and 6 × 2,
i.e. 10 & 12
Example-2 : Find the numbers if the ratio between two numbers is 3 : 7 and their LCM is 210
Solution: Let the numbers be 3a and 7a
LCM is 3 × 7 × a = 210
21a = 210
a = 10
Numbers are 30 and 70.
Example-3 : Find the fourth proportional to 5, 8, 20
Solution: Let the fourth proportional be 'a'.
Then 5 : 8 :: 20 : a
According to the property of proportions,
Product of extremes = Product of means
5a = 8 × 20 = 160
a = 160/5 = 32
Example-4 : Find the third proportional to 36 & 48
Solution: According to the property of proportions, three quantities x, y & z are in continued proportion
i.e. x : y :: y : z, in which case y² = xz
So let the third proportional be 'a'; then
36 : 48 :: 48 : a
a = 48 × 48 / 36 = 64
Example-5 : If p : q = 5 : 9, q : r = 6 : 8, find p : q : r
Solution: The LCM of 9 & 6 is 18.
Now the ratio p : q = 5 : 9 = 10 : 18 (multiplying by 2, since 18/9 = 2)
Ratio q : r = 6 : 8 = 18 : 24 (multiplying by 3, since 18/6 = 3)
Therefore ratio of p : q : r = 10 : 18 : 24
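The LCM step above can be sketched in a few lines of Python (a hypothetical helper, not part of the original article); it returns the combined ratio in lowest terms, so 10 : 18 : 24 comes out as the equivalent 5 : 9 : 12:

```python
# Sketch: combine p:q and q:r into p:q:r by scaling both ratios
# so the shared term q matches, then reduce to lowest terms.
from math import gcd, lcm
from functools import reduce

def combine(pq, qr):
    """Given (p, q) and (q, r), return integer p:q:r with a common q."""
    (p, q), (q2, r) = pq, qr
    m = lcm(q, q2)                    # common value for the shared term q
    parts = [p * (m // q), m, r * (m // q2)]
    g = reduce(gcd, parts)            # reduce to lowest terms
    return [x // g for x in parts]

print(combine((5, 9), (6, 8)))  # [5, 9, 12], i.e. 10:18:24 reduced
```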
Example – 6 : If a/b = 3/4, then find the value of the expression ( 5a – 3b)/(7a – 2b).
Solution: Here assume the value as a = 3 and b = 4, then
( 5a – 3b)/(7a – 2b) = (15 – 12 ) / (21 – 8) = 3/13
Example – 7 : If 4a = 5b = 3c then find value of a : b : c
Solution: 4a = 5b = 3c then
Now a : b : c = 1/4 : 1/5 : 1/3 = 15/60 : 12/60 : 20/60
a : b : c = 15 : 12 : 20
Example -8 : Find the mean proportion of 27 and 3
Solution: We know that if
a : b :: b : c are in proportion, then
b² = ac
Mean proportion of 27 and 3
= √(27 × 3) = √81 = 9
Example -9 : What must be added to each of the numbers 25, 19, 10 and 7 so that the resultant numbers are in proportion?
Solution: Let ‘a’ be added to each number then they are in proportion
i.e. 25 + a : 19 + a :: 10 + a : 7 + a
Now, according to the property of proportions,
Product of extremes = Product of means
So
⇒ (25 + a)(7 + a) = (19 + a)(10 + a)
⇒ 175 + 32a + a² = 190 + 29a + a²
⇒ 3a = 15
⇒ a = 5
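Since the a² terms always cancel, Example-9 reduces to a linear formula. A minimal Python sketch (hypothetical helper name, not from the original article):

```python
# Sketch: requiring (w+a) : (x+a) :: (y+a) : (z+a) means
# (w+a)(z+a) = (x+a)(y+a); the a^2 terms cancel, leaving
#   a = (x*y - w*z) / ((w + z) - (x + y)).
from fractions import Fraction

def added_term(w, x, y, z):
    """The value to add to each of w, x, y, z to make them proportional."""
    return Fraction(x * y - w * z, (w + z) - (x + y))

print(added_term(25, 19, 10, 7))  # 5
```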
Example -10 : If x : y = y : z, then x⁴ : y⁴ is equal to
Solution: Here x : y = y : z
Now according to the property of proportions,
Product of extremes = Product of means
y² = xz, so y⁴ = x²z²
Therefore x⁴ : y⁴ = x⁴ : x²z² = x² : z²
Example-11 : What must be added to each term of the ratio 5 : 8 so as to make it equal to 1 : 2?
Solution: Let 'a' be added to each term of 5 : 8 to make 1 : 2.
Then (5 + a)/(8 + a) = 1/2 ⇒ 10 + 2a = 8 + a
a = −2 (i.e. 2 must be subtracted from each term)
Example – 12 : If a : b = c : d = e : f = 5 : 6, then find the value of a combined ratio formed from these terms
Solution: According to the property of equal ratios, if a/b = c/d = e/f = k, then
(a + c + e)/(b + d + f) = k
More generally, by the multiply-or-divide property of ratios, (pa + qc + re)/(pb + qd + rf) = k for any multipliers p, q, r.
Hence any such combined ratio equals 5/6.
Example-14 : Two numbers are in the ratio 5 : 6, and if 4 is subtracted from each, the ratio reduces to 4 : 5. Find the bigger number.
Solution: Ratio of two numbers is 5 : 6
Let these numbers 5a & 6a
Subtract 4 from each then new ratio 4 : 5
i.e 5a – 4 : 6a – 4 = 4 : 5
⇒ 25a – 20 = 24a – 16
⇒ a = 4
Now these numbers are 20 & 24
So bigger number is 24
Example-15: A bag contains one-rupee, two-rupee and five-rupee coins in the ratio 3 : 4 : 5. If the bag contains Rs. 288 in all, how many one-rupee coins are there?
Solution: Let the numbers of coins be 3a, 4a and 5a,
i.e. one-rupee coins = 3a, two-rupee coins = 4a & five-rupee coins = 5a
Now equal the value of all coins
3a + 2 (4a) + 5(5a) = 288
Simplifying the above equation (36a = 288), we get
a = 8
Number of one rupee coins = 3 x 8 = 24
Example-16: One milkman adds 2 liters of water to 12 liters of milk, and another adds 2 liters of water to 10 liters of milk. What is the ratio of the strength of milk in the two mixtures?
Solution:
Strength of milk in the first mixture = 12/14
Strength of milk in the second mixture = 10/12
Therefore the ratio of their strengths is 12/14 : 10/12
= 12 x 12 : 10 x 14
= 36 : 35
Example-17: A milkman mixes equal quantities of two milk-and-water mixtures, whose milk : water ratios are 9 : 5 and 4 : 3 respectively. Find the ratio of milk to water in the new mixture so obtained.
Solution: Here there are two mixtures,
one in the ratio 9 : 5 and another in the ratio 4 : 3.
Take the LCM of 14 (= 9 + 5) and 7 (= 4 + 3), which is 14.
Both mixtures are mixed in equal quantities
In first ratio out of 14 liters having 9L milk and 5L of water
Second ratio 4 : 3 = 8 : 6 ( multiplying with 2 for each)
In second ratio out of 14 liters having 8L milk and 6L of water
In the new mixture having 17L ( 9+8) milk and 11L (5 +6) of water
So ratio of new mixture is 17 : 11
Example-18: Two vessels contain mixtures of water and milk in the ratios 1 : 2 and 2 : 3 respectively. The two mixtures are then mixed in the ratio 3 : 1. Find the water : milk ratio after mixing.
Solution: Here there are two mixtures,
one in the ratio 1 : 2 and another in the ratio 2 : 3.
Take the LCM of 3 (= 1 + 2) and 5 (= 2 + 3), which is 15.
Both mixtures are mixed in the ratio 3 : 1.
So take the quantity of the first mixture as 45 L (15 × 3) and of the second as 15 L (15 × 1).
The first 45 liters contain 15 L water and 30 L milk.
The second 15 liters contain 6 L water and 9 L milk.
The new mixture therefore contains 21 L (15 + 6) water and 39 L (30 + 9) milk.
So ratio of new mixture is 21 : 39
i.e 7 : 13
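The bookkeeping in Examples 17 and 18 can be checked with exact fractions. This sketch (a hypothetical helper, not from the article) assumes each ratio is given as (water, milk) and returns water : milk in lowest terms:

```python
# Sketch: mixing two water/milk mixtures given their internal ratios
# and the ratio of quantities in which they are combined.
from fractions import Fraction

def mix(ratio1, ratio2, quantities):
    """ratio1, ratio2 = (water, milk) ratios in each vessel;
    quantities = (q1, q2) ratio in which the vessels are combined.
    Returns water:milk of the result as smallest integers."""
    (w1, m1), (w2, m2), (q1, q2) = ratio1, ratio2, quantities
    water = Fraction(w1, w1 + m1) * q1 + Fraction(w2, w2 + m2) * q2
    milk = Fraction(m1, w1 + m1) * q1 + Fraction(m2, w2 + m2) * q2
    r = water / milk
    return r.numerator, r.denominator

print(mix((1, 2), (2, 3), (3, 1)))  # (7, 13), as in Example-18
```

Example-17 follows the same way: `mix((5, 9), (3, 4), (1, 1))` gives water : milk = 11 : 17, i.e. milk : water = 17 : 11.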
Example-19: If P : Q = 3 : 4 , Q : R = 5 : 9 and R : S = 16 : 15, then ratio between P and S is
Solution: Here First find the ratio of P : Q : R
P : Q = 3 : 4 & Q : R = 5 : 9
P : Q : R = 15 : 20 : 36 ( Multiplying with 5 for the ratio of P : Q & 4 for the ratio of Q : R )
Now equal to the ratios of P : Q : R & R : S ( LCM of both R values of 36 & 16 is 144 )
P : Q : R = 15 : 20 : 36 = 60 : 80 : 144 ( Multiplying with 4 for the ratio of P : Q : R )
R : S = 16 : 15 = 144 : 135 ( Multiplying with 9 for the ratio of R : S )
P : Q : R : S = 60 : 80 : 144 : 135
Ratio of P : S = 60 : 135 = 4 : 9
Example-20 : A person travelling at a constant speed took 8 minutes 40 seconds to reach his office and 9 minutes to return by a different route. Find the ratio of the lengths of the two routes.
Solution: We know that D = ST ( D = distance , S = Speed & T = time)
D ∝ S ( If time is constant )
D ∝ T ( If speed is constant )
Now in our case speed is constant so ratio of length is proportional to time
i.e 8 min 40 sec : 9 min
520 : 540 ( converted into seconds)
26 : 27
Example -21: The ratio of the present ages of two sisters P and Q is 1 : 2, and 5 years back the ratio was 1 : 3. What will be the ratio of their ages after 5 years?
Solution: Let the ages of P and Q be a and 2a.
Five years back the ratio was 1 : 3, so
a − 5 : 2a − 5 = 1 : 3
Simplifying: 3(a − 5) = 2a − 5, i.e. 3a − 15 = 2a − 5
a = 10
The ages of P and Q are 10, 20
After 5 years their age 10+5 , 20+5
So the ratio is 15 : 25
i.e 3 : 5
Example – 22 : The ratio of P's salary to Q's salary is 2 : 3. The ratio of Q's salary to R's salary is 4 : 5. What is the ratio of P's salary to R's salary?
Solution: Find the LCM of 3 and 4 (both values represent Q).
The LCM is 12.
Now convert the Q value in each ratio to 12.
Thus, Ratio 1 = 2 : 3 = 8 : 12
Ratio 2 = 4 : 5 = 12 : 15
Thus, P : Q : R = 8 : 12 : 15
Hence P : R = 8 : 15
Example -23: The ratio of the earnings of P and Q is 4 : 7. If the earnings of P increase by 50% and those of Q decrease by 25%, what is the new ratio of their earnings?
Solution: Let the original earnings of P and Q be 4x and 7x.
New earning of P = 150% of 4x = 150 × 4x / 100 = 6x
New earning of Q = 75% of 7x = 75 × 7x / 100 = 21x/4
Ratio of P and Q after the change = 6x : 21x/4 = 24x : 21x
i.e. 8 : 7
https://forum.math.toronto.edu/index.php?PHPSESSID=p5d9ka2a21vr21iqugud0133s4&action=printpage;topic=570.0 | # Toronto Math Forum
## APM346-2015S => APM346--Home Assignments => HA6 => Topic started by: Victor Ivrii on March 05, 2015, 08:36:46 AM
Title: HA6 problem 3
Post by: Victor Ivrii on March 05, 2015, 08:36:46 AM
Let $\alpha>0$. Based on Fourier transform of $e^{-\alpha x^2/2}$ find Fourier transforms of
a. $e^{-\alpha x^2/2}\cos (\beta x)$, $e^{-\alpha x^2/2}\sin (\beta x)$;
b. $x e^{-\alpha x^2/2}\cos (\beta x)$, $x e^{-\alpha x^2/2}\sin (\beta x)$.
Title: Re: HA6 problem 3
Post by: Yiyun Liu on March 05, 2015, 09:07:14 PM
Part (a):

$f(x) = e^{-\alpha x^2/2}$, so
$$\hat f(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\alpha x^2/2}\, e^{-i\omega x}\,dx = \frac{1}{\sqrt{2\pi\alpha}}\, e^{-\omega^2/(2\alpha)}.$$

For $g(x) = e^{-\alpha x^2/2}\sin(\beta x) = \frac{1}{2i} f(x)\bigl(e^{i\beta x} - e^{-i\beta x}\bigr)$, the shift property gives
$$\hat g(\omega) = \frac{1}{2i}\bigl[\hat f(\omega-\beta) - \hat f(\omega+\beta)\bigr] = \frac{1}{2i\sqrt{2\pi\alpha}}\Bigl(e^{-(\omega-\beta)^2/(2\alpha)} - e^{-(\omega+\beta)^2/(2\alpha)}\Bigr).$$

Similarly, for $g(x) = e^{-\alpha x^2/2}\cos(\beta x) = \frac{1}{2} f(x)\bigl(e^{i\beta x} + e^{-i\beta x}\bigr)$,
$$\hat g(\omega) = \frac{1}{2}\bigl[\hat f(\omega-\beta) + \hat f(\omega+\beta)\bigr] = \frac{1}{2\sqrt{2\pi\alpha}}\Bigl(e^{-(\omega-\beta)^2/(2\alpha)} + e^{-(\omega+\beta)^2/(2\alpha)}\Bigr).$$

Part (b):

Write $f(x) = x\,g(x)$ and use the property $\widehat{xg}(\omega) = i\,\dfrac{d\hat g}{d\omega}$.

For $g(x) = e^{-\alpha x^2/2}\cos(\beta x)$:
$$\hat f(\omega) = \frac{i}{2\sqrt{2\pi\alpha}}\Bigl(-\frac{\omega-\beta}{\alpha}\, e^{-(\omega-\beta)^2/(2\alpha)} - \frac{\omega+\beta}{\alpha}\, e^{-(\omega+\beta)^2/(2\alpha)}\Bigr).$$

For $g(x) = e^{-\alpha x^2/2}\sin(\beta x)$:
$$\hat f(\omega) = \frac{1}{2\sqrt{2\pi\alpha}}\Bigl(-\frac{\omega-\beta}{\alpha}\, e^{-(\omega-\beta)^2/(2\alpha)} + \frac{\omega+\beta}{\alpha}\, e^{-(\omega+\beta)^2/(2\alpha)}\Bigr).$$
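A quick numerical check of part (a) (an editor's sketch, not part of the original thread; the values of α, β and ω are arbitrary test parameters):

```python
# Numerical sanity check of part (a), using the thread's convention
# \hat f(w) = (1/2pi) * integral of f(x) e^{-iwx} dx.
import numpy as np

alpha, beta = 1.5, 2.0
x = np.linspace(-40, 40, 400001)
dx = x[1] - x[0]

def ft(f, w):
    # direct quadrature of (1/2pi) * integral of f(x) e^{-iwx} dx
    return np.sum(f(x) * np.exp(-1j * w * x)) * dx / (2 * np.pi)

def fhat(w):
    # closed form for the Gaussian: (1/sqrt(2*pi*alpha)) e^{-w^2/(2*alpha)}
    return np.exp(-w**2 / (2 * alpha)) / np.sqrt(2 * np.pi * alpha)

w = 0.7
numeric = ft(lambda t: np.exp(-alpha * t**2 / 2) * np.cos(beta * t), w)
closed = 0.5 * (fhat(w - beta) + fhat(w + beta))
print(abs(numeric - closed) < 1e-8)  # True
```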
Title: Re: HA6 problem 3
Post by: Victor Ivrii on March 07, 2015, 05:13:12 AM
Yiyun, you should use MathJax only in math mode (not for a text).
http://www.thulasidas.com/tag/universe/?lang=de | # The Big Bang Theory – Part II
After reading a paper by Ashtekar on quantum gravity and thinking about it, I realized what my trouble with the Big Bang theory was. It is more on the fundamental assumptions than the details. I thought I would summarize my thoughts here, more for my own benefit than anybody else’s.
Classical theories (including SR and QM) treat space as continuous nothingness; hence the term space-time continuum. In this view, objects exist in continuous space and interact with each other in continuous time.
Although this notion of space time continuum is intuitively appealing, it is, at best, incomplete. Consider, for instance, a spinning body in empty space. It is expected to experience centrifugal force. Now imagine that the body is stationary and the whole space is rotating around it. Will it experience any centrifugal force?
It is hard to see why there would be any centrifugal force if space is empty nothingness.
GR introduced a paradigm shift by encoding gravity into space-time, thereby making space-time dynamic in nature rather than empty nothingness. Thus, mass gets enmeshed in space (and time), space becomes synonymous with the universe, and the spinning-body question becomes easy to answer. Yes, it will experience centrifugal force if it is the universe that is rotating around it, because that is equivalent to the body spinning. And, no, it won't, if it is in just empty space. But "empty space" doesn't exist. In the absence of mass, there is no space-time geometry.
So, naturally, before the Big Bang (if there was one), there couldn't be any space, nor indeed could there be any "before." Note, however, that the Ashtekar paper doesn't clearly state why there had to be a big bang. The closest it gets is that the necessity of BB arises from the encoding of gravity in space-time in GR. Despite this encoding of gravity, which renders space-time dynamic, GR still treats space-time as a smooth continuum, a flaw, according to Ashtekar, that QG will rectify.
Now, if we accept that the universe started out with a big bang (and from a small region), we have to account for quantum effects. Space-time has to be quantized and the only right way to do it would be through quantum gravity. Through QG, we expect to avoid the Big Bang singularity of GR, the same way QM solved the unbounded ground state energy problem in the hydrogen atom.
What I described above is what I understand to be the physical arguments behind modern cosmology. The rest is a mathematical edifice built on top of this physical (or indeed philosophical) foundation. If you have no strong views on the philosophical foundation (or if your views are consistent with it), you can accept BB with no difficulty. Unfortunately, I do have differing views.
My views revolve around the following questions.
These posts may sound like useless philosophical musings, but I do have some concrete (and in my opinion, important) results, listed below.
There is much more work to be done on this front. But for the next couple of years, with my new book contract and pressures from my quant career, I will not have enough time to study GR and cosmology with the seriousness they deserve. I hope to get back to them once the current phase of spreading myself too thin passes.
# Light Travel Time Effects and Cosmological Features
This unpublished article is a sequel to my earlier paper (also posted here as “Are Radio Sources and Gamma Ray Bursts Luminal Booms?“). This blog version contains the abstract, introduction and conclusions. The full version of the article is available as a PDF file.
.
Abstract
Light travel time effects (LTT) are an optical manifestation of the finite speed of light. They can also be considered perceptual constraints to the cognitive picture of space and time. Based on this interpretation of LTT effects, we recently presented a new hypothetical model for the temporal and spatial variation of the spectrum of Gamma Ray Bursts (GRB) and radio sources. In this article, we take the analysis further and show that LTT effects can provide a good framework to describe such cosmological features as the redshift observation of an expanding universe, and the cosmic microwave background radiation. The unification of these seemingly distinct phenomena at vastly different length and time scales, along with its conceptual simplicity, can be regarded as indicators of the curious usefulness of this framework, if not its validity.
#### Introduction
The finite speed of light plays an important part in how we perceive distance and speed. This fact should hardly come as a surprise because we do know that things are not as we see them. The sun that we see, for instance, is already eight minutes old by the time we see it. This delay is trivial; if we want to know what is going on at the sun now, all we have to do is to wait for eight minutes. We, nonetheless, have to “correct” for this distortion in our perception due to the finite speed of light before we can trust what we see.
What is surprising (and seldom highlighted) is that when it comes to sensing motion, we cannot back-calculate the same way we take out the delay in seeing the sun. If we see a celestial body moving at an improbably high speed, we cannot figure out how fast and in what direction it is “really” moving without making further assumptions. One way of handling this difficulty is to ascribe the distortions in our perception of motion to the fundamental properties of the arena of physics — space and time. Another course of action is to accept the disconnection between our perception and the underlying “reality” and deal with it in some way.
Exploring the second option, we assume an underlying reality that gives rise to our perceived picture. We further model this underlying reality as obeying classical mechanics, and work out our perceived picture through the apparatus of perception. In other words, we do not attribute the manifestations of the finite speed of light to the properties of the underlying reality. Instead, we work out our perceived picture that this model predicts and verify whether the properties we do observe can originate from this perceptual constraint.
Space, the objects in it, and their motion are, by and large, the product of optical perception. One tends to take it for granted that perception arises from reality as one perceives it. In this article, we take the position that what we perceive is an incomplete or distorted picture of an underlying reality. Further, we are trying out classical mechanics for the underlying reality (for which we use terms like absolute, noumenal or physical reality) that does cause our perception, to see if it fits with our perceived picture (which we may refer to as sensed or phenomenal reality).
Note that we are not implying that the manifestations of perception are mere delusions. They are not; they are indeed part of our sensed reality because reality is an end result of perception. This insight may be behind Goethe’s famous statement, “Optical illusion is optical truth.”
We applied this line of thinking to a physics problem recently. We looked at the spectral evolution of a GRB and found it to be remarkably similar to that in a sonic boom. Using this fact, we presented a model for GRB as our perception of a “luminal” boom, with the understanding that it is our perceived picture of reality that obeys Lorentz invariance and our model for the underlying reality (causing the perceived picture) may violate relativistic physics. The striking agreement between the model and the observed features, however, extended beyond GRBs to symmetric radio sources, which can also be regarded as perceptual effects of hypothetical luminal booms.
In this article, we look at other implications of the model. We start with the similarities between the light travel time (LTT) effects and the coordinate transformation in Special Relativity (SR). These similarities are hardly surprising because SR is derived partly based on LTT effects. We then propose an interpretation of SR as a formalization of LTT effects and study a few observed cosmological phenomena in the light of this interpretation.
#### Similarities between Light Travel Time Effects and SR
Special relativity seeks a linear coordinate transformation between coordinate systems in motion with respect to each other. We can trace the origin of linearity to a hidden assumption on the nature of space and time built into SR, as stated by Einstein: “In the first place it is clear that the equations must be linear on account of the properties of homogeneity which we attribute to space and time.” Because of this assumption of linearity, the original derivation of the transformation equations ignores the asymmetry between approaching and receding objects. Both approaching and receding objects can be described by two coordinate systems that are always receding from each other. For instance, if a system $K$ is moving with respect to another system $k$ along the positive X axis of $k$, then an object at rest in $K$ at a positive $x$ is receding while another object at a negative $x$ is approaching an observer at the origin of $k$.
The coordinate transformation in Einstein’s original paper is derived, in part, a manifestation of the light travel time (LTT) effects and the consequence of imposing the constancy of light speed in all inertial frames. This is most obvious in the first thought experiment, where observers moving with a rod find their clocks not synchronized due to the difference in light travel times along the length of the rod. However, in the current interpretation of SR, the coordinate transformation is considered a basic property of space and time.
One difficulty that arises from this interpretation of SR is that the definition of the relative velocity between the two inertial frames becomes ambiguous. If it is the velocity of the moving frame as measured by the observer, then the observed superluminal motion in radio jets starting from the core region becomes a violation of SR. If it is a velocity that we have to deduce by considering LTT effects, then we have to employ the extra ad-hoc assumption that superluminality is forbidden. These difficulties suggest that it may be better to disentangle the light travel time effects from the rest of SR.
In this section, we will consider space and time as a part of the cognitive model created by the brain, and argue that special relativity applies to the cognitive model. The absolute reality (of which the SR-like space-time is our perception) does not have to obey the restrictions of SR. In particular, objects are not restricted to subluminal speeds, but they may appear to us as though they are restricted to subluminal speeds in our perception of space and time. If we disentangle LTT effects from the rest of SR, we can understand a wide array of phenomena, as we shall see in this article.
Unlike SR, considerations based on LTT effects result in intrinsically different set of transformation laws for objects approaching an observer and those receding from him. More generally, the transformation depends on the angle between the velocity of the object and the observer’s line of sight. Since the transformation equations based on LTT effects treat approaching and receding objects asymmetrically, they provide a natural solution to the twin paradox, for instance.
#### Conclusions
Because space and time are a part of a reality created out of light inputs to our eyes, some of their properties are manifestations of LTT effects, especially on our perception of motion. The absolute, physical reality presumably generating the light inputs does not have to obey the properties we ascribe to our perceived space and time.
We showed that LTT effects are qualitatively identical to those of SR, noting that SR only considers frames of reference receding from each other. This similarity is not surprising because the coordinate transformation in SR is derived based partly on LTT effects, and partly on the assumption that light travels at the same speed with respect to all inertial frames. In treating it as a manifestation of LTT, we did not address the primary motivation of SR, which is a covariant formulation of Maxwell’s equations. It may be possible to disentangle the covariance of electrodynamics from the coordinate transformation, although it is not attempted in this article.
Unlike SR, LTT effects are asymmetric. This asymmetry provides a resolution to the twin paradox and an interpretation of the assumed causality violations associated with superluminality. Furthermore, the perception of superluminality is modulated by LTT effects, and explains $\gamma$ ray bursts and symmetric jets. As we showed in the article, perception of superluminal motion also holds an explanation for cosmological phenomena like the expansion of the universe and cosmic microwave background radiation. LTT effects should be considered as a fundamental constraint in our perception, and consequently in physics, rather than as a convenient explanation for isolated phenomena.
Given that our perception is filtered through LTT effects, we have to deconvolute them from our perceived reality in order to understand the nature of the absolute, physical reality. This deconvolution, however, results in multiple solutions. Thus, the absolute, physical reality is beyond our grasp, and any assumed properties of the absolute reality can only be validated through how well the resultant perceived reality agrees with our observations. In this article, we assumed that the underlying reality obeys our intuitively obvious classical mechanics and asked the question how such a reality would be perceived when filtered through light travel time effects. We demonstrated that this particular treatment could explain certain astrophysical and cosmological phenomena that we observe.
The coordinate transformation in SR can be viewed as a redefinition of space and time (or, more generally, reality) in order to accommodate the distortions in our perception of motion due to light travel time effects. One may be tempted to argue that SR applies to the “real” space and time, not our perception. This line of argument begs the question, what is real? Reality is only a cognitive model created in our brain starting from our sensory inputs, visual inputs being the most significant. Space itself is a part of this cognitive model. The properties of space are a mapping of the constraints of our perception.
The choice of accepting our perception as a true image of reality and redefining space and time as described in special relativity indeed amounts to a philosophical choice. The alternative presented in the article is inspired by the view in modern neuroscience that reality is a cognitive model in the brain based on our sensory inputs. Adopting this alternative reduces us to guessing the nature of the absolute reality and comparing its predicted projection to our real perception. It may simplify and elucidate some theories in physics and explain some puzzling phenomena in our universe. However, this option is yet another philosophical stance against the unknowable absolute reality.
# The Unreal Universe — Seeing Light in Science and Spirituality
We know that our universe is a bit unreal. The stars we see in the night sky, for instance, are not really there. They may have moved or even died by the time we get to see them. This delay is due to the time it takes for light from the distant stars and galaxies to reach us. We know of this delay.
The same delay in seeing has a lesser known manifestation in the way we perceive moving objects. It distorts our perception such that something coming towards us would look as though it is coming in faster. Strange as it may sound, this effect has been observed in astrophysical studies. Some of the heavenly bodies do look as though they are moving several times the speed of light, while their “real” speed is probably a lot lower.
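The textbook light-travel-time calculation behind this observation can be sketched in a few lines (an illustrative aside, not the author's model): a source approaching at speed β (in units of c) at angle θ to the line of sight appears to move across the sky at β_app = β sin θ / (1 − β cos θ), which can exceed 1 even though β < 1.

```python
# Sketch of the standard apparent-transverse-speed formula for a source
# moving at speed beta (units of c) at angle theta to the line of sight.
import math

def beta_apparent(beta, theta):
    """Apparent transverse speed beta*sin(theta) / (1 - beta*cos(theta))."""
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

# A source at 0.99c seen nearly head-on appears several times superluminal:
print(round(beta_apparent(0.99, math.radians(10)), 2))  # ~6.87
```

The maximum over θ occurs at cos θ = β, where the apparent speed equals βγ, so highly relativistic sources viewed near the line of sight routinely appear superluminal.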
Now, this effect raises an interesting question: what is the "real" speed? If seeing is believing, the speed we see should be the real speed. Then again, we know of the light travel time effect. So we should correct the speed we see before believing it. What, then, does "seeing" mean? When we say we see something, what do we really mean?
#### Light in Physics
Seeing involves light, obviously. The finite speed of light influences and distorts the way we see things. This fact should hardly come as a surprise because we do know that things are not as we see them. The sun that we see is already eight minutes old by the time we see it. This delay is not a big deal; if we want to know what is going on at the sun now, all we have to do is to wait for eight minutes. We, nonetheless, have to “correct” for the distortions in our perception due to the finite speed of light before we can trust what we see.
What is surprising (and seldom highlighted) is that when it comes to sensing motion, we cannot back-calculate the same way we take out the delay in seeing the sun. If we see a celestial body moving at an improbably high speed, we cannot figure out how fast and in what direction it is “really” moving without making further assumptions. One way of handling this difficulty is to ascribe the distortions in our perception to the fundamental properties of the arena of physics — space and time. Another course of action is to accept the disconnection between our perception and the underlying “reality” and deal with it in some way.
Einstein chose the first route. In his groundbreaking paper over a hundred years ago, he introduced the special theory of relativity, in which he attributed the manifestations of the finite speed of light to the fundamental properties of space and time. One core idea in special relativity (SR) is that the notion of simultaneity needs to be redefined because it takes some time for light from an event at a distant place to reach us, and we become aware of the event. The concept of “Now” doesn’t make much sense, as we saw, when we speak of an event happening in the sun, for instance. Simultaneity is relative.
Einstein defined simultaneity using the instants in time we detect the event. Detection, as he defined it, involves a round-trip travel of light similar to Radar detection. We send out light, and look at the reflection. If the reflected light from two events reaches us at the same instant, they are simultaneous.
Another way of defining simultaneity is using sensingwe can call two events simultaneous if the light from them reaches us at the same instant. In other words, we can use the light generated by the objects under observation rather than sending light to them and looking at the reflection.
This difference may sound like a hair-splitting technicality, but it does make an enormous difference in the predictions we can make. Einstein’s choice results in a mathematical picture that has many desirable properties, thereby making further development elegant.
The other possibility has an advantage when it comes to describing objects in motion because it corresponds better with how we measure them. We don’t use Radar to see the stars in motion; we merely sense the light (or other radiation) coming from them. But this choice of using a sensory paradigm, rather than Radar-like detection, to describe the universe results in a slightly uglier mathematical picture.
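The “improbably high speed” mentioned earlier is not hypothetical. A standard special-relativity calculation (the formula and the numbers below are mine, not from this essay) shows how a sub-light source can appear to move faster than light across the sky when the one-way light travel time is left uncorrected:

```python
import math

def apparent_speed(beta, theta_deg):
    """Apparent transverse speed (in units of c) of a source moving at
    speed beta (in units of c) at angle theta to the line of sight.
    Standard formula: beta*sin(theta) / (1 - beta*cos(theta))."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1 - beta * math.cos(th))

# A source at 0.95c moving nearly toward us appears superluminal:
print(apparent_speed(0.95, 10.0))  # greater than 1, i.e. faster than c
```

The effect comes entirely from light emitted later in the motion having a shorter distance to travel, which is exactly the kind of perceptual distortion the essay is discussing.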
The mathematical difference spawns different philosophical stances, which in turn percolate to the understanding of our physical picture of reality. As an illustration, let us look at an example from astrophysics. Suppose we observe (through a radio telescope, for instance) two objects in the sky, roughly of the same shape and properties. The only thing we know for sure is that the radio waves from two different points in the sky reach the radio telescope at the same instant in time. We can guess that the waves started their journey quite a while ago.
For symmetric objects, if we assume (as we routinely do) that the waves started the journey roughly at the same instant in time, we end up with a picture of two “real” symmetric lobes more or less the way we see them.
But there is a different possibility: that the waves originated from the same object (which is in motion) at two different instants in time, reaching the telescope at the same instant. This possibility explains some spectral and temporal properties of such symmetric radio sources, which is what I mathematically described in a recent physics article. Now, which of these two pictures should we take as real? Two symmetric objects, as we see them, or one object moving in such a way as to give us that impression? Does it really matter which one is “real”? Does “real” mean anything in this context?
The philosophical stance implied in special relativity answers this question unequivocally. There is an unambiguous physical reality from which we get the two symmetric radio sources, although it takes a bit of mathematical work to get to it. The mathematics rules out the possibility of a single object moving in such a fashion as to mimic two objects. Essentially, what we see is what is out there.
On the other hand, if we define simultaneity using concurrent arrival of light, we will be forced to admit the exact opposite. What we see is pretty far from what is out there. We will confess that we cannot unambiguously decouple the distortions due to the constraints in perception (the finite speed of light being the constraint of interest here) from what we see. There are multiple physical realities that can result in the same perceptual picture. The only philosophical stance that makes sense is the one that disconnects the sensed reality and the causes behind what is being sensed.
This disconnect is not uncommon in philosophical schools of thought. Phenomenalism, for instance, holds the view that space and time are not objective realities. They are merely the medium of our perception. All the phenomena that happen in space and time are merely bundles of our perception. In other words, space and time are cognitive constructs arising from perception. Thus, all the physical properties that we ascribe to space and time can only apply to the phenomenal reality (the reality as we sense it). The noumenal reality (which holds the physical causes of our perception), by contrast, remains beyond our cognitive reach.
The ramifications of the two different philosophical stances described above are tremendous. Since modern physics seems to embrace a non-phenomenalistic view of space and time, it finds itself at odds with that branch of philosophy. This chasm between philosophy and physics has grown to such a degree that the Nobel-prize-winning physicist Steven Weinberg wondered (in his book “Dreams of a Final Theory”) why the contribution from philosophy to physics has been so surprisingly small. It also prompts philosophers to make statements like, “Whether ‘noumenal reality causes phenomenal reality’ or whether ‘noumenal reality is independent of our sensing it’ or whether ‘we sense noumenal reality,’ the problem remains that the concept of noumenal reality is a totally redundant concept for the analysis of science.”
One, almost accidental, difficulty in redefining the effects of the finite speed of light as the properties of space and time is that any effect that we do understand gets instantly relegated to the realm of optical illusions. For instance, the eight-minute delay in seeing the sun, because we readily understand it and disassociate it from our perception using simple arithmetic, is considered a mere optical illusion. However, the distortions in our perception of fast-moving objects, although originating from the same source, are considered a property of space and time because they are more complex.
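The “simple arithmetic” behind the solar delay is just one distance divided by one speed. The constants below are the standard values for the astronomical unit and the speed of light, not figures from the essay:

```python
# One astronomical unit divided by the speed of light gives the
# light travel time from the sun to the earth.
AU = 1.495978707e11   # metres (standard value)
c = 2.99792458e8      # metres per second (exact, by definition)

delay_minutes = AU / c / 60
print(round(delay_minutes, 1))  # roughly 8.3 minutes
```

Subtracting this fixed delay is what “disassociating” the distortion from our perception amounts to in the static case; for moving objects, no such single correction exists without further assumptions.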
We have to come to terms with the fact that when it comes to seeing the universe, there is no such thing as an optical illusion, which is probably what Goethe pointed out when he said, “Optical illusion is optical truth.”
The distinction (or lack thereof) between optical illusion and truth is one of the oldest debates in philosophy. After all, it is about the distinction between knowledge and reality. Knowledge is considered our view about something that, in reality, is “actually the case.” In other words, knowledge is a reflection, or a mental image of something external, as shown in the figure below.
In this picture, the black arrow represents the process of creating knowledge, which includes perception, cognitive activities, and the exercise of pure reason. This is the picture that physics has come to accept.
While acknowledging that our perception may be imperfect, physics assumes that we can get closer and closer to the external reality through increasingly finer experimentation, and, more importantly, through better theorization. The Special and General Theories of Relativity are examples of brilliant applications of this view of reality, where simple physical principles are relentlessly pursued, using the formidable machinery of pure reason, to their logically inevitable conclusions.
But there is another, alternative view of knowledge and reality that has been around for a long time. This is the view that regards perceived reality as an internal cognitive representation of our sensory inputs, as illustrated below.
In this view, knowledge and perceived reality are both internal cognitive constructs, although we have come to think of them as separate. What is external is not the reality as we perceive it, but an unknowable entity giving rise to the physical causes behind sensory inputs. In the illustration, the first arrow represents the process of sensing, and the second arrow represents the cognitive and logical reasoning steps. In order to apply this view of reality and knowledge, we have to guess the nature of the absolute reality, unknowable as it is. One possible candidate for the absolute reality is Newtonian mechanics, which gives a reasonable prediction for our perceived reality.
To summarize, when we try to handle the distortions due to perception, we have two options, or two possible philosophical stances. One is to accept the distortions as part of our space and time, as SR does. The other option is to assume that there is a “higher” reality distinct from our sensed reality, whose properties we can only conjecture. In other words, one option is to live with the distortion, while the other is to propose educated guesses for the higher reality. Neither of these options is particularly attractive. But the guessing path is similar to the view accepted in phenomenalism. It also leads naturally to how reality is viewed in cognitive neuroscience, which studies the biological mechanisms behind cognition.
In my view, the two options are not inherently distinct. The philosophical stance of SR can be thought of as coming from a deep understanding that space is merely a phenomenal construct. If the sense modality introduces distortions in the phenomenal picture, we may argue that one sensible way of handling it is to redefine the properties of the phenomenal reality.
#### Role of Light in Our Reality
From the perspective of cognitive neuroscience, everything we see, sense, feel and think is the result of the neuronal interconnections in our brain and the tiny electrical signals in them. This view must be right. What else is there? All our thoughts and worries, knowledge and beliefs, ego and reality, life and death: everything is merely neuronal firings in the one and a half kilograms of gooey, grey material that we call our brain. There is nothing else. Nothing!
Actually, this view of reality in neuroscience is an exact echo of phenomenalism, which considers everything a bundle of perception or mental constructs. Space and time are also cognitive constructs in our brain, like everything else. They are mental pictures our brains concoct out of the sensory inputs that our senses receive. Generated from our sensory perception and fabricated by our cognitive process, the space-time continuum is the arena of physics. Of all our senses, sight is by far the dominant one. The sensory input to sight is light. In a space created by the brain out of the light falling on our retinas (or on the photo sensors of the Hubble telescope), is it a surprise that nothing can travel faster than light?
This philosophical stance is the basis of my book, The Unreal Universe, which explores the common threads binding physics and philosophy. Such philosophical musings usually get a bad rap from us physicists. To physicists, philosophy is an entirely different field, another silo of knowledge. We need to change this belief and appreciate the overlap among different knowledge silos. It is in this overlap that we can expect to find breakthroughs in human thought.
This philosophical grand-standing may sound presumptuous and the veiled self-admonition of physicists understandably unwelcome; but I am holding a trump card. Based on this philosophical stance, I have come up with a radically new model for two astrophysical phenomena, and published it in an article titled, “Are Radio Sources and Gamma Ray Bursts Luminal Booms?” in the well-known International Journal of Modern Physics D in June 2007. This article, which soon became one of the top accessed articles of the journal by Jan 2008, is a direct application of the view that the finite speed of light distorts the way we perceive motion. Because of these distortions, the way we see things is a far cry from the way they are.
We may be tempted to think that we can escape such perceptual constraints by using technological extensions to our senses such as radio telescopes, electron microscopes or spectroscopic speed measurements. After all, these instruments do not have “perception” per se and should be immune to the human weaknesses we suffer from. But these soulless instruments also measure our universe using information carriers limited to the speed of light. We, therefore, cannot escape the basic constraints of our perception even when we use modern instruments. In other words, the Hubble telescope may see a billion light years farther than our naked eyes, but what it sees is still a billion years older than what our eyes see.
Our reality, whether technologically enhanced or built upon direct sensory inputs, is the end result of our perceptual process. To the extent that our long range perception is based on light (and is therefore limited to its speed), we get only a distorted picture of the universe.
#### Light in Philosophy and Spirituality
The twist to this story of light and reality is that we seem to have known all this for a long time. Classical philosophical schools seem to have thought along lines very similar to Einstein’s thought experiment.
Once we appreciate the special place accorded to light in modern science, we have to ask ourselves how different our universe would have been in the absence of light. Of course, light is only a label we attach to a sensory experience. Therefore, to be more accurate, we have to ask a different question: if we did not have any senses that responded to what we call light, would that affect the form of the universe?
The immediate answer from any normal (that is, non-philosophical) person is that it is obvious. If everybody is blind, everybody is blind. But the existence of the universe is independent of whether we can see it or not. Is it, though? What does it mean to say the universe exists if we cannot sense it? Ah… the age-old conundrum of the falling tree in a deserted forest. Remember, the universe is a cognitive construct or a mental representation of the light input to our eyes. It is not “out there,” but in the neurons of our brain, as everything else is. In the absence of light in our eyes, there is no input to be represented, ergo no universe.
If we had sensed the universe using modalities that operated at other speeds (echolocation, for instance), it is those speeds that would have figured in the fundamental properties of space and time. This is the inescapable conclusion from phenomenalism.
The role of light in creating our reality or universe is at the heart of Western religious thinking. A universe devoid of light is not simply a world where you have switched off the lights. It is indeed a universe devoid of itself, a universe that doesn’t exist. It is in this context that we have to understand the wisdom behind the statement that “the earth was without form, and void” until God caused light to be, by saying “Let there be light.”
The Quran also says, “Allah is the light of the heavens and the earth,” which is mirrored in one of the ancient Hindu writings: “Lead me from darkness to light, lead me from the unreal to the real.” The role of light in taking us from the unreal void (the nothingness) to a reality was indeed understood for a long, long time. Is it possible that the ancient saints and prophets knew things that we are only now beginning to uncover with all our supposed advances in knowledge?
I know I may be rushing in where angels fear to tread, for reinterpreting the scriptures is a dangerous game. Such foreign interpretations are seldom welcome in the theological circles. But I seek refuge in the fact that I am looking for concurrence in the metaphysical views of spiritual philosophies, without diminishing their mystical or theological value.
The parallels between the noumenal-phenomenal distinction in phenomenalism and the Brahman-Maya distinction in Advaita are hard to ignore. This time-tested wisdom on the nature of reality from the repertoire of spirituality is now reinvented in modern neuroscience, which treats reality as a cognitive representation created by the brain. The brain uses the sensory inputs, memory, consciousness, and even language as ingredients in concocting our sense of reality. This view of reality, however, is something physics is yet to come to terms with. But to the extent that its arena (space and time) is a part of reality, physics is not immune to philosophy.
As we push the boundaries of our knowledge further and further, we are beginning to discover hitherto unsuspected and often surprising interconnections between different branches of human efforts. In the final analysis, how can the diverse domains of our knowledge be independent of each other when all our knowledge resides in our brain? Knowledge is a cognitive representation of our experiences. But then, so is reality; it is a cognitive representation of our sensory inputs. It is a fallacy to think that knowledge is our internal representation of an external reality, and therefore distinct from it. Knowledge and reality are both internal cognitive constructs, although we have come to think of them as separate.
Recognizing and making use of the interconnections among the different domains of human endeavour may be the catalyst for the next breakthrough in our collective wisdom that we have been waiting for. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6458325982093811, "perplexity": 684.9915834518326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509973.34/warc/CC-MAIN-20210117051021-20210117081021-00188.warc.gz"} |
https://zenodo.org/record/1298566 | Dataset Open Access
# Supplementary Data: Impact of vacuum stability, perturbativity and XENON1T on global fits of Z2 and Z3 scalar singlet dark matter (arXiv:1806.11281)
The GAMBIT Collaboration
Supplementary Data
Impact of vacuum stability, perturbativity and XENON1T on global fits of Z2 and Z3 scalar singlet dark matter arXiv:1806.11281
The files in this record contain data for the scalar singlet dark matter models considered in the GAMBIT "Scalar singlet Mark II" paper.
The files consist of
• 30 regular YAML files
• StandardModel_SLHA2_scan.yaml, a universal YAML fragment included from the other YAML files
• 14 hdf5 files. 8 of these correspond to the complete set of combined samples for each fit. These 8 fits are generated from all binary permutations of three run properties: Z2 or Z3 model, with or without absolute vacuum stability demanded, and with constraints from the 2017 or 2018 XENON1T data. These 8 hdf5 files are used to generate the profile likelihood plots in the paper. The other 6 hdf5 files are the results of T-Walk runs, and are used to generate the posterior pdfs in the paper.
• Some example pip files for producing plots from the hdf5 files using pippi
• A tarball best_fits_yaml.tar.gz containing YAML files of the best-fit point in each of the 8 fits.
The files follow the naming scheme SingletDM_[model]_[slice]_[vacuum]_[xenon]_[prior]_[scanner].yaml.
• model: Z2 or Z3
• slice: full, lowmass, neck or absent (for hdf5 files)
• vacuum: ms (metastable) or vs (absolute vacuum stability)
• xenon: X17 or X18 (constraints from the 2017 or 2018 XENON1T data)
• prior: logmu3, flatmu3 or absent (for Z2 scans)
• scanner: TWalk or absent (implies Diver scans in the case of YAML files, and indicates merged samples potentially from both Diver and T-Walk in the case of hdf5 files)
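As a convenience for working with the record, the naming scheme above can be parsed mechanically. The helper below is my own (the record only states the scheme; the regex, the function name, and the handled extensions are assumptions), returning `None` for fields that are absent from a filename:

```python
import re

# Parse SingletDM_[model]_[slice]_[vacuum]_[xenon]_[prior]_[scanner].[ext]
# per the naming scheme described in the record; optional parts may be absent.
PATTERN = re.compile(
    r"^SingletDM"
    r"_(?P<model>Z2|Z3)"
    r"(?:_(?P<slice>full|lowmass|neck))?"
    r"_(?P<vacuum>ms|vs)"
    r"_(?P<xenon>X17|X18)"
    r"(?:_(?P<prior>logmu3|flatmu3))?"
    r"(?:_(?P<scanner>TWalk))?"
    r"\.(?:yaml|hdf5\.tar\.gz)$"
)

def parse_name(filename):
    """Return a dict of the six naming-scheme fields, or raise ValueError."""
    m = PATTERN.match(filename)
    if m is None:
        raise ValueError(f"not a scan file: {filename}")
    return m.groupdict()

print(parse_name("SingletDM_Z3_full_vs_X18_logmu3_TWalk.yaml"))
```

An absent `scanner` field then corresponds to Diver (for YAML files) or merged samples (for hdf5 files), as described above.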
A few caveats to keep in mind:
1. The YAML files are designed to work with GAMBIT 1.2.0, commit e4d3f739, and the pip files are tested with pippi 2.1, commit c094b8c8. They may or may not work with later versions of either software (but you can of course always obtain the version that they do work with via the git history).
2. The pip files are examples only. Users wishing to reproduce the more advanced plots in any of the GAMBIT papers should contact us for tips or scripts, or experiment for themselves. Many of these scripts are in multiple parts and require undocumented manual interventions and steps in order to implement various plot-specific customisations, so please don't expect the same level of polish as for files provided here or in the GAMBIT repo.
Files (201.8 GB)
Name Size
best_fits_yaml.tar.gz
4.3 kB
md5:0d804ef8860fb84bd4c55acef47fb4a7
2.6 kB
SingletDM_Z2.pip
md5:7f1e42af4479753c0b14c400c357d5cc
6.2 kB
SingletDM_Z2_full_ms_X17.yaml
md5:6bdcdd678e733e6af49d04fd74c66401
5.9 kB
SingletDM_Z2_full_ms_X18.yaml
md5:30ebf58b8be1cd21df033cdb8e540b88
6.1 kB
SingletDM_Z2_full_ms_X18_TWalk.yaml
6.1 kB
SingletDM_Z2_full_vs_X17.yaml
md5:9eb52c433a4d2548370529e4929da2e1
6.1 kB
SingletDM_Z2_full_vs_X18.yaml
md5:c8ba90ded00b0c1fe55f29566552d700
6.3 kB
SingletDM_Z2_full_vs_X18_TWalk.yaml
md5:d4cfd11f32b17407832c27501af94e1a
6.2 kB
SingletDM_Z2_lowmass_ms_X17.yaml
md5:a5339f1f67e3e6e5734d8c952fba64aa
5.9 kB
SingletDM_Z2_lowmass_ms_X18.yaml
md5:13b56a68b8c906a2b80b9ec0afa6fa7d
6.1 kB
SingletDM_Z2_ms_X17.hdf5.tar.gz
md5:4b0e553f33dae7d0c82e7996f3048604
13.5 GB
SingletDM_Z2_ms_X18.hdf5.tar.gz
md5:57efe063688a9ea0d7a0d028939bb3ac
17.4 GB
SingletDM_Z2_ms_X18_TWalk.hdf5.tar.gz
md5:432863dec2150084396748f76ea17d5e
8.6 GB
SingletDM_Z2_neck_ms_X17.yaml
5.9 kB
SingletDM_Z2_neck_ms_X18.yaml
md5:1a729e9d2542292161937d1684a58d24
6.1 kB
SingletDM_Z2_vs_X17.hdf5.tar.gz
md5:964d10a516bc6cfe3edcce36365ae899
2.7 GB
SingletDM_Z2_vs_X18.hdf5.tar.gz
md5:a56bf9351e24d528331045153ae92eb3
12.4 GB
SingletDM_Z2_vs_X18_TWalk.hdf5.tar.gz
md5:a22fd30f6135ea63c3c476118b6de26d
8.6 GB
SingletDM_Z3.pip
md5:c2ebea797d7f8595d6e43f6283cf9545
6.3 kB
SingletDM_Z3_full_ms_X17_flatmu3.yaml
md5:1c5667cccef627992c6eb9239dffa76f
6.3 kB
SingletDM_Z3_full_ms_X17_logmu3.yaml
md5:e1ecbd53240e4ccddbf8221d805d712a
6.4 kB
SingletDM_Z3_full_ms_X18_flatmu3.yaml
md5:db7f80a7b1880763a7c7bb207ecbb5c1
6.5 kB
SingletDM_Z3_full_ms_X18_flatmu3_TWalk.yaml
6.5 kB
SingletDM_Z3_full_ms_X18_logmu3.yaml
md5:bfd29a7c3605d4fc5101e392a84cfe51
6.6 kB
SingletDM_Z3_full_ms_X18_logmu3_TWalk.yaml
md5:04aea53430b0127e34675672e7d7af1e
6.6 kB
SingletDM_Z3_full_vs_X17_flatmu3.yaml
md5:5279d2156b59bea545110197d0eb6fbe
6.5 kB
SingletDM_Z3_full_vs_X17_logmu3.yaml
md5:7a683ca971bbf0e9c132d6c1b3dbeb2d
6.5 kB
SingletDM_Z3_full_vs_X18_flatmu3.yaml
6.7 kB
SingletDM_Z3_full_vs_X18_flatmu3_TWalk.yaml
md5:f61c2a280b0854dd68bc6c0772b8b1a6
6.6 kB
SingletDM_Z3_full_vs_X18_logmu3.yaml
md5:d2cced8f9d77db12b465618411c75250
6.7 kB
SingletDM_Z3_full_vs_X18_logmu3_TWalk.yaml
6.7 kB
SingletDM_Z3_lowmass_ms_X17_flatmu3.yaml
md5:48423b8ef6f19659a5e6ce2604139629
6.3 kB
SingletDM_Z3_lowmass_ms_X17_logmu3.yaml
md5:0f8686bc91347524d8a4b1aa1edd7cb9
6.4 kB
SingletDM_Z3_lowmass_ms_X18_flatmu3.yaml
md5:f41790f0d47a39b19cddc6d0d3e7e38f
6.5 kB
SingletDM_Z3_lowmass_ms_X18_logmu3.yaml
md5:0684f12ef3b537366914c82c226d2353
6.6 kB
SingletDM_Z3_ms_X17.hdf5.tar.gz
md5:83548a589cb1e92375be7746c569e16b
37.4 GB
SingletDM_Z3_ms_X18.hdf5.tar.gz
md5:708471979efec30321e2dc31908b8189
51.1 GB
SingletDM_Z3_ms_X18_flatmu3_TWalk.hdf5.tar.gz
6.0 GB
SingletDM_Z3_ms_X18_logmu3_TWalk.hdf5.tar.gz
md5:5f34546dc01e32d30228d1d1dd94f340
5.7 GB
SingletDM_Z3_neck_ms_X17_flatmu3.yaml
md5:7db28bd6daa6723ed112571a2a13de4b
6.3 kB
SingletDM_Z3_neck_ms_X17_logmu3.yaml
md5:68bdb308ed809f0e9d9ef5d9e5c2280e
6.4 kB
SingletDM_Z3_neck_ms_X18_flatmu3.yaml
md5:a39ef85565277d686c23476458ced908
6.6 kB
SingletDM_Z3_neck_ms_X18_logmu3.yaml
md5:6e245f186f2ff88063bab095bf49ed49
6.6 kB
SingletDM_Z3_vs_X17.hdf5.tar.gz
md5:c234f6a126f0b5eb9b1f854e955acb5c
6.1 GB
SingletDM_Z3_vs_X18.hdf5.tar.gz
19.5 GB
SingletDM_Z3_vs_X18_flatmu3_TWalk.hdf5.tar.gz
md5:3d9ba7572cceee385c3cca55cbfb1990
6.3 GB
SingletDM_Z3_vs_X18_logmu3_TWalk.hdf5.tar.gz
md5:04c26effaeaf7c17dda741d4fc1c4e13
6.5 GB
StandardModel_SLHA2_scan.yaml
md5:cf4e5ae0741d7ea6d93afc3acaa496aa
2.7 kB
https://www.physicsforums.com/threads/projectile-motion-equation-help.84367/ | # Homework Help: Projectile motion equation help
1. Aug 6, 2005
### pkossak
This question was on my last test, and I got it wrong. If anyone could help me understand how to get the answer, I would really appreciate it!
A small metal ball with a mass of m = 91.7 g is attached to a string of length
l = 1.57 m. It is held at an angle of q = 47.5° with respect to the vertical.
The ball is then released. When the rope is vertical, the ball collides head-
on and perfectly elastically with an identical ball originally at rest. This
second ball flies off with a horizontal initial velocity from a height of h =
3.19 m, and then later it hits the ground. At what distance x will the ball
land?
I'm not even really too sure on how to approach it!
2. Aug 6, 2005
### mathmike
you would use projectile motion on this one as well as angular acceleration.
a_r = v^2 / r
the range of the projectile is given by

R = (v0^2 * sin(2 theta)) / g
3. Aug 6, 2005
### pkossak
Thanks a lot for the help, but is there anything else you can tell me. What's throwing me off is that I'm not sure how to find the velocity. I feel like i'm overlooking something simple
4. Aug 6, 2005
### mathmike
first you would use the equation
a = g sin (theta)
then integrate to find the velocity
5. Aug 6, 2005
### pkossak
Got it, thanks so much! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091626763343811, "perplexity": 840.3680089895611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510998.39/warc/CC-MAIN-20181017044446-20181017065946-00262.warc.gz"} |
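For readers checking their own numbers, here is one way to put the hints in this thread together. It uses energy conservation for the swing instead of integrating a = g sin(θ) (both give the same speed at the bottom), and assumes g = 9.81 m/s²; the specific steps are my reading of the problem, not posted by the participants:

```python
import math

# Problem data from the thread.
g = 9.81                    # m/s^2 (assumed value)
l = 1.57                    # pendulum length, m
theta = math.radians(47.5)  # release angle from the vertical
h = 3.19                    # launch height of the second ball, m

# 1. Swing: the ball drops l*(1 - cos theta); energy conservation
#    gives its speed at the bottom of the arc.
v = math.sqrt(2 * g * l * (1 - math.cos(theta)))

# 2. Perfectly elastic head-on collision of identical masses:
#    the first ball stops and the second leaves with speed v.

# 3. Projectile motion: horizontal launch from height h.
t = math.sqrt(2 * h / g)    # fall time
x = v * t                   # landing distance, m
print(round(x, 2))          # roughly 2.5 m
```

Note that the mass m = 91.7 g never enters: it cancels in the energy balance, and the elastic collision of identical masses simply swaps velocities.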
https://cs.stackexchange.com/questions/57580/undecidability-of-regular-tm-detail-within-proof | # Undecidability of REGULAR_TM (Detail within Proof)
I'm reading through Sipser's Intro to the Theory of Computation for a class, and I'm having trouble understanding one of the examples in the book.
The example shows how $REGULAR_{TM}$, defined as the problem of determining if a Turing machine recognizes a regular language, is undecidable. The way they do it is by showing a reduction from $A_{TM}$, which is the acceptance problem for Turing Machines (the acceptance problem for some computational model is the task of determining whether or not it accepts a given input string).
From what I understand, all that needs to be done is show that if $REGULAR_{TM}$ was decidable by some TM $R$, then another TM could use $R$ to decide $A_{TM}$, which would be a contradiction, since $A_{TM}$ is undecidable.
They construct a TM $S$ that can be used to decide $A_{TM}$. Where I'm lost is through their use of an intermediate TM, $M_2$. Here is the full description:
• $S$ = "On input $\langle M, w\rangle$, where $M$ is a TM and $w$ is a string:
1. Construct the following TM $M_2$.
• $M_2$ = "On input $x$:
1. If $x$ has the form $0^{n}1^{n}$, accept.
2. If $x$ does not have this form, run $M$ on input $w$ and accept if $M$ accepts $w$."
2. Run $R$ on input $\langle M_2 \rangle$.
3. If $R$ accepts, accept; if $R$ rejects, reject."
My two main questions are:
1. Why are we allowed to do step 2 in the construction of $M_2$? If we could just say "run $M$ on $w$ and accept if $M$ accepts", then wouldn't that be a way to show that $A_{TM}$ is decidable?
2. What exactly is the role of step 1 in the construction of $M_2$? The book says that the purpose of this TM is not to be run, but just to feed its description into $R$. The TM recognizes $\{0^n1^n\mid n \ge 0\}$ if $M$ does not accept $w$, and $\Sigma^*$ if it does, but I don't see how it does that from the description. Also, I don't see how $A_{TM}$ can be decided by using this (I'm thinking it's that the TM $M_2$ only outputs a regular language if the TM $M$ accepts $w$, but I don't understand how it can do that).
Any help is greatly appreciated, thanks!
• Please provide all the definition to make your question self contained. What is $A_{TM}$? – Ariel May 17 '16 at 18:05
• Thanks! Updated the question with this info: $A_{TM}$ is the acceptance problem for Turing machines. It determines whether or not a Turing machine accepts a given input string. While the acceptance problem for both DFAs and CFGs is decidable, it is not for TMs. – Andrew DiNunzio May 17 '16 at 18:10
This proof assumes, for the purpose of contradiction, that there exists a Turing machine $R$ deciding $R_{TM}=\left\{\langle M\rangle | \hspace{1mm} L(M) \text{ is regular}\right\}$, and uses the existence of $R$ to construct a Turing machine deciding $A_{TM}=\left\{\langle M,w\rangle | \hspace{1mm} M \text{ accepts$w$}\right\}$.
$R$ halts on every input (since it decides $R_{TM}$), so there is no problem running it (it will finally halt and either accept or reject the input). This avoids the problem of running $M$, some arbitrary Turing machine, on some input $w$, since we do not know that $M$ will finally halt.
Now, Sipser constructs a new Turing machine, $M_2$, such that:
$L(M_2) \text{ is regular} \iff M \text{ accepts$w$}$
If the above holds, then you would have $\langle M_2\rangle\in L(R) \iff M \text{ accepts$w$}$, so you could find out if $M$ accepts $w$ simply by running $R$ (which always halts) on $M_2$.
To see why this property holds, note that $M_2$ always accepts strings of the form $0^n1^n$, and it accepts any string not of this form iff $M$ accepts $w$. This means that $L(M_2)=\left\{0^n1^n | n\ge 0\right\}$ if $M$ does not accept $w$, and $\Sigma^*$ (which is regular) otherwise.
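As an illustrative sketch (the function names and the boolean stand-in are mine, not Sipser's), the construction can be mimicked in code by treating "M accepts w" as a flag. The point is only to see how the language of $M_2$ flips between $\{0^n1^n\}$ and $\Sigma^*$; in the real construction, step 2 is "run $M$ on $w$", which may never halt:

```python
def is_0n1n(x):
    """True iff x has the form 0^n 1^n for some n >= 0."""
    n = len(x) // 2
    return len(x) % 2 == 0 and x == "0" * n + "1" * n

def make_M2(m_accepts_w):
    """Build the recognizer M_2 from a (terminating) stand-in for
    'M accepts w'. In Sipser's proof this inner call may loop forever,
    which is fine: M_2 is only fed to R as a description, never run."""
    def M2(x):
        return is_0n1n(x) or m_accepts_w
    return M2

# If M accepts w, M_2 accepts every string (Sigma*, regular);
# otherwise it accepts exactly {0^n 1^n} (non-regular).
M2_yes = make_M2(True)
M2_no = make_M2(False)
```

Running the hypothetical decider $R$ on a description of `M2` would then report "regular" exactly when $M$ accepts $w$, which is the contradiction the proof needs.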
• Okay thanks, so when constructing something to be encoded and used by $R$ (like $M_2$) we are not required to make it decidable? Also, the last sentence seems a bit confusing to me. Do you mean $L(M_2)=\{0^n1^n \mid n \ge 0\}$ if $M$ does not accept $w$? From what I understand, the purpose of $M_2$ is to accept if $M$ recognizes $w$ and reject if $M$ does not accept $w$. But it also has to align with regularity, so if $M$ accepts $w$, it should decide a regular language, and if $M$ does not, it should decide a non-regular language (which arbitrarily is $0^n1^n$)? – Andrew DiNunzio May 17 '16 at 21:51
• We can give any input to $R$. Regardless of what the machine $M_2$ does, we can input its encoding $\langle M_2 \rangle$ to $R$ and get an answer. As for your second question, you are correct, I meant $L(M_2)=\left\{0^n1^n | n\ge 0\right\}$ if $M$ does not accept $w$, now fixed. – Ariel May 17 '16 at 21:59
https://www.johndcook.com/blog/2016/07/19/integral-equation-types/ | # Integral equation types
There are four basic types of integral equations. There are many other integral equations, but if you are familiar with these four, you have a good overview of the classical theory.
All four involve the unknown function φ(x) in an integral with a kernel K(x, y) and all have an input function f(x). In all four the integration ranges from some fixed lower limit. In the Volterra equations, the upper limit of integration is the variable x, while in the Fredholm equations, the upper limit of integration is a fixed constant.
The so-called equations of the first kind only involve the unknown function φ inside the integral. The equations of the second kind also involve φ outside the integral.
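Written out, the four forms are (the post displayed them as images; this is the standard presentation, with a a fixed lower limit and b a fixed upper limit):

```latex
\begin{aligned}
&\text{Volterra, first kind:}  & f(x) &= \int_a^x K(x,y)\,\varphi(y)\,dy\\
&\text{Volterra, second kind:} & \varphi(x) &= f(x) + \int_a^x K(x,y)\,\varphi(y)\,dy\\
&\text{Fredholm, first kind:}  & f(x) &= \int_a^b K(x,y)\,\varphi(y)\,dy\\
&\text{Fredholm, second kind:} & \varphi(x) &= f(x) + \int_a^b K(x,y)\,\varphi(y)\,dy
\end{aligned}
```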
So the four equations above are
• Volterra equation of the first kind
• Volterra equation of the second kind
• Fredholm equation of the first kind
• Fredholm equation of the second kind
Here’s a diagram to make these easier to keep straight:
In general, the theory of Volterra equations is easier than that of Fredholm equations. And while equations of the first kind look simpler at first, it’s common to reduce equations of the first kind to equations of the second kind and concentrate on the latter.
There are many variations on this theme. The x in Volterra equations could be a vector. The integral could be, for example, a double or triple integral. In Fredholm equations, the integration may be over a fixed general region. Maybe you’re integrating over a watermelon, as the late William Guy would say. You could have nonlinear versions of these equations where instead of multiplying K(x, y) times φ(y) you have a kernel K(x, y, φ(y)) that is some nonlinear function of φ.
You may see references to Volterra or Fredholm equations of the third kind. These are an extension of the second kind, where a function A(x) multiplies the φ outside the integral. Equations of the second kind are the most important since the first and third kinds can often be reduced to the second kind.
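As a concrete illustration (mine, not from the post): a Volterra equation of the second kind, φ(x) = f(x) + ∫₀ˣ K(x, y) φ(y) dy, can be solved numerically by marching the trapezoid rule forward in x. The kernel and f below are chosen so the exact solution is eˣ:

```python
def solve_volterra2(f, K, x_max, n):
    """Approximate the Volterra equation of the second kind
        phi(x) = f(x) + integral_0^x K(x, y) phi(y) dy
    on [0, x_max] by marching the trapezoid rule forward in x."""
    h = x_max / n
    xs = [i * h for i in range(n + 1)]
    phi = [f(xs[0])]                      # at x = 0 the integral vanishes
    for i in range(1, n + 1):
        s = 0.5 * K(xs[i], xs[0]) * phi[0]
        s += sum(K(xs[i], xs[j]) * phi[j] for j in range(1, i))
        # phi(x_i) appears on both sides; solve the scalar linear equation
        phi_i = (f(xs[i]) + h * s) / (1.0 - 0.5 * h * K(xs[i], xs[i]))
        phi.append(phi_i)
    return xs, phi

# With K = 1 and f = 1 the exact solution is phi(x) = e^x.
xs, phi = solve_volterra2(lambda x: 1.0, lambda x, y: 1.0, 1.0, 200)
```

Marching works precisely because the upper limit is the variable x: each φ(xᵢ) depends only on earlier grid values. The same trick is unavailable for Fredholm equations, which instead lead to a full linear system.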
Related: Differential equation consulting
## 6 thoughts on “Integral equation types”
1. Survey article covering application areas for these four?
Theoretical survey article covering open questions for these?
Thanks for this.
3. Maicol
Excellent post. Could you please tell us about their application in physical sciences, if any? Thanks.
4. Hamza El Mahjour
Do you know Martin Costabel? One of the best teachers I had in this subject.
5. No, I haven’t had the pleasure of meeting him.
6. What about equations of the form
u(x,t) = int_{-inf}^{inf} int_{0}^{t} k(x-y)(t-s) u(y,s) ds dy + f(x,t)?
This seems to be an integral equation in two variables:
Volterra of the second kind with respect to t, and Fredholm of the second kind with respect to x.
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-7/r2/section/7.10/
# 7.10: Single Variable Division Equations
Difficulty Level: At Grade Created by: CK-12
Have you ever been to a theater with limited seating? Well, Marc and Kara had this happen with their grandparents.
Marc and Kara went to see a play with their grandparents. When they arrived at the theater, the manager divided their group and several other people into six smaller groups. Each of these groups was led to a section of the theater where there were empty seats. Each group had six people in it too.
If this was the division, how many people did the manager divide up to start with?
Write a division equation and solve it to complete this dilemma.
This Concept is all about single variable division equations. You will know how to do this by the end of the Concept.
### Guidance
Sometimes, you will see equations that have division in them. Remember that we can use a fraction bar to show division.
To solve an equation in which a variable is divided by a number, we can use the inverse of division––multiplication. We can multiply both sides of the equation by that number to solve it.
We must multiply both sides of the equation by that number because of the Multiplication Property of Equality, which states:
if \begin{align*}a=b\end{align*} and \begin{align*}c \neq 0\end{align*}, then \begin{align*}a \times c=b \times c\end{align*}.
So, if you multiply one side of an equation by a nonzero number, c\begin{align*}c\end{align*}, you must multiply the other side of the equation by that same number, c\begin{align*}c\end{align*}, to keep the values on both sides equal.
Now let's apply this information.
\begin{align*}k \div (-4)=12\end{align*}.
In the equation, \begin{align*}k\end{align*} is divided by -4. So, we can multiply both sides of the equation by -4 to solve for \begin{align*}k\end{align*}. You will need to use what you know about multiplying integers to help you solve this problem. It may help to rewrite \begin{align*}k \div (-4)\end{align*} as \begin{align*}\frac{k}{-4}\end{align*}.
\begin{align*}k \div (-4) &= 12\\ \frac{k}{-4} &= 12\\ \frac{k}{-4} \times (-4) &= 12 \times (-4)\\ \frac{k}{-4} \times \frac{-4}{1} &= -48\\ \frac{k}{\cancel{-4}} \times \frac{\cancel{-4}}{1} &= -48\\ \frac{k}{1} &= -48\\ k &= -48\end{align*}
The –4's will cancel each other out when they are divided. Then we multiply.
The value of \begin{align*}k\end{align*} is –48.
Remember the rules for multiplying integers will apply when working with these equations!! Think back and use them as you work.
\begin{align*}\frac{n}{1.5}=10\end{align*}
In the equation, \begin{align*}n\end{align*} is divided by 1.5. So, we can multiply both sides of the equation by 1.5 to solve for \begin{align*}n\end{align*}.
\begin{align*}\frac{n}{1.5} &= 10\\ \frac{n}{1.5} \times 1.5 &= 10 \times 1.5\\ \frac{n}{1.5} \times \frac{1.5}{1} &= 15\\ \frac{n}{\cancel{1.5}} \times \frac{\cancel{1.5}}{1} &= 15\\ \frac{n}{1} &= 15\\ n &= 15\end{align*}
The value of \begin{align*}n\end{align*} is 15.
When an equation has division in it, you can use the Multiplication Property of Equality to solve it. Always remember to think about the inverse operations and associate them with the different properties. This will help you to keep it all straight and not get mixed up.
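For readers curious how a computer could carry out the same steps, here is a tiny Python sketch (an illustration added here, not part of the CK-12 lesson) that solves any equation of the form x ÷ a = b by multiplying both sides by the divisor:

```python
from fractions import Fraction

def solve_division_equation(a, b):
    """Solve x / a = b by multiplying both sides by a
    (the Multiplication Property of Equality); a must be nonzero."""
    if a == 0:
        raise ValueError("cannot divide by zero")
    return Fraction(b) * Fraction(a)

# k / (-4) = 12  ->  k = -48;   n / 1.5 = 10  ->  n = 15
print(solve_division_equation(-4, 12))              # prints -48
print(solve_division_equation(Fraction(3, 2), 10))  # prints 15
```

Using Fraction keeps the arithmetic exact, which matches how the lesson's worked examples avoid rounding.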
Now it is time for you to practice solving a few of these equations.
Solve each equation for the missing variable.
#### Example A
\begin{align*}\frac{x}{-2}=5\end{align*}
Solution:\begin{align*}x = -10\end{align*}
#### Example B
\begin{align*}\frac{y}{5}=6\end{align*}
Solution:\begin{align*}y = 30\end{align*}
#### Example C
\begin{align*}\frac{b}{-4}=-3\end{align*}
Solution:\begin{align*}b = 12\end{align*}
Here is the original problem once again.
Marc and Kara went to see a play with their grandparents. When they arrived at the theater, the manager divided their group and several other people into six smaller groups. Each of these groups was led to a section of the theater where there were empty seats. Each group had six people in it too.
If this was the division, how many people did the manager divide up to start with?
Write a division equation and solve it to complete this dilemma.
First, we can write an equation.
Some number of people divided by six is six.
\begin{align*}\frac{x}{6} = 6\end{align*}
Now we can solve it by multiplying.
\begin{align*}6 \times 6 = 36\end{align*}
\begin{align*}x = 36\end{align*}
### Vocabulary
Here are the vocabulary words in this Concept.
Isolate the Variable
This means that we want to get the variable alone on one side of the equals sign.
Inverse Operation
Opposite operation
Division Property of Equality
states that we can solve an equation by dividing both sides of the equation by the same nonzero value.
Multiplication Property of Equality
states that we can solve an equation by multiplying both sides of the equation by the same value.
### Guided Practice
Here is one for you to try on your own.
Three friends evenly split the total cost of the bill for their lunch. The amount each friend paid for his share was $4.25.

a. Write an equation to represent \begin{align*}c\end{align*}, the total cost, in dollars, of the bill for lunch.

b. Determine the total cost of the bill.

Answer

Consider part a first. Use a number, an operation sign, a variable, or an equal sign to represent each part of the problem. Because the friends split the bill evenly, write a division equation to represent the problem.

This equation, \begin{align*}c \div 3=4.25\end{align*}, represents \begin{align*}c\end{align*}, the total cost of the lunch bill.

Next, consider part b. Solve the equation to find the total cost, in dollars, of the lunch bill.

\begin{align*}c \div 3 &= 4.25\\ \frac{c}{3} &= 4.25\\ \frac{c}{3} \times 3 &= 4.25 \times 3\\ \frac{c}{\cancel{3}} \times \frac{\cancel{3}}{1} &= 12.75\\ c &= 12.75\end{align*}

The total cost of the lunch bill was $12.75.
### Video Review
Here is a video for review.
### Practice
Directions: Solve each single-variable division equation for the missing value.
1. \begin{align*}\frac{x}{5}=2\end{align*}
2. \begin{align*}\frac{y}{7}=3\end{align*}
3. \begin{align*}\frac{b}{9}=-4\end{align*}
4. \begin{align*}\frac{b}{8}=-10\end{align*}
5. \begin{align*}\frac{b}{8}=-10\end{align*}
6. \begin{align*}\frac{x}{-3}=-10\end{align*}
7. \begin{align*}\frac{y}{18}=-20\end{align*}
8. \begin{align*}\frac{a}{-9}=-9\end{align*}
9. \begin{align*}\frac{x}{11}=-12\end{align*}
10. \begin{align*}\frac{x}{3}=-3\end{align*}
11. \begin{align*}\frac{x}{5}=-8\end{align*}
12. \begin{align*}\frac{x}{1.3}=3\end{align*}
13. \begin{align*}\frac{x}{2.4}=4\end{align*}
14. \begin{align*}\frac{x}{6}=1.2\end{align*}
15. \begin{align*}\frac{y}{1.5}=3\end{align*}
### Vocabulary Language: English
Inverse Operation

Inverse operations are operations that "undo" each other. Multiplication is the inverse operation of division. Addition is the inverse operation of subtraction.

Product

The product is the result after two amounts have been multiplied.

Quotient

The quotient is the result after two amounts have been divided.
Date Created: Oct 29, 2012
https://asmedigitalcollection.asme.org/IJTC/proceedings-abstract/IJTC2012/45080/75/260339 | One of the most commonly used tribological thin-film coatings is Chromium Nitride (CrN), typically deposited by a PVD process. Examples of current applications of this coating include cutting and forming tools, ICE piston rings, hydrodynamic pumps, etc. In selecting a coating for tribological applications, one of the critical parameters is the coating thickness. In the present work, we experimentally studied the effect of coating thickness on the friction and wear performance of CrN coatings under unidirectional sliding. Tests were conducted with ∼1, 5 and 10 micron thick coatings deposited on a hardened H-13 steel substrate by the plasma enhanced magnetron sputtering (PEMS) process. The friction behavior was strongly dependent on coating thickness, especially at relatively low loads. At higher load, however, the thinner coating (1 μm) was quickly worn through while the thicker ones (5 and 10 μm) remained intact. Wear in both the counterface WC material and the coating was also observed to depend on coating thickness. The observed effect of coating thickness on tribological behavior is attributed to differences in the microstructure and mechanical behavior of the coatings as a function of thickness.
This content is only available via PDF.
http://fourm.info/post/functional-programming-for-the-unprincipled-1/ | FP scala rant
## Anti-Intellectual Semantics
Not too long ago, I read a Twitter comment by one of the luminaries behind the ScalaZ functional programming (FP) framework which said:
Scalaz 8’s competition is NOT Scalaz 7/Cats/etc. “No one’” uses these. The real competition is non-FP in Scala, i.e. 99% of the market.
Obviously this comment is hyperbole, but on the surface it makes no sense. Anyone with even the slightest acquaintance with Scala is aware that Martin Odersky created this language with the intention that it allow programmers to use both the OOP and FP paradigms. Odersky himself evangelizes the functional programming aspects of Scala in his books, courses, lectures and talks about Scala.
The details of the comment make it clear that what the author is claiming, is that the vast majority of Scala programmers are not doing the right sort of FP programming, viz. pure FP, and the new version (8) of the ScalaZ framework might help move the meter.
This comment, which reflects an attitude found among many pure FP luminaries in the Scala and Haskell communities, sounds snobbish and is, I believe, detrimental to the broader adoption of FP. When I made this point to the poster, someone labelled the comment “anti-intellectual”. In similar discussions, my attitude was dismissed as quibbling about “semantics”.
Social media like Twitter and Gitter are not the best places to have such a discussion. Hence I decided to write a longer piece about what FP is, what pure FP is, why it faces challenges in being adopted, how snobbish-sounding comments add to the challenge, and finally, what can be done about those challenges. This first part focuses on trying to get a grip on what FP/pure FP is and why it is so important.
## So What is Functional Programming (FP)?
The core of being a software programmer is writing a set of instructions to get the damn computer to do what you want it to do. Computer operations at their lowest level are mostly about moving bits of data around in registers and XOR and XANDing them. The human brain does not naturally think about solving problems in this way.
So from the earliest days of computer usage, computer scientists and engineers have worked at developing abstractions for these operations, encapsulating these in programming languages to make the life of programmers easier. Over time these abstractions have formed several alternative paradigms for programming languages.
The most basic of these paradigms, and the one nearly every programmer learns first, is the imperative paradigm. This paradigm abstracts out the bits being moved through registers in the computer hardware by encapsulating data in data structures. We then use a set of high level logical constructs (if, while, for, etc.) to write algorithms to get the computer to manipulate the data in the data structures to do what we want it to do. Essentially, however, we are still looking at the problem from the computer’s perspective. The recipe or set of instructions (commands––hence imperative) we give it is not the way we humans would naturally think about the problem.
One of the first things every programmer learns about are functions. These are chunks of code that can be named and referenced in other parts of the program that can do “things”. Functions come out of another paradigm known as structured programming. As computer software was used to solve more complex and larger problems, imperative programming just didn’t work well. The core idea of structured programming is to divide and conquer––decompose the problem into smaller and smaller parts, which are encapsulated in functions. Then use these parts to compose bigger and bigger components, and thereby solve the whole problem more easily.
The problem with structured programming is that it doesn’t really give you much guidance on what is the best way to decompose your problem. That’s where object-oriented programming (OOP) comes into the picture. OOP is all about modeling your software program on the business domain. Essentially objects in your problem domain (e.g. customers, products, orders) are encapsulated in data structures with the same names. Associated with those data structures are a bunch of functions or operations that model operations in the business domain. We then tie the whole system together by having these components (objects) use these operations to give commands to each other (e.g. a customer tells the store to place an order). This type of abstraction makes it a whole lot easier for humans to think about the problem.
Perhaps OOP has taken us too far from computer-centric thinking. Once we have modeled the business domain, we are left with the problem of how to bridge the conceptual gap back to computer operations. OOP proponents developed what are known as patterns, essentially a bunch of heuristics, meant to guide programmers in bridging the gap between the domain model and the software architecture and implementation details. But these heuristics are not hard and fast rules. Moreover, there are so many of them, it’s impractical to remember them all or really know when and how to properly apply them. In fact, despite having developed and taught a course on OOP, for the most part I never use patterns except at the highest levels of design. Finally, underneath the hood, OOP is still using imperative & structured paradigms, with all their limitations.
Ultimately, what we really want is an abstraction that uses a well known and natural human cognitive model for problem solving that provides precise and exact (formal) guidance on how to address common computational problems. Fortunately, we have such a cognitive model: it’s called mathematics! So it would be great if we could develop a paradigm which applies mathematical ideas and tools to writing programs. That is the essence of what the functional programming paradigm is all about!
In olden times, computer operations were called data processing because at the most fundamental level, what we want the computer to do is transform one set of data into another set of data. It’s a shame this name has fallen out of favor, replaced by information technology. Data processing is still a useful way to describe even the coolest new tech like blockchain. And data processing is actually a core aspect of what the functional programming paradigm is all about. Instead of thinking about the problem from the domain’s perspective a la OOD, we think about the problem as a data transformation. And the way we humans have done data transformations long before computers were invented, is through mathematical functions.
Despite the similarity in name, standard programming functions aren’t exactly what the word functional in the phrase is referring to. Rather, it is referring to the mathematical definition of a function, which in a very loose way may be defined as a “process” of some sort that takes values from one set (called the domain) and associates (maps) those values to another set (called the range). Another way to describe a mathematical function is a transformation of a set of inputs into a set of outputs. To start, you look at your business domain and analyze the data flows. Then to do functional programming you need to think about your problem in terms of a pipeline of mathematical functions. You start with your initial data and make it input to a function that transforms it. The output of this function is the input to the next function and so on––hence the term pipeline. The output at the end of the pipeline is data in the final form you want it!
I would argue that it’s likely that 99% of programmers (and not just Scala programmers) are, at least some of the time, already doing and enjoy doing FP in the way I described it above! They just may not know that what they are doing is called FP.
First, nearly all programmers today know at least the basics of the Bash shell. As part of that, nearly every programmer has written or at least seen and understood a simple bash pipeline like this:
ls -al | grep ".png" | wc -l
Ok, right there I’ve written a functional program! I’m willing to bet you have too! Ask anyone why they like using the Bash shell, and command pipelines will surely be in the top 2 reasons cited. It’s a fun, useful and easy way to get at the data you need.
One of the most popular NoSQL databases available today is MongoDB. I can’t prove this, but I’m willing to argue that one of the main reasons it became so popular is its aggregation framework. People with many years of SQL experience tend to make fun of the aggregation framework “(said in sarcastic Foamy voice) : Oh I could do those 10 lines so much easier in one SQL statement!” But the aggregation framework is essentially a functional pipeline, it is easy and fun to use, and a very natural way to extract and transform data in a document database.
Finally wrapping back around to the claim about Scala programmers not doing FP: ask your typical Scala programmer (even the “better Java” crowd) what they like about the language and they will almost surely say the ability to transform data in a functional pipeline. Here is a simple example from Stackoverflow of taking a set of products and creating a new set with only unique values on e.g. the product model:
productModels
  .groupBy(_._1)                          // produces a Map keyed by the first element
  .filter { case (k, v) => v.size == 1 }  // filters unique values
  .flatMap { case (_, v) => v }
Perhaps it is true that very new Scala programmers don’t do this, but very quickly almost all Scala programmers begin to think this way and find it fun, productive and relatively easy to do.
## The Purity Ring
The description above which shows FP to be “fun, productive and relatively easy to do” leaves out a clear explanation of FP’s most important advantage—it’s precision! That aspect leads us to pure FP.
A core principle inherent in the definition of mathematical functions is that every time you use the same inputs on a mathematical function, you will always get, and only get, the exact same singular output. Part of what guarantees this is that in the course of your calculations, variables are immutable. Once you assign them a value, that value can’t change. The other part that guarantees this is that functions only make changes locally and don’t affect, and can’t be affected by, variables other than the inputs. Hence we can reliably substitute any expression for its equivalent value. This is known as referential transparency (the infamous purity).
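As a small illustration in Python (mine, not the author's), compare a function that reaches outside its inputs with a referentially transparent one:

```python
import random

def impure_total(prices):
    """Not referentially transparent: a hidden input (the RNG state)
    means the same argument can produce different results."""
    discount = random.randint(0, 5)   # state outside the inputs
    return sum(prices) - discount

def pure_total(prices, discount):
    """Pure: the result depends only on the inputs, so any call can be
    replaced by its value (referential transparency)."""
    return sum(prices) - discount

# Two identical calls are guaranteed to agree only for the pure version.
assert pure_total([10, 20], 3) == pure_total([10, 20], 3) == 27
```

The pure version is the one you can reason about algebraically: substitute the call for its value anywhere, in any order, and nothing changes.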
If we could impose these requirements on our program’s functions, we can always know exactly how they will perform no matter how many times they are run. We can reason about the logic flow and be confident our programs will perform as advertised, without even testing. Essentially they would be bug free! We would be able to quote the famous line in Edsger Dijkstra’s preface to his A Discipline of Programming: “None of the programs in this monograph, needless to say, has been tested on a machine.”
Unfortunately such programs wouldn’t be very useful for most data processing systems applications. Since we don’t have total control of our inputs, we can’t demand that our functions always provide exact outputs. We have to deal with getting partial results, or results with an unknown range or even error results. Similarly, we can’t demand that our functions only provide specific results. Sometimes we want to be able to log whats going on, or send notifications to other processes over and above the output of the function. And of course state mutation is often a requirement of what we need to do.
All these situations describe bumps in the road to data processing as a clean chain of pure mathematical functions. These bumps are also known as effects. If not handled properly, these effects become side effects, i.e. computations that break referential transparency. Essentially this means that the results of our program are neither predictable nor consistent.
But FP in its most developed form has precise formal tools for dealing with effects so that we can handle them in ways consistent with and conforming to the principles of mathematical functions. Moreover there are only a relatively limited number of effects that we need to deal with, and these same effects repeat themselves in all data processing systems, no matter what the business domain. So the number of patterns we need to learn is far more limited than in OOD. Most importantly, the tools we have to deal with these patterns are mathematically precise and can be (and are) encapsulated in programming frameworks.
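One of the simplest such patterns, sketched here in Python (a toy of my own, not one of the frameworks the author has in mind), threads a possible-failure effect through a pipeline without breaking function composition:

```python
def bind(value, fn):
    """Chain steps that may fail: short-circuit on None instead of
    raising, so the pipeline stays an ordinary composition."""
    return None if value is None else fn(value)

def parse_int(s):
    stripped = s.strip()
    return int(stripped) if stripped.lstrip("-").isdigit() else None

def reciprocal(n):
    return None if n == 0 else 1 / n

ok  = bind(bind("  8", parse_int), reciprocal)   # 0.125
bad = bind(bind("oops", parse_int), reciprocal)  # None
```

Each step stays a pure function; the "effect" (failure) is carried in the return value, which is the essence of the Option/Maybe pattern pure FP libraries generalize.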
Why are formal methods so important? In this article I wrote years ago, I poke a bit of fun at Dijkstra’s claim and make a bold claim of my own: “Constructing programs as proofs (as formal methodologists demand) is not likely to happen in our lifetime.” I stand by this statement.
However, I also point out that I am proud to have worked on Statemate which is a tool to help programmers apply formal thinking to the specification of reactive systems. One of the customers who used Statemate was developing a high-speed train. By formally specifying the logic of opening the doors on the train, they were able to discover a flaw that would have caused the doors to open while the train was traveling at highest speeds.
“Real” functional programming, more correctly called pure functional programming, is a paradigm that allows us to apply formal approaches to the actual code we write. Being able to formally reason about our systems, even if we don’t formally prove them, is hugely important. It helps make our systems more consistent, reliable and predictable. In fact, it can save lives. This is orders of magnitude more important and useful than vague notions like “fun and productive”.
An enormous amount of work has been done over the past 15 years since I wrote that article to make using formal methods through functional programming far easier and more accessible than in the past. Given its importance and benefits, why isn’t the whole programming world rushing to adopt these pure FP tools and languages?
Find out in the next post.
https://mslc.ctf.su/wp/gits-2012-crypto-400/?replytocom=6330 | Jan 30
GitS 2012 – Crypto 400
files running at hellothere.final2012.ghostintheshellcode.com
Summary: MITM attack
Here we have the server and client source, both with bind sockets. That's rather suspicious.
The scheme
The scheme is modified Diffie-Hellman:
We know only g and p.
r,s,t are random in each session.
h is known both to server and client.
server -> client: g^r % p, g^s % p
client -> server: (g^r)^h % p, g^t % p
The server checks the inputs and generates the shared key = (g^r)^h * (g^t)^s % p = g^(r*h + s*t) % p.
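To make the key derivation concrete, here is a toy run of the scheme with tiny illustrative parameters (not the challenge values); both sides arrive at g^(r*h + s*t) % p:

```python
# Toy run of the modified Diffie-Hellman scheme with tiny illustrative
# parameters (NOT the challenge values).
p, g, h = 467, 2, 153   # h is the secret known to both server and client
r, s, t = 11, 29, 41    # per-session randoms

A, B = pow(g, r, p), pow(g, s, p)        # server -> client: g^r, g^s
C, D = pow(A, h, p), pow(g, t, p)        # client -> server: (g^r)^h, g^t

server_key = (C * pow(D, s, p)) % p                   # (g^r)^h * (g^t)^s
client_key = (pow(g, r * h, p) * pow(B, t, p)) % p    # g^(r*h) * (g^s)^t
assert server_key == client_key == pow(g, r * h + s * t, p)
```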
MITM attack
So, there are two ports: 9998 for the server and 9999 for the client. Let's mount a MITM attack:
• Get g^r % p and g^s % p from the server, and hold the connection open.
• Send them both to the client, and get its two numbers back.
Now, if we simply relayed the client's two numbers to the server, we wouldn't be able to calculate the shared key – we can't compute the g^(s*t) % p part. Let's look at the checking code:
(ans, d) = self.recv(2)
if ans == pow(entA, r, field):
    #print "Authenticated"
    key = (entA * pow(d, s, field)) % field  # <------
    calcIv = hashlib.sha256()
    calcIv.update(hex(key))
    calcKey = hashlib.sha512()
    calcKey.update(hex(key))
    enc = AES.new(calcKey.digest()[0:32], 2, calcIv.digest()[:16])
    self.request.sendall(enc.encrypt(winningKey))
We can fool the server! It only checks the first number (ans); the second (d) can be anything. If we send d = 0, then pow(d, s, field) = 0 and the key becomes 0, so decrypting the message is easy. Sending d = 1 also works (the key is then just the first number from the client):
SERV = sock("hellothere.final2012.ghostintheshellcode.com", 9998)
CLI = sock("hellothere.final2012.ghostintheshellcode.com", 9999)

gr, gs = recv(SERV, 2)
send(CLI, gr, gs)
ghr, gt = recv(CLI, 2)
send(SERV, ghr, 0)
cipher = SERV.recv(4096)

key = long(0)  # it will have 'long' type after computations
calcIv = hashlib.sha256()
calcIv.update(hex(key))
calcKey = hashlib.sha512()
calcKey.update(hex(key))
enc = AES.new(calcKey.digest()[0:32], 2, calcIv.digest()[:16])
print enc.decrypt(cipher)
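A quick sanity check (with made-up numbers, not the challenge parameters) of why echoing d = 0 or d = 1 collapses the server's key:

```python
# The server computes key = (entA * pow(d, s, field)) % field,
# but never validates d, so the attacker controls the key.
field, s = 467, 29
entA = 123              # stand-in for whatever value the attacker echoed as `ans`

assert (entA * pow(0, s, field)) % field == 0      # d = 0 forces key = 0
assert (entA * pow(1, s, field)) % field == entA   # d = 1 gives key = entA
```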
The flag: __It’s Better left unread__
1 comment
1. rockosov says:
Awesome! But I was very close…
http://logspace.org/articles/v007a010/ | Volume 7 (2011) Article 10 pp. 147-153 [Note]
The Influence Lower Bound Via Query Elimination
by
Received: March 4, 2011
Published: July 19, 2011
Keywords: randomized query complexity, influence of variables
ACM Classification: F.1.3
AMS Classification: 68Q17
Abstract:
We give a simple proof, via query elimination, of a result due to O'Donnell, Saks, Schramm, and Servedio, which shows a lower bound on the zero-error expected query complexity of a function $f$ in terms of the maximum influence of any variable of $f$. Our lower bound also applies to the two-sided error expected query complexity of $f$, and it allows an immediate extension which can be used to prove stronger lower bounds for some functions.
http://math.stackexchange.com/questions/258357/sampling-theorem-poisson-formula | Sampling Theorem Poisson Formula
Theorem If the Fourier transform $\hat{f}(w)$ of a signal function $f(x)$ is zero for all frequencies outside the interval $-w_c\leq w \leq w_c$, then $f(x)$ can be uniquely determined from its sampled values: $$f_n=f(nT),$$ $-\infty < n < \infty$, if $T=\dfrac{1}{2w_c}$.
How will I be able to generalize the Sampling Theorem to the cases $T < 1/(2w_c)$ and $T > 1/(2w_c)$ using Poisson's sum formula? Either of the two cases, please ...
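A small numerical sketch (with illustrative values) of what Poisson's sum formula predicts in the case $T > 1/(2w_c)$: the periodized spectral copies overlap, so two distinct tones become indistinguishable from their samples.

```python
import numpy as np

# Illustrative values: band limit w_c = 10 Hz, a tone at w0 = 9 Hz inside it.
w_c, w0 = 10.0, 9.0
n = np.arange(64)

T = 1 / (2 * w_c)                       # critical period from the theorem
good = np.cos(2 * np.pi * w0 * n * T)
good_3hz = np.cos(2 * np.pi * 3.0 * n * T)
assert not np.allclose(good, good_3hz)  # at rate 2*w_c the tones stay distinct

T_bad = 1 / (2 * 6.0)                   # undersampling: as if w_c were 6 Hz
alias = np.cos(2 * np.pi * w0 * n * T_bad)
alias_3hz = np.cos(2 * np.pi * (2 * 6.0 - w0) * n * T_bad)  # image at 3 Hz
assert np.allclose(alias, alias_3hz)    # identical samples: aliasing
```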
https://www.studyadda.com/solved-papers/haryana-state-exams/general-studies/solved-paper-general-studies-2014/774 | Solved papers for UPSC General Studies Solved Paper - General Studies-2014
Solved Paper - General Studies-2014 Total Questions - 100
• question_answer1) If the interest rate is decreased in an economy, it will
A)
decrease the consumption expenditure in the economy.
B)
increase the tax collection of the government.
C)
increase the investment expenditure in the economy.
D)
increase the total savings in the economy.
• question_answer2) Consider the following statements. 1. The President shall make rules for the more convenient transaction of the business of the Government of India and for the allocation among Ministers of the said business. 2. All executive actions of the Government of India shall be expressed to be taken in the name of the Prime Minister. Which of the statement(s) given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer3) Consider the following statements regarding a No-Confidence Motion in India. 1. There is no mention of a No-Confidence Motion in the Constitution of India. 2. A motion of No-Confidence can be introduced in the Lok Sabha only. Which of the statement(s) given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer4) With reference to Neem tree, consider the following statements. 1. Neem oil can be used as a pesticide to control the proliferation of some species of insects and mites. 2. Neem seeds are used in the manufacture of biofuels and hospital detergents. 3. Neem oil has applications in pharmaceutical industry. Which of the statement(s) given above is/are correct?
A)
1 and 2
B)
Only 3
C)
1 and 3
D)
All of these
• question_answer5) Which one of the following is the process involved in photosynthesis?
A)
Potential energy is released to form free energy
B)
Free energy is converted into potential energy and stored
C)
Food is oxidised to release carbon dioxide and water
D)
Oxygen is taken and carbon dioxide and water vapour are given out
• question_answer6) In addition to fingerprint scanning, which of the following can be used in the biometric identification of a person? 1. Iris scanning 2. Retinal scanning 3. Voice recognition Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer7) Which of the following statement(s) is/are correct regarding vegetative propagation of plants? 1. Vegetative propagation produces clonal population. 2. Vegetative propagation helps in eliminating the virus. 3. Vegetative propagation can be practised most of the year. Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer8) Which of the following pair(s) is/are correctly matched? Spacecraft Purpose 1. Cassini-Huygens Orbiting the Venus and transmitting data to the Earth 2. Messenger Mapping and investigating the Mercury 3. Voyager 1 and 2 Exploring the outer solar system
Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer9) Consider the following pairs. Region Well-known for the Production of 1. Kinnaur Areca nut 2. Mewat Mango 3. Coromandel Soyabean
Which of the above pair(s) is/are correctly matched?
A)
1 and 2
B)
Only 3
C)
All of these
D)
None of these
• question_answer10) Which of the following is/are the example/examples of chemical change? 1. Crystallisation of sodium chloride 2. Melting of ice 3. Souring of milk Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 3
C)
All of these
D)
None of these
• question_answer11) The power of the Supreme Court of India to decide disputes between the Centre and the States falls under its
A)
B)
appellate jurisdiction
C)
original jurisdiction
D)
writ jurisdiction
• question_answer12) Consider the following techniques/phenomena. 1. Budding and grafting in fruit plants 2. Cytoplasmic male sterility 3. Gene silencing Which of the above is/are used to create transgenic crops?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
None of these
• question_answer13) Consider the following statements. 1. Maize can be used for the production of starch. 2. Oil extracted from maize can be a feedstock for biodiesel. 3. Alcoholic beverages can be produced by using maize. Which of the statements given above is/are correct?
A)
Only 1
B)
1 and 2
C)
2 and 3
D)
All of these
• question_answer14) Among the following organisms, which one does not belong to the class of other three?
A)
Crab
B)
Mite
C)
Scorpion
D)
Spider
• question_answer15) The power to increase the number of judges in the Supreme Court of India is vested in
A)
the President of India
B)
the Parliament
C)
the Chief Justice of India
D)
the Law Commission
• question_answer16) Consider the following towns of India. 1. Bhadrachalam 2. Chanderi 3. Kancheepuram 4. Karnal Which of the above are famous for the production of traditional sarees/fabric?
A)
1 and 2
B)
2 and 3
C)
1, 2 and 3
D)
1, 3 and 4
• question_answer17) Consider the following pairs. National Highway Cities Connected 1. NH 4 Chennai and Hyderabad 2. NH 6 Mumbai and Kolkata 3. NH 15 Ahmedabad and Jodhpur
Which of the above pairs is/are correctly matched?
A)
1 and 2
B)
Only 3
C)
All of these
D)
None of these
• question_answer18) Consider the following international agreements. 1. The International Treaty on Plant Genetic Resources for Food and Agriculture. 2. The United Nations Convention to Combat Desertification. 3. The World Heritage Convention. Which of the above has/have a bearing on the biodiversity?
A)
1 and 2
B)
Only 3
C)
1 and 3
D)
All of these
• question_answer19) Consider the following statements regarding 'Earth Hour'. 1. It is an initiative of UNEP and UNESCO. 2. It is a movement in which the participants switch off the lights for one hour on a certain day every year. 3. It is a movement to raise the awareness about the climate change and the need to save the planet. Which of the statements given above is/are correct?
A)
1 and 3
B)
Only 2
C)
2 and 3
D)
All of these
• question_answer20) Which one of the following is the correct sequence of a food chain?
A)
Diatoms-Crustaceans-Herrings
B)
Crustaceans-Diatoms-Herrings
C)
Diatoms-Herrings-Crustaceans
D)
Crustaceans-Herrings-Diatoms
• question_answer21) What are the significances of a practical approach to sugarcane production known as 'Sustainable Sugarcane Initiative'? 1. Seed cost is very low in this compared to the conventional method of cultivation. 2. Drip irrigation can be practised very effectively in this. 3. There is no application of chemical/ inorganic fertilisers at all in this. 4. The scope for intercropping is more in this compared to the conventional method of cultivation. Select the correct answer using the codes given below.
A)
1 and 3
B)
1, 2 and 4
C)
2, 3 and 4
D)
All of these
• question_answer22) If a wetland of international importance is brought under the 'Montreux Record', what does it imply?
A)
Changes in ecological character have occurred, are occurring or are likely to occur in the wetland as a result of human interference.
B)
The country in which the wetland is located should enact a law to prohibit any human activity within 5 km from the edge of the wetland.
C)
The survival of the wetland depends on the cultural practices and traditions of certain communities living in its vicinity and therefore the cultural diversity therein should not be destroyed.
D)
It is given the status of 'World Heritage Site'.
• question_answer23) Which one of the following pairs of islands is separated from each other by the 'Ten Degree Channel'?
A)
Andaman and Nicobar
B)
Nicobar and Sumatra
C)
D)
Sumatra and Java
• question_answer24) Consider the following pairs. Programme/ Project Ministry 1. Drought-Prone Area Programme Ministry of Agriculture 2. Desert Development Programme Ministry of Environment and Forests 3. National Watershed Development Project for Rainfed Areas Ministry of Rural Development
Which of the above pair(s) is/are correctly matched?
A)
1 and 2
B)
Only 3
C)
All of these
D)
None of these
• question_answer25) With reference to Bombay Natural History Society (BNHS), consider the following statements. 1. It is an autonomous organisation under the Ministry of Environment and Forests. 2. It strives to conserve nature through action-based research, education and public awareness. 3. It organises and conducts nature trails and camps for the general public. Which of the statement(s) given above is/are correct?
A)
1 and 3
B)
Only 2
C)
2 and 3
D)
All of these
• question_answer26) With reference to 'Global Environment Facility', which of the following statement(s) is/are correct?
A)
It serves as financial mechanism for 'Convention on Biological Diversity' and 'United Nations Framework Convention on Climate Change.'
B)
It undertakes scientific research on environmental issues at global level.
C)
It is an agency under OECD to facilitate the transfer of technology and funds to underdeveloped countries with specific aim to protect their environment.
D)
Both [a] and [b]
• question_answer27) With reference to technologies for solar power production, consider the following statements. 1. 'Photovoltaics' is a technology that generates electricity by direct conversion of light into electricity, while 'Solar Thermal' is a technology that utilises the Sun's rays to generate heat which is further used in electricity generation process. 2. Photovoltaics generates Alternating Current (AC), while Solar Thermal generates Direct Current (DC). 3. India has manufacturing base for Solar Thermal technology, but not for Photovoltaics. Which of the statement(s) given above is/are correct?
A)
Only 1
B)
2 and 3
C)
All of these
D)
None of these
• question_answer28) Consider the following languages. 1. Gujarati 2. Kannada 3. Telugu Which of the above has/have been declared as 'Classical Language/Languages' by the government?
A)
1 and 2
B)
Only 3
C)
2 and 3
D)
All of these
• question_answer29) Consider the following pairs. 1. Dampa Tiger Reserve Mizoram 2. Gumti Wildlife Sanctuary Sikkim 3. Saramati Peak Nagaland
Which of the above pair(s) is/are correctly matched?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer30) With reference to a conservation organisation called "Wetlands International', which of the following statements is/are correct? 1. It is an intergovernmental organisation formed by the countries which are signatories to Ramsar Convention. 2. It works at the field level to develop and mobilise knowledge, and use the practical experience to advocate for better policies. Select the correct answer using the codes given below.
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer31) With reference to a grouping of countries known as BRICS, consider the following statements : 1. The first Summit of BRICS was held in Rio de Janeiro in 2009. 2. South Africa was the last to join the BRICS grouping. Which of the statements given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer32) Consider the following diseases. 1. Diphtheria 2. Chickenpox 3. Smallpox Which of the above diseases has/have been eradicated in India?
A)
1 and 2
B)
Only 3
C)
All of these
D)
None of these
• question_answer33) Which of the following phenomena might have influenced the evolution of organisms? 1. Continental drift 2. Glacial cycles Select the correct answer using the codes given below.
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer34) Other than poaching, what are the possible reasons for the decline in the population of Ganges River Dolphins? 1. Construction of dams and barrages on rivers 2. Increase in the population of crocodiles in rivers 3. Getting trapped in fishing nets accidentally 4. Use of synthetic fertilisers and other agricultural chemicals in crop-fields in the vicinity of rivers Select the correct answer using the codes given below.
A)
1 and 2
B)
2 and 3
C)
3 and 4
D)
All of the above
A)
solve the problem of minorities in India.
B)
give effect to the Independence Bill.
C)
delimit the boundaries between India and Pakistan.
D)
enquire into the riots in East Bengal.
• question_answer36) Brominated flame retardants are used in many household products like mattresses and upholstery. Why is there some concern about their use? 1. They are highly resistant to degradation in the environment. 2. They are able to accumulate in humans and animals. Select the correct answer using the codes given below.
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer37) Consider the following. 1. Bats 2. Bears 3. Rodents The phenomenon of hibernation can be observed in which of the above kinds of animals?
A)
1 and 2
B)
Only 2
C)
1, 2 and 3
D)
Hibernation cannot be observed in any of the above
• question_answer38) Which one of the following is the largest Committee of the Parliament?
A)
The Committee on Public Accounts
B)
The Committee on Estimates
C)
The Committee on Public Undertakings
D)
The Committee on Petitions
• question_answer39) Which of the following adds/add carbon dioxide to the carbon cycle on the planet Earth? 1. Volcanic action 2. Respiration 3. Photosynthesis 4. Decay of organic matter Select the correct answer using the codes given below.
A)
1 and 3
B)
Only 2
C)
1, 2 and 4
D)
All of these
• question_answer40) If you walk through countryside, you are likely to see some birds stalking alongside the cattle to seize the insects, disturbed by their movement through grasses. Which of the following is/are such bird/birds? 1. Painted Stork 2. Common Myna 3. Black-necked Crane Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 2
C)
2 and 3
D)
Only 3
• question_answer41) The Partition of Bengal made by Lord Curzon in 1905 lasted until
A)
the First World War when Indian troops were needed by the British and the partition was ended.
B)
King George V abrogated Curzon's act at the Royal Darbar in Delhi, in 1911.
C)
Gandhiji launched his Civil Disobedience Movement.
D)
the Partition of India, in 1947 when East Bengal became East Pakistan.
• question_answer42) The 1929 Session of Indian National Congress is of significance in the history of the Freedom Movement because the
A)
attainment of Self-Government was declared as the objective of the Congress.
B)
attainment of Poorna Swaraj was adopted as the goal of the Congress.
C)
Non-Cooperation Movement was launched.
D)
decision to participate in the Round Table Conference in London was taken.
• question_answer43) With reference to the famous Sattriya dance, consider the following statements 1. Sattriya is a combination of music, dance and drama. 2. It is a centuries-old living tradition of Vaishnavites of Assam. 3. It is based on classical Ragas and Talas of devotional songs composed by Tulsidas, Kabir and Mirabai. Which of the statements given above is/are correct?
A)
Only 1
B)
1 and 2
C)
2 and 3
D)
All of these
• question_answer44) Chaitra 1 of the national calendar based on the Saka Era corresponds to which one of the following dates of the Gregorian calendar in a normal year of 365 days?
A)
22nd March (or 21st March)
B)
15th May (or 16th May)
C)
31st March (or 30th March)
D)
21st April (or 20th April)
• question_answer45) With reference to the Indian history of art and culture, consider the following pairs. Famous work of sculpture Site 1. A grand image of Buddha's Mahaparinirvana with numerous celestial musicians above and the sorrowful figures of his followers below Ajanta 2. A huge image of Varaha Avatar of Vishnu, as he rescues Goddess Earth from the deep and chaotic waters, sculpted on rock Mount Abu 3. 'Arjuna's Penance'/Descent of Ganga sculpted on the surface of huge boulders Mahabalipuram
Which of the pairs given above is/are correctly matched?
A)
1 and 2
B)
Only 3
C)
1 and 3
D)
All of these
A)
revolutionary association of Indians with headquarters at San Francisco.
B)
nationalist organisation operating from Singapore.
C)
militant organisation with headquarters at Berlin.
D)
communist movement for India's freedom with headquarters at Tashkent.
• question_answer47) With reference to India's culture and tradition, what is 'Kalaripayattu'?
A)
It is an ancient Bhakti cult of Shaivism still prevalent in some parts of South India.
B)
It is an ancient style bronze and brasswork still found in Southern part of Coromandel area.
C)
It is an ancient form of dance-drama and a living tradition in the Northern part of Malabar.
D)
It is an ancient martial art and a living tradition in some parts of South India.
• question_answer48) Consider the following pairs. 1. Garba Gujarat 2. Mohiniattam Odisha 3. Yakshagana Kamataka
Which of the pairs given above is/are correctly matched?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer49) With reference to Buddhist history, tradition and culture in India, consider the following pairs. Famous Shrine Location 1. Tabo monastery and temple complex Spiti valley 2. Lhotsava Lhakhang temple, Nako Zanskar valley 3. Alchi temple complex Ladakh
Which of the pairs given above is/are correctly matched?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer50) Consider the following statements. 1. 'Bijak' is a composition of the teachings of Saint Dadu Dayal. 2. The philosophy of Pushti Marg was propounded by Madhvacharya. Which of the statements given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer51) A community of people called Manganiyars is well-known for their
A)
martial arts in North-East India.
B)
C)
classical vocal music in South India.
D)
pietra dura tradition in Central India.
• question_answer52) What was/were the object/objects of Queen Victoria's Proclamation (1858)? 1. To disclaim any intention to annex Indian states. 2. To place the Indian administration under the British Crown. 3. To regulate East India Company's trade with India. Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 2
C)
1 and 3
D)
All of these
A)
the mosque for the use of Royal Family.
B)
Akbar's private prayer chamber.
C)
the hall in which Akbar held discussions with scholars of various religions.
D)
the room in which the nobles belonging to different religions gathered to discuss religious affairs.
• question_answer54) In the context of food and nutritional security of India, enhancing the 'Seed Replacement Rates' of various crops helps in achieving the food production targets of the future. But what is/are the constraint/constraints in its wider/greater implementation? 1. There is no National Seeds Policy in place. 2. There is no participation of private sector seed companies in the supply of quality seeds of vegetables and planting materials of horticultural crops. 3. There is a demand-supply gap regarding quality seeds in case of low value and high volume crops. Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 3
C)
2 and 3
D)
None of these
• question_answer55) With reference to 'Eco-Sensitive Zones', which of the following statements is/are correct? 1. Eco-Sensitive Zones are the areas that are declared under the Wildlife (Protection) Act, 1972. 2. The purpose of the Declaration of Eco-Sensitive Zones is to prohibit all kinds of human activities in those zones except agriculture. Select the correct answer using the codes given below.
A)
Only1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer56) Consider the following statements. 1. Animal Welfare Board of India is established under the Environment (Protection) Act, 1986. 2. National Tiger Conservation Authority is a statutory body. 3. National Ganga River Basin Authority is chaired by the Prime Minister. Which of the statement(s) given above is/ are correct?
A)
Only 1
B)
2 and 3
C)
Only 2
D)
All of these
• question_answer57) Consider the following pairs. Vitamin Deficiency Disease 1. Vitamin C Scurvy 2. Vitamin D Rickets 3. Vitamin E Night blindness
Which of the pair(s) given above is/are correctly matched?
A)
1 and 2
B)
Only 3
C)
All of the above
D)
None of the above
• question_answer58) There is some concern regarding the nanoparticles of some chemical elements that are used by the industry in the manufacture of various products. Why? 1. They can accumulate in the environment and contaminate water and soil. 2. They can enter the food chains. 3. They can trigger the production of free radicals. Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 3
C)
1 and 3
D)
All of these
• question_answer59) Which of the following organisations brings out the publication known as World Economic Outlook'?
A)
The International Monetary Fund
B)
The United Nations Development Programme
C)
The World Economic Forum
D)
The World Bank
• question_answer60) With reference to Union Budget, which of the following is/are covered under Non-Plan Expenditure? 1. Defence expenditure 2. Interest payments 3. Salaries and pensions 4. Subsidies Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
All of these
D)
None of these
• question_answer61) Which of the following have coral reefs? 1. Andaman and Nicobar Islands 2. Gulf of Kutch 3. Gulf of Mannar 4. Sunderbans Select the correct answer using the codes given below.
A)
1, 2 and 3
B)
2 and 4
C)
1 and 3
D)
All of these
• question_answer62) In India, the problem of soil erosion is associated with which of the following? 1. Terrace cultivation 2. Deforestation 3. Tropical climate Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 2
C)
1 and 3
D)
All of these
• question_answer63) The seasonal reversal of winds is the typical characteristic of
A)
Equatorial climate
B)
Mediterranean climate
C)
Monsoon climate
D)
All of the above climates
• question_answer64) With reference to the cultural history of India, the term 'Panchayatan' refers to
A)
an assembly of village elders.
B)
a religious sect.
C)
a style of temple construction.
D)
• question_answer65) Consider the following rivers 1. Barak 2. Lohit 3. Subansiri Which of the above flow/flows through Arunachal Pradesh?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer66) Consider the following pairs. Wetlands Confluence of Rivers 1. Harike Wetlands Confluence of Beas and Satluj/Sutlej 2. Keoladeo Ghana National Park Confluence of Banas and Chambal 3. Kolleru Lake Confluence of Musi and Krishna
Which of the above pair(s) is/are correctly matched?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer67) Which one of the following pairs does not form part of the six systems of Indian philosophy?
A)
Mimamsa and Vedanta
B)
Nyaya and Vaisheshika
C)
Lokayata and Kapalika
D)
Sankhya and Yoga
• question_answer68) Consider the following pairs. Hills Region 1. Cardamom Hills Coromandel Coast 2. Kaimur Hills Konkan Coast 3. Mahadeo Hills Central India 4. Mikir Hills North-East India
Which of the given pair(s) are correctly matched?
A)
1 and 2
B)
2 and 3
C)
3 and 4
D)
2 and 4
• question_answer69) Which one of the following Schedules of the Constitution of India contains provisions regarding anti-defection?
A)
Second Schedule
B)
Fifth Schedule
C)
Eighth Schedule
D)
Tenth Schedule
• question_answer70) The most important strategy for the conservation of biodiversity together with traditional human life is the establishment of
A)
biosphere reserves
B)
botanical gardens
C)
national parks
D)
wildlife sanctuaries
• question_answer71) Turkey is located between
A)
Black sea and Caspian sea
B)
Black sea and Mediterranean sea
C)
Gulf of Suez and Mediterranean sea
D)
Gulf of Aqaba and Dead sea
• question_answer72) What is the correct sequence of occurrence of the following cities in South-East Asia as one proceeds from South to North? 1. Bangkok 2. Hanoi 3. Jakarta 4. Singapore Select the correct answer using the codes given below
A)
4-2-1-3
B)
3-2-4-1
C)
3-4-1-2
D)
4-3-2-1
• question_answer73) The scientific view is that the increase in global temperature should not exceed $2^\circ C$ above the pre-industrial level. If the global temperature increases beyond $3^\circ C$ above the pre-industrial level, what can be its possible impact/impacts on the world? 1. Terrestrial biosphere tends towards a net carbon source. 2. Widespread coral mortality will occur. 3. All the global wetlands will permanently disappear. 4. Cultivation of cereals will not be possible anywhere in the world. Select the correct answer using the codes given below.
A)
Only 1
B)
1 and 2
C)
2, 3 and 4
D)
All of these
• question_answer74) The national motto of India, 'Satyameva Jayate' inscribed below the Emblem of India is taken from
A)
B)
C)
D)
• question_answer75) In the Constitution of India, promotion of international peace and security is included in the
A)
Preamble to the Constitution
B)
Directive Principles of State Policy
C)
Fundamental Duties
D)
Ninth Schedule
• question_answer76) What are the benefits of implementing the Integrated Watershed Development Programme'? 1. Prevention of soil run-off 2. Linking the country's perennial rivers with seasonal rivers 3. Rainwater harvesting and recharge of groundwater table 4. Regeneration of natural vegetation Select the correct answer using the codes given below
A)
1 and 2
B)
2, 3 and 4
C)
1, 3 and 4
D)
All of these
• question_answer77) Which of the following are associated with 'Planning' in India? 1. The Finance Commission 2. The National Development Council 3. The Union Ministry of Rural Development 4. The Union Ministry of Urban Development 5. The Parliament Select the correct answer using the codes given below.
A)
1, 2 and 5
B)
1, 3 and 4
C)
2 and 5
D)
All of these
• question_answer78) Which of the following is/are the function/functions of the Cabinet Secretariat? 1. Preparation of agenda for Cabinet Meetings 2. Secretarial assistance to Cabinet Committees 3. Allocation of financial resources to the Ministries Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 2
D)
All of these
• question_answer79) Consider the following statements. A Constitutional government is one which 1. places effective restrictions on individual liberty in the interest of State Authority. 2. places effective restrictions on the authority of the state in the interest of individual liberty. Which of the statement(s) given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer80) Which of the following are the discretionary powers given to the Governor of a State? 1. Sending a report to the President of India for imposing the President's rule. 2. Appointing the Ministers. 3. Reserving certain Bills passed by the State Legislature for consideration of the President of India. 4. Making the rules to conduct the business of the State Government. Select the correct answer using the codes given below.
A)
1 and 2
B)
1 and 3
C)
2, 3 and 4
D)
All of these
• question_answer81) In medieval India, the designations 'Mahattara' and 'Pattakila' were used for
A)
military officers
B)
C)
specialists in Vedic rituals
D)
chiefs of craft guilds
• question_answer82) Lichens, which are capable of initiating ecological succession even on a bare rock, are actually a symbiotic association of
A)
algae and bacteria
B)
algae and fungi
C)
bacteria and fungi
D)
fungi and mosses
• question_answer83) If you travel through the Himalayas, you are likely to see which of the following plants naturally growing there? 1. Oak 2. Rhododendron 3. Sandalwood Select the correct answer using the codes given below.
A)
1 and 2
B)
Only 3
C)
1 and 3
D)
All of these
• question_answer84) Which of the following are some important pollutants released by steel industry in India? 1. Oxides of sulphur 2. Oxides of nitrogen 3. Carbon monoxide 4. Carbon dioxide Select the correct answer using the codes given below.
A)
1, 3 and 4
B)
2 and 3
C)
1 and 4
D)
All of these
• question_answer85) Which of the following Kingdoms were associated with the life of the Buddha? 1. Avanti 2. Gandhara 3. Kosala 4. Magadha Select the correct answer using the codes given below.
A)
1, 2 and 3
B)
2 and 4
C)
3 and 4
D)
1, 3 and 4
• question_answer86) Every year, a month long ecologically important campaign/festival is held during which certain communities/tribes plant saplings of fruit-bearing trees. Which of the following are such communities/tribes?
A)
Bhutia and Lepcha
B)
Gond and Korku
C)
Irula and Toda
D)
Sahariya and Agariya
• question_answer87) The sales tax you pay while purchasing a toothpaste is a
A)
tax imposed by the Central government.
B)
tax imposed by the Central government, but collected by the State government.
C)
tax imposed by the State government, but collected by the Central government.
D)
tax imposed and collected by the State government.
• question_answer88) What does venture capital mean?
A)
A short-term capital provided to industries
B)
A long-term start-up capital provided to new entrepreneurs
C)
Funds provided to industries at times of incurring losses
D)
Funds provided for replacement and renovation of industries
• question_answer89) The main objective of the 12th Five Year Plan is
A)
inclusive growth and poverty reduction.
B)
inclusive and sustainable growth.
C)
sustainable and inclusive growth to reduce unemployment.
D)
faster, sustainable and more inclusive growth.
• question_answer90) With reference to Balance of Payments, which of the following constitutes/constitute the Current Account? 1. Balance of trade 2. Foreign assets 3. Balance of invisibles 4. Special drawing rights Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
1, 2 and 4
• question_answer91) The terms 'Marginal Standing Facility Rate' and 'Net Demand and Time Liabilities', sometimes appearing in news, are used in relation to
A)
banking operations.
B)
communication networking.
C)
military strategies.
D)
supply and demand of agricultural products.
• question_answer92) What is/are the facility/facilities the beneficiaries can get from the services of Business Correspondent (Bank Saathi) in branchless areas? 1. It enables the beneficiaries to draw their subsidies and social security benefits in their villages. 2. It enables the beneficiaries in the rural areas to make deposits and withdrawals. Select the correct answer using the codes given below.
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer93) In the context of Indian economy, which of the following is/are the purpose/purposes of 'Statutory Reserve Requirements'? 1. To enable the Central Bank to control the amount of advances the banks can create. 2. To make the people's deposits with banks safe and liquid. 3. To prevent the commercial banks from making excessive profits. 4. To force the banks to have sufficient vault cash to meet their day-to-day requirements. Select the correct answer using the codes given below.
A)
Only 1
B)
1 and 2
C)
2 and 3
D)
All of these
• question_answer94) Recently, a series of uprisings of people referred to as 'Arab Spring' originally started from
A)
Egypt
B)
Lebanon
C)
Syria
D)
Tunisia
• question_answer95) Consider the following countries. 1. Denmark 2. Japan 3. Russian Federation 4. United Kingdom 5. United States of America Which of the above are the members of the 'Arctic Council'?
A)
1, 2 and 3
B)
2, 3 and 4
C)
1, 4 and 5
D)
1, 3 and 5
• question_answer96) Consider the following pairs. (Region often in the news : Country) 1. Chechnya : Russian Federation 2. Darfur : Mali 3. Swat Valley : Iraq
Which of the given pair(s) is/are correctly matched?
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer97) With reference to Agni-IV Missile, which of the following statements is/are correct? 1. It is a surface-to-surface missile. 2. It is fueled by liquid propellant only. 3. It can deliver one-tonne nuclear warheads about 7500 km away. Select the correct answer using the codes given below.
A)
Only 1
B)
2 and 3
C)
1 and 3
D)
All of these
• question_answer98) With reference to two non-conventional energy sources called 'coalbed methane' and 'shale gas', consider the following statements. 1. Coalbed methane is the pure methane gas extracted from coal seams, while shale gas is a mixture of propane and butane only that can be extracted from fine-grained sedimentary rocks. 2. In India, abundant coalbed methane sources exist, but so far no shale gas sources have been found. Which of the statement(s) given above is/are correct?
A)
Only 1
B)
Only 2
C)
Both 1 and 2
D)
Neither 1 nor 2
• question_answer99) With reference to 'Changpa' community of India, consider the following statements. 1. They live mainly in the State of Uttarakhand. 2. They rear the Pashmina goats that yield a fine wool. 3. They are kept in the category of Scheduled Tribes. Which of the statements given above is/are correct?
A)
Only 1
B)
2 and 3
C)
Only 3
D)
All of these
• question_answer100) In India, cluster bean (Guar) is traditionally used as a vegetable or animal feed, but recently the cultivation of this has assumed significance. Which one of the following statements is correct in this context?
A)
The oil extracted from seeds is used in the manufacture of biodegradable plastics.
B)
The gum made from its seeds is used in the extraction of shale gas.
C)
The leaf extract of this plant has the properties of anti- histamines.
D)
It is a source of high quality biodiesel. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4143044948577881, "perplexity": 9651.059961035526}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00547.warc.gz"} |
http://oklo.org/2006/01/31/if-the-suit-fits/ | If the suit fits…
## If the suit fits…
January 31st, 2006
Five radial velocity datasets (published last year by Marcy et al. 2005) have just been added to the systemic console: HD 183263, HD 117207, HD 188015, HD 45350, and HD 99492. Each of these more-or-less sunlike stars is too faint to be seen with the naked eye, and each is accompanied by (at least) one detectable planet. The periods range from 17 days to several years. None of these planets were extraordinary enough to warrant much fanfare in the popular press. (Ten years ago, however, the announcement of 5 planets would have been front page news. Ahh, those were the days!)
When you use the console to obtain orbital fits to these systems, you’ll notice that several of the stars have a long-term radial velocity trend superimposed on the variations that arise from the much more readily detectable shorter-period planet. These velocity trends are likely caused by as-yet undetected massive planets lying further out in the systems, and as these stars are monitored over the long term, the orbits of these distant, frigid giants will gradually reveal themselves.
In the meantime, the residual velocity trends underscore an interesting general property of extrasolar planets. The presence of a known planet is the best indicator that a given star harbors detectable (but as-yet undetected) planetary companions. That is, if you want to find new planets, then look at stars that already have known planets. Indeed, six of the first twelve planet-bearing stars that were monitored for more than two years at Lick Observatory were subsequently found to harbor additional bodies. This impressive planetary six-pack includes luminaries such as Upsilon Andromedae, 55 Cancri, and 47 UMa, in addition to the more pedestrian Tau Boo, HD 217107, and HD 38529. (See Fischer et al. 2001).
Categories: detection Tags: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9185612201690674, "perplexity": 3244.265610746365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145519.33/warc/CC-MAIN-20160205193905-00098-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://web2.0calc.com/questions/find-the-roots-of-the-equation | +0
# find the roots of the equation
find the roots of the equation y = x^2 - 2x - 15
Feb 22, 2019
$$x^2 - 2x - 15 = (x+3)(x-5)\\ \text{roots are }x = -3,~5$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931280612945557, "perplexity": 982.8894303233939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203168.70/warc/CC-MAIN-20190324022143-20190324044143-00539.warc.gz"} |
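The factorisation above can be double-checked numerically. This short sketch (not part of the original answer) recovers the same roots from the quadratic formula:

```python
import math

# Roots of y = x^2 - 2x - 15 from the quadratic formula x = (-b ± √(b² - 4ac)) / (2a)
a, b, c = 1, -2, -15
disc = b * b - 4 * a * c               # 4 + 60 = 64, a perfect square
r1 = (-b - math.sqrt(disc)) / (2 * a)  # (2 - 8) / 2 = -3.0
r2 = (-b + math.sqrt(disc)) / (2 * a)  # (2 + 8) / 2 =  5.0
print(r1, r2)                          # -> -3.0 5.0
```

Both roots satisfy the original equation, matching the factorisation $(x+3)(x-5)$.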
https://www.contextgarden.net/index.php?title=Command/in&diff=prev&oldid=24807 | # \in
## Syntax (autogenerated)
\in{...}{...}[...] {...} text {...} text [...] reference
## Syntax
\in{...}{...}[ref] {...} text before {...} text after [ref]
## Description
Inserts a reference to a location in the document which has been marked with a label (e.g. an equation, figure, section, enumerated item). This works only with numbered items! The curly-brace arguments contain prefix and suffix, the square brackets contain the label of the point/object/section to which you are referring.
## Example
\setuppapersize[A5]
\placeformula[eq:pythagoras]
\startformula
a^2+b^2=c^2
\stopformula
This is explained in \in{Equation}{.}[eq:pythagoras]
• \definereferenceformat for setting up your own references (e.g. for figures). If you tried to parenthesize the equation number with \in{Equation (}{)}[eq:somelabel] and were frustrated by the space after the opening parenthesis, this is the place to look. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.701665997505188, "perplexity": 5360.513050711963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00245.warc.gz"} |
https://www.physicsforums.com/threads/undergrad-thesis-topics.184213/ | 1. Sep 12, 2007
### Yrrepy
Hi, I'm currently selecting my 4th year undergrad thesis design (its all technical/theoretical no hands on work). A couple of the topics given strike my fancy, but I'm particularly interested in Nuclear and Particle Physics of which they're are few topics given for us to choose from. We are allowed to present our own thesis topics however.
Sooo, I've looked around a bit and haven't found too much besides possibly some topics on bubble chambers or other particle detectors (or particle beams) and such. Might anyone have any suggestions (on general topic ideas)?
2. Sep 12, 2007
### Norman
How much physics have you had?
What level of thesis is this supposed to be? Are you expected to undertake original research?
An interesting one might be: (if you have had a semester of QFT)
Quantize this field equation (hbar=c=1):
$$(\Box^2 - m^2) \psi = 0$$
Determine if it obeys causality (do the field operators at space-like separated points commute)? What happens to this field if you pump energy into it?
Just an idea off the top of my head that might be fun at your level.
3. Sep 13, 2007
### Yrrepy
haha, definitely not at my level. I'm in my last year of my undergrad in engineering physics. I've taken quantum theories up to perturbation, nuclear & particle physics, physics of nuclear reactors, general relativity, E & M (not covariant/relativistic formalism), classical mechanics (Lagrangian), tons o math; I'm taking solid state physics, and will be taking advanced quantum (not second quantization, I believe) and nano science
(that's just a list of my upper tier physics courses)
It's more of a design/technical (engineering) thesis (designing some apparatus etc but not building it).
4. Sep 13, 2007
### Norman
Well...
You might consider requesting this be moved to the Engineering forum then. Especially if you are looking to do more of a nuclear engineering project.
5. Sep 13, 2007
### Yrrepy
oh no, I'm much more interested in doing something along the lines of particle physics, like some component or some form of detector and an analysis of it.
I suppose you could argue this should be in the engineering section....
6. Sep 13, 2007
### malawi_glenn
maybe you can check something out of the BaBar detector, which is searching for CP violation in B-meson decay?
7. Sep 13, 2007
### Norman
An analysis of a detector can be a huge undertaking. Have a look at the computer codes GEANT4, FLUKA, and MCNPX. These are computer codes used in detector validation.
8. Sep 14, 2007
### Gauged
How about an alternative mechanism for 'splitting' a given Rydberg atom? (I am totally just freeballing, by the way)
Last edited: Sep 14, 2007
9. Sep 14, 2007
### BenTheMan
This one I just thought of---it may be a bit over your head, but if you have a few months to work on it, it would give you some good experience.
Suppose you could build an accelerator at the Planck scale and perform a scattering experiment, and that string theory was right. What would the experimental signatures look like? You'd also have to assume that you could SEE the states being produced, but it might be interesting to learn a few things.
Basically, you'd have to figure out what the kaluza klein states would look like in ten dimensions. My guess is that much of this analysis has already been done by the ADD gravity people, but your mode spacing would be a bit different. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.73839271068573, "perplexity": 1505.2823038853614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806569.66/warc/CC-MAIN-20171122103526-20171122123526-00680.warc.gz"} |
https://bora.uib.no/bora-xmlui/handle/11250/2763608?show=full | dc.contributor.author Fomin, Fedor dc.contributor.author Lokshtanov, Daniel dc.contributor.author Panolan, Fahad dc.contributor.author Saurabh, Saket dc.contributor.author Zehavi, Meirav dc.date.accessioned 2021-07-06T13:15:46Z dc.date.available 2021-07-06T13:15:46Z dc.date.created 2020-07-02T10:33:22Z dc.date.issued 2020 dc.Published Leibniz International Proceedings in Informatics. 2020, 164 . dc.identifier.isbn 978-3-95977-143-6 dc.identifier.issn 1868-8969 dc.identifier.uri https://hdl.handle.net/11250/2763608 dc.description.abstract We present an algorithm for the extensively studied Long Path and Long Cycle problems on unit disk graphs that runs in time 2O(√k)(n + m). Under the Exponential Time Hypothesis, Long Path and Long Cycle on unit disk graphs cannot be solved in time 2o(√k)(n + m)O(1) [de Berg et al., STOC 2018], hence our algorithm is optimal. Besides the 2O(√k)(n + m)O(1)-time algorithm for the (arguably) much simpler Vertex Cover problem by de Berg et al. [STOC 2018] (which easily follows from the existence of a 2k-vertex kernel for the problem), this is the only known ETH-optimal fixed-parameter tractable algorithm on UDGs. Previously, Long Path and Long Cycle on unit disk graphs were only known to be solvable in time 2O(√k log k)(n + m). This algorithm involved the introduction of a new type of a tree decomposition, entailing the design of a very tedious dynamic programming procedure. Our algorithm is substantially simpler: we completely avoid the use of this new type of tree decomposition. Instead, we use a marking procedure to reduce the problem to (a weighted version of) itself on a standard tree decomposition of width O(√k). 
en_US dc.language.iso eng en_US dc.publisher Schloss Dagstuhl – Leibniz Center for Informatics en_US dc.rights Navngivelse 4.0 Internasjonal * dc.rights.uri http://creativecommons.org/licenses/by/4.0/deed.no * dc.title ETH-tight algorithms for long path and cycle on unit disk graphs en_US dc.type Journal article en_US dc.type Peer reviewed en_US dc.description.version publishedVersion en_US dc.rights.holder Copyright the authors en_US dc.source.articlenumber 44 en_US cristin.ispublished true cristin.fulltext original cristin.qualitycode 1 dc.identifier.doi 10.4230/LIPIcs.SoCG.2020.44 dc.identifier.cristin 1818234 dc.source.journal Leibniz International Proceedings in Informatics en_US dc.source.40 164 dc.relation.project Norges forskningsråd: 263317 en_US dc.identifier.citation In: Cabello, S. and Chen, D. Z. (eds.), 36th International Symposium on Computational Geometry (SoCG 2020), 44. en_US dc.source.volume SoCG 2020 en_US
### This item appears in the following Collection(s)
Except where otherwise noted, this item's license is described as Navngivelse 4.0 Internasjonal | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859658420085907, "perplexity": 9449.300819928327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00652.warc.gz"} |
http://mathhelpforum.com/differential-geometry/82217-continuity-product-range-print.html | # Continuity with product of the range
• Apr 4th 2009, 08:40 AM
Andreamet
Continuity with product of the range
Let $f_1: X \rightarrow Y_1$ and $f_2: X \rightarrow Y_2$ be continuous functions. Show that $h: X \rightarrow Y_1\times Y_2$ defined by $h(x)=(f_1(x),f_2(x))$, is continuous as well.
• Apr 4th 2009, 09:45 PM
aliceinwonderland
Quote:
Originally Posted by Andreamet
Let $f_1: X \rightarrow Y_1$ and $f_2: X \rightarrow Y_2$ be continuous functions. Show that $h: X \rightarrow Y_1\times Y_2$ defined by $h(x)=(f_1(x),f_2(x))$, is continuous as well.
Let W be a neighborhood of $h(x), x \in X$ such that $W = U \times V$ where U is a neighborhood of $f_1(x)$ and V is a neighborhood of $f_2(x)$.
Let p be a point in X that belongs to $h^{-1}(U \times V)$. Then, $h(p) \in U \times V$ iff $f_1(p) \in U$ and $f_2(p) \in V$. Thus, $h^{-1}(W) = h^{-1}(U \times V) = f_1^{-1}(U) \cap f_2^{-1}(V)$. Since $f_1$ and $f_2$ are continuous and an intersection of open sets is open, $h^{-1}(W)$ is open. Thus, h is continuous.
• Apr 6th 2009, 04:44 AM
xalk
Quote:
Originally Posted by Andreamet
Let $f_1: X \rightarrow Y_1$ and $f_2: X \rightarrow Y_2$ be continuous functions. Show that $h: X \rightarrow Y_1\times Y_2$ defined by $h(x)=(f_1(x),f_2(x))$, is continuous as well.
Let ε>0 and a∈X.
Since $\lim_{x\rightarrow a}{f_{1}(x)}=f_{1}(a)$ and
$\lim_{x\rightarrow a}{f_{2}(x)} =f_{2}(a)$,then there exist:
$\delta_{1}>0$ and such that:
if $|x-a|<\delta_{1}$ and x∈X, then $|f_{1}(x)-f_{1}(a)|$<ε/2 for all such x ..........(1)
$\delta_{2}>0$ and such that:
if $|x-a|<\delta_{2}$ and x∈X, then $|f_{2}(x)-f_{2}(a)|$<ε/2 for all such x ..........(2)
Choose $\delta$ = min{ $\delta_{1},\delta_{2}$}
Let |x-a|<δ and x∈X.
then $|x-a|<\delta_{1}$ and $|x-a|<\delta_{2}$ and by (1) and (2) we have:
$|f_{1}(x)-f_{1}(a)|+|f_{2}(x)-f_{2}(a)|<\epsilon$
BUT.
Norm (h(x)-h(a)) = ||h(x)-h(a)|| = $\sqrt{(f_{1}(x)-f_{1}(a))^2 + (f_{2}(x)-f_{2}(a))^2}\leq|f_{1}(x)-f_{1}(a)| + |f_{2}(x)-f_{2}(a)|<\epsilon$
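The inequality used here, $\sqrt{a^2+b^2}\leq|a|+|b|$, follows from squaring both sides: $(|a|+|b|)^2 = a^2+b^2+2|a||b| \geq a^2+b^2$. A quick numeric spot-check, added purely for illustration:

```python
import math
import random

# Spot-check sqrt(u^2 + v^2) <= |u| + |v| on random values.
# It holds because (|u| + |v|)^2 = u^2 + v^2 + 2|u||v| >= u^2 + v^2.
random.seed(0)
for _ in range(1000):
    u = random.uniform(-10.0, 10.0)
    v = random.uniform(-10.0, 10.0)
    assert math.hypot(u, v) <= abs(u) + abs(v) + 1e-12  # tolerance for float rounding
```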
Thus $\lim_{x\rightarrow a}h(x) = h(a)$,for all ,a in X AND hence the function ,h is continuous over X | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 39, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877318739891052, "perplexity": 637.6024128508925}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.50/warc/CC-MAIN-20170423031207-00395-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://singaporemathguru.com/question/primary-5-problem-sums-word-problems-ratio-clue-think-in-units-ratio-exercise-8-984 | ### Primary 5 Problem Sums/Word Problems - Try FREE
#### Question
Each model working for a fashion label has either 6 or 7 make-up kits.
The ratio of the number of models to the number of make-up kits is 5 : 33.
What fraction of the models has 7 make-up kits?
Notes to student:
1. If your answer to the above is a fraction, given that the answer is a/b, type your answer as a/b
The correct answer is : 3/5 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.459491491317749, "perplexity": 3194.8136236650193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00199.warc.gz"} |
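One way to reach 3/5 (this working is mine, not part of the page): at the smallest scale of the ratio there are 5 models and 33 kits, so if x of the models have 7 kits, then 7x + 6(5 - x) = 33.

```python
models, kits = 5, 33          # ratio 5 : 33 taken at its smallest scale
# 7x + 6(models - x) = kits  =>  x = kits - 6 * models
x = kits - 6 * models         # 33 - 30 = 3 models with 7 make-up kits
assert 7 * x + 6 * (models - x) == kits
print(f"{x}/{models}")        # -> 3/5
```

The fraction is the same at any scale of the ratio, since multiplying the ratio by a constant multiplies both x and the number of models by that constant.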
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/320/4/a/i/ | # Properties
Label 320.4.a.i Level $320$ Weight $4$ Character orbit 320.a Self dual yes Analytic conductor $18.881$ Analytic rank $1$ Dimension $1$ CM no Inner twists $1$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$320 = 2^{6} \cdot 5$$
Weight: $$k$$ $$=$$ $$4$$
Character orbit: $$[\chi]$$ $$=$$ 320.a (trivial)
## Newform invariants
Self dual: yes Analytic conductor: $$18.8806112018$$ Analytic rank: $$1$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: no (minimal twist has level 160) Fricke sign: $$-1$$ Sato-Tate group: $\mathrm{SU}(2)$
## $q$-expansion
$$f(q)$$ $$=$$ $$q + 2 q^{3} + 5 q^{5} + 6 q^{7} - 23 q^{9} + O(q^{10})$$ $$q + 2 q^{3} + 5 q^{5} + 6 q^{7} - 23 q^{9} - 60 q^{11} - 50 q^{13} + 10 q^{15} - 30 q^{17} - 40 q^{19} + 12 q^{21} + 178 q^{23} + 25 q^{25} - 100 q^{27} - 166 q^{29} + 20 q^{31} - 120 q^{33} + 30 q^{35} - 10 q^{37} - 100 q^{39} - 250 q^{41} - 142 q^{43} - 115 q^{45} + 214 q^{47} - 307 q^{49} - 60 q^{51} - 490 q^{53} - 300 q^{55} - 80 q^{57} + 800 q^{59} - 250 q^{61} - 138 q^{63} - 250 q^{65} + 774 q^{67} + 356 q^{69} + 100 q^{71} - 230 q^{73} + 50 q^{75} - 360 q^{77} - 1320 q^{79} + 421 q^{81} - 982 q^{83} - 150 q^{85} - 332 q^{87} + 874 q^{89} - 300 q^{91} + 40 q^{93} - 200 q^{95} - 310 q^{97} + 1380 q^{99} + O(q^{100})$$
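As a sanity check (not part of the LMFDB page itself), the coefficients above satisfy the standard Hecke eigenvalue relations for a weight $k=4$ newform: $a_{mn}=a_m a_n$ for coprime $m,n$, and $a_{p^2}=a_p^2-p^{k-1}$ for primes $p\nmid N$:

```python
# Coefficients read off the q-expansion above (weight k = 4, level N = 320).
a = {3: 2, 5: 5, 7: 6, 9: -23, 15: 10, 21: 12, 25: 25, 35: 30, 49: -307}
k = 4

# Multiplicativity: a_{mn} = a_m * a_n for coprime m, n.
assert a[15] == a[3] * a[5]
assert a[21] == a[3] * a[7]
assert a[35] == a[5] * a[7]

# Good primes p (not dividing N = 320): a_{p^2} = a_p^2 - p^(k-1).
assert a[9] == a[3] ** 2 - 3 ** (k - 1)    # 4 - 27 = -23
assert a[49] == a[7] ** 2 - 7 ** (k - 1)   # 36 - 343 = -307

# p = 5 divides N, so there is no correction term: a_{25} = a_5^2.
assert a[25] == a[5] ** 2
print("all Hecke relations check out")
```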
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0 0 2.00000 0 5.00000 0 6.00000 0 −23.0000 0
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$1$$
$$5$$ $$-1$$
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 320.4.a.i 1
4.b odd 2 1 320.4.a.f 1
5.b even 2 1 1600.4.a.r 1
8.b even 2 1 160.4.a.a 1
8.d odd 2 1 160.4.a.b yes 1
16.e even 4 2 1280.4.d.f 2
16.f odd 4 2 1280.4.d.k 2
20.d odd 2 1 1600.4.a.bj 1
24.f even 2 1 1440.4.a.n 1
24.h odd 2 1 1440.4.a.o 1
40.e odd 2 1 800.4.a.d 1
40.f even 2 1 800.4.a.h 1
40.i odd 4 2 800.4.c.f 2
40.k even 4 2 800.4.c.e 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
160.4.a.a 1 8.b even 2 1
160.4.a.b yes 1 8.d odd 2 1
320.4.a.f 1 4.b odd 2 1
320.4.a.i 1 1.a even 1 1 trivial
800.4.a.d 1 40.e odd 2 1
800.4.a.h 1 40.f even 2 1
800.4.c.e 2 40.k even 4 2
800.4.c.f 2 40.i odd 4 2
1280.4.d.f 2 16.e even 4 2
1280.4.d.k 2 16.f odd 4 2
1440.4.a.n 1 24.f even 2 1
1440.4.a.o 1 24.h odd 2 1
1600.4.a.r 1 5.b even 2 1
1600.4.a.bj 1 20.d odd 2 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{4}^{\mathrm{new}}(\Gamma_0(320))$$:
$$T_{3} - 2$$ $$T_{7} - 6$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T$$
$3$ $$-2 + T$$
$5$ $$-5 + T$$
$7$ $$-6 + T$$
$11$ $$60 + T$$
$13$ $$50 + T$$
$17$ $$30 + T$$
$19$ $$40 + T$$
$23$ $$-178 + T$$
$29$ $$166 + T$$
$31$ $$-20 + T$$
$37$ $$10 + T$$
$41$ $$250 + T$$
$43$ $$142 + T$$
$47$ $$-214 + T$$
$53$ $$490 + T$$
$59$ $$-800 + T$$
$61$ $$250 + T$$
$67$ $$-774 + T$$
$71$ $$-100 + T$$
$73$ $$230 + T$$
$79$ $$1320 + T$$
$83$ $$982 + T$$
$89$ $$-874 + T$$
$97$ $$310 + T$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9309777021408081, "perplexity": 11993.833780173789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00354.warc.gz"} |
https://en.m.wikipedia.org/wiki/Radial_velocity | The radial velocity or line-of-sight velocity, also known as radial speed or range rate, of a target with respect to an observer is the rate of change of the distance or range between the two points. It is equivalent to the vector projection of the target-observer relative velocity onto the relative direction connecting the two points. In astronomy, the point is usually taken to be the observer on Earth, so the radial velocity then denotes the speed with which the object moves away from the Earth (or approaches it, for a negative radial velocity).
A plane flying past a radar station: the plane's velocity vector (red) is the sum of the radial velocity (green) and the tangential velocity (blue).
## Formulation
Let ${\displaystyle \mathbf {r} \in \mathbb {R} ^{3}}$ be a differentiable vector defining the instantaneous position of a target relative to an observer, and let
${\displaystyle \mathbf {v} ={\frac {d\mathbf {r} }{dt}}}$
(1)
with ${\displaystyle \mathbf {v} \in \mathbb {R} ^{3}}$ , the instantaneous velocity of the target with respect to the observer.
The magnitude of the position vector ${\displaystyle \mathbf {r} }$ is defined as
${\displaystyle r=\|\mathbf {r} \|=\langle \mathbf {r} ,\mathbf {r} \rangle ^{1/2}}$
(2)
The quantity range rate is the time derivative of the magnitude (norm) of ${\displaystyle \mathbf {r} }$ , expressed as
${\displaystyle {\frac {dr}{dt}}}$
(3)
Substituting (2) into (3)
${\displaystyle {\frac {dr}{dt}}={\frac {d\langle \mathbf {r} ,\mathbf {r} \rangle ^{1/2}}{dt}}}$
Evaluating the derivative of the right-hand-side
${\displaystyle {\frac {dr}{dt}}={\frac {1}{2}}{\frac {d\langle \mathbf {r} ,\mathbf {r} \rangle }{dt}}{\frac {1}{r}}}$
${\displaystyle {\frac {dr}{dt}}={\frac {1}{2}}{\frac {\langle {\frac {d\mathbf {r} }{dt}},\mathbf {r} \rangle +\langle \mathbf {r} ,{\frac {d\mathbf {r} }{dt}}\rangle }{r}}}$
using (1) the expression becomes
${\displaystyle {\frac {dr}{dt}}={\frac {1}{2}}{\frac {\langle \mathbf {v} ,\mathbf {r} \rangle +\langle \mathbf {r} ,\mathbf {v} \rangle }{r}}}$
Since[1]
${\displaystyle \langle \mathbf {v} ,\mathbf {r} \rangle =\langle \mathbf {r} ,\mathbf {v} \rangle }$
With
${\displaystyle {\hat {\mathbf {r} }}={\frac {\mathbf {r} }{r}}}$
The range rate is simply defined as
${\displaystyle {\frac {dr}{dt}}={\frac {\langle \mathbf {r} ,\mathbf {v} \rangle }{r}}=\langle {\hat {\mathbf {r} }},\mathbf {v} \rangle }$
that is, the projection of the observer-to-target relative velocity vector onto the unit vector ${\displaystyle {\hat {\mathbf {r} }}}$ .
A singularity exists when the observer and target coincide, i.e. ${\displaystyle \mathbf {r} ={\begin{bmatrix}0\\0\\0\end{bmatrix}}}$ . In this case the range rate is undefined, since the expression divides by ${\displaystyle r=0}$ .
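The projection formula can be checked numerically against a finite-difference derivative of the range. A minimal sketch in plain Python, with invented position and velocity values:

```python
import math

def norm(u):
    return math.sqrt(sum(x * x for x in u))

def range_rate(r, v):
    """Range rate dr/dt = <r, v> / ||r||."""
    return sum(ri * vi for ri, vi in zip(r, v)) / norm(r)

# Hypothetical target moving with constant velocity: r(t) = r0 + v*t.
r0 = (1000.0, 2000.0, 500.0)   # position, m (illustrative values)
v = (-30.0, 12.0, 4.0)         # velocity, m/s

analytic = range_rate(r0, v)

# Finite-difference check of d||r||/dt at t = 0.
h = 1e-4
rp = [ri + vi * h for ri, vi in zip(r0, v)]
rm = [ri - vi * h for ri, vi in zip(r0, v)]
numeric = (norm(rp) - norm(rm)) / (2 * h)

assert abs(analytic - numeric) < 1e-6
print(f"range rate: {analytic:.3f} m/s")  # negative: target approaching
```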
## Applications in astronomy
In astronomy, radial velocity is often measured to the first order of approximation by Doppler spectroscopy. The quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity.[2] However, due to relativistic and cosmological effects over the great distances that light typically travels to reach the observer from an astronomical object, this measure cannot be accurately transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer.[3] By contrast, astrometric radial velocity is determined by astrometric observations (for example, a secular change in the annual parallax).[3][4][5]
Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect, so the frequency of the light decreases for objects that were receding (redshift) and increases for objects that were approaching (blueshift).
The radial velocity of a star or other luminous distant objects can be measured accurately by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements. A positive radial velocity indicates the distance between the objects is or was increasing; a negative radial velocity indicates the distance between the source and observer is or was decreasing.
William Huggins ventured in 1868 to estimate the radial velocity of Sirius with respect to the Sun, based on observed redshift of the star's light.[6]
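To first order in v/c, the measured wavelength shift of a known line translates directly into a radial velocity via v_r ≈ c·Δλ/λ_rest. A minimal sketch (the observed wavelength below is an invented example, not a measurement):

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(lambda_observed, lambda_rest):
    """First-order Doppler estimate: positive means receding (redshift)."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

# Illustrative example: H-alpha line (rest 656.281 nm) observed at 656.325 nm.
v_r = radial_velocity(656.325e-9, 656.281e-9)
print(f"{v_r / 1000:.1f} km/s")  # about 20 km/s, receding
```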
Diagram showing how an exoplanet's orbit changes the position and velocity of a star as they orbit a common center of mass
In many binary stars, the orbital motion usually causes radial velocity variations of several kilometres per second (km/s). As the spectra of these stars vary due to the Doppler effect, they are called spectroscopic binaries. Radial velocity can be used to estimate the ratio of the masses of the stars, and some orbital elements, such as eccentricity and semimajor axis. The same method has also been used to detect planets around stars, in the way that the movement's measurement determines the planet's orbital period, while the resulting radial-velocity amplitude allows the calculation of the lower bound on a planet's mass using the binary mass function. Radial velocity methods alone may only reveal a lower bound, since a large planet orbiting at a very high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbit.[7][8]
### Detection of exoplanets
The radial velocity method to detect exoplanets
The radial velocity method to detect exoplanets is based on the detection of variations in the velocity of the central star, due to the changing direction of the gravitational pull from an (unseen) exoplanet as it orbits the star. When the star moves towards us, its spectrum is blueshifted, while it is redshifted when it moves away from us. By regularly looking at the spectrum of a star—and so, measuring its velocity—it can be determined if it moves periodically due to the influence of an exoplanet companion.
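The amplitude of this periodic velocity signal follows from Kepler's third law. The sketch below uses the standard semi-amplitude formula K = (2πG/P)^(1/3) · m_p sin i / ((M★ + m_p)^(2/3) √(1 − e²)), with Jupiter and the Sun as a familiar check; the constants are standard textbook values, not taken from this article:

```python
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30                 # kg
M_JUP = 1.898e27                 # kg
P_JUP = 11.86 * 365.25 * 86400   # Jupiter's orbital period, s

def rv_semi_amplitude(m_planet, m_star, period, sin_i=1.0, e=0.0):
    """Stellar radial-velocity semi-amplitude K, in m/s."""
    return ((2 * math.pi * G / period) ** (1 / 3)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e * e)))

K = rv_semi_amplitude(M_JUP, M_SUN, P_JUP)
print(f"K ≈ {K:.1f} m/s")  # roughly the well-known ~12.5 m/s Jupiter signal
```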
### Data reduction
From the instrumental perspective, velocities are measured relative to the telescope's motion. So an important first step of the data reduction is to remove the contributions of
• the Earth's elliptical motion around the Sun, at approximately ±30 km/s,
• the monthly motion of ±13 m/s of the Earth around the center of gravity of the Earth–Moon system,[9]
• the daily rotation of the telescope with the Earth's crust around the Earth's axis, which is up to ±460 m/s at the equator and proportional to the cosine of the telescope's geographic latitude,
• small contributions from the Earth's polar motion, at the level of mm/s,
• contributions of 230 km/s from the motion around the Galactic center and associated proper motions,[10]
• in the case of spectroscopic measurements, corrections of the order of ±20 cm/s with respect to aberration,[11]
• the sin i degeneracy: the inclination of the motion to the line of sight is generally unknown, so only the projected (radial) component of the velocity is measured.
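As a sanity check on the third item, the diurnal rotation speed at a given latitude can be computed directly (a sketch; the radius and sidereal-day length are standard values):

```python
import math

R_EQ = 6.378137e6          # Earth's equatorial radius, m
T_SIDEREAL = 86164.0905    # sidereal day, s

def diurnal_speed(latitude_deg):
    """Speed of a telescope carried by Earth's rotation at a given latitude."""
    return 2 * math.pi * R_EQ * math.cos(math.radians(latitude_deg)) / T_SIDEREAL

print(f"equator: {diurnal_speed(0):.0f} m/s")   # about 465 m/s at the equator
print(f"lat 49°: {diurnal_speed(49):.0f} m/s")
```

The equatorial value of roughly 465 m/s is consistent with the ±460 m/s figure quoted above; the exact number depends on the radius used.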
## References
1. ^ Hoffman, Kenneth M.; Kunzel, Ray (1971). Linear Algebra (Second ed.). Prentice-Hall Inc. p. 271. ISBN 0135367972.
2. ^ Resolution C1 on the Definition of a Spectroscopic "Barycentric Radial-Velocity Measure". Special Issue: Preliminary Program of the XXVth GA in Sydney, July 13–26, 2003 Information Bulletin n° 91. Page 50. IAU Secretariat. July 2002. https://www.iau.org/static/publications/IB91.pdf
3. ^ a b Lindegren, Lennart; Dravins, Dainis (April 2003). "The fundamental definition of "radial velocity"" (PDF). Astronomy and Astrophysics. 401 (3): 1185–1201. arXiv:astro-ph/0302522. Bibcode:2003A&A...401.1185L. doi:10.1051/0004-6361:20030181. S2CID 16012160. Retrieved 4 February 2017.
4. ^ Dravins, Dainis; Lindegren, Lennart; Madsen, Søren (1999). "Astrometric radial velocities. I. Non-spectroscopic methods for measuring stellar radial velocity". Astron. Astrophys. 348: 1040–1051. arXiv:astro-ph/9907145. Bibcode:1999A&A...348.1040D.
5. ^ Resolution C 2 on the Definition of "Astrometric Radial Velocity". Special Issue: Preliminary Program of the XXVth GA in Sydney, July 13–26, 2003 Information Bulletin n° 91. Page 51. IAU Secretariat. July 2002. https://www.iau.org/static/publications/IB91.pdf
6. ^ Huggins, W. (1868). "Further observations on the spectra of some of the stars and nebulae, with an attempt to determine therefrom whether these bodies are moving towards or from the Earth, also observations on the spectra of the Sun and of Comet II". Philosophical Transactions of the Royal Society of London. 158: 529–564. Bibcode:1868RSPT..158..529H. doi:10.1098/rstl.1868.0022.
7. ^ Anglada-Escude, Guillem; Lopez-Morales, Mercedes; Chambers, John E. (2010). "How eccentric orbital solutions can hide planetary systems in 2:1 resonant orbits". The Astrophysical Journal Letters. 709 (1): 168–78. arXiv:0809.1275. Bibcode:2010ApJ...709..168A. doi:10.1088/0004-637X/709/1/168. S2CID 2756148.
8. ^ Kürster, Martin; Trifonov, Trifon; Reffert, Sabine; Kostogryz, Nadiia M.; Roder, Florian (2015). "Disentangling 2:1 resonant radial velocity oribts from eccentric ones and a case study for HD 27894". Astron. Astrophys. 577: A103. arXiv:1503.07769. Bibcode:2015A&A...577A.103K. doi:10.1051/0004-6361/201525872. S2CID 73533931.
9. ^ Ferraz-Mello, S.; Michtchenko, T. A. (2005). "Extrasolar Planetary Systems". Lect. Not. Phys. Vol. 683. pp. 219–271. Bibcode:2005LNP...683..219F. doi:10.1007/10978337_4.
10. ^ Reid, M. J.; Dame, T. M. (2016). "On the rotation speed of the Milky Way determined from HI emission". The Astrophysical Journal. 832 (2): 159. arXiv:1608.03886. Bibcode:2016ApJ...832..159R. doi:10.3847/0004-637X/832/2/159. S2CID 119219962.
11. ^ Stumpff, P. (1985). "Rigorous treatment of the heliocentric motion of stars". Astron. Astrophys. 144 (1): 232. Bibcode:1985A&A...144..232S. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551398158073425, "perplexity": 1668.0769720737167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00294.warc.gz"} |
https://www.gradesaver.com/ethan-frome/q-and-a/what-is-the-significance-of-the-missing-l-structure-on-the-farm-273906 | # What is the significance of the missing L structure on the farm?
##### Answers 1
The structure missing from the house (the L) connects the main part of the house with the woodshed and the cow-barn. The "L" itself is a symbol of life: it connects the farmer with the soil and signifies warmth, nourishment and hope. The absence of the "L" symbolizes the home's emptiness and lack of life, and it also signifies Ethan's hopelessness.
Ethan Frome | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8449427485466003, "perplexity": 2664.7775075115287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00272.warc.gz"} |
http://arxitics.com/articles/1707.04583 | arXiv Analytics
arXiv:1707.04583 [hep-th]
Hidden Conformal Symmetry of Smooth Braneworld Scenarios
Published 2017-07-14 (Version 1)
In this manuscript we describe a hidden conformal symmetry of some smooth Braneworld scenarios. We generalize our previous result (arXiv:1705.09331) to the case of two scalar fields non-minimally coupled to gravity which has an $SO(1,1)$ symmetry. We show that by choosing a gauge this system provides the action for gravity minimally coupled to a scalar field and a cosmological constant. By breaking the internal symmetry and preserving the conformal one we get an effective potential that is an arbitrary function of $\tanh\phi$. We show from this how to obtain the standard $\mbox{sech}^{2}\phi$ potential that generates a kink solution. We further consider the case with $SO(2)$ internal symmetry and show that the effective potential is an arbitrary function of $\tan\phi$, showing that trigonometric models can also be obtained. This mechanism therefore explains the origin of otherwise unnatural hyperbolic and trigonometric potentials in smooth Braneworld scenarios.
Comments: 6 pages, no figures
Categories: hep-th, gr-qc
Related articles: Most relevant | Search more
arXiv:1007.1357 [hep-th] (Published 2010-07-08, updated 2010-09-14)
Hidden Conformal Symmetry of Self-Dual Warped AdS_3 Black Holes in Topological Massive Gravity
arXiv:hep-th/0008188 (Published 2000-08-24, updated 2000-10-11)
Dilaton-gravity on the brane
arXiv:hep-th/9509101 (Published 1995-09-18, updated 1995-09-20)
Black holes with regular horizons in Maxwell-scalar gravity | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8304309248924255, "perplexity": 1571.8845337027324}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689900.91/warc/CC-MAIN-20170924081752-20170924101752-00633.warc.gz"} |
https://email.esm.psu.edu/pipermail/macosx-tex/2002-September/002709.html | # dumping special formats (was Re: [OS X TeX] Language.dat)
ccr-mactex at creutzig.de
Mon Sep 2 11:48:18 EDT 2002
"Josep M. Font" <font at mat.ub.es> writes:
> 1.2 Can I \dump my own format files (especialized LaTeXs with several
> packages incorporated? Should I just compile them and save them as
> .fmt files in some directory? Which one?
Sure you can. This is how I do it:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% projecthead.tex, used for LaTeX, pdfLaTeX, and for LaTeX used inside metapost
\documentclass[twoside,BCOR10mm]{scrreprt}
\newif\ifentwurf
\entwurftrue
% As usual, I put almost everything into a local .sty file.
\usepackage{project}
\hyphenation{PGP-keys Stan-dard-ein-ga-be Datei-na-men
Stan-dard-ein-stel-lun-gen Menü-be-fehl}
\dump
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% project.tex, the actual tex file. This file has to *start* with
%the next line:
%&pdfproject
% \makeindex opens a file, so it can't be \dump{}ed.
\makeindex
\begin{document}
...
\end{document}
######################################################################
#
# Makefile
.PRECIOUS: project.efmt pdfproject.efmt project.dvi
TEXFILES=project.tex chapter1.tex chapter2.tex
default: project.pdf
project.pdf: $(TEXFILES) pdfproject.efmt
	echo 'LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.' >project.log
	while grep -q 'LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.' project.log; do \
	  env pool_size=200000 buf_size=150000 extra_mem_top=1100000 \
	    pdfevirtex -progname=pdfelatex -efmt=pdfproject $<; \
	done

project.dvi: $(TEXFILES) project.efmt
	echo 'LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.' >project.log
	while grep -q 'LaTeX Warning: Label(s) may have changed. Rerun to get cross-references right.' project.log; do \
	  env pool_size=200000 buf_size=150000 extra_mem_top=1100000 \
	    evirtex -progname=elatex -efmt=project $<; \
	done

project.ps: project.dvi
	dvips $< -o $@

pdfproject.efmt: projecthead.tex
	env pool_size=200000 pdfeinitex \&pdfelatex $<
	mv projecthead.efmt $@

project.efmt: projecthead.tex
	env pool_size=200000 einitex \&elatex $<
	mv projecthead.efmt $@
######################################################################
######################################################################
Yes, this is advanced Unix usage. Yes, I know my Makefile has a
concurrency problem if you try to build the format files for pdf and
dvi at the same time -- but I never had any reason to do so. And
yes, I have quite a bit of practice with Makefiles. :-)
In the end, it boils down to "edit whatever you want, call 'make'
once and you should get a completely redone pdf file with the least
work possible." Just remember to put the .tex files in there
--
+--+
+--+|
|+-|+ Christopher Creutzig (ccr at mupad.de)
+--+ Tel.: 05251-60-5525
-----------------------------------------------------
Mac TeX info, resources, and news can be found at:
<http://www.esm.psu.edu/mac-tex/>
-----------------------------------------------------
List archives can be found at:
<http://www.esm.psu.edu/mac-tex/MacOSX-TeX-Digests/>
Threaded list archives can be found at:
<http://www.masda.vxu.se/~pku/MacOSX_TeX/>
-----------------------------------------------------
See message headers for list info.
----------------------------------------------------- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515621066093445, "perplexity": 25002.891618682395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655908294.32/warc/CC-MAIN-20200710113143-20200710143143-00054.warc.gz"} |
https://www.physicsforums.com/threads/fourier-transform-of-triangular-function.180513/ | # Fourier transform of triangular function
1. Aug 15, 2007
### tronxo
I'm kind of stuck on one of my signals problems. A triangular function is defined as: V(t) = (−A/T)t + A when 0 < t < T; V(t) = (A/T)t + A when −T < t < 0; otherwise, the function is 0. I have to find the Fourier transform of this function. Could anyone help me?
2. Aug 15, 2007
### chroot
Staff Emeritus
A triangle function is the convolution of two rectangle functions. You presumably already know what the FT of a rectangle function is, and you know how convolution in the time domain relates to multiplication in the Fourier domain.
- Warren
Last edited: Aug 15, 2007
3. Aug 15, 2007
### tronxo
Thank you for your time, Warren, but I'm still having problems with it. The problem is, even though I know, as you said, that a triangle function is the convolution of two rect functions, I don't know how to identify which rect functions are relevant to this particular example.
Thank you again, Alex
4. Aug 15, 2007
### chroot
Staff Emeritus
Plot the triangle function, and look at its endpoints. Notice that when you convolve two functions supported on (a, b) and (c, d), the resulting convolution is supported on (a+c, b+d).
- Warren
5. Aug 15, 2007
### tronxo
Thanks again
Alex
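Following the hint in the thread (triangle = rect ∗ rect, and convolution in time corresponds to multiplication in frequency), the transform of this triangle of height A and half-width T works out to A·T·sinc²(fT). A quick numerical check of that closed form, in plain Python with the trapezoid rule (A and T are arbitrary example values):

```python
import math

A, T = 2.0, 1.5  # triangle height and half-width (illustrative values)

def V(t):
    """Triangle of height A on (-T, T), as in the original post."""
    return A * (1 - abs(t) / T) if abs(t) < T else 0.0

def ft(f, n=100_000):
    """Fourier transform of V at frequency f via the trapezoid rule.
    V is real and even, so only the cosine (real) part survives."""
    dt = 2 * T / n
    vals = [V(-T + k * dt) * math.cos(2 * math.pi * f * (-T + k * dt))
            for k in range(n + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def sinc(x):  # normalized sinc: sin(pi*x) / (pi*x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Numerical transform matches the closed form A * T * sinc(f*T)^2.
for f in (0.0, 0.3, 0.7, 1.2):
    assert abs(ft(f) - A * T * sinc(f * T) ** 2) < 1e-6
print("numerical FT matches A*T*sinc^2(f*T)")
```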
Similar Discussions: Fourier transform of triangular function | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9152960777282715, "perplexity": 1828.3676979782456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320666.34/warc/CC-MAIN-20170626013946-20170626033946-00057.warc.gz"} |
https://www.nature.com/articles/sdata2017142?error=cookies_not_supported&code=769623fd-ce98-4e35-b8bf-9c89a900caa2 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Survey data on Vietnamese propensity to attend periodic general health examinations
## Abstract
As general living standards rise, so does the demand for periodic general health examinations (GHEs). Research on the subject, however, has reached opposing conclusions on the value of GHEs, although methodological limitations in previous works make these differences hard to resolve. Here, we present data from a socio-demographic survey of behaviours and tendencies concerning periodic GHE attendance in Vietnam. These data are shown to be suitable for evaluating the impact of demographic and socio-economic elements on regular health examinations. By presenting the methods used in this survey and by describing the enquiries mentioned in the dataset, this article aims to promote data-collecting methodologies that can help policy-makers and health communicators derive practical conclusions.
Design Type(s): behavioral data analysis objective
Measurement Type(s): Knowledge, Attitudes, Behaviors
Technology Type(s): survey method
Factor Type(s): biological sex • geographic location • Socioeconomic Factors
Sample Characteristic(s): Homo sapiens • Viet Nam
Machine-accessible metadata file describing the reported data (ISA-Tab format)
## Background & Summary
Periodic general health examination (GHE) programmes emerged a long time ago, but it was not until the 20th century brought increases in income and living standards that they began to attract attention and interest1. They attracted interest because there are substantial, sometimes inappropriate, charges for both preventive and medical treatment services2–5 and these charges pose a substantial obstacle to people who wish to have regular health check-ups6. In addition, a number of people remain skeptical about the value of periodic GHEs, either finding them costly and without benefit7–9 or questioning their quality10,11. Vietnamese patients are sometimes even skeptical of health professionals’ expertise12. Unsurprisingly, therefore, many have suggested replacing periodic health examinations with more effective healthcare solutions13,14.
That said, the benefits of GHEs should not be overlooked. Regular medical checks provide individuals with updates on their health status and symptoms15, making it easier to detect illnesses at an early stage and seek suitable treatment16. Studies have shown that periodic GHEs help to detect and combat breast and ovarian cancers16,17. In addition, GHEs may also lower the cost of future treatment18, and help to reduce economic inequality19. Thus, periodic GHEs can be considered genuinely necessary6.
The decision about whether or not to attend regular GHEs depends on multiple factors. The influence of one factor, possession of health insurance, remains subject to debate. Meer and Rosen20 reported that insured patients tended to have GHEs more often; however, Lurie et al. argued that health insurance only encourages people to take advantage of medical services and fails to improve public health21.
A review of the literature revealed that many authors have analyzed the effect of patient attitudes to the time, cost and quality of the medical service provided on frequency of GHEs, with conflicting results. These studies are, however, affected by certain limitations. For example, some studies had restricted samples or relied on limited, biased data provided by companies and governments22,23 and others were based on postal surveys24. All these issues may limit the technical validity of the data. In addition, most previous research on GHEs was limited to simple descriptive statistics and group comparisons. Although these analyses have produced valid results and insights, they are not suitable for evaluating interactions between variables, which is necessary if we wish to learn more about the relationships between groups of socio-demographic, psychological, economic and socio-cultural factors and their influence on attitudes and behaviour in relation to GHEs23,25. The limitations of the earlier research can be addressed by using an appropriate research design and analytical techniques, such as multiple logistic regression including both continuous and discrete variables. This approach enables a wider evaluation and the testing of more hypotheses but does not require strict assumptions about probability mass/density distributions. One of the chief benefits of logistic regression is that estimates of odds ratios, an important measure of association, can be obtained from parameter estimates26. Insights obtained from a GHE survey should not be limited to general trends or confirmation of associations between variables because, by using the estimated coefficients, computing conditional probabilities for specific events under given conditions27 provides information with important policy implications.
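As an illustration of how logistic-regression outputs translate into odds ratios and conditional probabilities, consider the minimal sketch below. The model form and every coefficient in it are invented for illustration — they are not estimates from this dataset:

```python
import math

# Hypothetical fitted model: log-odds of attending a periodic GHE
# = b0 + b1 * insured + b2 * age  (coefficients are made up for illustration).
b0, b1, b2 = -2.0, 0.8, 0.03

# The odds ratio for a binary predictor is exp(beta):
or_insured = math.exp(b1)  # insured vs. uninsured, other factors held equal

def p_attend(insured, age):
    """Conditional probability of attendance under the hypothetical model."""
    z = b0 + b1 * insured + b2 * age
    return 1 / (1 + math.exp(-z))

print(f"odds ratio (insurance): {or_insured:.2f}")            # ≈ 2.23
print(f"P(attend | insured, age 40): {p_attend(1, 40):.2f}")  # = 0.50 here
```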
We therefore conducted a survey to explore behaviour and attitudes to GHEs in a developing country, namely Vietnam. Our aims were to determine whether our survey data corroborated previous findings on GHEs in Vietnam and to provide additional information of practical significance. The dataset is particularly important due to the recent rise in concern about cancers and other diseases, such as diabetes and HIV.
## Methods
Our data were gathered through an interview-based survey of behaviour and attitudes to GHEs amongst inhabitants of Hanoi and Hung Yen, Vietnam. Interviews were conducted face-to-face and data were recorded on paper. The idea of this survey was based mainly on several previous studies of the effects of medical costs on patients’ lives after treatment28,29,30. One study showed that patients, especially poor patients who had borrowed money to pay for treatment, tended to fall into destitution after receiving hospital treatments28. Many desperate patients had little choice but to live together and support each other as they struggled to earn a living and pay for prolonged treatment29,30. This evidence about the harsh reality of the situation facing seriously ill patients led to recognition that prevention and early detection of disease are critically important.
The project consists of five phases: (1) Questionnaire design; (2) Face-to-face interviews; (3) Quality control for questionnaire answers; (4) Preparing the dataset; (5) Data analysis.
### Survey sample
Participants were chosen at random. All mentally competent residents in survey locations were invited to take part. Interviews did not begin until potential participants had been given information about the institutions responsible for the research, the objectives of the research and the methods of analysing the data, and had agreed to take part. Participants have been informed of indirect identifiers in the dataset and have consented to public use of their personal information under the condition that their names must be removed. The dataset—with respondent names being removed—is thus suitable for open access.
### Survey design
The survey was conducted between September and November 2016 in locations such as secondary schools, hospitals, companies, government agencies and randomly selected households in Hanoi, including Hospital 125 Thai Thinh (Dong Da District) and Vietnam-Germany Hospital (Hoan Kiem District). The survey team consisted of seven key members who were associated with Vuong & Associates research office and a dozen assistants. Key members wore identification badges in the field.
Interviewers recorded the time taken for each interview. The numbers of refusals and acceptances were reported at the end of each day and summed at the conclusion of the fieldwork.
The survey team adhered to the ethical code of the institutions responsible for the research. All questionnaires were checked and their validity confirmed by the team member who collected them and the team supervisor. Access to the database is open to the public, following the agreement between participants and the research team.
### Survey validation
Before the interview respondents were given instructions on the response formats for the various questions, for example to choose only one answer when the question required selection of the most appropriate response amongst multiple choices. For questions where responses were to be given using a numerical scale the interviewer ensured that the respondent understood the scale and gave a score within the allotted range. In addition, all collected questionnaires were checked three times to ensure the quality and validity of the data: when the interviewer returned to the team, when data were entered into the database and before exploratory analysis.
### Data collection
A total of 2,479 people were approached, of whom 409 refused to take part. The total number of observations was thus 2,070, two of which were invalid and excluded from analysis, yielding a final sample of 2,068 valid responses. On average, one out of six people refused to take part when invited to do so. Interviews lasted for approximately 12–15 min. Participants were male and female and ranged in age from 13 to 83 years. The female participation rate was 64.08% (1,340/2,068). The average age of participants was 29.17 years (s.d.=10.09, 95% CI: 28.74–29.60). The majority of respondents (60%, 581/2,068; see Fig. 1a) were aged between 18 and 30 years old. Most respondents had had their last GHE less than one year before the date of the interview. The majority of the sample was married (57.35%) and 54.35% of respondents had a stable job (Fig. 1b).
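As a quick cross-check (not part of the original analysis), the reported 95% confidence interval for age can be reproduced from the published mean, standard deviation and sample size using the normal approximation:

```python
import math

# Summary statistics for age, as reported in the text
n, mean, sd = 2068, 29.17, 10.09

# Normal-approximation 95% CI: mean +/- 1.96 * standard error
se = sd / math.sqrt(n)
lo, hi = mean - 1.96 * se, mean + 1.96 * se

print(round(lo, 2), round(hi, 2))  # close to the reported 28.74-29.60
```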
Participants provided us with their weight and height to enable us to calculate their body mass index (BMI). Most participants had a relatively healthy BMI (M=20.848, s.d.=2.67, 95% CI: 20.73–20.96). On average, male respondents had a higher BMI than female respondents (Fig. 1c).
### Data and materials
Data were used to analyse patterns in GHE engagement and to assess how specific variables influenced GHE behaviour. Time since last GHE was used as the dependent variable in analyses of factors affecting the frequency with which individuals attended medical checks.
#### Materials
The raw data were first entered into an MS Excel file, then converted into ‘comma-separated values’ (CSV) format (file 11102016Med4.csv [Data Citation 1]). Data were analysed in R (3.3.1). Estimates were calculated using the baseline-category logit (BCL) model27.
As most variables were categorical, and most data for the response and predictor variables were discrete, we used a logistic model. Logistic models predict the probability of each value of the dependent variable given specific values of the independent variables.
The general equation for the baseline-category logit model is:
$\mathrm{ln}\left[{\pi }_{j}\left(\mathbf{x}\right)/{\pi }_{J}\left(\mathbf{x}\right)\right]={\alpha }_{j}+{{\beta }_{j}}^{\mathrm{T}}\mathbf{x},\mathrm{j}=1,\dots ,\mathrm{J}-1.$
where x is the vector of independent variables and πj(x)=P(Y=j|x) is the probability that the dependent variable Y falls in category j; category J serves as the baseline.
In the logit model under consideration, the probability of an event is computed as:
${\pi }_{j}\left(\mathbf{x}\right)=\mathrm{exp}\left({\alpha }_{j}+{{\beta }_{j}}^{\mathrm{T}}\mathbf{x}\right)/\left[1+{\sum }_{h=1}^{J-1}\mathrm{exp}\left({\alpha }_{h}+{{\beta }_{h}}^{\mathrm{T}}\mathbf{x}\right)\right]$
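The two equations above can be evaluated numerically; the sketch below uses illustrative coefficients, not the fitted values from Table 1:

```python
import math

def bcl_probabilities(x, alphas, betas):
    """Baseline-category logit: P(Y=j|x) for j=1..J-1 plus the baseline J.

    alphas[j], betas[j] parameterize ln(pi_j / pi_J) = alpha_j + beta_j . x
    """
    # Linear predictors for the J-1 non-baseline categories
    etas = [a + sum(b_k * x_k for b_k, x_k in zip(b, x))
            for a, b in zip(alphas, betas)]
    denom = 1.0 + sum(math.exp(e) for e in etas)
    probs = [math.exp(e) / denom for e in etas]
    probs.append(1.0 / denom)  # baseline category J
    return probs

# Illustrative coefficients for J=3 categories and 2 predictors
probs = bcl_probabilities(x=[1.0, 0.0],
                          alphas=[0.2, -0.5],
                          betas=[[0.3, 0.1], [0.6, -0.2]])
print(probs)  # three probabilities summing to 1
```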
The beta coefficients can be estimated directly from the original CSV file. In that case, however, the reference categories of the independent variables are set by default and cannot be modified by the analyst. We therefore performed the regression on distribution tables of the sample, also in CSV format; file tab4.1.csv [Data Citation 1] is an example of such a table.
We also used linear regression, that is, ordinary least squares (OLS) analysis, for the numerical variables. The general equation for the OLS analysis is as follows:
$Y=\alpha +{\beta }_{1}{X}_{1}+{\beta }_{2}{X}_{2}+\dots +{\beta }_{k}{X}_{k}$
Y is a continuous variable; the independent variables Xi can be discrete, categorical or continuous.
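For the special case of a single predictor, the OLS coefficients have a closed form; a minimal sketch with made-up data points:

```python
def ols_fit(xs, ys):
    """One-predictor OLS: returns (alpha, beta) minimizing sum (y - a - b*x)^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # beta = cov(x, y) / var(x); alpha = mean(y) - beta * mean(x)
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    alpha = my - beta * mx
    return alpha, beta

# Points that lie exactly on y = 1 + 2x, so OLS recovers alpha=1, beta=2
alpha, beta = ols_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(alpha, beta)
```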
#### Response coding
Both questions and participants’ responses were codified into variables and variable categories in our dataset. The demographic variables were as follows: ‘sex’ (male; female), ‘age’, ‘weight’ (in kg) and ‘height’ (in cm). Because the participants were recruited randomly and fieldwork was carried out in a variety of locations, it was not practical to measure participants’ height and weight directly, so respondents were asked to provide their most recent measurements of height and weight. Most Vietnamese people remember their height and weight, as a considerable number of administrative procedures in the country require personal documents for which these measurements are indispensable. In addition, it is not complicated to measure one’s own height and weight, as electronic devices and mobile phone apps for doing so are widely available and fairly easy to use. For these reasons we consider the data provided by respondents to be reliable. From these values we calculated BMI using the formula BMI=weight/(height×height), with weight in kilograms and height in metres.
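Because respondents reported height in centimetres and weight in kilograms, the BMI formula requires converting height to metres first; a minimal helper illustrating the computation:

```python
def bmi(weight_kg, height_cm):
    """Body mass index: weight (kg) divided by the square of height (m)."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m * height_m)

# An illustrative respondent close to the sample mean BMI of ~20.8
print(round(bmi(55.0, 162.0), 2))
```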
Marital status is referred to as ‘MaritalStt’ (married; unmarried; other). Job status was captured as the variable ‘JobStt’ (stable; unstable; student; retired; homemaker; other). Educational attainment was captured as ‘Edu’ (‘PostGrad’ (post-graduate); ‘Grad’ (college/university); ‘Second’ (high school); ‘Hi’ (middle school)). Health insurance status was represented by a binary variable, ‘HealthIns’. Questions concerning weight, height and BMI also appear in the questionnaire.
The variables were time since last medical examination (‘RecExam’) and time since last GHE (‘RecPerExam’); both were coded as follows: ‘less12’=less than 12 months; ‘b1224’=between 12 and 24 months; ‘g24’=over 24 months; ‘unknown’=respondent unable to recall. Before respondents answered the relevant questions, the interviewer carefully explained the difference between them and made sure the respondent understood the questions properly, in order to ensure that responses were accurate. ‘Time since last medical examination’ is the length of time since the respondent last visited a doctor with symptoms of disease, whereas ‘time since last GHE’ is the length of time since the respondent’s last GHE. GHEs are conducted periodically, regardless of whether an individual has any signs of illness or disease, and are intended to track individuals’ health status and detect disease at a pre-symptomatic stage. During a GHE, people receive a list of tests, including clinical examinations and subclinical tests such as diagnostic imaging and functional exploration.
Reasons for the most recent GHE, captured in the variable ‘RecExam’, were coded as follows: ‘noti.disease’=concerns over illnesses/epidemics; ‘adv.sig’=worrying symptoms; ‘request’=prompted by employer/community/insurance; ‘volunteer’=no immediate reason. We also collected data on how often respondents believed GHEs should be carried out: every 6 months (‘6 m’); every 12 months (‘12 m’); every 18 months (‘18 m’); or less often than every 18 months (‘g18m’).
One question dealt with reasons why people might hesitate to take a GHE. Binary yes/no responses to the following reasons were solicited: GHE is a waste of time (‘Wsttime’); GHE is a waste of money (‘Wstmon’); fear of discovering diseases (‘DiscDisease’); little faith in the quality of the medical service (‘Lessbelqual’); do not consider GHEs to be urgent or important (‘NotImp’). A similar format was used to explore reasons for attending a GHE, with options as follows: health is first priority (‘HthyPriority’); GHEs are subsidized by employer/community (‘ComSubsidy’); have acquired the habit of regular GHEs from family/employer (‘Habit’); constantly follow updates on their health measures (‘FlwHealth’).
To gain more insight into the health status of respondents and their families, we asked participants whether they or a member of their family were receiving long-term medical treatment (‘PerTrmt’ and ‘AcqTrmt’ respectively; binary responses). We also asked respondents whether they and their family all enjoyed good health (‘StabHthStt’; binary response: ‘yes’ if respondent and family all in good health, otherwise ‘no’). This question was used to evaluate the extent to which family members’ health status is related. Finally, we asked what participants’ preferred way of dealing with new symptoms (‘StChoise’) would be; the options were: ‘clinic’=go to a clinic and consult professionals; ‘askrel’=seek advice from family and relatives; ‘selfstudy’=do personal research.
We assumed that individuals’ attitude to health would be correlated with possession of common items of medical equipment and the ability to use them, so we asked the following questions: (1) Do you keep a medical cabinet and basic medical equipment in your house? (‘MedCabinet’); (2) Do you have the skills to use basic medical equipment? (‘Tooluseskill’); (3) Do you have experience in taking care of a sick family member? (‘ExpCare’); (4) Does your family regularly take simple medical measurements (blood pressure, eye sight, weight etc.)? (‘ExamTools’).
We assessed perceptions of the quality of periodic GHE sessions using five questions answered on a 1-to-5 scale (1=lowest quality). The variables were as follows: ‘Tangibles’=quality of medical equipment and personnel; ‘Reliability’=ability of the examiner to perform medical services that meet the patient’s expectations; ‘Respon’=timeliness of service; ‘Assurance’=knowledge/ability to assure professional reliance; ‘Empathy’=thoughtfulness and a high sense of responsibility. We also asked participants their general opinion of public health (‘CHPerc’); the options were: ‘good’, ‘quite good’, ‘bad’ and ‘unknown’.
Cost of treatment is one of the most important factors in people’s decisions about having GHEs. Cost can influence whether patients go to the hospital or clinic for health checks, particularly if they have no signs of illness. In the survey, GHE costs were divided into three categories: ‘low’=under 1 million VND; ‘med’=from 1 to 2 million VND; ‘hi’=over 2 million VND. Respondents were also asked which of the following options they would choose if they were given cash for a GHE (‘Usemon’): use all the money to have a GHE soon (‘allsoon’); use part of the money for a GHE and save the rest (‘partly’); take the money and have a GHE later (‘later’).
Information in the mass media on health care in general, and on GHEs in particular, can also affect attendance at periodic medical examinations and judgments of medical service quality. We therefore asked participants to evaluate several aspects of the information they had received on GHEs, using a 1 to 5 scale: sufficiency (‘SuffInfo’); attractiveness (‘AttractInfo’); impressiveness (‘ImpressInfo’); popularity (‘PopularInfo’).
Developments in science and technology mean that the use of information technology (IT) in subclinical diagnosis is becoming more and more widespread. At present there is only limited use of IT to support healthcare in Vietnam, for example healthcare queuing apps and more complex applications such as online consultation, diagnostic imaging, remote health treatment and electronic medical records. Not everyone is ready to accept the use of IT to support diagnostic assessment. We assessed such readiness using two questions: (1) ‘Are you willing to use IT to detect health problems if it is reliable?’ (‘UseIT’) and (2) ‘If a healthcare app indicated that you needed to have a GHE, would you actually arrange one?’ (‘AfterIT’).
At the end of the questionnaire there were two questions about participation in sports and physical exercise, used to evaluate attitude to sports and perception of the health benefits of regular exercise: (1) ‘How much time do people need to spend on sports and physical exercise to stay in shape?’ (‘SuitExer’) and (2) ‘How much time do you spend on sports and physical exercise?’ (‘EvalExer’). Response options for the second question were ‘more than enough’ (‘verysuff’); ‘enough’ (‘quitesuff’); ‘only a little’ (‘little’); and ‘none or almost none’ (‘trivial’).
Measurement of the dependent variable and the control variables. The R (3.3.1) code used for this analysis is provided in Supplementary File 1 (see Code availability).
These commands were intended to determine how the length of time since an individual’s most recent GHE is related to possession of health insurance, concerns that GHEs are a waste of time and money, prioritisation of health and regular following of health updates. The results are presented in Table 1.
The model’s fitness test was conducted to verify that the coefficients are not all simultaneously equal to zero, that is, the null hypothesis H0: β1=β2=...=0. The test yields the P-value:
$\mathrm{p}=1-\mathrm{pchisq}\left(2×\left(-151.22+249.91\right),10\right)\approx 0$
with df=(62–52)=10 (see Agresti)31. Thus, H0 was decisively rejected.
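The likelihood-ratio statistic and its chi-square P-value can be reproduced without a statistics package, because for even degrees of freedom the chi-square survival function has a closed form:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x) for even df.

    For even df: P(X > x) = exp(-x/2) * sum_{k=0}^{df/2-1} (x/2)^k / k!
    """
    assert df % 2 == 0
    half = x / 2.0
    term, total = 1.0, 1.0  # k = 0 term
    for k in range(1, df // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

# Log-likelihoods and df from the text: 2 * (-151.22 + 249.91), df = 62 - 52
stat = 2 * (-151.22 + 249.91)
p = chi2_sf_even_df(stat, 10)
print(stat, p)  # statistic ~197.38; p is effectively zero
```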
The data in Table 1 were used to calculate conditional probabilities, which support some useful observations: (i) if there are no financial or temporal constraints, people will attend GHEs to try to ensure early detection of disease and timely treatment; and (ii) possession of health insurance is positively associated with attendance at GHEs, even for people in financial difficulties (Fig. 2).
On the basis of these results we suggest that attendance at GHEs could be improved by increasing the budget for supported healthcare schemes, raising the actual coverage of health insurance and improving the quality of medical services offered to people with health insurance.
### Code availability
Data were analysed using the statistical software R (release 3.3.1). The code used in the analyses is available as a pdf file (Supplementary File 1) which includes examples of code used to read the input data, create contingency tables and carry out multiple logistic regression for the dependent variable ‘RecPerExam’ and predictor variables ‘Wsttime’, ‘Wstmon’, ‘HthyPriority’, ‘FlwHealth’ and ‘HealthIns’.
The R code for generating Figs 1–3 is also included.
## Data Records
Files are in .csv format, covering both the conversion of the original Excel data and the computed frequency tables used in the regression models (Data Citation 1).
## Technical Validation
Data were computerized by two specialists from our research team: one person entered the data into an MS Excel file and the other checked the file to ensure that the recorded data accurately represented the responses recorded on paper questionnaires. In cases where there was doubt about the nature of a participant’s response we contacted the surveyor to check the response.
The logistic regression model in the example was assessed in terms of the statistical significance of its coefficients. As shown in Table 1, the majority of coefficients have P<0.05, except the intercepts and the coefficient of ‘HthyPriority’ in the equation logit(unknown|less12). The null hypothesis was rejected; therefore it can be inferred that there are correlations between the aforementioned independent and dependent variables.
Odds ratios can also be useful in analysing the survey data. The largest odds ratio was that for ‘Wsttime’=‘yes’ in the logit equation of (unknown|less12) (1.939), indicating that, amongst the investigated variables, ‘Wsttime’ had the most powerful (positive) influence on the probability of ‘RecPerExam’=‘unknown’. ‘HealthIns’=‘yes’ had the smallest odds ratio (0.477), representing a declining probability of ‘RecPerExam’=‘unknown’. ‘HthyPriority’=‘yes’ had an odds ratio of ~0.9, close to 1, indicating that prioritising health had little effect on the dependent variable.
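Since an odds ratio is simply the exponential of a logit coefficient, the quoted ratios can be converted back to coefficients in one line (the numeric values below are the ones quoted in this paragraph; the full Table 1 is not reproduced here):

```python
import math

# Odds ratios quoted in the text for the logit of (unknown | less12)
odds_ratios = {"Wsttime=yes": 1.939, "HealthIns=yes": 0.477, "HthyPriority=yes": 0.9}

# The corresponding logit coefficients are the natural logs of the odds ratios
coefs = {name: math.log(orat) for name, orat in odds_ratios.items()}
for name, b in coefs.items():
    direction = "raises" if b > 0 else "lowers"
    print(f"{name}: beta={b:+.3f} ({direction} the odds of 'unknown')")
```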
### Descriptive statistics
Table 2 describes some of the categorical variables in the dataset. Over half the sample (n=1,059, 51.21%) had had a GHE less than a year before. One of the most common reasons given for hesitating to have a GHE was that they are a waste of time; nearly 52% of participants who were reluctant to attend GHEs mentioned this reason. Amongst those who were prepared to attend GHEs, the main reason given was that health was a priority (81%).
If they experienced symptoms of ill-health the majority of participants would choose to go to a clinic (43.04%). Most respondents (86.32%) believed that a GHE should cost less than 2 million VND, indicating that reasonable pricing is a big concern for people in relation to periodic GHEs.
With respect to use of IT to support healthcare, 42.12% of participants claimed to be willing to use IT if it had been shown to be reliable. If a healthcare app indicated symptoms of disease then 39.41% of participants would be willing to have a GHE.
Data in the form of five-point Likert-scale responses were classified into three groups: 1–1.99 points; 2–3.99 points; 4–5 points. Most respondents gave GHEs 4 or 5 points on all aspects of the quality of the medical service provided (Fig. 3a). The quality factor with the lowest mean score was timeliness (‘Respon’) (M=3.38, 95% CI: 3.33–3.43). With regard to mass media information on periodic GHEs, only informational sufficiency (‘SuffInfo’) had roughly equal numbers of participants in all three score groups (Fig. 3b); the remaining factors (attractiveness, impressiveness and popularity) attracted low scores (1 or 2 points) from most participants.
## Usage Notes
The dataset provides the empirical data needed to answer research questions about periodic health-care behaviours, such as the psychological factors affecting the timing of health check-ups, the propensity to spend on GHEs and perceptions of the optimal frequency of GHEs. The dataset can also be used to evaluate perceptions of GHE service quality and the factors affecting such perceptions. In practical terms, these data could also inform discussion of media coverage and the dissemination of medical information regarding GHEs.
How to cite this article: Vuong, Q.-H. Survey data on Vietnamese propensity to attend periodic general health examinations. Sci. Data 4:170142 doi: 10.1038/sdata.2017.142 (2017).
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Holland, W. Periodic Health Examination A brief history and critical assessment. Perspect. Public Health 15, 16–20 (2010).
2. Gandjour, A. & Lauterbach, K. W. Preventive care and the prospect of cost savings. Eur. J. Health Econ. 3, 1–2 (2006).
3. Fletcher, R. H. Review: periodic health examination increases delivery of some clinical preventive services and reduces patient worry. Evid. Based. Med. 12, 118 (2007).
4. Merenstein, D., Daumit, G. L. & Powe, N. R. Use and costs of nonrecommended tests during routine preventive health exams. Am. J. Prev. Med. 30, 521–527 (2006).
5. Cherrington, A., Corbie-Smith, G. & Pathman, D. E. Do adults who believe in periodic health examinations receive more clinical preventive services? Prev. Med. 45, 282–289 (2007).
6. Oboler, S. K., Prochazka, A. V., Gonzales, R., Xu, S. & Anderson, R. J. Public expectations and attitudes for annual physical examinations and testing. Ann. Intern. Med. 136, 652–659 (2002).
7. Boland, B. J., Wollan, P. C. & Silverstein, M. D. Yield of laboratory tests for case-finding in the ambulatory general medical examination. Am. J. Med. 101, 142–152 (1996).
8. Chacko, K. M. & Anderson, R. J. The annual physical examination: important or time to abandon? Am. J. Med. 120, 581–583 (2007).
9. Nupponen, R. Client views on periodic health examinations: opinions and personal experience. J. Adv. Nurs. 23, 521–527 (1996).
10. Yarnall, K. S., Pollak, K. I., Ostbye, T., Krause, K. M. & Michener, J. L. Primary care: is there enough time for prevention? Am. J. Public Health. 93, 635–641 (2003).
11. Hutchison, B., Woodward, C. A., Norman, G. R., Abelson, J. & Brown, J. A. Provision of preventive care to unannounced standardized patients. CMAJ 158, 185–193 (1998).
12. Vuong, Q. H. & Nguyen, T. K. Vietnamese patients' choice of healthcare provider: in search of quality information. Int. J. Behav. Healthc. Res. 5, 184–212 (2015).
13. Goldbloom, R. & Battista, R. N. The periodic health examination: 1. Introduction. CMAJ 134, 721–723 (1986).
14. Laine, C. The annual physical examination: needless ritual or necessary routine? Ann. Intern. Med. 136, 701–703 (2002).
15. Roberts, N. T. The values and limitations of periodic health examinations. J. Chronic Dis. 9, 95–116 (1959).
16. Wu, H. Y., Yang, L. L. & Zhou, S. Impact of periodic health examination on surgical treatment for uterine fibroids in Beijing: a case-control study. BMC Health Services Res. 10, 329 (2010).
17. Lesnick, G. J. Detection of breast cancer in young women. JAMA 237, 967–969 (1977).
18. Ren, A., Okubo, T. & Takahashi, K. Comprehensive periodic health examination: impact on health care utilisation and costs in a working population in Japan. J. Epidemiology Community Health 48, 476–481 (1994).
19. Vuong, Q. H. Be rich or don’t be sick: estimating Vietnamese patients’ risk of falling into destitution. SpringerPlus 4, 529 (2015).
20. Meer, J. & Rosen, H. S. Insurance and the utilization of medical services. Soc. Sci. Med. 58, 1623–1632 (2004).
21. Lurie, N. et al. Termination of Medi-Cal benefits. A follow-up study one year later. N. Engl. J. Med. 314, 1266–1268 (1986).
22. Tibblin, G. et al. A general health-examination of a random sample of 50-year-old men in Göteborg. Acta Med. Scandina 177, 739–749 (1965).
23. Inoue, H. et al. Prevalence of atrial fibrillation in the general population of Japan: an analysis based on periodic health examination. Int. J. Cardiol. 137, 102–107 (2009).
24. Prochazka, A. V., Lundahl, K., Pearson, W., Oboler, S. K. & Anderson, R. J. Support of evidence-based guidelines for the annual physical examination: a survey of primary care providers. Arch. Intern. Med. 165, 1347–1352 (2005).
25. Masumori, N., Adachi, H., Noda, Y. & Tsukamoto, T. Detection of adrenal and retroperitoneal masses in a general health examination system. Urology 52, 572–576 (1998).
26. Stokes, M. E., Davis, C. S. & Koch, G. G. Categorical data analysis using SAS (SAS Institute, 2012).
27. Agresti, A. Categorical Data Analysis. 3rd edn (Wiley, 2013).
28. Vuong, Q. H. Economic benefits and treatment progress as determinants of the sustainability of Vietnamese voluntary co-located patients clusters. J. Pub. Health Res. 6, 10–17 (2017).
29. Vuong, Q. H. & Nguyen, H. Patients’ contribution as a quid pro quo for community supports? Evidence from Vietnamese co-location clusters. Int. J. Bus. & Society 18, 189–210 (2017).
30. Vuong, Q. H., Nguyen, H. & Vuong, T. T. Health insurance thresholds and policy implications: A Vietnamese medical survey in 2015. Biomed. Res. 28, 2432–2438 (2017).
31. Agresti, A. Modeling Ordinal Categorical Data. University of Florida, Department of Statistics http://www.stat.ufl.edu/~aa/ordinal/agresti_ordinal_tutorial.pdf (2010).
### Data Citations
1. Vuong, Q. H. Open Science Framework https://doi.org/10.17605/OSF.IO/AFZ2W (2017)
## Acknowledgements
The author would like to thank several people at Vuong & Associates for their assistance in collecting the data, particularly Dam Thu Ha, Do Thu Hang, Do Phuong Ngoc, Nguyen Thi Phuong, Nghiem Phu Kien Cuong, Mai Anh Tuan and Vuong Thu Trang. Special thanks go to the thousands of respondents who participated in this survey, and especially to Dang Tran Dung, CEO of Hospital 125 Thai Thinh for his enthusiastic support during the research process.
## Author information
### Contributions
Q.-H.V. designed the survey, coordinated the collection of data, prepared the dataset, performed the exploratory analysis, wrote and approved the manuscript.
### Corresponding author
Correspondence to Quan-Hoang Vuong.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
Vuong, QH. Survey data on Vietnamese propensity to attend periodic general health examinations. Sci Data 4, 170142 (2017). https://doi.org/10.1038/sdata.2017.142
• DOI: https://doi.org/10.1038/sdata.2017.142
https://learn.careers360.com/school/question-a-symmetric-biconvex-lens-of-radius-of-curvature-r-and-made-of-glass-of-refractive-index-1middot5-is-placed-on-a-layer-of-liquid-placed-on-top-of-a-plane-mirror-as-shown-in-the-figure-an-optical-needle-with-its-tip-on-the-principal-axis-of-the-lens-is-moved-along-the-axis-until-its-real-inverted-image-coincides-with-the-needle-itself-the-distance-of-the/ | # A symmetric biconvex lens of radius of curvature R and made of glass of refractive index 1·5, is placed on a layer of liquid placed on top of a plane mirror as shown in the figure. An optical needle with its tip on the principal axis of the lens is moved along the axis until its real, inverted image coincides with the needle itself. The distance of the needle from the lens is measured to be x. On removing the liquid layer and repeating the experiment, the distance is found to be y. Obtain the expression for the refractive index of the liquid in terms of x and y.
Given,
Refractive index of lens $\mu _{1}= 1\cdot 5$
Distance of needle from the lens = focal length of the system = x = f
Distance after removing liquid layer = focal length of lens = y = f1
Let, focal length of liquid = f2
refractive index of liquid = $\mu _{2}$
The equivalent focal length f is
$\frac{1}{f}= \frac{1}{f_{1}}+\frac{1}{f_{2}}$
$\frac{1}{f_{2}}= \frac{1}{f}-\frac{1}{f_{1}},\; so \; f_{2}= \frac{f_{1}\times f}{f_{1}-f}$
$f_{2}= \frac{xy}{y-x}$
Given that the radius of curvature of the second surface of the symmetric biconvex lens is -R,
from the lens maker's formula:
$\frac{1}{f_{1}}= \left ( \mu _{1}-1 \right )\left ( \frac{1}{R} -\frac{1}{-R}\right )$
$\frac{1}{y}= \left ( 1\cdot 5-1 \right )\times \frac{2}{R}$
$R= \frac{y}{0\cdot 5\times 2}= y$
The liquid layer acts as a plano-concave lens: its upper surface (in contact with the lens) has radius of curvature -R, and its lower surface, resting on the plane mirror, is flat.
Applying the lens maker's formula to the liquid, we get
$\frac{1}{f_{2}}= \left ( \mu _{2}-1 \right )\left ( \frac{1}{-R} -\frac{1}{\infty }\right )$
$\frac{y-x}{xy}= -\left ( \mu _{2} -1\right )\frac{1}{y};\; \; \mu _{2}-1= \frac{x-y}{x}$
$\mu _{2}= \frac{x-y}{x}+1\, = 2-\frac{y}{x}$
Therefore, the expression for the refractive index of the liquid is
$2-\frac{y}{x}$
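A numeric sanity check of this result (with illustrative values, not measured ones): choose R and a liquid index, construct x and y from the lens formulas used above, and confirm that 2 - y/x recovers the liquid's index.

```python
# Forward-construct the experiment from assumed values, then invert.
R = 30.0          # radius of curvature (illustrative)
mu_lens = 1.5     # glass index, as given in the problem
mu_liquid = 1.33  # liquid index we hope to recover

# Biconvex lens: 1/f1 = (mu1 - 1) * (2/R), so f1 = R when mu1 = 1.5
f1 = 1.0 / ((mu_lens - 1.0) * (2.0 / R))

# Plano-concave liquid lens: 1/f2 = -(mu2 - 1)/R
f2 = 1.0 / (-(mu_liquid - 1.0) / R)

# Lenses in contact: 1/x = 1/f1 + 1/f2; y is the glass lens alone
x = 1.0 / (1.0 / f1 + 1.0 / f2)
y = f1

recovered = 2.0 - y / x
print(recovered)  # should match mu_liquid
```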
https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_7C/10%3A_Electromagnetism/10.1%3A_Fields/10.1.7%3A_Summary | $$\require{cancel}$$
# 10.1.7: Summary
This chapter has introduced many new ideas related to fields, using the gravitational field as the primary example. In the upcoming chapters the electric and magnetic fields are also discussed, so do not be concerned that their treatment in this chapter was brief. The main concepts introduced were:
1. That a field is a physical quantity that exists and has a well-defined value in all of space. A field can change its value in space and in time, so the value of a field is a function both of time ($$t$$) and position ($$x$$ or $$r$$).
2. A force is an interaction between two objects. A field is created by a single object. All objects that feel a field must emit a field of the same type (like mass or electric charge). In Physics 7C, objects do not respond to their own field.
3. The fields we explore are mostly vector fields, meaning that at every point in space the field has a defined direction and magnitude.
4. When using the field model, one object (the “test” object) feels the field created by everything else (the “source” objects). We typically only explore circumstances where the effect of the test object's field on the source objects is ignored.
5. To find the field created by an arbitrary distribution of "sources," we use superposition.
6. Representations of vector fields
• Vector Map: A “snapshot” of some field vectors at a particular time. (e.g. wind map)
• Field Line Map: A “snapshot” of the field, but with continuous lines. The direction of the field at any point is tangent to the field lines. The strength of the field is indicated by how close together the field lines are: where they are bunched up the field is strong, and where they are spread thinly the field is weak.
• Equipotentials (not for magnetic fields): If a potential exists, then the equipotentials are lines where the potential is all the same. If we move along the equipotentials, we are not going “with” or “against” the field, and we don't gain or lose any energy. The equipotentials are always at 90° to the field lines. (e.g. a topographical map shows the equipotentials of a gravitational field)
7. An electric field line starts on a positive charge and ends on a negative charge.
8. A gravitational field line starts at infinity and ends on a mass.
9. Magnetic field lines form complete loops; they never start or end.
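The superposition rule in item 5 can be made concrete. As a sketch for a collection of point masses (the notation below is assumed, not taken from the chapter), the total gravitational field is the vector sum of the fields of the individual sources:

```latex
\vec{g}(\vec{r}) = \sum_i \vec{g}_i(\vec{r})
                 = -\sum_i \frac{G m_i}{\lvert \vec{r}-\vec{r}_i \rvert^{2}}\,\hat{r}_i
```

where $\hat{r}_i$ is the unit vector pointing from mass $m_i$ toward the field point, so each term in the sum points back toward its source mass.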
http://math.stackexchange.com/questions/165013/which-number-was-removed-from-the-first-n-naturals?answertab=active

# Which number was removed from the first $n$ naturals?
A number is removed from the set of integers from $1$ to $n$. Now, the average of remaining numbers turns out to be $40.75$. Which integer was removed?
By some brute force, I got $61$. I want to know if there's any analytic approach?
Write down the formula for "the average of the first n positive integers except for m". – Hurkyl Jun 30 '12 at 19:16
(n(n + 1)/2 - m)/(n - 1) – Bazinga Jun 30 '12 at 19:21
The average of the integers $1$ through $n$ is $\frac12(n+1)$. Removing a number smaller than this will increase the average, and removing a number larger than this will lower it. In particular, removing $1$ will cause the maximum increase in the average, to
$$\frac1{n-1}\left(\frac{n(n+1)}2-1\right)=\frac{n^2+n-2}{2(n-1)}=\frac{(n+2)(n-1)}{2(n-1)}=\frac12(n+2)\;,$$
and removing $n$ will cause the maximum decrease in the average, to
$$\frac1{n-1}\left(\frac{n(n+1)}2-n\right)=\frac{n^2-n}{2(n-1)}=\frac{n}2\;.$$
The new average of $40.75$ therefore must be between $\frac{n}2$ and $\frac12(n+2)=\frac{n}2+1$, inclusive. Life becomes easier if we double everything: $81.5$ must be between $n$ and $n+2$, inclusive. That is, $$n\le 81.5\le n+2\;,$$ and therefore $$79.5\le n\le 81.5\;.$$ Since $n$ must be an integer, the only possibilities are $n=80$ and $n=81$.
The sum of the integers $1$ through $80$ is $3240$, so if $n=80$, you need to find $k$ in the range from $1$ to $80$ inclusive so that $$\frac{3240-k}{79}=40.75\;.$$ However, the solution to this equation is not an integer, so $n$ must be $81$.
The sum of the integers $1$ through $81$ is $3321$, so this time you want $k$ satisfying $$\frac{3321-k}{80}=40.75\;,$$ which is easily solved to find that $k=61$.
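The argument above can also be confirmed by brute force; a quick sketch (not from the thread):

```python
# For each candidate n, test whether removing some k in 1..n leaves
# an average of exactly 40.75 over the remaining n-1 numbers.
solutions = []
for n in range(2, 200):
    total = n * (n + 1) // 2
    for k in range(1, n + 1):
        if (total - k) / (n - 1) == 40.75:
            solutions.append((n, k))
print(solutions)  # -> [(81, 61)]
```

Only $n=81$, $k=61$ survives, matching the analysis.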
Thank you sir @brian – Bazinga Jun 30 '12 at 19:31
Hint $\rm\displaystyle\ \frac{n(n\!+\!1)/2-k}{n\!-\!1}\, =\, \frac{n}2 + \frac{n\!-\!k}{n\!-\!1}\,\in\, \left[\frac{n}2,\frac{n}2\!+\!1\right] \ni40.75\ \Rightarrow\ n\in[79.5,81.5]\ \Rightarrow\ k = \ldots$
We want $\ \frac{n(n+1)}2-m=\frac {163(n-1)}4\$ with $m\le n\$ so that :
$n-1 = 4k$ with $k\in \mathbb{N}\$ and the equation becomes :
$m=f(k)$ with $f(k):=(4k+1)(2k+1)- 163k\$ and $1\le\ m\le 4k+1\$
so that $k\approx \frac {163}{2\cdot 4}\approx 20$
Trying $f(19)$ to $f(21)$ should be enough!
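Following this hint, a quick check of $f(19)$ through $f(21)$ (a sketch, not from the thread):

```python
# f(k) = (4k+1)(2k+1) - 163k; a value is valid only if 1 <= m <= 4k+1,
# i.e. the removed number m lies in {1, ..., n} with n = 4k+1.
def f(k):
    return (4 * k + 1) * (2 * k + 1) - 163 * k

valid = {k: f(k) for k in (19, 20, 21) if 1 <= f(k) <= 4 * k + 1}
print(valid)  # -> {20: 61}, i.e. n = 4*20 + 1 = 81 and m = 61
```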
Thank you sir @raymond – Bazinga Jun 30 '12 at 19:31
https://www.lessonplanet.com/teachers/multiply-and-divide-by-10-100-and-1000

# Multiply and Divide by 10, 100 and 1,000
In this multiplication and division worksheet, students practice their math skills by completing 2 multiplication tables and 2 division tables.
http://support.sas.com/documentation/cdl/en/statug/67523/HTML/default/statug_probit_details08.htm

# The PROBIT Procedure
### Rescaling the Covariance Matrix
One way of correcting overdispersion is to multiply the covariance matrix by a dispersion parameter. You can supply the value of the dispersion parameter directly, or you can estimate the dispersion parameter based on either the Pearson’s chi-square statistic or the deviance for the fitted model.
The Pearson’s chi-square statistic and the deviance are defined in the section Lack-of-Fit Tests. If the SCALE= option is specified in the MODEL statement, the dispersion parameter is estimated by the specified statistic (Pearson’s chi-square or the deviance) divided by its degrees of freedom.
In order for the Pearson’s statistic and the deviance to be distributed as chi-square, there must be sufficient replication within the subpopulations. When this is not true, the data are sparse, and the p-values for these statistics are not valid and should be ignored. Similarly, these statistics, divided by their degrees of freedom, cannot serve as indicators of overdispersion. A large difference between the Pearson’s statistic and the deviance provides some evidence that the data are too sparse to use either statistic.
You can use the AGGREGATE (or AGGREGATE=) option to define the subpopulation profiles. If you do not specify this option, each observation is regarded as coming from a separate subpopulation. For events/trials syntax, each observation represents n Bernoulli trials, where n is the value of the trials variable; for single-trial syntax, each observation represents a single trial. Without the AGGREGATE (or AGGREGATE=) option, the Pearson’s chi-square statistic and the deviance are calculated only for events/trials syntax.
Note that the parameter estimates are not changed by this method. However, their standard errors are adjusted for overdispersion, affecting their significance tests.
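The adjustment can be illustrated outside SAS; a minimal numpy sketch with made-up numbers (illustrative only, not PROC PROBIT internals):

```python
import numpy as np

# Suppose a fitted model produced this covariance matrix, a Pearson
# chi-square of 180.0, and 60 residual degrees of freedom (all invented).
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
pearson_chi2, df = 180.0, 60

sigma2 = pearson_chi2 / df          # dispersion estimate (SCALE=PEARSON analogue)
cov_adj = sigma2 * cov              # rescaled covariance matrix
se_adj = np.sqrt(np.diag(cov_adj))  # standard errors inflated by sqrt(sigma2)
print(sigma2, se_adj)               # dispersion 3.0; larger standard errors
```

The point estimates themselves are untouched; only the standard errors (and hence the significance tests) change.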
https://www.physicsforums.com/threads/collision-help.65840/

# Homework Help: Collision help
1. Mar 3, 2005
### pixelized
could someone help me with this problem?
A 139 kg tackler moving at 2.53 m/s meets head-on (and tackles) a 87.4 kg halfback moving at 5.14 m/s. What will be their mutual speed immediately after the collision?
Oh right. Here's what I've tried doing but no luck.
m1 = 139kg V1 = 2.53 m/s
m2 = 87.4 kg V2 = 5.14 m/s
m1v1 + m2v2 = Vf(m1 + m2)
Vf = (m1v1+m2v2)/(m1+m2)
Last edited: Mar 3, 2005
2. Mar 3, 2005
### pixelized
Nevermind I got it. Seems I needed to set one of the velocities as negative since the two players were moving toward each other in the head-on collision.
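With opposite signs on the velocities, the computation works out as follows (numbers from the problem statement):

```python
# Perfectly inelastic head-on collision: momentum is conserved, and the
# velocities get opposite signs because the players move toward each other.
m1, v1 = 139.0, 2.53      # tackler
m2, v2 = 87.4, -5.14      # halfback, opposite direction
vf = (m1 * v1 + m2 * v2) / (m1 + m2)
print(abs(vf))            # mutual speed, roughly 0.43 m/s
```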
http://www-old.newton.ac.uk/programmes/QIS/seminars/2004111510301.html

# QIS
## Seminar
### On convex structures of states, POVM's and channels, and their mutual relations
D'Ariano, M (Pavia)
Monday 15 November 2004, 10:30-11.15
Seminar Room 1, Newton Institute
#### Abstract
After briefly reviewing the structure of the convex sets of POVM's and channels in finite dimensions, we will consider maps between different types of convex sets, corresponding to different kinds of quantum information processing, e. g. quantum calibration, programmable channels and POVM's, universal POVM's, pre-processing and post-processing of POVM's. In particular, we will focus attention on programmability of POVM's and pre-processing, introducing the problem of "clean POVM's", and concluding with a list of open problems.
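The convex structure of POVMs mentioned in the abstract is easy to verify numerically; a sketch for qubit POVMs (the example operators and the mixing weight are chosen arbitrarily by me, not taken from the talk):

```python
import numpy as np

def is_povm(elements, tol=1e-9):
    # A POVM: every element positive semidefinite, elements sum to identity.
    psd = all(np.linalg.eigvalsh(E).min() >= -tol for E in elements)
    complete = np.allclose(sum(elements), np.eye(elements[0].shape[0]))
    return psd and complete

I = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
P = [(I + Z) / 2, (I - Z) / 2]     # Z-basis projective measurement
Q = [(I + X) / 2, (I - X) / 2]     # X-basis projective measurement

lam = 0.3                          # any lam in [0, 1] works
mix = [lam * p + (1 - lam) * q for p, q in zip(P, Q)]
print(is_povm(P), is_povm(Q), is_povm(mix))  # -> True True True
```

The element-wise convex combination of two POVMs is again a POVM, which is exactly the convexity being exploited.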
https://proglearn.neurodata.io/experiments/recruitment_across_datasets.html

# Recruitment Across Datasets
In this notebook, we further examine the capability of ODIF to transfer across datasets, building upon the prior FTE/BTE experiments on MNIST and Fashion-MNIST. Using the datasets found in this repo, we perform a series of experiments to evaluate the transfer efficiency and recruitment capabilities of ODIF across five different datasets. The datasets and their content are as follows:

- Caltech-101: contains images of objects in 101 categories
- CIFAR-10: contains 32x32 color images of objects in 10 classes
- CIFAR-100: contains 32x32 color images of objects in 100 classes
- Food-101: contains images of dishes in 101 categories
- DTD: contains images of describable textures
[1]:
import functions.recruitacrossdatasets_functions as fn
Note: This notebook tutorial uses functions stored externally within functions/recruitacrossdatasets_functions.py to simplify presentation of code. These functions are imported above, along with other libraries.
## FTE/BTE Experiment
We begin our examination of ODIF’s transfer capabilities across datasets with the FTE/BTE experiment, which provides background metrics for what the expected performance should be. This helps inform the later recruitment experiment.
### Base Experiment
#### Import and Process Data
Let’s first import the data and perform some preprocessing so that it is in the correct format for feeding to ODIF. The following function does so for us:
[2]:
data, classes = fn.import_data(normalize=False)
#### Define Hyperparameters

We then define the hyperparameters to be used for the experiment:

- model: model to be used for the FTE/BTE experiment
- num_tasks: number of tasks
- num_trees: number of trees
- reps: number of repetitions, fewer than in the actual figures to reduce running time
[3]:
##### MAIN HYPERPARAMS ##################
model = "odif"
num_tasks = 5
num_trees = 10
reps = 4
#########################################
Taking each dataset as a separate task, we have 5 tasks, and we also set a default of 10 trees, with the experiment run here for 4 reps rather than the 30 used to generate the full figures.
Note, in comparison to previous FTE/BTE experiments, the lack of the num_points_per_task parameter. Here, we sample based on the label with the least number of samples and take 31 samples from each label.
#### Run Experiment and Plot Results
First, we call the function to run the experiment:
[4]:
accuracy_all_task = fn.ftebte_exp(
data, classes, model, num_tasks, num_trees, reps, shift=0
)
Using the accuracies over all tasks, we can calculate the error, the forwards transfer efficiency (FTE), the backwards transfer efficiency (BTE), and the overall transfer efficiency (TE).
[5]:
err, bte, fte, te = fn.get_metrics(accuracy_all_task, num_tasks)
These results are therefore plotted using the function as follows:
[6]:
fn.plot_ftebte(num_tasks, err, bte, fte, te)
As can be seen from above, there is generally positive forwards and backwards transfer efficiency when evaluating transfer across datasets, even though the datasets contained very different content.
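The transfer-efficiency quantities plotted above reduce to simple error ratios; a simplified sketch of the idea (the notebook's actual fn.get_metrics implementation may differ):

```python
# Transfer efficiency compares single-task error with error after transfer:
# TE_t = E_single(t) / E_transfer(t); TE > 1 means transfer helped task t.
def transfer_efficiency(err_single, err_transfer):
    return [s / t for s, t in zip(err_single, err_transfer)]

te = transfer_efficiency([0.50, 0.50], [0.25, 0.50])
print(te)  # -> [2.0, 1.0]: transfer halved the first task's error
```

FTE uses only the tasks seen so far; BTE re-evaluates earlier tasks after later ones arrive, but both are ratios of this form.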
### Varying the Number of Trees
We were also curious how changing the number of trees would affect the results of the FTE/BTE experiment across datasets, and therefore also reran the experiment using 50 trees:
[9]:
##### MAIN HYPERPARAMS ##################
model = "odif"
num_trees = 50
reps = 4
#########################################
Running the experiment, we find the following results:
[10]:
accuracy_all_task = fn.ftebte_exp(
data, classes, model, num_tasks, num_trees, reps, shift=0
)
It seems as if more trees leads to lower transfer efficiency.
We use 10 trees for the remainder of the experiments to save on computing power.
## Recruitment Experiment
Now that we have roughly assessed the performance of ODIF via the FTE/BTE experiment, we are also interested in which recruitment scheme works the best for this set of data.
### Base Experiment
To quickly reiterate some of the background on the recruitment experiment, there are generally two main schemes for developing lifelong learning algorithms: building and reallocating. The former involves adding new resources as new data comes in, whereas the latter involves compressing current representations to make room for new ones. We want to examine whether current resources could be better leveraged by testing a range of approaches:

1. Building (default for Omnidirectional Forest): train num_trees new trees
2. Uncertainty forest: ignore all prior trees
3. Recruiting: select the num_trees existing trees that perform best on the newly introduced final task
4. Hybrid: build num_trees/2 new trees AND recruit the num_trees/2 best-performing existing trees
We compare the results of these approaches based on varying training sample sizes, in the range of [1, 5, 10, 25] samples per label.
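The recruiting scheme amounts to a top-k selection on held-out accuracy; a minimal sketch (illustrative names, not ProgLearn's actual implementation):

```python
# Given each existing tree's accuracy on the new task's validation split,
# recruit the indices of the num_trees best-performing trees.
def recruit(tree_val_accuracies, num_trees):
    ranked = sorted(range(len(tree_val_accuracies)),
                    key=lambda i: tree_val_accuracies[i], reverse=True)
    return ranked[:num_trees]

accs = [0.2, 0.9, 0.4, 0.7, 0.5]
print(recruit(accs, 2))  # -> [1, 3]
```

The hybrid scheme would combine such a selection with training num_trees/2 fresh trees on the new task.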
#### Define Hyperparameters

As always, we define the hyperparameters:

- num_tasks: number of tasks
- num_trees: number of trees
- reps: number of repetitions
- estimation_set: proportion of the last task's data used for training (1-estimation_set is the proportion used for validation, i.e., the selection of best trees)
[11]:
############################
### Main hyperparameters ###
############################
num_trees = 10
reps = 4
estimation_set = 0.63
#### Run Experiment and Plot Results
We call our experiment function and input the main hyperparameters:
[12]:
# run recruitment experiment
fn.recruitment_exp(
    data, classes, num_tasks, num_trees, reps, estimation_set, shift=0
)
And then we plot the results:
[13]:
# plot results
We therefore see that though generalization error remains high on the final task, the lifelong learning algorithm still outperforms the other recruitment schemes overall.
### Shifting Dataset Order
Since the above experiment involves fixing DTD as the final dataset, a further experiment involves shifting the order of datasets, so that there is a different dataset as task 5 each time. This allows us to see whether different dataset content would significantly impact the results on the final task.
To do so, we define the shift parameter in our call to the recruitment_exp function. This, in turn, calls the shift_data function, which moves the first task to the end and thus reorders the sequence of tasks.
More specifically, if we define shift=1, as done below, we would get the following order of datasets:

1. CIFAR-10
2. CIFAR-100
3. Food-101
4. DTD
5. Caltech-101
[14]:
# run recruitment experiment
fn.recruitment_exp(
    data, classes, num_tasks, num_trees, reps, estimation_set, shift=1
)
# plot results
A shift=2 results in a dataset order of:

1. CIFAR-100
2. Food-101
3. DTD
4. Caltech-101
5. CIFAR-10
[15]:
# run recruitment experiment
fn.recruitment_exp(
    data, classes, num_tasks, num_trees, reps, estimation_set, shift=2
)
# plot results
shift=3 gives us:

1. Food-101
2. DTD
3. Caltech-101
4. CIFAR-10
5. CIFAR-100
[16]:
# run recruitment experiment
fn.recruitment_exp(
    data, classes, num_tasks, num_trees, reps, estimation_set, shift=3
)
# plot results
And finally, shift=4 yields:

1. DTD
2. Caltech-101
3. CIFAR-10
4. CIFAR-100
5. Food-101
[17]:
# run recruitment experiment
fn.recruitment_exp(
    data, classes, num_tasks, num_trees, reps, estimation_set, shift=4
)
# plot results
Throughout all the above experiments, even though generalization error remains high due to the sheer number of different labels across all the different datasets, our lifelong learning algorithm still outperforms the other recruitment methods.
## Other Experiments
### Effect of Normalization
When examining data across different datasets, normalization and standardization of data is often of interest. However, this can also lead to loss of information, as we are placing all the images on the same scale. As a final experiment, we also look into the effect of normalization on the FTE/BTE results.
#### Import and Process Data
The import_data function has a normalize parameter, where one can specify whether to leave the data unnormalized, normalize across each dataset, or normalize each image individually. Previously, for the original FTE/BTE experiment, we set normalize=False.
Here, we look at the other two options.
[18]:
# normalize across dataset
data1, classes1 = fn.import_data(normalize="dataset")
[19]:
# normalize across each image
data2, classes2 = fn.import_data(normalize="image")
#### Define Hyperparameters
We use the same parameters as before:
[20]:
##### MAIN HYPERPARAMS ##################
model = "odif"
num_trees = 10
reps = 4
#########################################
#### Run Experiment and Plot Results
We first run the FTE/BTE experiment by normalizing across each dataset, such that the images in each dataset have a range of [0,1] in each channel.
[21]:
accuracy_all_task = fn.ftebte_exp(
data1, classes1, model, num_tasks, num_trees, reps, shift=0
)
We then run the FTE/BTE experiment with normalizing per image, so that each channel in each image is scaled to a range of [0,1].
[22]:
accuracy_all_task = fn.ftebte_exp(
data2, classes2, model, num_tasks, num_trees, reps, shift=0
)
http://mathonline.wikidot.com/closed-sets-in-compact-topological-spaces
# Closed Sets in Compact Topological Spaces
Recall from the Compactness of Sets in a Topological Space page that if $X$ is a topological space and $A \subseteq X$ then $A$ is said to be compact in $X$ if every open cover of $A$ has a finite subcover.
We will now look at a very nice theorem which says that if $X$ is a compact topological space, then any closed subset $A$ of $X$ will also be compact in $X$.
Theorem 1: Let $X$ be a compact topological space and let $A \subseteq X$. If $A$ is closed in $X$ then $A$ is compact in $X$.
• Proof: Let $\mathcal F = \{ U_i \}_{i \in I}$ be an open covering of $A$. Then we have that:
(1)
\begin{align} \quad A \subseteq \bigcup_{i \in I} U_i \end{align}
• Since $A$ is closed, $A^c = X \setminus A$ is open. Notice that $\{ U_i \}_{i \in I} \cup \{ X \setminus A \}$ is therefore an open covering of all of $X$. Since $X$ is a compact space, there exists a finite open covering of $X$: $\{ U_1, U_2, ..., U_n \} \cup \{ X \setminus A \}$ such that:
(2)
\begin{align} \quad X \subseteq \left ( \bigcup_{i=1}^{n} U_i \right ) \cup (X \setminus A) \end{align}
• But then $\{ U_1, U_2, ..., U_n \}$ is a finite subcover of $A$. Therefore, $A$ is compact in $X$. $\blacksquare$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981384873390198, "perplexity": 62.592854748406765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069267.22/warc/CC-MAIN-20210412210312-20210413000312-00189.warc.gz"} |
https://www.degruyter.com/view/j/nanoph.2014.3.issue-3/nanoph-2014-0001/nanoph-2014-0001.xml
# Nanophotonics

Online ISSN 2192-8614
Volume 3, Issue 3
# Plasmonic near-field transducer for heat-assisted magnetic recording
Nan Zhou and Xianfan Xu (corresponding author), School of Mechanical Engineering and Birck Nanotechnology Center, Purdue University, West Lafayette, IN 47906, USA; Aaron T. Hammack; Barry C. Stipe; Kaizhong Gao; Werner Scholz; Edward C. Gage

Published Online: 2014-05-27 | DOI: https://doi.org/10.1515/nanoph-2014-0001
## Abstract
Plasmonic devices, made of apertures or antennas, have played significant roles in advancing the fields of optics and opto-electronics by offering subwavelength manipulation of light in the visible and near infrared frequencies. The development of heat-assisted magnetic recording (HAMR) opens up a new application of plasmonic nanostructures, where they act as near field transducers (NFTs) to locally and temporally heat a sub-diffraction-limited region in the recording medium above its Curie temperature to reduce the magnetic coercivity. This allows use of very small grain volume in the medium while still maintaining data thermal stability, and increasing storage density in the next generation hard disk drives (HDDs). In this paper, we review different plasmonic NFT designs that are promising to be applied in HAMR. We focus on the mechanisms contributing to the coupling and confinement of optical energy. We also illustrate the self-heating issue in NFT materials associated with the generation of a confined optical spot, which could result in degradation of performance and failure of components. The possibility of using alternative plasmonic materials will be discussed.
## 1 Introduction
As projected by the International Data Corporation (IDC), the worldwide data storage need will continue to grow 40% annually. This demand is met mostly by an increase in the areal density, often expressed in bits per square inch [1]. The hard disk drive (HDD) industry is the primary industry to satisfy this demand. During the last few decades, by replacing longitudinal magnetic recording (LMR), perpendicular magnetic recording (PMR) was able to continue increasing areal density; in PMR, the magnetic elements are aligned perpendicular to the disk surface [2]. However, there is a limitation with the PMR technology. When the bits are more closely packed, the grain volume $V$ in the recording medium must shrink to maintain the signal to noise ratio (SNR) under the presence of thermal fluctuation $k_B T$, where $k_B$ is the Boltzmann constant. As this occurs, the ability to store information degrades, which is known as the superparamagnetic limit [3]. Thermal stability requires:
$$\frac{K_u V}{k_B T} \ge 70, \tag{1}$$
where $K_u$ is the uniaxial anisotropy energy density. Increasing the anisotropy in the media regains the thermal stability, but at the cost of increased media coercivity. Due to limitations on magnetic write fields that can be produced using current writer materials and designs in HDDs, this leads to the inability to record information using conventional PMR. To keep increasing the areal density, new physics and technologies are needed.
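Equation (1) pins down the minimum thermally stable grain size. A rough sketch, assuming a literature-typical FePt anisotropy of Ku ≈ 7×10⁶ J/m³ (this value is an assumption on my part, not taken from the paper):

```python
import math

# Minimum thermally stable grain volume from Eq. (1): V >= 70 * kB * T / Ku.
kB = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0           # K, ambient storage temperature
Ku = 7e6            # J/m^3, assumed FePt uniaxial anisotropy
V_min = 70 * kB * T / Ku
d_min = (6 * V_min / math.pi) ** (1 / 3)  # equivalent sphere diameter
print(V_min, d_min * 1e9)  # the diameter comes out to a few nanometers
```

This is why high-anisotropy media such as FePt allow very small grains while still satisfying the stability criterion.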
Heat assisted magnetic recording (HAMR) is one of the new technologies for advancing disk drive areal density beyond 1 Tb/in2 [4, 5], which is the estimated limit of PMR. It removes the switching limitation by applying local heating to the recording media to lower its coercivity. This allows for the use of very high anisotropy materials such as FePt to maintain data thermal stability and the ability to record information. In fact, significant progress in HAMR has already been made, and each company in the HDD industry has plans to introduce HAMR technology into product within the next few years. In March 2012, Seagate Technology (Bloomington, MN, USA) has demonstrated a HAMR areal density of 1 Tb/in2 [6], with a linear bit density of around 2 million bpi (bits per inch) [7]. This is about 55% higher than today’s 620 Gb/in2 in 3.5-inch hard drives. Extension to beyond 1 Tb/in2 can be achieved by increasing the magnetic anisotropy and reducing the grain size [8]. More recently, Seagate Technology introduced a prototype of a fully integrated and functioning HAMR drive [9, 10] and proposed that the next generation HAMR technology will be incorporated into 2.5-inch enterprise HDDs. Western Digital (Irvine, CA, USA) also demonstrated its HAMR technology at the 2013 China International Forum on Advanced Materials and Commercialization, where a PC powered by a 2.5-inch HAMR hard drive was presented [11]. Figure 1 shows the industry projected areal density growth and the timeline when the HAMR technology is to be introduced into production.
Figure 1
Projected areal density in the following years.
CAGR, compound annual growth rates; SMR, shingled magnetic recording; BPM, bit patterned media recording which is another technology for reaching high aerial density.
The key component in HAMR is a near field transducer (NFT) for applying heat through the use of a laser on the medium. Details of the light delivery system to the NFT vary depending on the specific implementation [12–14]. A general schematic of the HAMR head is given in Figure 2A and the common elements of a light delivery system are shown in Figure 2B. The laser diode, with the wavelength near 800 nm, is coupled to a waveguide using a mode coupler. The light then propagates down the waveguide where it couples into the NFT which then radiates into the recording medium where it is converted into thermal energy. The waveguide could have a parabolic shape, the planar solid immersion mirror (PSIM), with a dual offset grating to focus the waveguide mode [12]. A thin film dielectric waveguide with a high refractive index core [13], or a metallic surface plasmon (SP) waveguide [14] are another two waveguide designs. The optical spot from NFT has to be localized to a very small dimension to achieve areal densities in the 1∼5 Tb/in2 range. For 1 Tb/in2 at a bit aspect ratio of 4.0, the track density is 500 ktpi (kilo tracks per inch) or a track pitch of 51 nm which is about 16 times smaller than the wavelength of a diode laser (∼800 nm). The NFT design is largely based on excitation of surface plasmons of a nanostructure, which re-radiate and produce a sub-diffraction-limited light spot. The much enhanced field produced by the plasmonic nanostructure is only confined in the near field and has a large divergence. However, this is not a concern in HDD, as the distance between the head and the medium is only a few nanometers during operation.
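The 51 nm track pitch quoted above follows directly from the areal-density arithmetic; a quick check:

```python
# 1 Tb/in^2 with bit aspect ratio BAR = track_pitch / bit_length = 4.0:
# BPI = 4 * TPI and BPI * TPI = 1e12, so TPI = sqrt(1e12 / 4).
areal_density = 1e12          # bits per square inch
bar = 4.0
tpi = (areal_density / bar) ** 0.5   # tracks per inch
pitch_nm = 25.4e6 / tpi              # 1 inch = 25.4e6 nm
print(tpi, pitch_nm)                 # -> 500000.0 tracks/in, 50.8 nm pitch
```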
Figure 2
(A) Schematic of a HAMR head. (B) Common blocks of the light delivery system in a HAMR head.
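The track pitch quoted above follows directly from the areal density and bit aspect ratio; a back-of-the-envelope sketch (the `bit_cell` helper and the BAR = track pitch / bit length convention are our own, for illustration):

```python
# Back-of-the-envelope bit-cell geometry for a given areal density.
# Assumption: bit aspect ratio (BAR) = track pitch / bit length.
import math

IN_TO_NM = 25.4e6  # nanometers per inch

def bit_cell(areal_density_tb_per_in2, bar):
    """Return (track_pitch_nm, bit_length_nm, track_density_ktpi)."""
    bits_per_in2 = areal_density_tb_per_in2 * 1e12
    cell_area_in2 = 1.0 / bits_per_in2               # in^2 per bit
    bit_length_in = math.sqrt(cell_area_in2 / bar)
    track_pitch_in = bar * bit_length_in
    track_density_ktpi = 1.0 / track_pitch_in / 1e3  # kilo-tracks per inch
    return (track_pitch_in * IN_TO_NM, bit_length_in * IN_TO_NM, track_density_ktpi)

pitch, length, ktpi = bit_cell(1.0, 4.0)  # 1 Tb/in^2 at BAR 4.0
print(f"track pitch {pitch:.1f} nm, bit length {length:.1f} nm, {ktpi:.0f} ktpi")
```

For 1 Tb/in2 at BAR 4.0 this reproduces the 500 ktpi (≈51 nm pitch) figure in the text.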
In this review, we focus on plasmonic NFT designs that can produce sub-diffraction-limited optical spots to increase areal density. In Section 2, the mechanisms that contribute to energy confinement and to enhancement of the coupling efficiency in NFTs are discussed. A number of designs are described and compared using an appropriate figure of merit (FOM). Associated with the localization of the optical spot is self-heating in the NFT material, which degrades NFT performance. The influence of optical properties on NFT performance, i.e., power delivery vs. self-heating, is discussed in Section 3.
## 2 Designs of plasmonic NFT
Various designs of NFT have been proposed to localize light onto the recording medium [5, 15]. The fundamental capability of an NFT is to break the diffraction limit by concentrating the optical energy into a spot much smaller than the incident laser wavelength. Along with this localization, a large field enhancement within the optical spot is also required. Apart from producing a cross-track full-width at half-maximum (FWHM) spot of <50 nm as required by the areal density, the NFT must simultaneously deliver enough power to the recording medium with the lowest possible incident laser power, to limit self-heating of the NFT. As such, the efficiency of the NFT is a key figure of merit in determining the quality of a given transducer design.
The NFT usually takes the form of an antenna, including aperture-type antennas, and many designs are based on localized surface plasmon (LSP) resonance. Unlike propagating SPs, LSPs are oscillations of surface charges bound to a finite structure such as a metallic nanoparticle or a dielectric particle surrounded by metal. Simple shapes include nanoscale spheres, disks, holes, and rectangular apertures. At the wavelength and polarization of the plasmonic resonance, the incident power couples to the structure to the maximum extent and produces a field-enhanced spot comparable to the structure dimension, which in turn couples energy to the recording medium on the same spatial scale. Simple circular and rectangular apertures have the limitation that, to obtain a small spot, the aperture dimension must be reduced, resulting in a transmission or coupling efficiency too low to be useful for HAMR. For example, based on Bethe's theory [16], the transmission through a 50 nm diameter circular aperture is <0.04% at 800 nm. Numerical simulation results shown in Figures 3A and B demonstrate that energy cannot penetrate the small aperture and that the transmitted spot size is larger than the aperture. It is known that the low transmission of a single aperture can be resolved by using an array of holes [19, 21, 22] or by adding grooves around the aperture [20, 23, 24], as shown in Figures 3C and D, respectively. This phenomenon of extraordinary optical transmission (EOT) essentially results from a combination of propagating SPs, a grating effect, and scattered evanescent fields [19], and has received significant attention. However, the overall size of array or grating structures is on the order of several wavelengths [25, 26], which makes it questionable whether EOT designs can be integrated into a recording head.
Figure 3
(A) Cross-sectional intensity and (B) transmitted electric field intensity |E|2 (5 nm from the aperture exit plane, in air) distributions for a 50 nm diameter circular aperture in a 50 nm gold film (optical properties are taken from [17]) on glass. A y-polarized plane wave at 800 nm illuminates the aperture from the glass side. Simulations are performed using a frequency-domain finite-element method (FEM) solver [18]. (C) A hole array (from [19]) and (D) groove structure (from [20]) used to achieve transmission enhancement.
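The <0.04% figure is consistent with Bethe's classic small-aperture result, which can be evaluated in a few lines (a minimal sketch; an ideal perfectly conducting screen and normalization to the geometric hole area are assumed):

```python
# Bethe's small-aperture theory: transmission efficiency of a subwavelength
# circular hole in a perfectly conducting screen, normalized to the hole area:
#   T = (64 / (27*pi^2)) * (k*a)^4,  with k = 2*pi/lambda, a = hole radius.
import math

def bethe_transmission(diameter_nm, wavelength_nm):
    a = diameter_nm / 2.0
    ka = 2.0 * math.pi * a / wavelength_nm
    return 64.0 / (27.0 * math.pi ** 2) * ka ** 4

T = bethe_transmission(50, 800)
print(f"transmission = {T:.2e} ({100 * T:.3f}%)")
```

For a 50 nm hole at 800 nm this gives roughly 0.036%, matching the <0.04% quoted above, and the fourth-power dependence on k·a shows why shrinking the aperture alone is hopeless.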
To improve the transmission or power output of a single nanostructure, many variants of simple circular and rectangular structures have been investigated, in which sharp, nanoscale tips, pins, and notches are intentionally used to take advantage of the lightning rod effect: charges accumulate at the sharpest features of the object, producing the strongest electric field. Unlike LSP resonance, the lightning rod effect is a non-resonant phenomenon, also called "non-resonant amplification" to better describe the essence of the process [27]. On the other hand, the lightning rod effect is readily combined with LSP resonance to further increase the field enhancement and confinement. The dimensions of both the resonator and the sharp feature need to be optimized to achieve the best energy coupling efficiency. Common designs include the triangle antenna [28, 29] and triangle aperture [30, 31], the C aperture [32–36], and the bowtie antenna [37–40] and bowtie aperture [15, 23, 41–44].
The bowtie and C apertures are good examples of utilizing the combined effect of resonance and non-resonant amplification. The low transmission of a regularly shaped aperture can be understood as a result of the cutoff of propagating waveguide modes; for a cylindrical waveguide, for example, cutoff occurs for diameters <0.55 λ [23]. From a waveguide point of view, an efficient approach to enhancing the transmission is to increase the cutoff wavelength [32] so that a propagating mode can be supported. One type of aperture that can be explored is the ridge aperture, which adopts the concept of the ridge waveguide from microwave engineering, where it has been widely used to increase bandwidth [45]. Both bowtie and C apertures are ridge apertures and have been extensively studied. Some numerical and experimental results for bowtie apertures are shown in Figure 4. A bowtie aperture is the counterpart of the bowtie antenna and can be formed by loading a rectangular aperture with a pair of conducting triangular ridges, forming a narrow gap in the center. Under illumination by light polarized across the gap, an LSP resonance is excited in both ridges, driving charges to the two apexes, where the lightning rod effect occurs. In a modal study [43] shown in Figure 4A, the large field intensity near the entrance and exit surfaces of the aperture demonstrates the LSP excitation and non-resonant amplification. In addition, both a characteristic TE10 waveguide mode and an SP mode can be observed in the gap between the two metallic walls, where the TE10-like mode is not cut off as in small rectangular apertures. The coupling of the two modes efficiently delivers photon energy to the other side [43], leading to enhanced transmission. To illustrate this, the near-field imaging in Figure 4B shows a peak for the bowtie aperture with a 36 nm gap at 633 nm, while the small square apertures do not transmit [43].
The larger square, even with the same opening area as the bowtie aperture, allows almost no light transmission. The 450×50 nm2 rectangular aperture supports a propagating mode, but without field confinement. These measurements directly confirm that the bowtie aperture, as a type of ridge aperture, is capable of enhancing the optical transmission at a subwavelength scale. Figure 4C shows the transmitted field enhancement as a function of wavelength for a bowtie aperture on glass. With a 20 nm gap, a 105 nm aperture in a 60 nm-thick gold film resonates at 800 nm, with an FWHM spot size of 36.5×36.5 nm2. When a bowtie aperture is placed right above a media stack with a small air gap, a sub-diffraction hot spot can be produced in the recording layer [46]. Figure 4D shows the FWHM at the surface of an 8 nm-thick FePt layer, separated from the aperture by a 4 nm gap. For a small 5 nm aperture gap, the optical spot in FePt is only 19×19 nm2 [46]. Evidently, the spot size in the recording medium is most strongly influenced by the gap size of the bowtie aperture.
Figure 4
(A) |Ey|2 (left column) and |Ez|2 (right column) intensity distributions in a 160 nm gold film illuminated by a 633 nm y-polarized light from the substrate side. Top: yz-plane; bottom: xy-plane. The simulated bowtie aperture has a dimension of 190×230 nm2, with a gap of 36 nm. (B) NSOM images of the sample in the inset. Small square: 36×36 nm2; bowtie aperture: 190×230 nm2; larger square: 136×136 nm2; rectangular aperture: 450×50 nm2. The last three apertures have about the same opening area. Adapted with permission from Ref. [43] (A and B). Copyright 2006, Springer. (C) Wavelength dependency of the field enhancement (at a point 10 nm from the center of aperture exit) for a 105 nm bowtie aperture in a 60 nm gold film with a gap size of 20 nm. Also shown is the intensity |E|2 distribution at 800 nm with a peak intensity 61 times of the incident intensity. (D) Dependence of FWHM in x and y directions on aperture gap size d for a 200 nm bowtie aperture in a 100 nm silver film. From Ref. [46].
The ability of the C aperture to support the TE10-like guided mode has been studied for various materials and with different illumination methods [34, 35]. It was pointed out in [34] that the TE mode in an aluminum C aperture hybridizes with a TM character that originates from the SPs along the metal boundaries. The wavelength-dependent peak intensity and the intensity profile at resonance of a gold C aperture with a 20×20 nm2 gap are shown in Figures 5A and B, respectively. Similar to the results in Figure 4D, the simulations shown in Figure 5 also include the recording medium [47]. At the resonant wavelength, the FWHM in the recording medium is 39×34 nm2 for an air-filled C aperture. Note that if the gap region is very small and the aperture is very wide, an unwanted elongated spot may be produced, as shown in Figure 5C, because of SP propagation along the ridges, known as channeled SP. One way to mitigate the channeled SP is to introduce a flare angle that opens the channel, resulting in a half-bowtie shaped aperture as shown in Figure 5D; for this half-bowtie aperture, the near-field distribution is confined. For bowtie, C, and half-bowtie apertures, it is quite straightforward to alter the gap dimensions s and d to generate elongated spots that match the bit aspect ratio on the recording track [46]. As an example, Figure 5E shows a spot produced by a 345 nm half-bowtie aperture with s=15 nm, d=5 nm. The spot size is about 37×16 nm2, an aspect ratio of approximately 2.3.
Figure 5
(A) Peak intensity spectrum and (B) intensity |E|2 profile at 700 nm in the recording medium for a C aperture with an outer dimension of 300×55 nm2 and a gap of 20×20 nm2 in a 100 nm gold film. In (A), the aperture is filled with different dielectric materials. Adapted with permission from Ref. [47] (A and B). Copyright 2009, Springer. Volumetric loss profile (heat spot, MW/m3) in the center plane of a FePt layer for (C) a 250×100 nm2 C aperture in silver film and (E) a 345×180 nm2 half-bowtie aperture, both with the same 15×5 nm2 gap. The geometry of the half-bowtie aperture is shown in (D). Adapted from [46] (C, D and E).
One of the NFTs designed by HGST (San Jose, CA, USA) is very similar to a C aperture antenna [13], irradiated by light polarized in the horizontal direction in Figure 6A. The orange-colored part is made of gold and forms an E-shape; this NFT is therefore also called the E antenna. The notch at the center concentrates, into a small volume, the surface charges generated through a plasmonic resonance in the body. Figure 6B compares the absorption profiles produced by the antenna with and without the notch at a wavelength of 780 nm. Without the notch, the strong absorption around the left surface of the body corresponds to a large surface charge density, indicating a plasmonic resonance supported by the body. With the assistance of the lightning rod effect produced by the notch, the FWHM spot size was reduced from more than 200 nm to <40 nm, while the peak intensity within the spot increased by a factor of 7 [13]. In practice, a magnetic pole is integrated into the open area opposite the notch, as illustrated in Figure 6A; the E-antenna is thus essentially a C aperture with a small ridge/notch surrounded by gold and pole material. The optical near field of this E-antenna has been characterized using scattering-type near-field scanning optical microscopy (s-NSOM). This method is based on oscillating the AFM tip and analyzing the collected scattering signals at higher harmonics of the tapping frequency to suppress the background noise coming from the tip shaft and sample surface [48, 49]. Figure 6C shows that an optical near-field spot of 60×42 nm2 is produced by this E antenna.
Figure 6
(A) Schematic of the E-antenna NFT integrated with pole. Geometry parameters used in [13]: Outer dimensions are 300×600 nm2 and notch dimensions are 24×36 nm2. (B) Illustration of plasmonic and lightning rod effects in the E-antenna design. Adapted with permission from [13]. (C) s-NSOM signal at the 3rd harmonic of the tapping frequency.
Another type of design utilizes plasmonic resonance in a metallic structure (instead of an aperture, as in the bowtie or C aperture antenna) together with a smaller nanostructure for further field localization and enhancement. Examples are the "lollipop" design by Seagate Technology and the "nanobeak" design by HGST. The lollipop design, indicated by the red dashed outline in Figure 7A [12], consists of a 200 nm diameter gold disk and a 15 nm (length)×50 nm (width) gold peg. The waveguide is designed to produce a vertically polarized net field at the focal point to correctly excite the NFT [12]. For numerical modeling, the NFT is placed 7.5 nm above a 12.5 nm Fe recording layer. The larger circular disk acts as the LSP resonator, and the smaller peg further localizes the optical energy via the lightning rod effect. In addition, the design takes into account the effect of the recording medium, which produces an imaging effect and introduces additional enhancement to the energy confinement and coupling. Figure 7B shows the simulated spectral coupling efficiency (discussed later), which indicates that the NFT is designed specifically for laser wavelengths near 800 nm. Figure 7C shows the optical energy profile produced by this NFT in the central plane of the Fe layer, with an FWHM of about 70 nm. The SP resonances in the lollipop, both with and without the presence of the medium, have been experimentally characterized via pump-probe photothermal measurements and by taking AFM topography of illuminated devices [51]. In addition, the disk resonator can be replaced by a rectangular resonator to enhance the excitation efficiency [52]. The nanobeak antenna [50, 53, 54] is essentially a triangular antenna with a 3D beaked apex, as shown in Figure 7D. With the lightning rod effect taking place along both in-plane and out-of-plane directions, field enhancement occurs at the tip of the beak, and 40 nm marks were successfully written onto the medium [50].
Numerical results in Figure 7E show that a flat triangle without the beak produces an FWHM of about 25×25 nm2, compared to a 15×20 nm2 spot with the beaked design. The intensity profiles are computed on the surface of the recording medium. The nanobeak antenna can also be integrated with a thin-film wing to form an SP waveguide [14].
Figure 7
(A) Cross-sectional distribution of the electric field intensity of the lollipop NFT design from Seagate, separated 7.5 nm from a recording stack, which consists of a 12.5 nm Fe, 5 nm MgO and a more than 50 nm thick Cu heat sink. White dashed lines indicate the location of the air gap and different stack layers. Red dashed lines indicate the lollipop NFT. (B) The coupling efficiency as a function of wavelength. (C) Optical absorption profile at recording layer center. Adapted with permission from [12] (A, B and C). (D) Nanobeak NFT design and (E) a comparison of intensity distributions between flat probe and beaked probe. Reused with permission from [50] (D and E).
Apart from the mechanisms used in the designs discussed above, other effects can also be applied in NFT design, such as the dual-dipole effect in two closely spaced nanoparticles [5, 15] and the Fabry-Perot effect in relatively thick films [15]. Other methods that manipulate the shape of NFTs include canted antennas or apertures [15, 55], butted-grating structures [56, 57], and tapered plasmonic waveguides [58, 59]. The last is a 3D tapered metal-insulator-metal (MIM) multilayered structure terminated in a nanometer-sized cross section, which determines the cross-track spot size in the recording medium. The fundamental mode in the MIM waveguide is supported without cutoff [58]. The idea of a plasmonic taper that produces a hot spot with a significant fraction of energy deposited at the tip was first established in [60]. In [14], the thin-film waveguide with a nanobeak antenna integrated at the end can be understood as a variant of a tapered plasmonic waveguide. Ultimately, the spot size in the medium is determined by the smallest structural dimension, which has a direct impact on the manufacturing requirements and the overall process capability for HAMR.
The optical spot size in the recording medium is often used as a figure of merit (FOM) to characterize an NFT, as discussed above for several designs. It is also commonly used for other nanofocusing devices evaluated in free space [61]. The power transmitted or scattered by the nanostructure into the desired region is another important criterion, or FOM, for evaluating NFT performance. The total diffracted power can exceed the irradiation on the open area of the aperture or the area of a nanoparticle, i.e., EOT [19–24]. One issue with the transmittance FOM is that it is often evaluated in the absence of the recording medium, whereas the optical and thermal properties of the recording medium affect how the NFT heats the medium. Therefore, a more appropriate FOM, based on the coupling efficiency, should be defined as the percentage of the total focused power dissipated as heat in the recording medium within a confined spot, as reported by Seagate Technology [12] and HGST [13]. However, establishing an FOM based on heating can be difficult, since the specific recording materials vary among companies.
In [15], a standard geometry, including a solid immersion lens (SIL) and the recording stack (10 nm cobalt layer and 105 nm gold heat sink), is used for simulating and comparing different NFT designs illuminated by a focused laser beam. The coupling efficiency is computed as the fraction of the incident optical power coupled into a 50×50 nm2 area in the cobalt layer, at the resonant wavelength of each NFT. The results indicate that the canted bowtie antenna (4.1%), beaked triangle antenna (3.4%), C aperture (2.8%), and bowtie aperture (2.3%) are promising choices for NFT. In other calculations, the coupling efficiency of a lollipop NFT is found to reach 8% at resonance, as shown in Figure 7B, considering a 70×70 nm2 region of a 12.5 nm thick Fe medium [12]. HGST's E-antenna is modeled to couple about 11.7%–14% of the waveguide optical power, depending on the pole material, to the 50 nm cobalt medium within a 50×50 nm2 footprint [13]. For the nanobeak NFT, an 8% efficiency is estimated for a 50×50 nm2 region in an 8 nm thick FePt recording layer [14]. It also needs to be noted that this coupling efficiency depends strongly on the NFT-medium separation distance because of the evanescent nature of the local fields [12, 62]. For a lollipop NFT, it has been reported that the coupling efficiency falls rapidly from 8% to only 1% as the NFT-medium separation increases from 8 nm to 20 nm [12]. Given the difficulty of designing and modeling media and of making fair comparisons between head designs, the Advanced Storage Technology Consortium (ASTC) provided a standard media stack and FOMs for modeling HAMR NFTs [63]. In addition to the thermal spot size in the media and the coupling efficiency, these FOMs include thermal efficiency, thermal gradient in the media, and normalized peak temperature [63], which are related to the heating effect in the NFT discussed in Section 3.
Another aspect worth highlighting is the variety of wave guiding and coupling schemes applied in different configurations. These are important for the overall head design, which aims at efficient light delivery from the laser to the recording medium. The laser can be directly focused onto the NFT by a conventional objective lens [35, 53] or an SIL [15, 64]. This resembles the Otto technique for exciting SPs [65]: a broad wave-vector spectrum is provided by total internal reflection and couples to the NFT film by generating SPs. As introduced earlier and illustrated in Figure 2, the waveguide linking the light condenser and the NFT takes various forms. In [12], the laser light is coupled to a transverse electric (TE) mode supported by the PSIM via a dual offset grating with an efficiency of about 50%, and then focused by the parabolic mirror to generate a vertically polarized net field at the focal point where the lollipop is located. Laser light can also be coupled to dielectric waveguides in an endfire manner, as demonstrated in [13], where the power is then guided by the lowest-order transverse magnetic (TM) mode and ∼40% arrives at the E-antenna. It has been pointed out that evanescent coupling is efficient for exciting SPs with strong confinement and is suitable for integrating plasmonic components with photonic networks [66]. This approach is also widely adopted in HAMR, and a perfect transfer of power from the dielectric to the plasmonic waveguide is achievable [67]. The plasmonic waveguide, shaped as a rectangle [68], a needle with a triangular cross section [67], or a taper [14], simultaneously acts as an NFT by guiding optical waves to the medium surface. As a modification of the needle design, a magnetic-core antenna was proposed in [69], where the interior of the plasmonic waveguide is replaced by magnetic material and forms an extension of the magnetic pole.
This improves the overlap of the magnetic field from the pole with the heating profile from the NFT [4, 5, 70], leading to better head performance. Other techniques that guide the waves to an NFT include using a tapered MIM waveguide [59] or an aperture surrounded by grooves [25, 26].
From a circuit-theory point of view, the low coupling efficiency in a HAMR system indicates an impedance mismatch between the NFT and the medium. The medium, together with the air gap, is equivalent to a terminating load [34, 59, 68]. When modeled together with the recording medium, the tapered MIM waveguide design turns out to have a very large impedance that better matches the load (air gap + medium) and thus outperforms the lollipop and the E-antenna [59]. An optimization of the media properties was carried out in [68]. Similarly, it is found that the maximum coupling efficiency is achieved when the resistance (real part of the impedance) of the plasmonic waveguide matches that of the load, while the capacitive impedance of the air gap cancels the inductive part from the medium. Although the circuit analysis neglects possible higher-order modes and simplifies the complex geometry, it provides a direction for qualitatively optimizing the system.
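The impedance-matching argument can be illustrated with the textbook power-transfer ratio between a source of impedance Zs and a load Zl (a generic circuit-theory sketch, not the actual NFT circuit model of [59, 68]; the impedance values are made up for illustration):

```python
# Fraction of the available source power delivered to a complex load:
#   eta = 4 * Rs * Rl / |Zs + Zl|^2
# This is maximized (eta = 1) at the conjugate match Zl = conj(Zs):
# resistances equal, reactances cancel.
def power_transfer(zs, zl):
    return 4.0 * zs.real * zl.real / abs(zs + zl) ** 2

zs = 30 + 40j                               # hypothetical source impedance
print(power_transfer(zs, zs.conjugate()))   # conjugate match
print(power_transfer(zs, 30 + 40j))         # reactances add: mismatch
print(power_transfer(zs, 5 - 40j))          # resistance mismatch
```

In the HAMR analogy, the air gap contributes the capacitive reactance that must cancel the inductive part of the medium, exactly as the conjugate-match condition requires.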
## 3 NFT self-heating and material choice
Because HAMR introduces thermal energy into the HDD, the performance of all elements is impacted by thermal effects. Prime concerns include instability of the slider and failure of materials [4, 5, 12, 61, 70]. Thermal expansion can cause the NFT to protrude toward the recording medium surface, requiring better control of the air gap and surface roughness to avoid contact between the NFT and the medium [12]. Temperature rise also changes the NFT-writing pole separation, which may cause variations in coupling efficiency by attenuating the resonance [12], since the writing pole is made of metal and is part of the resonant structure. Thermal modeling [71–73] has been carried out to study the dependence of the NFT temperature rise on the absorbed power, the NFT size, and the NFT-pole separation, as shown in Figure 8 [72]. The temperature increases linearly with the heat dissipated in the NFT and can rise by several hundred degrees. A recent proposal is to illuminate the medium directly from the waveguide first, producing a moderate background temperature rise before the NFT heats it locally [74]. This two-stage heating scheme reduces the local thermal load on the NFT and thus could possibly prolong its lifetime, but it also increases the possibility of interference and even erasure between adjacent tracks. Here we focus on plasmonic heating in the NFT only and on the investigation of alternative plasmonic NFT materials.
Figure 8
Temperature rise in the NFT as a function of absorbed power for different transducer sizes (W×L) and NFT-pole distances, under 50 mW input laser power, 4 nm fly height, and 6.5 m/s fly speed. A 15×10-6 m2K/W boundary thermal resistance (BTR) between the recording layer and heat sink is included in the model. A C-aperture NFT in gold film was used in this simulation. Figure from [72].
To better understand the effects of optical properties on NFT performance, we start with the equation for the dissipated power density in the NFT [75]:
$P=\frac{1}{2}\mathrm{Re}(\sigma)\,|E|^2=\frac{1}{2}\varepsilon_0\,\omega\,\mathrm{Im}(\varepsilon)\,|E|^2$ (2)
where ε0 is the vacuum permittivity and ω is the angular frequency of the laser. The relationship between the relative permittivity ε and the conductivity σ, ε = 1 + iσ/(ε0ω) [66], has been applied. Eq. (2) indicates that the dissipation in a lossy medium is determined not only by the imaginary part of the permittivity, Im(ε), but also by the peak field intensity |E|2. The heating in the NFT can therefore be minimized by reducing Im(ε) and |E|2, the latter being largely determined by the real part of the permittivity, Re(ε). However, in most cases a large magnitude of -Re(ε) [more negative Re(ε)] indicates a strong resonance, large field enhancement, and lateral energy confinement, which is favorable for NFTs in HAMR applications. There is thus an apparent trade-off between improving field localization/enhancement and minimizing heat dissipation. As a first-order estimate, considering a sphere in the quasistatic approximation, an FOM combining both parts of the permittivity can be defined to evaluate the overall performance of LSP systems [76]:
$Q_{\mathrm{LSP}}=\frac{-\mathrm{Re}(\varepsilon)}{\mathrm{Im}(\varepsilon)}.$ (3)
In other words, a desirable NFT material should have a minimized Im(ε) and a high -Re(ε), and thus a relatively large QLSP. Real and imaginary parts of the permittivities of a number of metals [17, 77], namely gold, silver, aluminum, chromium, and titanium, are shown in Figures 9A and B. The FOM QLSP, as defined in Eq. (3), is plotted in Figure 9C for comparison. For a diode laser at near-IR wavelengths, silver has the smallest Im(ε) in the range of interest and a relatively large -Re(ε), and thus the best QLSP, but it suffers from poor chemical stability, for example against possible decomposition of lubricants on the medium surface [78]. Aluminum is naturally excluded as an NFT material because of its interband transition around 800 nm, visible as the peak in its Im(ε) spectrum, and its low melting temperature. Chromium and titanium have similar properties and can hardly support plasmonic modes in the near-IR. At present, gold is widely used as the NFT material because of its chemical stability, a melting point well above the Curie point of popular recording media (∼750 K for FePt [5]), and high thermal conductivity. However, nano-structured gold suffers from poor thermal stability (high ductility) at temperatures far below its melting point. For example, stress in gold was found to start relaxing at a temperature as low as 100°C, which could be a result of highly mobile grain boundaries [79]. It needs to be pointed out that Eq. (3) is exact only for spheres in the quasistatic limit. For large particles with complex geometries, as in NFT designs for HAMR, Eq. (3) and Figure 9C could be unreliable. This issue has been discussed in [80], which suggests a generalized form of the scattering efficiency, called the near-field intensity efficiency, as a more comprehensive FOM for large scatterers in LSP applications. Additionally, a recent work [81] identified the absorption efficiency of particles as the critical FOM for local heating applications.
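Eq. (3) is trivial to evaluate once the permittivity is known; the sketch below does so for gold and silver near 800 nm (the permittivity values are approximate literature values chosen for illustration, not taken from Figure 9):

```python
# Q_LSP = -Re(eps) / Im(eps) for candidate NFT metals near 800 nm.
# Permittivity values below are approximate literature values (illustrative).
def q_lsp(eps):
    return -eps.real / eps.imag

eps_near_800nm = {
    "gold":   -24.0 + 1.5j,   # approximate value near 800 nm
    "silver": -31.0 + 0.4j,   # approximate value near 800 nm
}
for metal, eps in eps_near_800nm.items():
    print(f"{metal:6s}  Q_LSP = {q_lsp(eps):.1f}")
```

Silver's much smaller Im(ε) gives it a several-fold larger QLSP than gold, in line with the ranking in Figure 9C.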
Figure 9
(A) Real and (B) imaginary parts of the permittivities of gold, silver, aluminum, chromium, and titanium, with properties taken from [17, 77]. (C) QLSP as defined in Eq. (3) for these metals. The legend is the same for all three plots.
Recently, there has been significant interest in the search for alternative low-loss plasmonic materials [76, 82–86] and in applying these materials to SP, LSP, transformation optics, and metamaterials. Some of these alternative plasmonic materials for visible and near-infrared frequencies may offer the possibility of high-performance NFT devices. As discussed in [84], the reported alternative plasmonic materials can be loosely categorized as metallic alloys [78, 79, 82], semiconductor-based materials [76], ceramic materials [85], 2D materials such as graphene [76, 86], and organic materials [87]. Among these, metallic alloys, semiconductor-based transparent conducting oxides (TCOs), and transition-metal nitrides [84, 86, 88] can be promising for HAMR application near a wavelength of 800 nm. To turn standard semiconductors such as silicon and germanium into metallic materials near this wavelength, an ultrahigh doping level above 1021 cm-3 is required, which challenges their use as alternative plasmonic materials because of additional concerns such as the solid solubility limit, crystal defects that limit the carrier concentration, and the difficulty of maintaining high carrier mobility [76, 86]. Alloying metals with different proportions of each element creates a unique band structure that shifts the interband transition to a less critical spectral range. As shown in Figure 10A, the original bands I (centered around 667 nm) and II (centered around 333 nm) of gold can overlap at about 500 nm in the alloy, leaving the rest of the spectrum less lossy [82]. For TCOs and metal nitrides, the optical properties of thin films were characterized and fitted with a Drude+Lorentz oscillator model [84]. Figures 10B and C compare the optical properties of TCOs and metal nitrides with those of gold and silver [84]. TCOs become metallic and have the lowest loss in the near-IR compared with gold and silver; their drawback is a relatively low -Re(ε). Metal nitrides have properties comparable to gold and show potential at visible frequencies. The general guideline behind the two approaches described above is either to reduce the free-electron density in metals, since the loss in conventional noble metals is closely associated with their large free-electron density as indicated by the Drude model, or to increase the free-electron density in semiconductors and ceramics by heavy doping [83].
Figure 10
(A) Simulated spectrum of Im(ε) in pure gold and Au (96.7%):Cd (3.3%) alloy. Adapted with permission from [82]. Comparison of (B) Re(ε) and (C) Im(ε) of alternative plasmonic materials with noble metals. Nitrides and TCOs become metallic in wavelength ranges indicated by the red arrows. Reused with permission from [84].
Comparative numerical and experimental studies have also been carried out for specific structures. Figure 11 shows the maximum field enhancement, absorption, and extinction cross sections of LSP resonance modes relevant to HAMR technology. The geometry used is a nanosphere with a diameter of wavelength/10 (in the quasistatic limit), surrounded by a host material of refractive index 1.33 [86]. The materials investigated are noble metals, metal nitrides, and TCOs. The peak field enhancement shown in Figure 11A is the same as the FOM given by Eq. (3) for spheres. Similar to the conclusions drawn from Figures 10B and C, metal nitrides such as TiN and ZrN provide LSP resonances between 700 nm and 1000 nm. The peak absorption and extinction cross sections of TiN and ZrN are slightly larger than those of gold, as shown in Figures 11B and C. Overall, the nitrides are comparable to gold over the wavelength range from about 500 nm to 1000 nm. Although they do not outperform noble metals, their optical properties can be improved by tuning the deposition process. In addition, metal nitrides are attractive because of their controllability, superior thermo-mechanical properties, and chemical stability [84, 88]. For example, TiN has an extremely high melting point, above 2900°C, making it potentially useful in plasmonic thermal applications.
Figure 11
(A) Maximum field enhancement on the surface of a spherical nanoparticle and normalized absorption (B) and extinction (C) cross-sections with different materials calculated in quasistatic limit. Materials investigated here include noble metals gold and silver, metal nitrides TiN and ZrN, and TCOs (GZO, zinc oxide doped with gallium; AZO, zinc oxide doped with aluminum; ITO, indium tin oxide). Reused with permission from [86].
The search for alternative plasmonic materials should always be tied to the specific application, the spectral range of interest, and a proper choice of FOMs. The alternative materials discussed above have been investigated primarily for plasmonic and metamaterial applications in the visible and near-IR ranges. Their optical properties can be significantly affected by fabrication processes and experimental conditions, for example, substrate material and temperature, processing parameters, and the thickness of the deposited film [78, 79, 86, 88]. Figure 12 shows a parametric study of the effect of the optical constants of the NFT material on the absorption rate and the coupling efficiency [72]. The coupling efficiency is defined in the same way as in Section 2, with a medium volume of 50×50×10 nm³. The NFT used is a C aperture, the same as that for Figure 8, and both the recording medium and the waveguide were included in this model [72]. The optical constants are related to the permittivity by Re(ε) = n²−k² and Im(ε) = 2nk; as n decreases, Re(ε) remains almost unchanged since n is typically much smaller than k, while Im(ε) decreases. With a reduced n, absorption drops rapidly and the coupling efficiency increases as expected. A smaller k leads to a smaller Im(ε), but higher absorption is observed, which could be caused by a stronger resonance at which the absorption increases with |E|² [72]. The dependence of the coupling efficiency on k is somewhat more complicated because both Re(ε) and Im(ε) vary simultaneously. In [88], a nanorod NFT made of TiN (n=0.99, k=3.6 at 830 nm) was modeled, showing a 1.1% coupling efficiency, which is about one third of that of a gold nanorod NFT.
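The conversion between optical constants and permittivity used above follows from ε = (n + ik)². A quick check with the TiN values quoted from [88] (n = 0.99, k = 3.6 at 830 nm):

```python
def nk_to_eps(n, k):
    """Complex relative permittivity from optical constants n and k."""
    return complex(n, k) ** 2   # Re(eps) = n**2 - k**2, Im(eps) = 2*n*k

eps_tin = nk_to_eps(0.99, 3.6)  # TiN at 830 nm, values quoted in the text
assert abs(eps_tin.real - (0.99**2 - 3.6**2)) < 1e-12
assert abs(eps_tin.imag - 2 * 0.99 * 3.6) < 1e-12
# Re(eps) is about -12 (metallic) while Im(eps) is about 7.1 (lossy),
# consistent with TiN being a workable but imperfect gold substitute.
```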
Figure 12
Effects of optical constants of NFT on absorption rate and efficiency (from [72]).
Because of the complexity of the HAMR system, optical properties alone are not sufficient to determine the NFT performance; other properties, in particular mechanical properties, also need to be considered. Studies have been performed to characterize alternative plasmonic materials, for example, gold alloys [78] and silver alloys [79]. The hardness of gold can be enhanced by ∼32% when doped with copper at a concentration of 10.3%. The corresponding FOM, defined as 3|Re(ε)|/Im(ε), is about 30 at a wavelength of 830 nm, while the FOM of pure gold is about 43 at the same wavelength [78]. In [79], it was found that the resistance to grain growth in an AgPd alloy improves with increasing palladium concentration, which helps prevent plastic deformation. A 100∼150 nm thick AgPd (5.8 at% Pd) film provides roughly a two-fold increase in hardness relative to pure gold; a thermal conductivity of 160 W/(mK), larger than that of gold and silver under the same conditions; and, most importantly, a FOM close to 30 at 830 nm [79]. Thus, gold and silver alloys are promising NFT materials, providing improved hardness and higher stress-relaxation and creep resistance while retaining acceptable optical properties. To find the best alternative NFT material for HAMR, optical and thermo-mechanical properties, as well as fabrication and integration issues, must be considered together.
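The FOM 3|Re(ε)|/Im(ε) is straightforward to evaluate from tabulated n and k. The sketch below uses approximate optical constants for gold near 830 nm (n ≈ 0.19, k ≈ 5.1, roughly the Johnson and Christy values), so it only approximately reproduces the FOM ≈ 43 quoted in [78]; the alloy values are hypothetical:

```python
def fom(n, k):
    """FOM = 3*|Re(eps)|/Im(eps) with eps = (n + 1j*k)**2."""
    eps = complex(n, k) ** 2
    return 3 * abs(eps.real) / eps.imag

# Approximate Au optical constants at 830 nm (roughly Johnson & Christy).
fom_gold = fom(0.19, 5.1)

# A hypothetical alloy with the same k but a larger n has a larger Im(eps),
# trading optical FOM for mechanical hardness, as with the alloys above.
fom_alloy = fom(0.30, 5.1)
assert fom_alloy < fom_gold
```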
To conclude this section, we note that the widely used multi-physics model for simulating the electromagnetic-thermal coupling in HAMR is questionable, as examined in [89]; the origin of the problem lies in the failure of the macroscopic Maxwell equations and constitutive relations for nanoscale systems. The interaction of the highly focused laser beam with metallic materials induces non-linear and non-local effects, so different zones for energy penetration have to be considered. In addition, the conventional Joule's law expressed by Eq. (3) and the heat conduction equation are not applicable at the transducer surface. SP oscillations in NFTs improve the coupling efficiency of a HAMR system, but also accelerate component failure. It is pointed out in [89] that rethinking the local heating process in the NFT helps to explain the short, lower-than-expected lifetime of NFTs.
## 4 Conclusions
An areal density of 1 Tb/in² is estimated to be the limit for present HDD products using the PMR technology, due to the requirements of thermal stability and the available magnetic write fields. By incorporating thermal energy into the head to locally reduce the coercivity of the medium in a sub-diffraction-limited area, HAMR has become one of the most promising technologies for continuing to increase the areal density. The key component in HAMR is an NFT that must deliver a sufficient fraction of the incident optical energy into the recording medium within a region far below the diffraction limit. NFTs based on nanoantennas and nanoapertures take advantage of various underlying physics, including resonant and non-resonant amplification, to achieve sufficient spatial resolution and coupling efficiency. On the other hand, self-heating in the NFT is a concern for its performance: less dissipation in the NFT and more power coupled into the recording medium are desirable. The discovery of low-loss plasmonic materials can open up possibilities for better devices for the development of the HAMR technology.
## Acknowledgments
N.Z. and X.X. acknowledge the support from the Defense Advanced Research Projects Agency (Grant No. N66001-08-1-2037), the National Science Foundation (Grant No. CMMI-1120577), and the Advanced Storage Technology Consortium (ASTC).
## References
• [1]
Rausch T, Trantham JD, Chu AS, Dakroub HD, Riddering JW, Henry CP, Kiely JD, Gage EC, Dykes JW. HAMR drive performance and integration challenges. IEEE T Magn 2013;49:730–3.
• [2]
Toshiba press release. Toshiba Leads Industry in Bringing Perpendicular Data Recording to HDD – Sets New Record for Storage Capacity with Two New HDDs. http://www.toshiba.co.jp/about/press/2004_12/pr1401.htm.
• [3]
Sharrock MP. Time-dependent magnetic phenomena and particle-size effects in recording media. IEEE T Magn 1990;26:193–7.
• [4]
Kryder MH, Gage EC, McDaniel TW, Challener WA, Rottmayer RE, Ju G, Hsia Y-T, Erden MF. Heat assisted magnetic recording. P IEEE 2008;96:1810–35.
• [5]
Ju G, Challener W, Peng Y, Seigler M, Gage E. Developments in data storage: materials perspective. John Wiley & Sons, Inc. 2011; Chapter 10:193–222.
• [6]
Seagate press release. Seagate Reaches 1 Terabit Per Square Inch Milestone In Hard Drive Storage With New Technology Demonstration. http://www.seagate.com/about/newsroom/press-releases/terabit-milestone-storage-seagate-master-pr/.
• [7]
Wu AQ, Kubota Y, Klemmer T, Rausch T, Chubing P, Yingguo P, Karns D, Xiaobin Z, Yinfeng D, Chang EKC, Yongjun Z, Hua Z, Kaizhong G, Thiele J-U, Seigler M, Ganping J, Gage E. HAMR areal density demonstration of 1+ Tbpsi on spinstand. IEEE T Magn 2013;49:779–82.
• [8]
Wang X, Gao K, Zhou H, Itagi A, Seigler M, Gage E. HAMR recording limitations and extendibility. IEEE T Magn 2013;49:686–92.
• [9]
Seagate press release. Seagate To Demo Its Revolutionary Heat Assisted Magnetic Recording Storage Technology At CEATEC 2013. http://www.seagate.com/about/newsroom/press-releases/HMR-demo-ceatec-2013-pr-master/.
• [10]
CEATEC JAPAN News, Key Technology Stage. http://www.ceatec.com/news/en-webmagazine/e023.
• [11]
WD press release. WD Demonstrates Heat Assisted Magnetic Recording Hard Drive Technology at 2013 China (Ningbo) International Forum on Advanced Materials and Commercialization. http://www.wdc.com/en/company/pressroom/releases/?release=dc8e1c07-6a5b-48ce-b931-e090e566da29.
• [12]
Challener WA, Peng C, Itagi AV, Karns D, Peng W, Peng Y, Yang X, Zhu X, Gokemeijer NJ, Hsia Y-T, Ju G, Rottmayer RE, Seigler MA, Gage EC. Heat-assisted magnetic recording by a near-field transducer with efficient optical energy transfer. Nat Photon 2009;3:220–4.
• [13]
Stipe BC, Strand TC, Poon CC, Balamane H, Boone TD, Katine JA, Li J-L, Rawat V, Nemoto H, Hirotsune A, Hellwig O, Ruiz R, Dobisz E, Kercher DS, Robertson N, Albrecht TR, Terris BD. Magnetic recording at 1.5 Pb m-2 using an integrated plasmonic antenna. Nat Photon 2010;4:484–8.
• [14]
Matsumoto T, Akagi F, Mochizuki M, Miyamoto H, Stipe B. Integrated head design using a nanobeak antenna for thermally assisted magnetic recording. Opt Exp 2012;20:18946–54.
• [15]
Challener WA, Gage E, Itagi A, Peng C. Optical transducers for near field recording. Jpn J Appl Phys 2006;45:6632–42.
• [16]
Bethe H. Theory of diffraction by small holes. Phys Rev 1944;66:163–82.
• [17]
Johnson PB, Christy RW. Optical constants of noble metals. Phys Rev B 1972;6:4370–79.
• [18]
HFSS 15.1, Ansoft LLC 2012.
• [19]
Lezec HJ, Thio T. Diffracted evanescent wave model for enhanced and suppressed optical transmission through subwavelength hole arrays. Opt Exp 2004;12:3629–51.
• [20]
Lezec HJ, Degiron A, Devaux E, Linke RA, Martin-Moreno L, Garcia-Vidal FJ, Ebbesen TW. Beaming light from a subwavelength aperture. Science 2002;297:820–2.
• [21]
Ebbesen TW, Lezec HJ, Ghaemi HF, Thio T, Wolff PA. Extraordinary optical transmission through sub-wavelength hole arrays. Nature 1998;391:667–9.
• [22]
Beijnum F, Retif C, Smiet CB, Liu H, Lalanne P, Exter MP. Quasi-cylindrical wave contribution in experiments on extraordinary optical transmission. Nature 2012;492:411–4.
• [23]
Kinzel EC, Srisungsitthisunti P, Li Y, Raman A, Xu X. Extraordinary transmission from high-gain nanoaperture antennas. Appl Phys Lett 2010;96:211116-1–3.
• [24]
Carretero-Palacios S, Mahboub O, Garcia-Vidal FJ, Martin-Moreno L, Rodrigo SG, Genet C, Ebbesen TW. Mechanisms for extraordinary optical transmission through bull’s eye structures. Opt Exp 2011;19:10429–42.
• [25]
Srituravanich W, Pan L, Wang Y, Sun C, Bogy DB, Zhang X. Flying plasmonic lens in the near field for high-speed nanolithography. Nat Nanotechnol 2008;3:733–7.
• [26]
Kim H, Sohn J, Lee M, Lee B, Suh S, Cho E. Heat-assisted magnetic recording head and method of manufacturing the same. Samsung Electronics Co., Ltd., US Patent No. 7710686 B2, 2010.
• [27]
Ermushev AV, Mchedlishvili BV, Oleinikow VA, Petukhow AV. Surface enhancement of local optical fields and the lightning-rod effect. Quantum Electron 1993;23:435–40.
• [28]
Crozier KB, Sundaramuerthy A, Kino GS, Quate CF. Optical antennas: resonators for local field enhancement. J of Appl Phys 2003;94:4632–42.
• [29]
Osawa K, Sekine K, Saka M, Nishida N, Hatano H. Optical TAMR head design for placing a heating spot close to a magnetic pole. J Magn Soc Jpn 2009;33:503–6.
• [30]
Hirata M, Park M, Oumi M, Nakajima K, Ohkubo T. Near-field optical flying head with a triangle aperture. Presented in MORIS Tech Dig 2007;35–6.
• [31]
Hirata M, Tanabe S, Oumi M, Park M, Chiba N, Gonzaga LV, Yu S, Zhang M, Tjiptoharsono F. Light delivery system for heat-assisted magnetic recording. IEEE T Magn 2009;45: 5016–21.
• [32]
Shi X, Thornton RL, Hesselink L. Nano-aperture with 1000x power throughput enhancement for very small aperture laser system (VSAL). Proc of SPIE 2002;4342:320–7.
• [33]
Shi X, Hesselink L, Thornton RL. Ultrahigh light transmission through a C-shaped nanoaperture. Opt Lett 2003; 28:1320–2.
• [34]
Itagi AV, Stancil DD, Bain JA, Schlesinger TE. Ridge waveguide as a near-field optical source. Appl Phys Lett 2003;22:4474–6.
• [35]
Peng C, Jin EX, Clinton TW, Seigler MA. Cutoff wavelength of ridge waveguide near field transducer for disk data storage. Opt Exp 2008;16:16043–51.
• [36]
Sendur K. Perpendicular oriented single-pole nano-optical transducer. Opt Exp 2010;18:4920–30.
• [37]
Grober RD, Schoelkopf RJ, Prober ED. Optical antenna: Towards a unity efficiency near-field optical probe. Appl Phys Lett 1997;70:1354–6.
• [38]
Kim BJ, Flamma JW, Ten Have ES, Garcia-Parajo MF, Van Hulst NF, Brugger J. Moulded photoplastic probes for near-field optical applications. J Micros 2001;202:16–21.
• [39]
Sendur K, Challener W. Near-field radiation of bow-tie antennas and apertures at optical frequencies. J Micros 2003;210:279–83.
• [40]
Challener WA, McDaniel TW, Mihalcea CD, Mountfield KR, Pelhos K, Sendur IK. Light delivery techniques for heat-assisted magnetic recordings. Jpn J Appl Phys 2003;42:981–8.
• [41]
Jin EX, Xu X. Finite-difference time-domain studies on optical transmission through planar nano-apertures in a metal film. Jpn J Appl Phys 2004;43:407–17.
• [42]
Jin EX, Xu X. Enhanced optical near field from a bowtie aperture. Appl Phys Lett 2006;88:153110–2.
• [43]
Jin EX, Xu X. Plasmonic effects in near-field optical transmission enhancement through a single bowtie-shaped aperture. Appl Phys B 2006;84:3–9.
• [44]
Wang L, Xu X. High transmission nanoscale bowtie-shaped aperture probe for near-field optical imaging. Appl Phys Lett 2007;90:261105-1–3.
• [45]
Pozar DM. Microwave engineering. New York, Wiley, 1998.
• [46]
Zhou N, Kinzel EC, Xu X. Nanoscale ridge aperture as near-field transducer for heat-assisted magnetic recording. Appl Opt 2011;50:G42–6.
• [47]
Challener WA, Itagi AV. Near-field optics for heat-assisted magnetic recording (Experiment, Theory, and Modeling). Mod Aspect Electroc 2009;44:53–111.
• [48]
Keilmann F, Hillenbrand R. Near-field microscopy by elastic light scattering from a tip. Phil Trans R Soc Lond A 2004;362:787–805.
• [49]
Atkin JM, Berweger S, Jones AC, Raschke MB. Nano-optical imaging and spectroscopy of order, phases, and domains in complex solids. Adv in Phys 2012;61:745–842.
• [50]
Matsumoto T, Anzai Y, Shintani T, Nakamura K, Nishida T. Writing 40 nm marks by using a beaked metallic plate near-field optical probe. Opt Lett 2006;31:259–61.
• [51]
Peng C, Challener WA, Itagi A, Seigler M, Gage EC. Surface-plasmon resonance characterization of a near-field transducer. IEEE T Magn 2012;48:1801–6.
• [52]
Peng C. Efficient excitation of a monopole optical transducer for near-field recording. J of Appl Phys 2012;112:043108-1–6.
• [53]
Matsumoto T, Nakamura K, Nishida T, Hieda H, Kikitsu A, Naito K, Koda T. Thermally assisted magnetic recording on a bit-patterned medium by using a near-field optical head with a beaked metallic plate. Appl Phys Lett 2008;93:031108-1–3.
• [54]
Ashizawa Y, Ota T, Tamura K, Nakagawa K. Highly efficient waveguide using surface plasmon polaritons for thermally assisted magnetic recording. J Magn Soc Jpn 2013;37:111–4.
• [55]
Farahani JN, Eisler HJ, Pohl DW, Pavius M, Fluckiger P, Gasser P, Hecht B. Bow-tie optical antenna probes for single-emitter scanning near-field optical microscopy. Nanotechnology 2007;18:125506-1–4.
• [56]
Hasegawa S, Tawa F. Generation of nanosized optical beams by use of butted gratings with small numbers of periods. Appl Opt 2004;43:3085–96.
• [57]
Tawa F, Hasegawa S, Odajima W. Optical head with a butted-grating structure that generates a subwavelength spot for laser-assisted magnetic recording. J Appl Phys 2007;101:09H503-1–3.
• [58]
Bao W, Melli M, Caseli N, Riboli F, Wiersma DS, Staffaroni M, Choo H, Ogletree DF, Aloni S, Bokor J, Cabrini S, Intonti F, Salmeron MB, Yablonovitch E, Schuck PJ, Weber-Bargioni A. Mapping local charge recombination heterogeneity by multidimensional nanospectroscopic imaging. Science 2012;338:1317–21.
• [59]
Staffaroni M. Circuit analysis in metal-optics, theory and applications. (PhD Dissertation, University of California at Berkeley, 2011).
• [60]
Stockman ML. Nanofocusing of optical energy in tapered plasmonic waveguides. Phys Rev Lett 2004;93:137404-1–4.
• [61]
Lindquist NC, Jose J, Cherukulappurath S, Chen X, Johnson TW, Oh S-H. Tip-based plasmonics: squeezing light with metallic nanoprobes. Laser & Photon Rev 2013;7:453–77.
• [62]
Sendur K, Jones P. Effect of fly height and refractive index on the transmission efficiency of near-field optical transducers. Appl Phys Lett 2006;88:091110-1–3.
• [63]
ASTC Public Documents. ASTC HAMR Reference Media Stack for NFT Modeling and NFT FOM. http://www.idema.org/?page_id=2269.
• [64]
Sendur K, Peng C, Challener W. Near-field radiation from a ridge waveguide transducer in the vicinity of a solid immersion lens. Phys Rev Lett 2005;94:043901-1–4.
• [65]
Otto A. Excitation of nonradiative surface plasma waves in silver by the method of frustrated total reflection. Z Phys 1968;216:398–410.
• [66]
Maier SA. Plasmonics: fundamentals and applications. New York, Springer, 2007.
• [67]
Kong Y, Chabalko M, Black E, Powell S, Bain JA, Schlesinger TE, Luo Y. Evanescent coupling between dielectric and plasmonic waveguides for HAMR applications. IEEE T Magn 2011;47:2364–7.
• [68]
Powell SP, Black EJ, Schlesinger TE, Bain JA. The influence of media optical properties on the efficiency of optical power delivery for heat assisted magnetic recording. J Appl Phys 2011;109:07B775-1–3.
• [69]
Zhou Y, Jin X, Takano K, Dovek M, Maletzky T, Schreck E, Smyth J. Magnetic core plasmon antenna with recessed plasmon layer. Headway Technologies, Inc., US Patent No. 8059496B1, 2011.
• [70]
Rottmayer RE, Batra S, Buechel D, Challener WA, Hohlfeld J, Kubota Y, Li L, Lu B, Mihalcea C, Mountfield K, Pelhos K, Peng C, Rausch T, Seigler MA, Weller D, Yang X. Heat-assisted magnetic recording. IEEE T Magn 2006;42:2417–21.
• [71]
Stipe B, Brockie R, Richter H, Matsumoto T, Boone T, Zaki R, Huang L, Staffaroni M, et al. Optimizing heat-assisted magnetic recording and FePt-based recording media. Presented at the Magnetic Recording Conference (TMRC) 2013, paper F4.
• [72]
Xu B, Toh YT, Chia CW, Li J, Zhang J, Ye K, An C. Relationship between near field optical transducer laser absorption and its efficiency. IEEE T Magn 2012;48:1789–93.
• [73]
Xu BX, Liu ZJ, Ji R, Toh YT, Hu JF, Li JM, Zhang J, Ye KD, Chia CW. Thermal issues and their effects on heat-assisted magnetic recording system. J Appl Phys 2012;111:07B701-1–6.
• [74]
Xiong S, Kim J, Wang Y, Zhang X, Bogy D. A two-stage heating scheme for heat assisted magnetic recording. J Appl Phys 2014;115:17B702-1–3.
• [75]
Jackson JD. Classical electrodynamics. 2nd ed. New York, Wiley, 1975.
• [76]
West PR, Ishii S, Naik GV, Emani NK, Shalaev VM, Boltasseva A. Searching for better plasmonic materials. Laser & Photon Rev 2010;4:795–808.
• [77]
Johnson PB, Christy RW. Optical constants of transition metals: Ti, V, Cr, Mn, Fe, Co, Ni, and Pd. Phys Rev B 1974;9:5056–70.
• [78]
Zhu M, Zhao T, Riemer SC, Kautzky MC. HAMR NFT Materials with improved thermal stability. Seagate Technology LLC, US Patent No. 2013/0286799 A1, 2013.
• [79]
Zhao T, Kautzky MC, Challener WA, Seigler MA. HAMR NFT materials with improved thermal stability. Seagate Technology LLC, US Patent No. 8427925 B2, 2013.
• [80]
Guler U, Naik GV, Boltasseva A, Shalaev VM, Kildishev AV. Performance analysis of nitride alternative plasmonic materials for localized surface plasmon applications. Appl Phys B 2012;107:285–91.
• [81]
Guler U, Ndukaife JC, Naik GV, Nnanna AGA, Kildishev AV, Shalaev VM, Boltasseva A. Local heating with lithographically fabricated plasmonic titanium nitride nanoparticles. Nano Lett 2013;13:6078–83.
• [82]
Bobb DA, Zhu G, Mayy M, Gavrilenko AV, Mead P, Gavrilenko VI, Noginov MA. Engineering of low-loss metal for nanoplasmonic and metamaterials applications. Appl Phys Lett 2009;95:151102-1–3.
• [83]
Boltasseva A, Atwater HA. Low-loss plasmonic metamaterials. Science 2011;331:290–1.
• [84]
Naik GV, Kim J, Boltasseva A. Oxides and nitrides as alternative plasmonic materials in the optical range. Opt Mater Exp 2011;1:1090–9.
• [85]
Naik GV, Liu J, Kildishev AV, Shalaev VM, Boltasseva A. Demonstration of Al:ZnO as a plasmonic component for near-infrared metamaterials. PNAS 2012;109:8834–8.
• [86]
Naik GV, Shalaev VM, Boltasseva A. Alternative plasmonic materials: beyond gold and silver. Adv Mater 2013;25:3264–94.
• [87]
Zhu G, Gu L, Kitur JK, Urbas A, Wella J, Noginov MA. Organic materials with negative and controllable electric permittivity. Quantum Electronics and Laser Science Conference 2011, paper QThC3.
• [88]
Zhao T, Sahoo S, Kautzky MC, Itagi AV. Near field transducers including nitride materials. Seagate Technology LLC, US Patent No. 2013/0279315 A1, 2013.
• [89]
Budaev BV, Bogy DB. On the lifetime of plasmonic transducers in heat assisted magnetic recording. J Appl Phys 2012;112:034512-1–10.
Corresponding author: Xianfan Xu, School of Mechanical Engineering and Birck Nanotechnology Center, Purdue University, West Lafayette, IN 47906, USA, e-mail:
Accepted: 2014-05-08
Published Online: 2014-05-27
Published in Print: 2014-06-01
Citation Information: Nanophotonics, Volume 3, Issue 3, Pages 141–155, ISSN (Online) 2192-8614, ISSN (Print) 2192-8606,
©2014 Science Wise Publishing & De Gruyter Berlin/Boston.
https://zenodo.org/record/3758648/export/xd | Software Open Access
# UKRMol+: UKRMol-out
UK R-matrix community
### Dublin Core Export
<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>UK R-matrix community</dc:creator>
<dc:date>2020-04-20</dc:date>
<dc:description>Outer region programs for the reengineered UK computational implementation of the R-matrix method for the treatment of electron and positron scattering from molecules (BTO/GTO continuum). Also calculates photoionization cross sections.
This version corrects 3 bugs in the program dipelm:
Use of correct electron energies with Eleft /= 0 to avoid ismooth = 1 giving unphysical results.
Correct error text in oriented_observables so that when an error occurs while reading the namelist ORIENT, the message is correct.
Fix spherical harmonic normalization: a typo in the definition of spherical harmonic normalization and usage of complex spherical harmonics when they should be real spherical harmonics affected oriented dipoles and cross sections, and has now been corrected.
For a complete list of the authors who contributed to this software see https://www.ukamor.com/ and a file in the release tarball (after release 3.0).
Features of release 3.0:
New version of DIPELM:
selection of a non-contiguous subset of states for which to calculate observables.
output can now be chosen so that: (1) each observable has its own file containing the results for all ionic states; or (2) the observables for each state are saved into individual files, one per ionic state
rationalization of the output for oriented molecules.
now conforms to Fortran 2003 standard.
Enabled CMake "install" target
New test suite, run with CMake. Now includes RMT data production, positron and pseudostate tests.
added one standalone executable per outer region module
Updated documentation
New program to calculate rates from the cross sections
Merging of RSOLVE related codes: now RSOLVE includes RSOLVE_PHOTO
New MPI_RSOLVE, the parallel equivalent of RSOLVE.
Renamed dipoles_for_hhg to dipole_tools
Use of integrals generated by SCATCI_INTEGRALS is now default.
enabled building of shared libraries (including DLLs on Windows)
support for arbitrary BLAS/LAPACK/Arpack/ScaLAPACK integer interface
support for MPI-3 shared memory (automatically detected)
removed language elements illegal in Fortran 2018
reduced photoionization test
compatibility with Cray CE 8.7.7
This version uses GBTOlib 2.0.</dc:description>
<dc:description>Software development supported by EPSRC, CCPQ, UK-AMOR and others.</dc:description>
<dc:identifier>https://zenodo.org/record/3758648</dc:identifier>
<dc:identifier>10.5281/zenodo.3758648</dc:identifier>
<dc:identifier>oai:zenodo.org:3758648</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>doi:10.5281/zenodo.2630570</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:subject>electron scattering, photoionization</dc:subject>
<dc:title>UKRMol+: UKRMol-out</dc:title>
<dc:type>info:eu-repo/semantics/other</dc:type>
<dc:type>software</dc:type>
</oai_dc:dc>
https://dlmf.nist.gov/3.8 | # §3.8 Nonlinear Equations
## §3.8(i) Introduction
The equation to be solved is
3.8.1 $f(z)=0,$
where $z$ is a real or complex variable and the function $f$ is nonlinear. Solutions are called roots of the equation, or zeros of $f$. If $f(z_{0})=0$ and $f^{\prime}(z_{0})\neq 0$, then $z_{0}$ is a simple zero of $f$. If $f(z_{0})=f^{\prime}(z_{0})=\cdots=f^{(m-1)}(z_{0})=0$ and $f^{(m)}(z_{0})\neq 0$, then $z_{0}$ is a zero of $f$ of multiplicity $m$; compare §1.10(i).
Sometimes the equation takes the form
3.8.2 $z=\phi(z),$
and the solutions are called fixed points of $\phi$.
Equations (3.8.1) and (3.8.2) are usually solved by iterative methods. Let $z_{1},z_{2},\dots$ be a sequence of approximations to a root, or fixed point, $\zeta$. If
3.8.3 $\left|z_{n+1}-\zeta\right|\leq A\left|z_{n}-\zeta\right|^{p}$
for all $n$ sufficiently large, where $A$ and $p$ are independent of $n$, then the sequence is said to have convergence of the $p$th order. (More precisely, $p$ is the largest of the possible set of indices for (3.8.3).) If $p=1$ and $A<1$, then the convergence is said to be linear or geometric. If $p=2$, then the convergence is quadratic; if $p=3$, then the convergence is cubic, and so on.
An iterative method converges locally to a solution $\zeta$ if there exists a neighborhood $N$ of $\zeta$ such that $z_{n}\to\zeta$ whenever the initial approximation $z_{0}$ lies within $N$.
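These definitions are easy to check numerically. Since $\log\left|z_{n+1}-\zeta\right|\approx\log A+p\log\left|z_{n}-\zeta\right|$, the ratio of successive logarithms of the errors tends to $p$. The following Python sketch (an illustration added to this text, not part of the DLMF material; the example function and iteration count are choices made here) estimates the order for the Newton iteration of §3.8(ii) applied to $f(z)=z^{2}-2$, for which $p=2$:

```python
import math

# Empirical order estimate for Newton's method on f(z) = z**2 - 2;
# zeta = sqrt(2), and the error roughly squares at each step.
zeta = 2 ** 0.5
z = 1.0
errors = []
for _ in range(5):
    z = z - (z * z - 2) / (2 * z)   # Newton step
    errors.append(abs(z - zeta))

# log e_{n+1} / log e_n -> p for p-th order convergence (here p = 2)
p_est = math.log(errors[3]) / math.log(errors[2])
```

The estimate is taken from the middle of the error sequence, before rounding error dominates.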
## §3.8(ii) Newton’s Rule
This is an iterative method for real twice-continuously differentiable, or complex analytic, functions:
3.8.4 $z_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime}(z_{n})},$ $n=0,1,\dots$.
If $\zeta$ is a simple zero, then the iteration converges locally and quadratically. For multiple zeros the convergence is linear, but if the multiplicity $m$ is known then quadratic convergence can be restored by multiplying the ratio $f(z_{n})/f^{\prime}(z_{n})$ in (3.8.4) by $m$.
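A minimal Python sketch of (3.8.4), including the multiplicity correction just described (the tolerances and example functions are choices made here for illustration, not taken from the DLMF):

```python
def newton(f, fprime, z0, m=1, tol=1e-12, maxit=50):
    """Newton's rule (3.8.4); the step is multiplied by a known
    multiplicity m to restore quadratic convergence at a multiple zero."""
    z = z0
    for _ in range(maxit):
        fz = f(z)
        if fz == 0:
            break
        step = m * fz / fprime(z)
        z -= step
        if abs(step) < tol * max(1.0, abs(z)):
            break
    return z

# Simple zero: f(x) = x**2 - 2 converges quadratically to sqrt(2).
root_simple = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# Double zero: f(x) = (x - 1)**2 with m = 2 recovers quadratic convergence
# (here the corrected step even lands on the zero exactly).
root_double = newton(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), 2.0, m=2)
```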
For real functions $f(x)$ the sequence of approximations to a real zero $\xi$ will always converge (and converge quadratically) if either:
• (a)
$f(x_{0})f^{\prime\prime}(x_{0})>0$ and $f^{\prime}(x)$, $f^{\prime\prime}(x)$ do not change sign between $x_{0}$ and $\xi$ (monotonic convergence).
• (b)
$f(x_{0})f^{\prime\prime}(x_{0})<0$, $f^{\prime}(x)$, $f^{\prime\prime}(x)$ do not change sign in the interval $(x_{0},x_{1})$, and $\xi\in[x_{0},x_{1}]$ (monotonic convergence after the first iteration).
### Example
$f(x)=x-\tan x$. The first positive zero of $f(x)$ lies in the interval $(\pi,\frac{3}{2}\pi)$; see Figure 4.15.3. From this graph we estimate an initial value $x_{0}=4.65$. Newton’s rule is given by
3.8.5 $x_{n+1}=\phi(x_{n}),\qquad\phi(x)=x+x\cot^{2}x-\cot x.$
Results appear in Table 3.8.1. The choice of $x_{0}$ here is critical. When $x_{0}\leq 4.2875$ or $x_{0}\geq 4.7125$, Newton’s rule does not converge to the required zero. The convergence is faster when we use instead the function $f(x)=x\cos x-\sin x$; in addition, the successful interval for the starting value $x_{0}$ is larger.
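The iteration for this example can be replayed in a few lines of Python (an illustration added here; the starting value $4.65$ and the better-conditioned function $f(x)=x\cos x-\sin x$ are those suggested in the text):

```python
import math

def newton(f, fprime, x, tol=1e-14, maxit=30):
    # Plain Newton's rule (3.8.4) for a real function.
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x cos x - sin x has the same positive zeros as x - tan x,
# and its derivative is f'(x) = -x sin x.
f = lambda x: x * math.cos(x) - math.sin(x)
fprime = lambda x: -x * math.sin(x)

root = newton(f, fprime, 4.65)   # first positive solution of x = tan x
```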
## §3.8(iii) Other Methods
### Bisection Method
If $f(a)f(b)<0$ with $a<b$, then the interval $[a,b]$ contains one or more zeros of $f$. Bisection of this interval is used to decide where at least one zero is located. All zeros of $f$ in the original interval $[a,b]$ can be computed to any predetermined accuracy. Convergence is slow, however; see Kaufman and Lenker (1986) and Nievergelt (1995).
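A straightforward bisection sketch in Python (the example function and tolerance are choices made here for illustration):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection on [a, b]; requires f(a) and f(b) of opposite signs."""
    fa = f(a)
    if fa * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m          # a zero lies in [a, m]
        else:
            a, fa = m, fm  # a zero lies in [m, b]
    return 0.5 * (a + b)

# The real zero of x**3 - x - 2 lies in [1, 2]: f(1) = -2, f(2) = 4.
root = bisect(lambda x: x ** 3 - x - 2, 1.0, 2.0)
```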
### Regula Falsi
Let $x_{0}$ and $x_{1}$ be such that $f_{0}=f(x_{0})$ and $f_{1}=f(x_{1})$ have opposite signs. Inverse linear interpolation (§3.3(v)) is used to obtain the first approximation:
3.8.6 $x_{2}=x_{1}-\frac{x_{1}-x_{0}}{f_{1}-f_{0}}f_{1}=\frac{f_{1}x_{0}-f_{0}x_{1}}{f_{1}-f_{0}}.$
We continue with $x_{2}$ and either $x_{0}$ or $x_{1}$, depending which of $f_{0}$ and $f_{1}$ is of opposite sign to $f(x_{2})$, and so on. The convergence is linear, and again more than one zero may occur in the original interval $[x_{0},x_{1}]$.
### Secant Method
Whether or not $f_{0}$ and $f_{1}$ have opposite signs, $x_{2}$ is computed as in (3.8.6). If the wanted zero $\xi$ is simple, then the method converges locally with order of convergence $p=\frac{1}{2}(1+\sqrt{5})=1.618\ldots\,$. Because the method requires only one function evaluation per iteration, its numerical efficiency is ultimately higher than that of Newton’s method. There is no guaranteed convergence: the first approximation $x_{2}$ may be outside $[x_{0},x_{1}]$.
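A Python sketch of the secant method built on (3.8.6) (the stopping rule and example function are choices made here):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:
            break
        # (3.8.6): the zero of the line through (x0, f0) and (x1, f1)
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)   # converges to sqrt(2)
```

Note that, unlike regula falsi, the two retained points need not bracket the zero.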
### Steffensen’s Method
This iterative method for solving $z=\phi(z)$ is given by
3.8.7 $z_{n+1}=z_{n}-\frac{(\phi(z_{n})-z_{n})^{2}}{\phi(\phi(z_{n}))-2\phi(z_{n})+z_{n}},$ $n=0,1,2,\dots$.
It converges locally and quadratically for both $\mathbb{R}$ and $\mathbb{C}$.
For other efficient derivative-free methods, see Le (1985).
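A Python sketch of Steffensen's iteration (3.8.7); the fixed-point equation $z=\cos z$ used as the example is a choice made here:

```python
import math

def steffensen(phi, z, tol=1e-12, maxit=50):
    for _ in range(maxit):
        p1 = phi(z)
        p2 = phi(p1)
        denom = p2 - 2 * p1 + z
        if denom == 0:
            break
        z_new = z - (p1 - z) ** 2 / denom   # (3.8.7)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Fixed point of phi(z) = cos z (approx. 0.7390851...), reached
# quadratically even though the plain iteration z_{n+1} = cos z_n
# is only linearly convergent.
fixed = steffensen(math.cos, 0.5)
```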
### Eigenvalue Methods
For the computation of zeros of orthogonal polynomials as eigenvalues of finite tridiagonal matrices (§3.5(vi)), see Gil et al. (2007a, pp. 205–207). For the computation of zeros of Bessel functions, Coulomb functions, and conical functions as eigenvalues of finite parts of infinite tridiagonal matrices, see Grad and Zakrajšek (1973), Ikebe (1975), Ikebe et al. (1991), Ball (2000), and Gil et al. (2007a, pp. 205–213).
## §3.8(iv) Zeros of Polynomials
The polynomial
3.8.8 $p(z)=a_{n}z^{n}+a_{n-1}z^{n-1}+\dots+a_{0},$ $a_{n}\neq 0$,
has $n$ zeros in $\mathbb{C}$, counting each zero according to its multiplicity. Explicit formulas for the zeros are available if $n\leq 4$; see §§1.11(iii) and 4.43. No explicit general formulas exist when $n\geq 5$.
After a zero $\zeta$ has been computed, the factor $z-\zeta$ is factored out of $p(z)$ as a by-product of Horner’s scheme (§1.11(i)) for the computation of $p(\zeta)$. In this way polynomials of successively lower degree can be used to find the remaining zeros. (This process is called deflation.) However, to guard against the accumulation of rounding errors, a final iteration for each zero should also be performed on the original polynomial $p(z)$.
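Horner's scheme delivers the deflated polynomial as a by-product, as the following Python sketch shows (the coefficient ordering and the example polynomial are choices made here):

```python
def horner_deflate(coeffs, zeta):
    """Evaluate p(zeta) by Horner's scheme, coeffs = [a_n, ..., a_0].
    The intermediate values are the coefficients of p(z)/(z - zeta)."""
    acc = 0.0
    values = []
    for a in coeffs:
        acc = acc * zeta + a
        values.append(acc)
    value = values.pop()      # p(zeta); the remaining list is the quotient
    return value, values

# p(z) = z**3 - 6z**2 + 11z - 6 = (z - 1)(z - 2)(z - 3); deflate by z - 1.
value, quotient = horner_deflate([1.0, -6.0, 11.0, -6.0], 1.0)
# value = p(1) = 0, and quotient = [1, -5, 6], i.e. z**2 - 5z + 6.
```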
### Example
$p(z)=z^{4}-1$. The zeros are $\pm 1$ and $\pm\mathrm{i}$. Newton’s method is given by
3.8.9 $z_{n+1}=\phi(z_{n}),\qquad\phi(z)=\frac{3z^{4}+1}{4z^{3}}.$
The results for $z_{0}=1.5$ are given in Table 3.8.2.
As in the case of Table 3.8.1 the quadratic nature of convergence is clearly evident: as the zero is approached, the number of correct decimal places doubles at each iteration.
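The iteration (3.8.9) is easy to replay in Python (a snippet added here for illustration; eight iterations suffice to reach machine precision from this starting value):

```python
def phi(z):
    # (3.8.9): Newton's iteration function for p(z) = z**4 - 1
    return (3 * z ** 4 + 1) / (4 * z ** 3)

z = 1.5
history = [z]
for _ in range(8):
    z = phi(z)
    history.append(z)
# The error |z - 1| roughly squares at each step: the number of
# correct decimal places doubles per iteration.
```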
Newton’s rule can also be used for complex zeros of $p(z)$. However, when the coefficients are all real, complex arithmetic can be avoided by the following iterative process.
### Bairstow’s Method
Let $z^{2}-sz-t$ be an approximation to the real quadratic factor of $p(z)$ that corresponds to a pair of conjugate complex zeros or to a pair of real zeros. We construct sequences $q_{j}$ and $r_{j}$, $j=n+1,n,\dots,0$, from $q_{n+1}=r_{n+1}=0$, $q_{n}=r_{n}=a_{n}$, and for $j\leq n-1$,
3.8.10 $q_{j}=a_{j}+sq_{j+1}+tq_{j+2},\qquad r_{j}=q_{j}+sr_{j+1}+tr_{j+2}.$
Then the next approximation to the quadratic factor is $z^{2}-(s+\Delta s)z-(t+\Delta t)$, where
3.8.11 $\Delta s=\frac{r_{3}q_{0}-r_{2}q_{1}}{r_{2}^{2}-\ell r_{3}},\qquad\Delta t=\frac{\ell q_{1}-r_{2}q_{0}}{r_{2}^{2}-\ell r_{3}},\qquad\ell=sr_{2}+tr_{3}.$
The method converges locally and quadratically, except when the wanted quadratic factor is a multiple factor of $p(z)$. On the last iteration $q_{n}z^{n-2}+q_{n-1}z^{n-3}+\dots+q_{2}$ is the quotient on dividing $p(z)$ by $z^{2}-sz-t$.
### Example
$p(z)=z^{4}-2z^{2}+1$. With the starting values $s_{0}=\frac{7}{4}$, $t_{0}=-\frac{1}{2}$, an approximation to the quadratic factor $z^{2}-2z+1=(z-1)^{2}$ is computed ($s=2$, $t=-1$). Table 3.8.3 gives the successive values of $s$ and $t$. The quadratic nature of the convergence is evident.
This example illustrates the fact that the method succeeds even if the two zeros of the wanted quadratic factor are real and the same.
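The recurrences (3.8.10) and the updates (3.8.11) translate directly into Python; the sketch below repeats the worked example (the iteration count is a choice made here):

```python
def bairstow_step(a, s, t):
    """One Bairstow update for the quadratic factor z**2 - s*z - t of
    a[n]*z**n + ... + a[0]; a is indexed by degree, with a[n] != 0."""
    n = len(a) - 1
    q = [0.0] * (n + 2)
    r = [0.0] * (n + 2)
    q[n] = r[n] = a[n]                    # q_{n+1} = r_{n+1} = 0 by default
    for j in range(n - 1, -1, -1):        # (3.8.10)
        q[j] = a[j] + s * q[j + 1] + t * q[j + 2]
        r[j] = q[j] + s * r[j + 1] + t * r[j + 2]
    ell = s * r[2] + t * r[3]
    denom = r[2] ** 2 - ell * r[3]
    ds = (r[3] * q[0] - r[2] * q[1]) / denom    # (3.8.11)
    dt = (ell * q[1] - r[2] * q[0]) / denom
    return s + ds, t + dt

# p(z) = z**4 - 2z**2 + 1; starting values from the text: s = 7/4, t = -1/2.
a = [1.0, 0.0, -2.0, 0.0, 1.0]   # a[0] = 1, a[2] = -2, a[4] = 1
s, t = 1.75, -0.5
for _ in range(25):
    s, t = bairstow_step(a, s, t)
# s -> 2, t -> -1, i.e. the factor z**2 - 2z + 1 = (z - 1)**2.
```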
For further information on the computation of zeros of polynomials see McNamee (2007).
## §3.8(v) Zeros of Analytic Functions
Newton’s rule is the most frequently used iterative process for accurate computation of real or complex zeros of analytic functions $f(z)$. Another iterative method is Halley’s rule:
3.8.12 $z_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime}(z_{n})-(f^{\prime\prime}(z_{n})f(z_{n})/(2f^{\prime}(z_{n})))}.$
This is useful when $f(z)$ satisfies a second-order linear differential equation because of the ease of computing $f^{\prime\prime}(z_{n})$. The rule converges locally and is cubically convergent.
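A Python sketch of Halley's rule (3.8.12); the example $f(z)=z^{2}-2$, with its trivially computed second derivative, is a choice made here:

```python
def halley(f, fp, fpp, z, maxit=20):
    for _ in range(maxit):
        fz = f(z)
        if fz == 0:
            break
        fpz = fp(z)
        z = z - fz / (fpz - fpp(z) * fz / (2 * fpz))   # (3.8.12)
    return z

# Cubically convergent approximation of sqrt(2) from z0 = 1.
root = halley(lambda z: z * z - 2, lambda z: 2 * z, lambda z: 2.0, 1.0)
```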
Initial approximations to the zeros can often be found from asymptotic or other approximations to $f(z)$, or by application of the phase principle or Rouché’s theorem; see §1.10(iv). These results are also useful in ensuring that no zeros are overlooked when the complex plane is being searched.
For an example involving the Airy functions, see Fabijonas and Olver (1999).
For fixed-point methods for computing zeros of special functions, see Segura (2002), Gil and Segura (2003), and Gil et al. (2007a, Chapter 7). For describing the distribution of complex zeros of solutions of linear homogeneous second-order differential equations by methods based on the Liouville–Green (WKB) approximation, see Segura (2013).
## §3.8(vi) Conditioning of Zeros
Suppose $f(z)$ also depends on a parameter $\alpha$, denoted by $f(z,\alpha)$. Then the sensitivity of a simple zero $z$ to changes in $\alpha$ is given by
3.8.13 $\frac{\mathrm{d}z}{\mathrm{d}\alpha}=-\frac{\partial f/\partial\alpha}{\partial f/\partial z}.$
Thus if $f$ is the polynomial (3.8.8) and $\alpha$ is the coefficient $a_{j}$, say, then
3.8.14 $\frac{\mathrm{d}z}{\mathrm{d}a_{j}}=-\frac{z^{j}}{f^{\prime}(z)}.$
For moderate or large values of $n$ it is not uncommon for the magnitude of the right-hand side of (3.8.14) to be very large compared with unity, signifying that the computation of zeros of polynomials is often an ill-posed problem.
### Example. Wilkinson’s Polynomial
The zeros of
3.8.15 $p(x)=(x-1)(x-2)\cdots(x-20)$
are well separated but extremely ill-conditioned. Consider $x=20$ and $j=19$. We have $p^{\prime}(20)=19!$ and $a_{19}=-(1+2+\dots+20)=-210$. The perturbation factor (3.8.14) is given by
3.8.16 $\frac{\mathrm{d}x}{\mathrm{d}a_{19}}=-\frac{20^{19}}{19!}=(-4.30\dots)\times 10^{7}.$
Corresponding numerical factors in this example for other zeros and other values of $j$ are obtained in Gautschi (1984, §4).
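The size of this factor is easy to confirm numerically (a small check written for this text, not part of the DLMF):

```python
import math

# Sensitivity (3.8.14) of the zero x = 20 to the coefficient a_19:
# dx/da_19 = -20**19 / p'(20), with p'(20) = 19!.
factor = -(20 ** 19) / math.factorial(19)
# factor is about -4.31e7: perturbing a_19 in roughly its seventh
# decimal place already moves the zero x = 20 by an amount of order one.
```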
## §3.8(vii) Systems of Nonlinear Equations
For fixed-point iterations and Newton’s method for solving systems of nonlinear equations, see Gautschi (1997a, Chapter 4, §9) and Ortega and Rheinboldt (1970).
## §3.8(viii) Fixed-Point Iterations: Fractals
The convergence of iterative methods
3.8.17 $z_{n+1}=\phi(z_{n}),$ $n=0,1,\dots$,
for solving fixed-point problems (3.8.2) cannot always be predicted, especially in the complex plane.
Consider, for example, (3.8.9). Starting this iteration in the neighborhood of one of the four zeros $\pm 1,\pm\mathrm{i}$, sequences $\{z_{n}\}$ are generated that converge to these zeros. For an arbitrary starting point $z_{0}\in\mathbb{C}$, convergence cannot be predicted, and the boundary of the set of points $z_{0}$ that generate a sequence converging to a particular zero has a very complicated structure. It is called a Julia set. In general the Julia set of an analytic function $f(z)$ is a fractal, that is, a set that is self-similar. See Julia (1918) and Devaney (1986).
https://www.love2d.org/forums/search.php?author_id=130605&sr=posts | ## Search found 89 matches
Wed Sep 30, 2015 11:37 pm
Forum: Support and Development
Topic: How to change gamestates pressing R and particle cleanup
Replies: 1
Views: 620
### How to change gamestates pressing R and particle cleanup
SORRY IN ADVANCED FOR NOT USING .LOVE, FOR SOME REASON IT DOESN'T WORK USING .LOVE ALSO SORRY FOR HAVING SUCH MESSY CODE. I've been trying to make a space invader like game for practice, but for some reason, when I enter the gameover gamestate, I cant change back to the playing gamestate when i pres...
Tue Apr 08, 2014 11:33 pm
Forum: Support and Development
Topic: How to check if 2 keys are down?
Replies: 9
Views: 937
### How to check if 2 keys are down?
I noticed that my player goes vertical if holding A and D, now i want to change the characters picture to a vertical picture if the 2 keys are down, but how do i detect if the 2 are down?
Thu Nov 21, 2013 1:24 am
Forum: Support and Development
Topic: Love2D Beginner Tutorials (Video Series)
Replies: 0
Views: 613
### Love2D Beginner Tutorials (Video Series)
I have been working on some love2d beginner tutorials, I will be posting the whole series on this one page, hope this helps some people.
Episode 1:Installation & Explanation (HD Recommended)
Episode 2:Soon to come
Wed Oct 30, 2013 8:49 pm
Forum: Support and Development
Topic: mouse direction
Replies: 10
Views: 2879
### Re: mouse direction
Plu wrote:Most likely the mouse is using screen coördinates, while the player is using world coördinates.
I was doing
Code: Select all
if mouse.x > player.x then
blah blah blah
end
it used the players coordinates to switch directions when the mouse is clicked
Tue Oct 29, 2013 11:48 am
Forum: Support and Development
Topic: mouse direction
Replies: 10
Views: 2879
### Re: mouse direction
DaedalusYoung wrote:
Code: Select all
if mouse.x < screen.width / 2 then
-- mouse is on the left half of the screen
else
-- mouse is on the right half of the screen
end
It worked at first but when I started using tiled and the camera lib it started to mess up, do you know whats causing this?
Mon Oct 28, 2013 11:09 pm
Forum: Support and Development
Topic: mouse direction
Replies: 10
Views: 2879
### Re: mouse direction
if mouse.x < screen.width / 2 then -- mouse is on the left half of the screen else -- mouse is on the right half of the screen end It worked! It kinda didnt work right, so I tweaked it a bit to my likings so instead of screenWidth it is not player.x so that it goes left/right according to the playe...
Sun Oct 27, 2013 11:57 pm
Forum: Support and Development
Topic: mouse direction
Replies: 10
Views: 2879
### Re: mouse direction
DaedalusYoung wrote:
Code: Select all
if mouse.x < screen.width / 2 then
I only want it to update when the mouse is clicked, should I do something like
If mouse is clicked then
if mouse.x < screen.width/2 then
blah blahb lah
end
end
?
Sun Oct 27, 2013 10:29 pm
Forum: Support and Development
Topic: mouse direction
Replies: 10
Views: 2879
### mouse direction
I was just wondering, how to check the mouse direction, because for my new game that Im working on in my spare time, he will look left and right when either A or D is pressed, but what I want him to do is that when my mouse is on the left or right from the middle of the screen and I click, it will f...
Sun Oct 27, 2013 8:10 pm
Forum: Games and Creations
Topic: Very Sociopathic Aang's Adventure Time
Replies: 9
Views: 2932
### Re: Very Sociopathic Aang's Adventure Time
I am addicted to this game! xD
It's funny and not too hard or too easy! Can I ask how you did the random spawning of enemies? and how did you get the camera to shake when you shoot?
Sun Oct 27, 2013 6:42 pm
Forum: Games and Creations
Topic: [UPDATE] Flyer Deathmatch [alpha] v 1.20
Replies: 10
Views: 3825
### Re: [UPDATE] Flyer Deathmatch [alpha] v 1.20
Hey, the game has gone so epic! I was wondering though, how did you get dynamic lights for your bullets and how did you shoot 2 at once? Im thinking of doing that in R.O.B.B.E.D. (my new game that I have been working on for 2 months)
https://mathoverflow.net/questions/91374/vanishing-theorems | # vanishing theorems
I have a flat morphism $p: X \to Y$ from a smooth projective $X$ to a smooth projective $Y$. I have a line bundle $L$ on $X$ whose restriction to every fiber of $p$ is big and nef. I need the vanishing of $Rp_*(\Omega^b_X\otimes L^{-1})$ for $b$ small compared to $\dim X-\dim Y$. I could not find this in Esnault-Viehweg. Is it true? If yes, what is the reference?
First of all I suppose you meant $R^ip_*$ for $i>0$ and not $Rp_*$. Actually, neither is true, but the latter is obviously false while the former may seem believable first. Second, I suppose you are working over an algebraically closed field of characteristic zero.
In any case, unfortunately this fails even over an algebraically closed field of characteristic zero and already if $Y$ is a point. (This is not surprising as any statement you might hope for would be enough to prove in this case).
Let $X_0$ be an arbitrary smooth projective variety over an algebraically closed field of characteristic zero and $\mathscr L$ an arbitrary ample line bundle on $X_0$. Let $\pi:X\to X_0$ be the blowing up of $X_0$ at an arbitrary smooth point with exceptional divisor $E\simeq\mathbb P^{r-1}$ (i.e., $\dim X=r$). Then for any $b\in \mathbb N$, $0<b<r$ we have the following non-vanishing: $$H^b(X,\Omega_X^b\otimes\pi^*\mathscr L)\neq 0$$
Proof: Observe that by Bott's formula for cohomology on $\mathbb P^{r-1}$ applied to $E$, we have that $R^i\pi_*\Omega_X^b=0$ if and only if $i\neq 0, b$. By Kodaira-Akizuki-Nakano applied to $\mathscr L$ on $X_0$ we have that $H^i(X_0,\pi_*\Omega_X^b\otimes\pi^*\mathscr L)=0$ for $i>0$ (this is because $\pi_*\Omega_X^b/\Omega_{X_0}^b$ is supported at the point that was blown up). So, the Leray spectral sequence computing $H^i(X,\Omega_X^b\otimes\pi^*\mathscr L)$ degenerates and gives that for $i>0$, $$H^i(X,\Omega_X^b\otimes\pi^*\mathscr L)=H^{i-b}(X,R^b\pi_*\Omega_X^b\otimes\mathscr L)$$ For $i=b$ the latter is clearly non-zero. $\square$
Corollary: $H^{b}(X, \Omega_X^b\otimes \pi^*\mathscr L^{-1})\neq 0$ for $0<b<r$. In particular, if $\dim X\geq 3$, then $H^{1}(X, \Omega_X\otimes \pi^*\mathscr L^{-1})\neq 0$
The main reason vanishing fails here is that there is this big blob of a divisor, $E$, where the chosen line bundle, $\pi^*\mathscr L$, is trivial.
There are actually still some results that do give you vanishing, but they need more assumptions. Sommese proved various versions with assumptions on the fibers of the morphism induced by some high power of the line bundle; in particular these results assume that the line bundle is actually semi-ample. However, it does not always need to be big once you assume the restriction on the fibers. For more on this see the book by Shiffman and Sommese: Vanishing theorems on complex manifolds.
Another couple of interesting papers were written by Arapura: Check out this and this.
My result Karl mentioned in his remark is actually for singular varieties. The main idea is that since you can't expect general vanishing for not necessarily ample but big and nef line bundles, you might be able to do it on the model where they are ample, however, this usually requires working on singular varieties, which makes dealing with $\Omega$ a bit tougher. For more details see this paper.
More recent similar vanishing results were obtained by Greb-Kebekus-Peternell and myself. The most recent relevant paper is this, but you could also look at this and this.
I guess it depends on exactly what you need. Take $Y=Spec\ \mathbb{C}$. Then here's a cautionary example that $L$ nef and big is usually inadequate for vanishing for $\Omega_X^b\otimes L^{\pm 1}$ when $0 <b<n=\dim X$. Let us suppose that $X$ is obtained by blowing up a smooth variety $Z$ at a point $p$. Let $f:X\to Z$ be the projection, and suppose that $L= f^*M$ with $M$ ample.
Suppose also that $n>2$. Then $H^1(Z,\Omega_Z^1\otimes M^{-1})=0$ by Kodaira-Nakano. One might ask whether $H^1(X,\Omega_X^1\otimes L^{-1})=0$. Pick $n>3$ and use Kodaira-Nakano on $Z$ and Leray to get $$H^1(X,\Omega_X^1\otimes L^{-1})=H^0(Z, R^1f_*\Omega_X^1\otimes M^{-1})=\mathbb{C}$$ because $R^1$ is the skyscraper sheaf $\mathbb{C}_p$.
• I guess I should occasionally reload my browser... This answer was not here when I started mine, then I was interrupted and it took a while before I got back to finishing it... Sorry for the duplication, but it might be worth leaving my answer there as it has some references. Mar 16, 2012 at 18:32
• No problem. Your answer is certainly more thorough. Mar 16, 2012 at 18:35
https://www.electricalexams.co/electrical-continuity-between-two-points/ | # Electrical continuity between any two points exists if _______
### Right Answer is: Pointer shows deflection
#### SOLUTION
A megger can also be used to test continuity between any two points: if the pointer shows full deflection, there is electrical continuity between them.
http://ftp.tug.org/interviews/fischer.html | ### Ulrike Fischer
[completed 2009-11-20]
Ulrike Fischer is very active in various on-line TeX discussion groups and has written several chess-related packages.
Ulrike Fischer, interviewee: I was born in 1961 in Stuttgart in Germany as the oldest of three children. As a child I moved with my parents first to Geneva in Switzerland (my father worked for IBM at this time) and later to Bonn in Germany. There I finished school and then studied mathematics at the University of Bonn until I got my “Diplom” (don't ask me how you say this in English).
With my husband I then moved to Siegburg (a town about 20 km away from Bonn), where we lived for about 20 years. For nearly three years now, we have been living in Mönchengladbach. I'm working for a member of Parliament of the Landtag of Nordrhein-Westfalen, and I hope that I can continue this work after the next election in May 2010.
DW: How and when were you first introduced to TeX et al.?
UF: I didn't use TeX at the university or for my thesis. Sometime in the second half of the 1980s I saw in a magazine for the Atari a description of an Atari implementation of TeX, together with a small introduction to LaTeX. I looked at the example (LaTeX 2.09) document and knew directly that I would like this, and ordered the floppy disks.
DW: What was it that made TeX appealing?
UF: It is naturally difficult to remember what I thought such a long time ago. But I will try. I am not a programmer. At the time I had written some small programs in FORTRAN during my studies but not more. The Atari was my first PC; it had a windowing system and you could use the mouse and menus. Thus it may sound surprising, but it wasn't the quality of the output or the capabilities of TeX/LaTeX which attracted me, but the interface. With interface I don't mean the GUI offered by the Atari port but the LaTeX code itself. I had written my thesis with a text processor called Signum and I hadn't really enjoyed it. The LaTeX code seemed to me much more adequate and easier to use to generate text documents.
I liked how the code reflected the logical design of the document. It was easy to connect the input e.g. \tableofcontents or \section with the output. (I doubt very much that a plain TeX document would have attracted me in the same way.)
I liked that the commands and the syntax looked quite natural. I knew all the names used and there didn't seem to be much “syntax overhead”, that is, code pieces where I couldn't directly see the use. I sometimes see people complaining that the LaTeX syntax is not clean enough compared, for instance, to xml or ConTeXt — not suited for automatic processing, etc. But I think these people forget that LaTeX documents are mainly written and read by humans, not by computers. Computers like a syntax like <h1>, <h2>, <h3> and end tags like </h1> everywhere. But humans — at least if they are not skilled programmers — prefer speaking names like \chapter and \section, and don't like to have to remember to add a specific code to end something when they think it is obvious where the end is.
I liked that the commands were meaningful words, not icons or something similar. (I hate icons. Nowadays I seem to spend much of my time letting the mouse hover above icons to read the tool tips. It's like handling a Chinese application.)
I liked that the commands were a visible part of the document and that you could see where, e.g., a list or an argument ended. This made it much easier to insert more text at the correct place.
And I liked that the commands were saved with the document. It meant that I didn't have to remember the mouse clicks and menus and keyboard shortcuts I had used to achieve a certain result (I have a really bad memory for such things). I could simply look up the commands in the document.
DW: Do you remember who it was that you ordered the TeX floppy disks from?
UF: No. I'm not even sure I got the dates right. Perhaps it was at the start of the '90s. But I do remember it was shareware.
DW: What happened then?
UF: I have used LaTeX since then for almost every document — private documents like letters but also documents at my job like press release and speeches.
In the course of time I bought a lot of books — starting with the books of Kopka, but I also own the LaTeX companions (all editions), The TeXbook, a book on plain TeX, TeX Unbound, and the LaTeX manual. The Kopka book mentioned the German TeX user group, and so since then I've been a member of DANTE e.V. All the books helped me to understand TeX and LaTeX better, but I learned most when I used LaTeX, when I had to convert theoretical knowledge into real code.
In general my documents are short; I think the longest I ever wrote was the documentation for my packages. But quite often I have to do a lot of similar documents. As I'm rather lazy and hate to repeat simple tasks, I always try to set things up so that I can reuse the work — even if writing all the code took more time than I gained at the end. For example, for a long time I have organized chess competitions for children and teenagers. I've had to write a lot of serial letters and name lists and naturally thought about how to use a database and LaTeX to automate the task. This led to my first article in Texnische Komödie (page 27).
Later I was responsible for the bulletins of a chess competition of my chess club. This led to my various chess packages.
On the whole, I think I had the good fortune to learn LaTeX at my pace and in small steps and pieces. Nowadays I see a lot of people who start to use LaTeX to write their thesis. This means they have to learn a lot of typography concepts and how to use a large number of packages and applications in a short time. And in addition they are trying to produce the quality a professional typesetter would be proud of (in a thesis which will then probably be marked by a professor who used a typewriter for his thesis).
DW: How long did you use that Atari configuration?
UF: I switched from the Atari to a PC with Windows 95 because we wanted access to the Internet. That must have been around 1996. I used at first emTeX, but I don't remember if I got it on disks or even then from the Internet. It was the first time that I had to use the command line and batch files and to adjust environment variables and configuration files only to install LaTeX. In retrospect I think it was quite good that I had done my first steps with LaTeX on a system where the installation was easy. Later I switched to MiKTeX, I think version 2.2. Currently I'm using MiKTeX 2.7.
DW: Looking up your “various chess packages”, one finds: chessfss (chess fonts), chessboard (print chessboard), xskak (record chess positions), and enpassant (fonts from the Nørresundby Chess Club website converted to LaTeX). Do I have the purposes correct, and are there others?
UF: Yes and no.
DW: As you started to develop packages, what did you think of the various code and interfaces you had to deal with at the programming level? They could seem considerably less logical than the document interface that first attracted you to TeX.
UF: Well, naturally, as a mathematician I do understand the general principles of programming and don't have much of a problem learning a language (and I know the basics of quite a lot of languages). Nevertheless, LaTeX is the only one which actually “caught” me, for various reasons. First, the LaTeX kernel is rather short and — more importantly — more or less in one file. So it is easy to skim through the code, get an idea about the general structure, and find pieces of code. And while I often see people complain about the mass of packages they have to load in a document, I liked it. The fact that you have to load packages means, on the other hand, that you can remove them to build minimal examples. This helped me a lot while I tried to understand the code. Also, packages are in general rather self-contained, complete and quite well documented. This too helps in understanding the code.
I also think that the package system of LaTeX lures people into writing code. As the whole “infrastructure” is already there, you can do something new quite fast: You write some new commands to generate some list. Or you start with a package like skak, redefine some commands, try to improve the font handling, or tweak another command. And before you realize it you have written your first package. And when you look at other packages to improve your own, you can see that the various packages represent a large range of coding skills and coding styles; that packages exist in a lot of sizes; and that some are only useful for some exotic documents. That means that it is quite OK to add your own package to the lot even if it is not perfect, doesn't follow a strict guideline, and is of use only for a small number of people. After all, even if the package is bad — nobody is forced to use it.
DW: You say you are lazy, but the rate at which you answer questions on the MiKTeX and XeTeX lists belies that claim. (You've answered more than one question of mine on one of these lists, and I think I first noticed your name on the comp.text.tex list.) Do you have a particular motivation for all the help you give on these lists?
UF: It is fun. I like to solve problems and to find out things, and it's even better if the problems are “real” problems. But if you really look at my answers on the lists, you probably also saw that I ask quite often for minimal examples and seldom send long answers with a lot of code. Which shows that I'm lazy (and that I seldom have much time for a problem).
DW: How did you get so experienced you can answer so many questions — just years of TeX use?
UF: No, years of answering questions ;-). I don't have a very large pool of ready-made solutions, but over the years I have learned how to investigate a problem and how to find answers.
DW: I wonder if you could teach your technique to others?
UF: Well, I'm doing it all the time. I'm forcing people to generate minimal examples, to look at log files, to try out code, to read documentation, to understand what they are doing.
DW: A couple of more questions about your personal life, if I may. How did your career evolve from being a math student at the University of Bonn (and a TeX enthusiast on the side) to working for a member of Parliament? This seems like an uncommon progression.
UF: Well no, my chief lives in Siegburg, and we had already known him for some years. The first time he became a member of the Parliament was at the end of the election period. So he was searching for an assistant until the next election in eight months and asked me, and I thought it would be fun to try. And then he won the following elections, and I'm still in the job.
DW: You told me that the image you provided to go at the beginning of this interview was done by the artist Herbert Döring-Spengler. Please say another word about that image.
UF: The image has not been drawn. It is a manipulated Polaroid photo. Mr. Döring-Spengler lives near Siegburg and for some years had his atelier in Siegburg. I asked him one day some years ago to make some pictures for my identification card, and at the time he made this other photo too.
DW: Thank you, Ulrike, for participating in our interview series. I am sure I have not been alone in wanting to know a little bit more about the Ulrike who answers so many questions on the TeX lists and answers with such clarity and so supportively.
Interview pages regenerated January 26, 2017; | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814259171485901, "perplexity": 951.9527305003817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690591.29/warc/CC-MAIN-20170925092813-20170925112813-00292.warc.gz"} |
https://asvabtestpro.com/quiz/a-rectangle-is-cut-in-half-to-create-two-squares-that-each-6271da4ddb5cd741a80b4bca/ | A rectangle is cut in half to create two squares that each has an area of 25. What is the perimeter of the original rectangle?
30
Explanation
The formula to find the area of a square is: Side × Side. If each square has an area of 25, then each side of the square must be 5 (because $$5 \times 5=25$$). So the dimensions of the rectangle are $$5 \times 10$$. The perimeter is the sum of all the sides, and is therefore 5 + 10 + 5 + 10 = 30.
Visit our website for other ASVAB topics now! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82224440574646, "perplexity": 151.50300063319403}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00268.warc.gz"} |
https://astronomy.stackexchange.com/questions/38827/is-most-hydrogen-in-the-universe-in-the-form-of-plasma-atomic-neutral-hydrogen/38872 | # Is most hydrogen in the universe in the form of plasma, atomic neutral hydrogen, ionized hydrogen, or molecular?
I have had trouble finding an answer somewhere....
Some places say most hydrogen is plasma, such as the stuff stars are made of (mostly) and the 'warm-hot intergalactic plasma'.
Other places say neutral atomic hydrogen (H(I) to astronomers), such as the stuff in the interstellar medium, is the most common...
• It would be better if you added at least one example for each of "Some places say..." and "Other places say..." Thanks!
– uhoh
Sep 12 '20 at 4:31
• Molecular hydrogen is known as H$_2$. "H III" would suggest hydrogen that has lost two electrons, except that since hydrogen only has one electron, there would be no such ion. Sep 12 '20 at 19:00
## 1 Answer
@HurtHikes, I hope this answers your question,
According to Wikipedia, most Hydrogen is in the form of Atomic and Plasma states.
Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite distinct from those of molecular hydrogen.
Intuition says that because, as this link says, 99.9% of the matter in the universe is plasma, most hydrogen in the universe should also be present in the plasma state.
"99.9 percent of the Universe is made up of plasma," says Dr. Dennis Gallagher, a plasma physicist at NASA's Marshall Space Flight Center. "Very little material in space is made of rock like the Earth."
This link has the calculations to determine the amount of Hydrogen in Plasma in relation to the total amount of Plasma (if anyone is interested...)
Stars are made of relatively simple stuff. By mass, our Sun is 73% hydrogen, 26% helium, and only 1% of higher Z (atomic number) atoms.
$X = m_{\rm H} n_{\rm H} / \rho$ = density of hydrogen / total density
Though I am not sure about the actual answer, I will be grateful if anyone can point out the right one. And although I have made every effort at the removal and rectification of errors, if there are any errors in my answer I will be grateful if anyone can point them out.
• Cam anyone please confirm if my answer is correct? Sep 14 '20 at 5:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7391636967658997, "perplexity": 770.4068047519846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00319.warc.gz"} |
https://web2.0calc.com/questions/help-counting_65 | +0
# Help counting
In how many ways can 36 be written as the product $a \times b \times c \times d$, where $a, b, c$ and $d$ are positive integers such that $a < b < c < d$?
Aug 23, 2021 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976537823677063, "perplexity": 273.41211603616256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570767.11/warc/CC-MAIN-20220808061828-20220808091828-00248.warc.gz"} |
http://openstudy.com/updates/503a1042e4b0edee4f0d8a63 | Here's the question you clicked on:
55 members online
• 0 replying
• 0 viewing
## littlecat 3 years ago The surface charge density of an object is σ = dq/dA. Why is the total charge on the surface, Q = double integral σ dA?
1. ghazi
• 3 years ago
total charge on surface = surface charge density × area. Basically, surface charge density is the charge per unit area, which means how much charge a unit area has. So here $q=\int_{0}^{A}\sigma\,dA$
2. ghazi
• 3 years ago
if the charge density had varied, then we might have used $q= \int\limits_{0}^{A}dA* \sigma + \int\limits_{0}^{\sigma} d \sigma*A$
Privacy Policy | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972331523895264, "perplexity": 4631.439527592177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121976.98/warc/CC-MAIN-20160428161521-00085-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://codeforces.com/problemset/problem/9/E | E. Interesting Graph and Apples
time limit per test
1 second
memory limit per test
64 megabytes
input
standard input
output
standard output
Hexadecimal likes drawing. She has drawn many graphs already, both directed and not. Recently she has started to work on a still-life «interesting graph and apples». An undirected graph is called interesting, if each of its vertices belongs to one cycle only — a funny ring — and does not belong to any other cycles. A funny ring is a cycle that goes through all the vertices just once. Moreover, loops are funny rings too.
She has already drawn the apples and some of the graph edges. But now it is not clear, how to connect the rest of the vertices to get an interesting graph as a result. The answer should contain the minimal amount of added edges. And furthermore, the answer should be the lexicographically smallest one. The set of edges (x1, y1), (x2, y2), ..., (xn, yn), where xi ≤ yi, is lexicographically smaller than the set (u1, v1), (u2, v2), ..., (un, vn), where ui ≤ vi, provided that the sequence of integers x1, y1, x2, y2, ..., xn, yn is lexicographically smaller than the sequence u1, v1, u2, v2, ..., un, vn. If you do not cope, Hexadecimal will eat you. ...eat you alive.
Input
The first line of the input data contains a pair of integers n and m (1 ≤ n ≤ 50, 0 ≤ m ≤ 2500) — the amount of vertices and edges respectively. The following lines contain pairs of numbers xi and yi (1 ≤ xi, yi ≤ n) — the vertices that are already connected by edges. The initial graph may contain multiple edges and loops.
Output
In the first line output «YES» or «NO»: if it is possible or not to construct an interesting graph. If the answer is «YES», in the second line output k — the amount of edges that should be added to the initial graph. Finally, output k lines: pairs of vertices xj and yj, between which edges should be drawn. The result may contain multiple edges and loops. k can be equal to zero.
Examples
Input
3 2
1 2
2 3
Output
YES
1
1 3 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4283733069896698, "perplexity": 333.3278734933668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00585.warc.gz"}
http://math.stackexchange.com/questions/282743/what-are-the-probabilities-in-this-particular-spin-off-of-blackjack | # What are the probabilities in this particular spin-off of BlackJack
A few of my friends are playing on a gaming server where this game exists. I was curious about the probabilities in it, but I was unable to derive them. Any help would be much appreciated. Note: for this problem, please assume that /dice is absolutely random. It is generated by a computer using pseudorandom code, but I would appreciate it if that didn't work its way in here.
As for the game itself, here is how it works, there are two people who play the game, The Player and The Dealer.
As the game begins, the player uses /dice as many times as he wants (each /dice has 6 outcomes which have absolutely equal chances), but his total must not go over 21.
For example when the total reaches 19, it would be a good choice to stop. If he gets 22, 23 or so on then he automatically loses.
Once he has "stayed" at a particular amount, the dealer then rolls. His aim is to get a number higher than the player's. If he succeeds, he wins. If he goes over and gets 22, 23 or so on, then he loses.
Now what is the probability that the dealer will win? What is the probability that the player will win? In an average game, what is the probability of a 21 being rolled?
Just to be clear, that unless the the player went bust, the dealers strategy is always to roll until he has higher number then the player so that ties are impossible? (so that if the players rolls 21 the dealer always loses) – Shard Jan 20 '13 at 13:54
What's the significance of the slash in front of "dice"? – joriki Jan 20 '13 at 14:18
What do you mean by "On a average game what is the probablity of a $21$ being rolled."? How does this differ from "what is the probablity of a $21$ being rolled?"? – joriki Jan 20 '13 at 14:40
1 - In case of a tie, the player re-rolls. I am sorry I forgot to mention that. 2 - / signifies a command; I should probably have mentioned that. 3 - Both are one and the same, my bad again :/ – Aayush Agrawal Jan 20 '13 at 18:25
With Shard's notation, the probability that the player wins if she rolls until she has at least $b$ is
$$p_b=\sum_{a=b}^{21}\sum_{c=22}^{a+6}p(a,b)p(c,a+1)\;,$$
and she will choose $b$ to maximize this. Here's code that calculates these probabilities using Shard's recurrence:
$$\begin{array}{c|c} b&p_b\\\hline 16&0.28518\\ 17&0.39679\\ 18&0.47400\\ 19&0.49650\\ 20&0.44231\\ 21&0.28597\\ \end{array}$$
Thus the player should roll until she has at least $19$, and then her winning probability is very nearly even. The probability that she rolls $21$ is $p(21,19)\approx0.19091$, and the probability that the dealer rolls $21$ is $p(19,19)p(21,20)+p(20,19)p(21,21)\approx0.13597$, for a total of about $0.32689$.
I am not very accurate with high-level math (13 years old, just learnt 2-variable algebra). What I don't get is why the probabilities that the dealer rolls a 21 are different from those of the player rolling a 21. The dice are fair to both ends, right? Also the table describes the probability of landing on any of those numbers, but it doesn't mention the chances of a bust. I am sorry but I am a bit confused :/ – Aayush Agrawal Jan 20 '13 at 18:30
Also the probability of rolling a 21 is given, thx, but what are the probabilities of any random game being won by the player or the dealer? – Aayush Agrawal Jan 20 '13 at 18:34
@Aayush: I think you misunderstood the table. The probabilities $p(a,b)$ that Shard introduced are the probabilities for landing on $a$ if you keep rolling until you have at least $b$. The probabilities $p_b$ that I introduced are not probabilities for landing on any particular number; they're winning probabilities, which I believe is what you asked for. The table shows that the highest winning probability is achieved if the player rolls until she has at least $19$. I'll explain it in more detail later if I find the time. – joriki Jan 20 '13 at 21:00
@Aayush: If the player rolls until she has at least $b$, she might win if she ends up with anything from $b$ to $21$. That's reflected in the first sum in the equation, where $a$ is the number she ends up with and the probabilities for these cases are $p(a,b)$. Then the dealer will roll until he has at least one more than $a$ (that's the second argument in $p(c,a+1)$), and the player will win if he ends up with anything from $22$ to $a+6$ -- that's reflected in the second sum and in the first argument of $p(c,a+1)$. Summing over all these cases yields the probability for the player to win. – joriki Jan 20 '13 at 22:39
@Aayush: Note that this answers the question as originally posed, not the one as modified in your comment under the question. Regarding the probabilities of rolling a $21$: It's not the dice that cause the difference in these probabilities, but the different roles of the player and the dealer. They play with different targets, $b$ and $a+1$, respectively, and the dealer's target $a+1$ depends on the player's result $a$. If they'd play with the same target, they'd have the same probability of hitting $21$. – joriki Jan 20 '13 at 22:42
Each player's turn can be described by a set of probabilities based on the "sticking number" $b$ they choose, which is the lowest total at which they will stop rolling the dice. The dealer always picks $b=n+1$ where $n$ is the score the player got, to try and beat them. The player has a trickier decision, which will be based on what the probabilities are of winning after sticking vs the probability of immediately going bust.
Let $p(a,b)$ be the probability that the total = $a$ given we keep rolling until the total is $\ge b$. Clearly $p(a,b)=0$ if $a<b$ as we are supposed to keep rolling until $a\ge b$. Also $p(a,b)=0$ for $a\ge b+6$ since a single roll of the die cannot take us from a number less than $b$ to one greater than or equal to $b+6$.
Also if we choose $b=1$ then clearly we stop after our very first roll and so $$p(1,1)=p(2,1)=p(3,1)=p(4,1)=p(5,1)=p(6,1)=\frac16$$ Now let us consider $b=2$. After our first roll we would only choose to roll again if we rolled a one, and so we end up with
$p(2,2)=p(3,2)=p(4,2)=p(5,2)=p(6,2)=\frac16+\frac1{6^2}$ and $p(7,2)=\frac1{6^2}$
In general we end up with the recurrence relation $$p(a,b)=p(a,b-1)+\frac{p(b-1,b-1)}6$$ where $b\le a\le b+5$ and zero for other values of $a$.
It should be easy to calculate the table of probabilities up to $b=21$, and thus work out the chance the dealer wins given that the player "stuck" on a certain number. Knowing these probabilities the player can now decide their own sticking number by working out if the chance of going "bust" on the next roll is less than the chance they would lose anyway by "sticking" and letting the dealer play.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.80734783411026, "perplexity": 326.8012411724554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121833101.33/warc/CC-MAIN-20150124175033-00086-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-1-equations-and-inequalities-1-4-quadratic-equations-1-4-exercises-page-122/59 | ## Precalculus (6th Edition)
The solutions are $x=-\dfrac{1}{4}\pm\dfrac{\sqrt{97}}{4}$
$\dfrac{1}{2}x^{2}+\dfrac{1}{4}x-3=0$ Multiply the whole equation by $4$: $4\Big(\dfrac{1}{2}x^{2}+\dfrac{1}{4}x-3=0\Big)$ $2x^{2}+x-12=0$ Use the quadratic formula to solve this equation. The formula is $x=\dfrac{-b\pm\sqrt{b^{2}-4ac}}{2a}$. In this case, $a=2$, $b=1$ and $c=-12$ Substitute the known values into the formula and evaluate: $x=\dfrac{-1\pm\sqrt{1^{2}-4(2)(-12)}}{2(2)}=\dfrac{-1\pm\sqrt{1+96}}{4}=...$ $...=\dfrac{-1\pm\sqrt{97}}{4}=-\dfrac{1}{4}\pm\dfrac{\sqrt{97}}{4}$ The solutions are $x=-\dfrac{1}{4}\pm\dfrac{\sqrt{97}}{4}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9877874255180359, "perplexity": 110.25593968139003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161638.66/warc/CC-MAIN-20180925123211-20180925143611-00181.warc.gz"} |
http://www.msri.org/workshops/259/schedules/12663 | Mathematical Sciences Research Institute
Home » Workshop » Schedules » Adaptation over many timescales | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9210743308067322, "perplexity": 266.5914682724487}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829916.85/warc/CC-MAIN-20140820021349-00033-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://www.gnu.org/software/gsl/manual/html_node/Covariance.html | Next: , Previous: Autocorrelation, Up: Statistics [Index]
### 21.5 Covariance
Function: double gsl_stats_covariance (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n)
This function computes the covariance of the datasets data1 and data2 which must both be of the same length n.
covar = (1/(n - 1)) \sum_{i = 1}^{n} (x_i - \Hat x) (y_i - \Hat y)
Function: double gsl_stats_covariance_m (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n, const double mean1, const double mean2)
This function computes the covariance of the datasets data1 and data2 using the given values of the means, mean1 and mean2. This is useful if you have already computed the means of data1 and data2 and want to avoid recomputing them. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4369044303894043, "perplexity": 15313.955280976268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246645606.86/warc/CC-MAIN-20150417045725-00195-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.ruihan.org/leetcode/binary-search/notes/ | # Binary Search¶
## Binary search problem characteristics¶
1. Ordered binary search. You need to find an index or array element where ordering information is available, either explicitly (a sorted array) or implicitly (a partially sorted array or other special structure).
2. Monotone predicate. Even if no explicit ordering is available, you can still binary search when a condition comparing f(mid) to the target lets you exclude all the candidates on the left or on the right.
## Binary search problem solving techniques¶
1. Clarify whether you are trying to find the first match or the last match.
2. Clarify whether you are moving an index or moving a value (e.g. the kth smallest number in a multiplication table).
3. Use an "ordering abstraction" vless(target, f(mid)). This abstraction conceptually produces a boolean array that encodes the ordering relation between the target value and f(mid).
4. Decide which part, left or right, f(mid) should fall into. The principle for choosing the predicate is simple: never rule out a possible result (maintain the loop invariant).
5. Shrink the range according to the predicate decided in step 4.
6. Test the cases where the search range is small, such as only one or two elements.
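The steps above can be sketched as a generic "find the first true" template (a minimal illustration in Python; the predicate `pred` plays the role of the ordering abstraction, and the helper name `first_true` is ours):

```python
def first_true(lo, hi, pred):
    """Return the smallest i in [lo, hi) with pred(i) True, or hi if none.

    Requires pred to be monotone: once True, it stays True for larger i.
    """
    while lo < hi:
        mid = lo + (hi - lo) // 2
        if pred(mid):
            hi = mid       # mid may still be the answer: keep it in range
        else:
            lo = mid + 1   # mid is ruled out: drop it
    return lo

# Example: the first integer whose square reaches 17.
print(first_true(0, 10, lambda i: i * i >= 17))  # 5
```

Note how the two branches realize step 4: the `else` branch discards `mid` only when it cannot be the answer.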
## Binary search practical use case¶
1. Find whether the given target is in the array.
2. Find the position of the first value equal to the given target.
3. Find the insertion position of the given target in the array.
4. Find the position of the last value equal to the given target.
5. Find the total number of occurrences of x in a sorted array.
6. Find the last element less than the target.
7. Find the first element greater than the target.
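Most of these use cases reduce to a lower/upper bound pair. As an illustration, Python's standard `bisect` module provides `bisect_left` and `bisect_right`, which correspond to C++ `lower_bound` and `upper_bound`:

```python
import bisect

a = [1, 2, 2, 2, 5, 7]

first = bisect.bisect_left(a, 2)       # use case 2: first position equal to 2 -> 1
last = bisect.bisect_right(a, 2) - 1   # use case 4: last position equal to 2 -> 3
count = bisect.bisect_right(a, 2) - bisect.bisect_left(a, 2)  # use case 5 -> 3
below = a[bisect.bisect_left(a, 2) - 1]   # use case 6: last element < 2 -> 1
above = a[bisect.bisect_right(a, 2)]      # use case 7: first element > 2 -> 5
```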
## Binary search in C++ STL¶
1. lower_bound: returns an iterator to the first element no less than the target.
2. upper_bound: returns an iterator to the first element greater than the target.
3. equal_range: returns a pair of iterators, the first being lower_bound and the second upper_bound.
4. binary_search: returns true if an element equivalent to val is found, and false otherwise.
## Caveat of binary search implementation¶
1. Specify the range: [start, end) or [start, end]? The C++ STL uses [start, end) to denote a range, which brings many conveniences. We will stick to this convention.
2. Which while-loop condition: start < end, start <= end, start != end, or start + 1 < end?
3. The calculation of mid: mid = start + (end - start) / 2 or mid = (start + end) / 2? (The former avoids integer overflow.)
4. Prove that mid is always in the range [begin, end).
5. The "bisection": start = mid + 1, start = mid, end = mid - 1, or end = mid?
6. Where is the result: start or end? How do you make sure?
## A "universal" binary search implementation¶
Despite the above caveats, remember that there are two versions of binary search one can write, based on the range [begin, end) or [begin, end]. C++ iterators use the former, which has many benefits in reducing code complexity. Among all the binary search implementations you may have seen, the following is the most versatile; it is equivalent to the C++ STL lower_bound algorithm.
/**
 * Return an index to an element no less than x. More specifically, if there is
 * an element in the given array equal to x, it returns the index of the first
 * such element; if there is no element equal to x, it returns the index where
 * x can be inserted without changing the ordering of the elements.
 *
 * All possible return values for calling this function with array.size() == n are
 * [0, 1, ..., n - 1, n]
 *
 */
size_t binary_search(int x, vector<int>& array, size_t n)
{
    size_t begin = 0, end = n;
    while (begin != end) {
        size_t mid = begin + (end - begin) / 2;
        if (array[mid] < x) {
            begin = mid + 1;
        } else {
            end = mid;
        }
    }
    return begin;
}
1. mid is never less than begin, though they can be equal. This ensures that begin = mid + 1 in the if branch reduces the size of [begin, end) by at least 1.
   - Informal proof: if array[mid] < x, then x can only be in array[mid + 1, mid + 2, ..., n - 1]; mid + 1 is at least 1 greater than begin.
2. mid and end are never equal inside the while loop; mid < end always holds. This ensures that end = mid in the else branch reduces the size of [begin, end) by at least 1.
   - Informal proof: we have begin < end, so begin + end < 2 * end, thus (begin + end) / 2 < end; because integer division truncates down, mid = (begin + end) / 2 is always less than end.
3. begin and end never cross.
   - Informal proof: inside the while loop we start each iteration with begin < end. If the iteration executes the if branch, begin = mid + 1 advances begin at most to end but never past it. If it executes the else branch, end = mid in the worst case sets end to the minimum possible mid; since begin <= mid, the statement end = mid can never make end less than begin, at worst equal to it.
### Claims regarding this binary search routine¶
1. The range [begin, end) is used, which comply to the convention used in C++ iterator.
2. It is impossible that mid == end; if they were equal, array[mid] would be array[n], an invalid memory access.
3. We use the loop condition while (begin != end) to indicate that once the loop terminates, begin == end. By checking whether begin is a valid index into the array, we know whether x is greater than all the elements in the array. To check whether x was found in the array, simply check array[begin] == x. However, this relies on the assumption that begin < end initially; if you cannot ensure that before the loop, while (begin < end) is safer.
4. Setting begin = mid + 1 reduces the size of the remaining sub-array of interest and maintains the invariant: if x is in the array, x is in [begin, end).
5. Setting end = mid reduces the size of the remaining sub-array (mid never equals end) and maintains the same invariant. This claim is a little harder to absorb. One way to understand it: in the else branch there are two possibilities, 1) array[mid] > x, or 2) array[mid] == x. For 1), x can only be in [begin, mid), so setting end = mid maintains the loop invariant. For 2), it is a little more subtle. If array[mid] is the only element equal to x, setting end = mid appears to violate the loop invariant by excluding x from [begin, end); however, after the while loop begin == end, and begin still points at x even though [begin, end) is by then an empty range. If more elements equal to x appear before or after array[mid], the loop always ends up finding the first x in the array.
6. If we used end = mid + 1 instead, an infinite loop would occur. Try the test case [1, 3, 5, 7] with x = 0: the loop gets stuck at begin = 0, mid = 1, end = 2.
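To check these claims concretely, here is a direct Python port of the routine above, exercised on the edge cases just mentioned (target below, inside, and above the array):

```python
def lower_bound(array, x):
    # Direct port of the [begin, end) routine above.
    begin, end = 0, len(array)
    while begin != end:
        mid = begin + (end - begin) // 2
        if array[mid] < x:
            begin = mid + 1
        else:
            end = mid
    return begin

print(lower_bound([1, 3, 5, 7], 0))  # 0: x smaller than every element
print(lower_bound([1, 3, 5, 7], 5))  # 2: index of the first 5
print(lower_bound([1, 3, 5, 7], 8))  # 4: past-the-end, x not present
```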
## Category 1 Binary search on sorted arrays¶
To solve this type of binary search problem, focus on the following:
1. Come up with test cases to verify your solution.
2. Be able to decide which side to drop in each iteration.
3. Be extremely careful about off-by-one bugs. (1. Reasoning: can the mid value be the solution or not? 2. Exercise test cases, especially short ones.)
### 34. Search for a Range¶
class Solution {
public:
vector<int> searchRange(vector<int>& nums, int target) {
vector<int> res(2, -1);
int low = lower_bound(nums.begin(), nums.end(), target) - nums.begin();
int high = upper_bound(nums.begin(), nums.end(), target) - nums.begin();
if (low == high)
return res;
return {low, high - 1};
}
};
class Solution {
public:
vector<int> searchRange(vector<int>& nums, int target) {
vector<int> res(2, -1);
int low = lower_bound(nums, target);
//int high = lower_bound(nums, target + 1); // also works.
int high = upper_bound(nums, target);
if (low == high) {
return res;
}
return {low, high - 1};
}
int lower_bound(vector<int>& nums, int target) {
if (nums.size() == 0) return 0;
int l = 0, r = nums.size();
while (l < r) {
int m = l + (r - l) / 2;
if (nums[m] < target) {
l = m + 1;
} else {
r = m;
}
}
return l;
}
int upper_bound(vector<int>& nums, int target) {
if (nums.size() == 0) return 0;
int l = 0, r = nums.size();
while (l < r) {
int m = l + (r - l) / 2;
if (nums[m] <= target) {
l = m + 1;
} else {
r = m;
}
}
return l;
}
};
class Solution(object):
    def searchRange(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: List[int]
        """
        if len(nums) == 0:
            return [-1, -1]
        begin = 0
        end = len(nums)
        while begin != end:
            mid = begin + (end - begin) // 2
            if nums[mid] < target:
                begin = mid + 1
            else:
                end = mid
        if begin == len(nums):
            return [-1, -1]
        if nums[begin] == target:
            lower = begin
        else:
            lower = -1
        begin = 0
        end = len(nums)
        while begin != end:
            mid = begin + (end - begin) // 2
            if nums[mid] <= target:
                begin = mid + 1
            else:
                end = mid
        if nums[begin - 1] == target:
            upper = begin - 1
        else:
            upper = -1
        return [lower, upper]
### 35. Search Insert Position¶
class Solution {
public:
int searchInsert(vector<int>& nums, int target) {
if (nums.size() == 0) return 0;
int l = 0, r = nums.size();
while (l < r) {
int m = l + (r - l) / 2;
if (nums[m] < target) {
l = m + 1;
} else {
r = m;
}
}
return l;
}
};
### 33. Search in Rotated Sorted Array¶
How to locate the sorted half?
1. If the left half is sorted, check whether the target t could lie in it; else if the right half is sorted, check whether t could lie in it; otherwise the mid element equals the left or right element, so remove one of them.
2. Although there are no duplicates, consider short inputs like [3, 1]: searching for 1 hits the equal case.
/**
t = 1 t = 3 t = 5 t = 4 t = -1
5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4
5 1 3 4 5 1 3 4 5 1
1 3 5 4 1 <--need check
*/
class Solution {
public:
int search(vector<int>& A, int t) {
if (A.empty()) return -1;
int l = 0, r = A.size() - 1;
while (l < r) {
int m = l + (r - l) / 2;
if (A[m] == t) return m;
if (A[l] < A[m]) { // left is sorted
if (A[l] <= t && t < A[m]) {
r = m - 1;
} else {
l = m + 1;
}
} else if (A[m] < A[r]) { // right is sorted
if (A[m] < t && t <= A[r]) {
l = m + 1;
} else {
r = m - 1;
}
} else { // if equal, remove one. case: [3, 1], 1
if (A[l] == A[m]) l++;
if (A[m] == A[r]) r--;
}
}
return A[l] == t ? l : -1;
}
};
class Solution(object):
    def search(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: int
        """
        if len(nums) == 0:
            return -1
        left = 0
        right = len(nums) - 1
        while left < right:
            mid = left + (right - left) // 2
            if nums[mid] == target:
                return mid
            if nums[left] < nums[mid]:
                if nums[left] <= target and target < nums[mid]:
                    right = mid - 1
                else:
                    left = mid + 1
            elif nums[mid] < nums[right]:
                if nums[mid] < target and target <= nums[right]:
                    left = mid + 1
                else:
                    right = mid - 1
            else:
                if nums[left] == nums[mid]:
                    left += 1
                if nums[right] == nums[mid]:
                    right -= 1
        if nums[left] == target:
            return left
        return -1
### 81. Search in Rotated Sorted Array II¶
How to locate the sorted half?
class Solution {
public:
bool search(vector<int>& A, int t) {
if (A.empty())
return false;
int l = 0, r = A.size() - 1;
while (l < r) {
int m = l + (r - l) / 2;
if (A[m] == t) return true;
if (A[l] < A[m]) {
if (A[l] <= t && t < A[m]) {
r = m - 1;
} else {
l = m + 1;
}
} else if (A[m] < A[r]) {
if (A[m] < t && t <= A[r]) {
l = m + 1;
} else {
r = m - 1;
}
} else {
if (A[l] == A[m]) l++;
if (A[m] == A[r]) r--;
}
}
return A[l] == t? true : false;
}
};
class Solution(object):
    def search(self, nums, target):
        """
        :type nums: List[int]
        :type target: int
        :rtype: bool
        """
        if len(nums) == 0:
            return False
        left = 0
        right = len(nums) - 1
        while left < right:
            mid = left + (right - left) // 2
            if nums[mid] == target:
                return True
            if nums[left] < nums[mid]:
                if nums[left] <= target and target < nums[mid]:
                    right = mid - 1
                else:
                    left = mid + 1
            elif nums[mid] < nums[right]:
                if nums[mid] < target and target <= nums[right]:
                    left = mid + 1
                else:
                    right = mid - 1
            else:
                if nums[left] == nums[mid]:
                    left += 1
                if nums[right] == nums[mid]:
                    right -= 1
        if nums[left] == target:
            return True
        return False
### 153. Find Minimum in Rotated Sorted Array¶
Try to locate the valley which contains the min.
1. Notice when A[0] < A[n - 1], return A[0].
2. Draw a monotonic curve, then split it into two halves and swap their order; this picture helps you write the code.
class Solution {
public:
int findMin(vector<int>& A) {
int l = 0, r = A.size() - 1;
while (l < r) {
if (A[l] < A[r]) // serve as base case.
return A[l];
int m = l + (r - l) / 2;
if (A[m] > A[r]) { // also works. looking for not sorted half
l = m + 1;
} else if (A[m] < A[r]) { // don't really need if statement
r = m;
}
}
return A[l];
}
};
class Solution(object):
    def findMin(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        if len(nums) == 0:
            return -1
        left = 0
        right = len(nums) - 1
        while left < right:
            if nums[left] < nums[right]:  # serves as a base case
                return nums[left]
            mid = left + (right - left) // 2
            if nums[mid] > nums[right]:
                left = mid + 1
            else:
                right = mid
        return nums[left]
### 154. Find Minimum in Rotated Sorted Array II¶
Locate the valley which contains the min.
1. Since duplicates exist, we cannot rely on comparing A[l] and A[r] as a shortcut.
2. Here we deal with duplicates by decreasing r one step at a time.
class Solution {
public:
int findMin(vector<int>& A) {
int l = 0, r = A.size() - 1;
while (l < r) {
int m = l + (r - l) / 2;
if (A[m] > A[r]) {
l = m + 1;
} else if (A[m] < A[r]) {
r = m;
} else {
r--;
}
}
return A[l];
}
};
class Solution(object):
    def findMin(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        if len(nums) == 0:
            return -1
        left = 0
        right = len(nums) - 1
        while left < right:
            mid = left + (right - left) // 2
            if nums[mid] > nums[right]:
                left = mid + 1
            elif nums[mid] < nums[right]:
                right = mid
            else:
                right -= 1
        return nums[left]
### 162. Find Peak Element¶
Use binary search
1. Use the neighboring relation between A[m] and A[m + 1] to determine which side a peak must occur on, then eliminate the other side.
class Solution {
public:
int findPeakElement(vector<int>& A) {
int l = 0, r = A.size() - 1;
while (l < r) {
int m = l + (r - l) / 2;
if (A[m] < A[m + 1]) {
l = m + 1;
} else if (A[m] > A[m + 1]) {
r = m;
}
}
return l;
}
};
### 278. First Bad Version¶
Binary search
1. Notice how this can be related to the ordering abstraction.
// Forward declaration of isBadVersion API.
bool isBadVersion(int version);
class Solution {
public:
    int firstBadVersion(int n) {
        int l = 1, r = n;
        while (l < r) {
            int m = l + (r - l) / 2;
            if (!isBadVersion(m)) {
                l = m + 1;
            } else {
                r = m;
            }
        }
        return l;
    }
};
### 74. Search a 2D Matrix¶
Binary search
1. We can view the matrix as one big sorted array and binary search for the target.
2. Make sure to test your finished routine on edge cases (e.g. the initial value of end).
class Solution {
public:
bool searchMatrix(vector<vector<int>>& matrix, int target) {
int m = matrix.size();
int n = m ? matrix[0].size() : 0;
if (m == 0 || n == 0) return false;
int start = 0, end = m * n - 1;
while (start < end) {
int mid = start + (end - start) / 2;
int i = mid / n, j = mid % n;
if (matrix[i][j] < target) {
start = mid + 1;
} else {
end = mid;
}
}
return matrix[start / n][start % n] == target ? true : false;
}
};
### 240. Search a 2D Matrix II¶
Binary search to exclude whole column or whole row
1. The key is deciding where to start the comparison. If you start from the bottom-left or top-right corner, the solution becomes obvious.
2. Notice the idea comes from binary search: when ordering information is available, we want to exclude as many impossible values as we can.
class Solution {
public:
bool searchMatrix(vector<vector<int>>& matrix, int target) {
int m = matrix.size();
int n = m ? matrix[0].size() : 0;
if (m == 0 || n == 0) return false;
int x = m - 1, y = 0;
while (x >= 0 && y < n) {
if (matrix[x][y] == target) {
return true;
}
if (matrix[x][y] < target) {
y++;
} else {
x--;
}
}
return false;
}
};
### 302. Smallest Rectangle Enclosing Black Pixels¶
class Solution {
public:
int minArea(vector<vector<char>>& image, int x, int y) {
int m = image.size();
int n = m ? image[0].size() : 0;
int top = m, bottom = 0, left = n, right = 0;
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++) {
if (image[i][j] == '1') {
top = min(top, i);
bottom = max(bottom, i + 1);
left = min(left, j);
right = max(right, j + 1);
}
}
}
return (right - left) * (bottom - top);
}
};
Binary search
1. Notice the binary search idea is related to the problems Smallest Good Base and Wood Cut.
2. The basic idea is to binary search for the boundary of the 1s in each of the 4 directions. First make sure you can search one boundary; the others are similar. For example, to find the first row that contains a 1, scan a candidate row to see whether it has a 1. Because we are searching top-down for the first row that has a 1, bisecting on whether each row contains a 1 tells us whether to discard the upper half or the lower half.
class Solution {
public:
int minArea(vector<vector<char>>& image, int x, int y) {
int m = image.size();
int n = m ? image[0].size() : 0;
int top = bsearch_byrows(image, 0, x, 0, n, true); // search top
int bottom = bsearch_byrows(image, x + 1, m, 0, n, false);
int left = bsearch_bycols(image, 0, y, top, bottom, true);
int right = bsearch_bycols(image, y + 1, n, top, bottom, false);
return (bottom - top) * (right - left);
}
int bsearch_byrows(vector<vector<char>>& image, int x, int y,
int left, int right, bool white2black) {
while (x < y) {
int m = (x + y) / 2;
int k = left;
while (k < right && image[m][k] == '0') k++;
if (k < right == white2black) { // mth row have '1'
y = m;
} else {
x = m + 1;
}
}
return x;
}
int bsearch_bycols(vector<vector<char>>& image, int x, int y,
int top, int bottom, bool white2black) {
while (x < y) {
int m = (x + y) / 2;
int k = top;
while (k < bottom && image[k][m] == '0') k++;
if (k < bottom == white2black) { // mth column have '1'
y = m;
} else {
x = m + 1;
}
}
return x;
}
};
class Solution(object):
    def minArea(self, image, x, y):
        """
        :type image: List[List[str]]
        :type x: int
        :type y: int
        :rtype: int
        """
        m = len(image)
        n = 0
        if m != 0:
            n = len(image[0])
        top = self.bsearch_row(image, 0, x, 0, n, True)
        bottom = self.bsearch_row(image, x + 1, m, 0, n, False)
        left = self.bsearch_col(image, 0, y, top, bottom, True)
        right = self.bsearch_col(image, y + 1, n, top, bottom, False)
        return (bottom - top) * (right - left)

    def bsearch_row(self, image, start, end, lower, upper, white2black):
        while start < end:
            m = (start + end) // 2
            k = lower
            while k < upper and image[m][k] == '0':
                k += 1
            if (k < upper) == white2black:
                end = m
            else:
                start = m + 1
        return start

    def bsearch_col(self, image, start, end, lower, upper, white2black):
        while start < end:
            m = (start + end) // 2
            k = lower
            while k < upper and image[k][m] == '0':
                k += 1
            if (k < upper) == white2black:
                end = m
            else:
                start = m + 1
        return start
class Solution {
public:
int minArea(vector<vector<char>>& image, int x, int y) {
int m = image.size();
int n = m ? image[0].size() : 0;
int top = m, bottom = 0, left = n, right = 0;
int xx[4] = {-1, 0, 1, 0};
int yy[4] = {0, 1, 0, -1};
queue<pair<int, int>> q;
q.push({x, y});
image[x][y] = '0';
while (!q.empty()) {
pair<int, int> t = q.front(); q.pop();
top = min(top, t.first);
bottom = max(bottom, t.first + 1);
left = min(left, t.second);
right = max(right, t.second + 1);
for (int k = 0; k < 4; ++k) {
int a = t.first + xx[k];
int b = t.second + yy[k];
if (a >= 0 && a < m && b >= 0 && b < n && image[a][b] == '1') {
q.push({a, b});
image[a][b] = '0';
}
}
}
return (right - left) * (bottom - top);
}
};
from collections import deque

class Solution(object):
    def minArea(self, image, x, y):
        """
        :type image: List[List[str]]
        :type x: int
        :type y: int
        :rtype: int
        """
        m = len(image)
        n = 0
        if m != 0:
            n = len(image[0])
        xx = [-1, 0, 1, 0]
        yy = [0, -1, 0, 1]
        top = m
        bottom = 0
        left = n
        right = 0
        q = deque()
        q.append([x, y])
        image[x][y] = '0'
        while len(q) > 0:
            t = q.popleft()
            top = min(top, t[0])
            bottom = max(bottom, t[0] + 1)
            left = min(left, t[1])
            right = max(right, t[1] + 1)
            for k in range(4):
                a = t[0] + xx[k]
                b = t[1] + yy[k]
                if a >= 0 and a < m and b >= 0 and b < n and image[a][b] == '1':
                    q.append([a, b])
                    image[a][b] = '0'
        return (right - left) * (bottom - top)
class Solution {
private:
int m, n;
int top, bottom, left, right;
public:
int minArea(vector<vector<char>>& image, int x, int y) {
m = image.size();
n = m ? image[0].size() : 0;
top = m, bottom = 0, left = n, right = 0;
dfs_helper(image, x, y);
return (right - left) * (bottom - top);
}
void dfs_helper(vector<vector<char>>& image, int x, int y) {
if (x < 0 || x >= m || y < 0 || y >= n || image[x][y] == '0') {
return;
}
image[x][y] = '0';
top = min(top, x);
bottom = max(bottom, x + 1);
left = min(left, y);
right = max(right, y + 1);
dfs_helper(image, x - 1, y);
dfs_helper(image, x, y + 1);
dfs_helper(image, x + 1, y);
dfs_helper(image, x, y - 1);
}
};
### 363. Max Sum of Rectangle No Larger Than K¶
Iterate over the width of the matrix, using prefix sums and set lower_bound.
1. Following the problem Max Sum of Subarray No Larger Than K, we enumerate the left and right column boundaries of the sub-matrix and sum the row elements between them into an array of length m, where m is the number of rows of the matrix. Then apply the 1-D method.
class Solution {
public:
int maxSumSubmatrix(vector<vector<int>>& matrix, int k) {
if (matrix.empty()) return 0;
int m = matrix.size();
int n = m ? matrix[0].size() : 0;
int res = INT_MIN;
for (int l = 0; l < n; ++l) {
vector<int> sums(m, 0);
for (int r = l; r < n; ++r) {
for (int i = 0; i < m; ++i) {
sums[i] += matrix[i][r];
}
set<int> preSumSet;
preSumSet.insert(0);
int preSum = 0, curMax = INT_MIN;
for (int sum : sums) {
preSum += sum;
set<int>::iterator it = preSumSet.lower_bound(preSum - k);
if (it != preSumSet.end()) {
curMax = max(curMax, preSum - *it);
}
preSumSet.insert(preSum);
}
res = max(res, curMax);
}
}
return res;
}
};
Merge sort
1. The idea is similar to solution 1. Instead of calculating preSum on the fly, we finish the calculation and pass it to a mergeSort routine.
2. mergeSort is used here to find A[j] - A[i] <= k efficiently, in O(n log n).
class Solution {
public:
int maxSumSubmatrix(vector<vector<int>>& matrix, int k) {
int m = matrix.size();
int n = m ? matrix[0].size() : 0;
int res = INT_MIN;
vector<long long> sums(m + 1, 0);
for (int l = 0; l < n; ++l) {
vector<long long>sumInRow(m, 0);
for (int r = l; r < n; ++r) {
for (int i = 0; i < m; ++i) {
sumInRow[i] += matrix[i][r];
sums[i + 1] = sums[i] + sumInRow[i];
}
res = max(res, mergeSort(sums, 0, m + 1, k));
if (res == k) return k;
}
}
return res;
}
int mergeSort(vector<long long>& sums, int start, int end, int k) {
if (end == start + 1) return INT_MIN;
int mid = start + (end - start) / 2;
int res = mergeSort(sums, start, mid, k);
if (res == k) return k;
res = max(res, mergeSort(sums, mid, end, k));
if (res == k) return k;
long long cache[end - start];
int j = mid, c = 0, t = mid;
for (int i = start; i < mid; ++i) {
while (j < end && sums[j] - sums[i] <= k) ++j; // search first time sums[j] - sums[i] > k
if (j - 1 >= mid) { // sums[j - 1] - sums[i] <= k, make sure j - 1 is still in right side
res = max(res, (int)(sums[j - 1] - sums[i]));
if (res == k) return k;
}
while (t < end && sums[t] < sums[i]) {
cache[c++] = sums[t++];
}
cache[c++] = sums[i];
}
for (int i = start; i < t; ++i) {
sums[i] = cache[i - start];
}
return res;
}
};
### 540. Single Element in a Sorted Array¶
class Solution:
    def singleNonDuplicate(self, nums: List[int]) -> int:
        start = 0
        end = len(nums) - 1
        while start < end:
            mid = start + (end - start) // 2
            if mid % 2 == 0:
                if nums[mid] == nums[mid + 1]:
                    start = mid + 2
                else:
                    end = mid
            else:
                if nums[mid] == nums[mid - 1]:
                    start = mid + 1
                else:
                    end = mid
        return nums[start]
## Category 2 Using ordering abstraction¶
### 69. Sqrt(x)¶
Solution 1 using ordering abstraction definition
To find the square root of an integer x using binary search, we first determine the range [left, right] that may contain the target value sqrt(x). A safe range to search is [0, x/2 + 1].
Then we should clarify whether this binary search is the "find the first one" type or the "find the last one" type. Basically, we want to determine an ordering abstraction f(target, g(i)) that produces a boolean array whose true part and false part are separated. Here target = sqrt(x) and g(i) = i. We define f(sqrt(x), i) = true when i <= sqrt(x) and f(sqrt(x), i) = false when i > sqrt(x). The intuition: we are looking for the last integer whose square is no greater than x. Why not the other way around? Because searching for the first integer whose square is greater than x makes the ordering abstraction harder to define. Of course, we could search for the first integer whose square is greater than x and return the integer just before it, but that solution is a bit more complex and counterintuitive. We prefer the first definition of the ordering abstraction, although a workable solution following the second one is also given below.
For example: to solve the sqrt(8) and sqrt(9) using our definition:
k, i = 0 1 2 3 4 5 6 7 8 9 10 n = 11
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
f(sqrt(8), k) = [T T T F F]
f(sqrt(9), k) = [T T T T F]
The binary search routine will be:
class Solution {
public:
int mySqrt(int x) {
int l = 0, r = x / 2 + 1;
while (l < r) {
// int m = l + (r - l) / 2; // will deadloop for 4, why?
int m = r - (r - l) / 2;
if (m <= x / m) {
l = m;
} else {
r = m - 1;
}
}
return l;
}
};
Solution 2 using the alternative ordering abstraction definition
Second ordering abstraction (find first value whose square is greater than x)
k, i = 0 1 2 3 4 5 6 7 8 9 10 n = 11
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
f(sqrt(8), k) = [F F F T T]
f(sqrt(9), k) = [F F F F T]
class Solution {
public:
int mySqrt(int x) {
if (x == 0) return 0; // must handle x == 0, otherwise division by zero below
int l = 0, r = x / 2 + 2; // r = x / 2 + 1 would not work for x = 1; we need one past the last
while (l < r) {
//int m = r - (r - l) / 2; // will dead loop for 4
int m = l + (r - l) / 2;
if (m > x / m) {
r = m;
} else {
l = m + 1;
}
}
return l - 1;
}
};
### 367. Valid Perfect Square¶
Solution 1 Binary search using ordering abstraction
1. Notice you have to run tests for cases from 1 to 5.
class Solution {
public:
bool isPerfectSquare(int num) {
if (num == 1) return true;
int begin = 1, end = num / 2;
while (begin < end) {
//long long mid = begin + (end - begin) / 2; // not working, deadloop for 5
long long mid = end - (end - begin) / 2;
if (mid * mid == num)
return true;
if (mid * mid < num) {
begin = mid;
} else {
end = mid - 1;
}
}
return false;
}
};
class Solution(object):
    def isPerfectSquare(self, num):
        """
        :type num: int
        :rtype: bool
        """
        if num == 1:
            return True
        lower = 1
        upper = num // 2
        while lower < upper:
            mid = upper - (upper - lower) // 2
            if mid * mid == num:
                return True
            if mid * mid < num:
                lower = mid
            else:
                upper = mid - 1
        return False
### 441. Arranging Coins¶
Solution 1 Binary search
• Notice the possibility of integer overflow; we use long to deal with it.
class Solution {
public:
int arrangeCoins(int n) {
if (n < 2) {
return n;
}
long l = 0;
long r = n;
while (l < r) {
long m = l + (r - l) / 2;
long t = m * (m + 1) / 2;
if (t == n)
return m;
if (t < n) {
l = m + 1;
} else {
r = m;
}
}
return l - 1;
}
};
### 633. Sum of Square Numbers¶
Solution 1 Binary search
1. For each a, derive b = c - a*a; then binary search for an integer whose square equals b.
class Solution {
public:
bool judgeSquareSum(int c) {
if (c == 0) return true;
for (long long a = 0; a * a <= c; ++a) {
int b = c - (int) (a * a);
int l = 0, r = b / 2 + 1;
while (l < r) {
long long m = r - (r - l) / 2;
if (m * m == b)
return true;
if (m * m < b) {
l = m;
} else {
r = m - 1;
}
}
}
return false;
}
};
Solution 2 Two pointers
1. Notice this square sum can be found efficiently using two pointers.
class Solution {
public:
bool judgeSquareSum(int c) {
int a = 0, b = sqrt(c);
while(a <= b){
int sum = a * a + b * b;
if(sum < c) a++;
else if(sum > c) b--;
else return true;
}
return false;
}
};
Solution 3 Using a set
1. Keep inserting the complement c - i*i into a set while checking whether i*i has already been inserted.
class Solution {
public:
bool judgeSquareSum(int c) {
set<int> s;
for (int i = 0; i <= sqrt(c); ++i) {
s.insert(c - i*i);
if (s.count(i*i)) return true;
}
return false;
}
};
### 658. Find K Closest Elements¶
Solution 1 Binary search
1. Compare to problem 475. Heaters
2. Our search target is to find the starting index of the subarray of length K.
class Solution {
public:
vector<int> findClosestElements(vector<int>& arr, int k, int x) {
int start = 0, end = arr.size() - k;
while (start < end) {
int mid = start + (end - start) / 2;
// looking for the first "mid" whose window [mid, mid + k) is no farther from x than the next window
if (x - arr[mid] > arr[mid + k] - x) {
start = mid + 1;
} else {
end = mid;
}
}
return vector<int>(arr.begin() + start, arr.begin() + start + k);
}
};
Solution 2 Binary search and two pointers
• First binary search to locate x, then expand left and right to collect the k closest elements.
• Notice the i < 0 check in the condition; it is important, otherwise the array index can go out of bounds.
class Solution {
public:
vector<int> findClosestElements(vector<int>& arr, int k, int x) {
int index = lower_bound(arr.begin(), arr.end(), x) - arr.begin();
int i = index - 1, j = index;
while (k--) {
if (i < 0 || j < arr.size() && abs(arr[j] - x) < abs(arr[i] - x)) {
j++;
} else {
i--;
}
}
return vector<int>(arr.begin() + i + 1, arr.begin() + j);
}
};
### 611. Valid Triangle Number¶
• The main idea comes from the triangle inequality: a valid triple must fulfill a + b > c, a + c > b, and b + c > a. Once we sort the array, we no longer have to check all three relations; it suffices to check A[i] + A[j] > A[k], in which i < j < k.
• Because the array is sorted, we can fix i and j and binary search for the first k in the range A[j + 1] ~ A[n - 1] with A[k] >= A[i] + A[j], using our classic binary search template.
class Solution {
public:
int triangleNumber(vector<int>& nums) {
int n = nums.size();
int res = 0;
sort(nums.begin(), nums.end());
for (int i = 0; i < n - 2; ++i) {
for (int j = i + 1; j < n - 1; ++j) {
int l = j + 1, r = n; // range of all possible k, notice l start with j + 1
int t = nums[i] + nums[j];
while (l < r) {
int m = l + (r - l) / 2;
if (nums[m] < t) {
l = m + 1;
} else {
r = m;
}
}
res += l - j - 1; // notice the count start from j + 1 to l - 1.
}
}
return res;
}
};
## Category 3 Using ordering abstraction (monotonicity)¶
### 287. Find the Duplicate Number¶
Solution 1 Binary search
• The problem asks for better than O(n^2), so check whether binary search will work.
• If you count how many values are <= the mid element of [1, ..., n-1], it gives enough information to discard part of the range.
• Here you should distinguish what is split from what is searched: we bisect the value sequence [1, ..., n-1], not the given array. A simple proof of why this works is the following.
• If the count of elements <= mid in the array is no greater than mid, the duplicate is in the higher end; if the count is greater, the duplicate is in the lower end of the sequence [1, ..., n-1].
class Solution {
public:
int findDuplicate(vector<int>& nums) {
int begin = 1, end = nums.size() - 1;
while (begin < end) {
int mid = begin + (end - begin) / 2;
int count = 0;
for (int a : nums) {
if (a <= mid) ++count;
}
if (count <= mid) // "=" for [1,2,2]
begin = mid + 1;
else
end = mid;
}
return begin;
}
};
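For reference, a direct Python port of the counting search above (the function name `find_duplicate` is ours):

```python
def find_duplicate(nums):
    # Bisect the value range [1, n - 1], not the array itself:
    # count how many elements are <= mid to pick the half holding the duplicate.
    begin, end = 1, len(nums) - 1
    while begin < end:
        mid = begin + (end - begin) // 2
        count = sum(1 for a in nums if a <= mid)
        if count <= mid:   # "=" matters for cases like [1, 2, 2]
            begin = mid + 1
        else:
            end = mid
    return begin

print(find_duplicate([3, 1, 3, 4, 2]))  # 3
```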
Solution 2 Tortoise and hare algorithm
• This problem is very similar to finding a cycle in a linked list. Generally, if you repeatedly apply A[A[i]], the output shows a periodic pattern; in fact you can imagine a rho-shaped sequence.
• Imagine a function f(i) = A[i] mapping from 1, 2, ..., n to 1, 2, ..., n. If you keep traversing A[i], you eventually cycle through the same sequence of elements again and again, obtaining a rho-shaped sequence, like a cycle in a linked list. The shape is a rho rather than a pure circle because at least one element is never returned to once you leave it.
• Find Duplicate
class Solution {
public:
int findDuplicate(vector<int>& nums) {
int n = nums.size();
if (n == 0) return 0;
int slow = 0, fast = 0, find = 0;
while(slow != fast || (slow == 0 && fast == 0)) {
slow = nums[slow];
fast = nums[nums[fast]];
}
while (slow != find) {
slow = nums[slow];
find = nums[find];
}
return find;
}
};
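The same cycle-finding idea reads a bit more naturally in Python with an explicit do-while-style loop (helper name is mine):

```python
def find_duplicate_floyd(nums):
    # Treat i -> nums[i] as a linked list; the duplicate is the cycle entry.
    slow = fast = 0
    while True:
        slow = nums[slow]
        fast = nums[nums[fast]]
        if slow == fast:
            break
    # Reset one pointer; both advance one step at a time until they meet
    # at the cycle entry, which is the duplicated value.
    find = 0
    while slow != find:
        slow = nums[slow]
        find = nums[find]
    return find
```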
### 374. Guess Number Higher or Lower¶
// Forward declaration of guess API.
// @return -1 if my number is lower, 1 if my number is higher, otherwise return 0
int guess(int num);
class Solution {
public:
int guessNumber(int n) {
int start = 1, end = n;
while(start < end) {
int mid = start + (end - start) / 2;
if (guess(mid) == 0)
return mid;
if (guess(mid) == 1) {
start = mid + 1;
} else {
end = mid;
}
}
return start;
}
};
# The guess API is already defined for you.
# @return -1 if my number is lower, 1 if my number is higher, otherwise return 0
# def guess(num):
class Solution(object):
def guessNumber(self, n):
"""
:type n: int
:rtype: int
"""
begin = 0
end = n
while begin != end:
mid = begin + (end - begin) / 2
if guess(mid) == 0:
return mid
if guess(mid) == 1:
begin = mid + 1
else:
end = mid
return begin
### 475. Heaters¶
Sort then brute force
1. The answer is the maximum, over all houses, of the smallest house-heater distance.
2. Think through which distance you want to keep at each step: the min or the max.
class Solution {
public:
int findRadius(vector<int>& houses, vector<int>& heaters) {
int m = houses.size();
int n = heaters.size();
sort(houses.begin(), houses.end());
sort(heaters.begin(), heaters.end());
int res = INT_MIN;
int i, j = 0;
for (i = 0; i < m; ++i) {
while (j < n - 1 && abs(heaters[j + 1] - houses[i]) <= abs(heaters[j] - houses[i])) {
j++;
}
res = max(res, abs(houses[i] - heaters[j]));
}
return res;
}
};
class Solution(object):
def findRadius(self, houses, heaters):
"""
:type houses: List[int]
:type heaters: List[int]
:rtype: int
"""
m = len(houses)
n = len(heaters)
houses.sort()
heaters.sort()
i = 0
j = 0
res = 0
for i in range(m):
while j < n - 1 and abs(heaters[j + 1] - houses[i]) <= abs(heaters[j] - houses[i]):
j += 1
res = max(res, abs(houses[i] - heaters[j]))
return res
Binary search the neighboring heaters get max of min
1. Notice we cannot sort houses and then search each heater's position. A special case: houses [1, 2, 3] with heater [2] would give 0, whereas the answer is 1.
class Solution {
public:
int findRadius(vector<int>& houses, vector<int>& heaters) {
int n = heaters.size();
sort(heaters.begin(), heaters.end());
int res = INT_MIN;
for (int house : houses) {
int start = 0, end = n;
while (start < end) {
int mid = start + (end - start) / 2;
if (heaters[mid] < house)
start = mid + 1;
else
end = mid;
}
int dist1 = (start == n) ? INT_MAX : heaters[start] - house;
int dist2 = (start == 0) ? INT_MAX : house - heaters[start - 1];
res = max(res, min(dist1, dist2));
}
return res;
}
};
class Solution(object):
def findRadius(self, houses, heaters):
"""
:type houses: List[int]
:type heaters: List[int]
:rtype: int
"""
m = len(houses)
n = len(heaters)
heaters.sort()
i = 0
j = 0
res = float('-inf')
for i in range(m):
start = 0
end = n
while start != end:
mid = start + (end - start) / 2
if heaters[mid] < houses[i]:
start = mid + 1
else:
end = mid
dist1 = float('inf')
dist2 = float('inf')
if start != n:
dist1 = heaters[start] - houses[i]
if start != 0:
dist2 = houses[i] - heaters[start - 1]
res = max(res, min(dist1, dist2))
return res
### 1011. Capacity To Ship Packages Within D Days¶
Binary solution
Same as the 410. Split Array Largest Sum
class Solution {
public:
int shipWithinDays(vector<int>& weights, int D) {
int n = weights.size();
if (n < D) return 0;
int l = *max_element(weights.begin(), weights.end());
int h = accumulate(weights.begin(), weights.end(), 0);
while (l < h) {
int m = (l + h) / 2;
int c = 1; // need cut D-1 times
int sum = 0;
for (int w: weights) {
if (sum + w > m) {
sum = 0;
c++;
}
sum += w;
}
if (c > D) {
l = m + 1;
} else {
h = m;
}
}
return l;
}
};
### 875. Koko Eating Bananas¶
Binary search
Using the monotonic guessing approach, notice the trick in counting whether the given guess value is possible.
class Solution {
public:
int minEatingSpeed(vector<int>& piles, int H) {
int N = piles.size();
if (N > H)
return 0;
int l = 1;
int r = 1000000000; // piles[i] <= 1e9; a larger bound like 10e9 would overflow the int mid
while (l < r) {
int k = l + (r - l) / 2;
int hour = 0;
for (int p : piles) {
if (k >= p) {
hour++;
} else {
hour += (p + k - 1) / k;
}
}
if (hour > H) { // k is too small: eating takes more than H hours
l = k + 1;
} else {
r = k;
}
}
return l;
}
};
class Solution {
public:
int minEatingSpeed(vector<int>& piles, int H) {
int N = piles.size();
if (N > H)
return 0;
int l = 1;
int r = 1000000000; // piles[i] <= 1e9; a larger bound like 10e9 would overflow the int mid
while (l < r) {
int k = l + (r - l) / 2;
int hour = 0;
for (int p : piles) {
hour += (p + k - 1) / k;
}
if (hour > H) { // k is too small: eating takes more than H hours
l = k + 1;
} else {
r = k;
}
}
return l;
}
};
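A Python sketch of the second variant, using integer ceiling division to avoid floats (function name is mine; the bound max(piles) replaces the 1e9 constant):

```python
def min_eating_speed(piles, h):
    # Bisect the eating speed k; feasibility is monotone in k.
    lo, hi = 1, max(piles)
    while lo < hi:
        k = lo + (hi - lo) // 2
        hours = sum((p + k - 1) // k for p in piles)  # ceil(p / k) without floats
        if hours > h:    # k is too small: eating takes longer than h hours
            lo = k + 1
        else:
            hi = k
    return lo
```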
### 1539. Kth Missing Positive Number¶
Naive Solution
• Use multiple variables and keep the loop invariant.
Binary Search
• Observe the relation: the number of missing positives before A[m] is A[m] - 1 - m, because the index m and the value A[m] together determine the count of missing positives.
• The bisection condition can be read as the boolean predicate: "is the number of missing positives before A[m] no less than k?"
class Solution {
public:
int findKthPositive(vector<int>& arr, int k) {
if (arr.empty()) return k;
int missing_cnt = arr[0] - 1;
if (missing_cnt >= k) return k;
int prev = arr[0];
for (int i = 1; i < arr.size(); ++i ) {
if (!(arr[i] == prev || arr[i] == prev + 1)) {
int skip = arr[i] - prev - 1;
if (missing_cnt + skip >= k) {
return prev + k - missing_cnt;
}
missing_cnt +=skip;
}
prev = arr[i];
}
return (prev + k - missing_cnt);
}
};
class Solution {
public:
int findKthPositive(vector<int>& arr, int k) {
int l = 0, r = arr.size();
while (l < r) {
int m = l + (r - l) / 2;
if (arr[m] - 1 - m < k) {
l = m + 1;
} else {
r = m;
}
}
return l + k;
}
};
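The binary-search variant fits in a few lines of Python; the entire trick is the count of missing positives before arr[m], which is arr[m] - 1 - m (function name is mine):

```python
def find_kth_positive(arr, k):
    # Find the first index m whose prefix is missing at least k positives.
    lo, hi = 0, len(arr)
    while lo < hi:
        m = lo + (hi - lo) // 2
        if arr[m] - 1 - m < k:   # fewer than k positives missing before arr[m]
            lo = m + 1
        else:
            hi = m
    return lo + k   # k missing numbers plus lo array elements precede the answer
```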
### 1482. Minimum Number of Days to Make m Bouquets¶
Solution Binary Search
• Use a subroutine to compute whether the constraint can be met.
• The search asks whether m bouquets are possible, matching the "no less than" binary search pattern, so we use if (cnt_m < m) and the return value is l.
class Solution {
public:
int minDays(vector<int>& bloomDay, int m, int k) {
int l = *min_element(bloomDay.begin(), bloomDay.end());
int r = *max_element(bloomDay.begin(), bloomDay.end());
if ((long long)bloomDay.size() < (long long)m * k) return -1; // avoid int overflow in m * k
while (l < r) {
int mid = l + (r - l) / 2;
int cnt_k = 0;
int cnt_m = 0;
for (int d: bloomDay) {
if (d > mid) {
cnt_k = 0;
} else {
cnt_k++;
if (cnt_k == k) {
cnt_m++;
cnt_k = 0;
}
}
}
if (cnt_m < m) {
l = mid + 1;
} else {
r = mid;
}
}
return l;
}
};
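The same guess-and-count scheme in Python; the inner loop counts full bouquets of k adjacent flowers bloomed by day mid (function name is mine):

```python
def min_days(bloom_day, m, k):
    if len(bloom_day) < m * k:
        return -1
    lo, hi = min(bloom_day), max(bloom_day)
    while lo < hi:
        mid = lo + (hi - lo) // 2
        bouquets = run = 0
        for d in bloom_day:
            run = run + 1 if d <= mid else 0   # adjacent bloomed flowers
            if run == k:
                bouquets += 1
                run = 0
        if bouquets < m:   # day mid is too early
            lo = mid + 1
        else:
            hi = mid
    return lo
```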
### 1283. Find the Smallest Divisor Given a Threshold¶
Solution Binary search
• Notice the specific divisor calculation: with ceiling division every term is at least 1, so the sum is always at least nums.size(); the threshold therefore cannot be smaller than nums.size(), otherwise no solution is guaranteed. This also indicates that the minimum divisor is less than or equal to max(nums).
• In the bisection predicate, notice the condition becomes if (res > threshold): the if (f(mid) < target) form in the binary search templates assumes mid and f(mid) are positively correlated, whereas here mid and res are negatively correlated.
class Solution {
public:
int smallestDivisor(vector<int>& nums, int threshold) {
int l = 1;
int r = *max_element(nums.begin(), nums.end());
while (l < r) {
int m = l + (r - l) / 2;
int res = 0;
for (int num: nums) {
res += (num + m - 1) / m;
}
if (res > threshold) {
l = m + 1;
} else {
r = m;
}
}
return l;
}
};
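A Python sketch of the same predicate; note the negative correlation between the divisor and the ceiling sum (function name is mine):

```python
def smallest_divisor(nums, threshold):
    lo, hi = 1, max(nums)
    while lo < hi:
        m = lo + (hi - lo) // 2
        total = sum((x + m - 1) // m for x in nums)   # sum of ceil(x / m)
        if total > threshold:   # m too small: ceiling sum too large
            lo = m + 1
        else:
            hi = m
    return lo
```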
### 1231. Divide Chocolate¶
Solution Binary search
1. The key difference between this problem and 410. Split Array Largest Sum is that this problem asks for maximizing the smallest sweetness among the pieces, whereas 410 asks for minimizing the largest piece. Suppose K cuts generate pieces with sweetness $S = \{s_1, s_2, \cdots, s_{K+1}\}$; this problem finds $\max_{\text{cuts}} \min_i s_i$.
2. Imagine you guessed a value m, a candidate for the minimum sweetness over all pieces. How do you test whether m is achievable? If it is, we raise the guess to maximize it; if not, we lower it.
Same problem as 183. Wood cut.
class Solution {
public:
int maximizeSweetness(vector<int>& sweetness, int K) {
int start = *min_element(sweetness.begin(), sweetness.end());
int end = accumulate(sweetness.begin(), sweetness.end(), 0);
while (start < end) {
int mid = (start + end + 1) / 2;
int sum = 0;
int cuts = 0;
for (int s: sweetness) {
if ((sum += s) >= mid) {
sum = 0;
if (++cuts > K)
break;
}
}
if (cuts > K) {
// because ">= mid" above guarantees each counted piece is no less than the guess.
// If cuts > K, mid is achievable and may be the answer, so it must stay in range.
// Remember the binary search invariant requires not missing any candidate.
start = mid;
} else {
end = mid - 1;
}
}
return start;
}
};
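The upper-mid pattern above translates to Python as follows (function name is mine; feasibility means we can cut at least k+1 pieces, each with sweetness >= mid):

```python
def maximize_sweetness(sweetness, k):
    lo, hi = min(sweetness), sum(sweetness)
    while lo < hi:
        mid = (lo + hi + 1) // 2   # upper mid, because lo = mid below
        pieces = run = 0
        for s in sweetness:
            run += s
            if run >= mid:
                pieces += 1
                run = 0
        if pieces > k:   # k cuts yield k+1 pieces, each >= mid: mid is achievable
            lo = mid
        else:
            hi = mid - 1
    return lo
```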
Note
Compare the binary search solutions of 1231. Divide Chocolate and 410. Split Array Largest Sum. Notice the difference in how the number of cuts is checked against the limit K: between the max-min and the min-max cases it carries a subtly different meaning.
### 183. Wood cut (lintcode)¶
Description
Given n pieces of wood with length L[i] (integer array). Cut them into small pieces to guarantee you could have equal or more than k pieces with the same length. What is the longest length you can get from the n pieces of wood? Given L & k, return the maximum length of the small pieces. You couldn't cut wood into float length. If you couldn't get >= k pieces, return 0.
Solution 1 Binary search
• It requires getting at least k pieces of wood with the same length, so you have to cut the wood to fulfill the requirement while making each of the k pieces as long as possible.
• Imagine you are given a bunch of wood to cut. How would you do it? You probably want to try one cut and see whether it works. If not, you may try two cuts, and so on. But it is very hard to program such a solution.
• Thinking about the length instead is a better option. Suppose you knew the final maximum length; you could make the cuts accordingly. Now, given a guessed length, can you verify whether it works? Yes, you can! That is the core idea of this solution.
class Solution {
public:
int woodCut(vector<int> &L, int k) {
if(L.empty()) return 0;
int maxlen = *max_element(L.begin(), L.end());
if(k == 0) return maxlen;
int start = max(1, maxlen/k), end = maxlen;
while(start < end) {
int mid = start + (end - start) / 2;
int count = 0;
for(int len : L) {
count += len / (mid + 1);
}
if(count >= k)
start = mid + 1;
else
end = mid;
}
int count = 0;
for(int len : L) count += len/start;
return count >= k ? start : 0;
}
};
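A more conventional upper-mid version of the wood-cut idea in Python, under the same problem statement (function name is mine):

```python
def wood_cut(L, k):
    # Even pieces of length 1 cannot reach k pieces: no answer.
    if not L or sum(L) < k:
        return 0
    lo, hi = 1, max(L)
    while lo < hi:
        mid = (lo + hi + 1) // 2   # upper mid, because lo = mid keeps feasible lengths
        if sum(x // mid for x in L) >= k:
            lo = mid
        else:
            hi = mid - 1
    return lo
```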
### 774. Minimize Max Distance to Gas Station¶
Solution 1 Binary search
• It is very similar to the problem Wood cut. You just need to take care of the accuracy of the result, namely the int/double casts; that is also the hard part of the problem.
• Notice the count variable is of int type; test your solution especially on the line count += dist[i] / mid;
class Solution {
public:
double minmaxGasDist(vector<int>& stations, int K) {
int n = stations.size();
vector<int> dist(n, 0); // dist[0] = null;
int d = 0;
for (int i = 1; i < n; ++i) {
dist[i] = stations[i] - stations[i - 1];
d = max(d, dist[i]);
}
double low = 0, high = d;
while (low + 0.000001 < high) {
double mid = low + (high - low) / 2;
int count = 0;
for (int i = 1; i < n; ++i) {
count += dist[i] / mid;
}
if (count > K) { // mid is too small
low = mid;
} else {
high = mid;
}
}
return low;
}
};
### 644. Maximum Average Subarray II¶
• First, understand why the constraint is "length greater than or equal to k": this constraint ensures the solution exists and makes the problem interesting.
• Notice the monotonicity of the average values: for a given guess, if no subarray fulfills the constraints (length >= k and average >= guess), you can eliminate half of the value range from the solution space.
• The binary search predicate tests whether there exists a subarray of length at least k whose average value is larger than mid.
• We use a trick to verify the constraints. The two constraints are not checked separately; they must work together to achieve better complexity. The length constraint is ensured partly by the lagging index (i - k), partly by keeping the smallest prefix sum at least k elements before the current position.
Key Math Insight
\begin{align*} \mu_k = \frac{a_i + a_{i+1} + \cdots + a_j}{j-i+1} & \ge Mid \\ a_i + a_{i+1} + \cdots + a_j & \ge Mid \times (j-i+1) \\ (a_i - Mid) + (a_{i+1} - Mid) + \cdots + (a_j - Mid) & \ge 0 \end{align*}
class Solution {
public:
double findMaxAverage(vector<int>& nums, int k) {
double lower = *min_element(nums.begin(), nums.end());
double upper = *max_element(nums.begin(), nums.end());
while (lower + 0.00001 < upper) {
double mid = lower + (upper - lower) / 2;
if (isLarger(nums, mid, k)) { // is average value >= mid?
lower = mid;
} else {
upper = mid;
}
}
return lower;
}
/* return true if a greater average value is possible */
bool isLarger(vector<int>& nums, double mid, int k) {
int n = nums.size();
double sums = 0, prev = 0, prev_min = 0;
for (int i = 0; i < k; i++) {
sums += nums[i] - mid;
}
// we keep looking for whether a subarray sum of length >= k in array
// "sums" is possible to be greater than zero. If such a subarray exist,
// it means that the target average value is greater than the "mid" value.
if (sums >= 0) {
return true;
}
// We look at the prefix of "sums" at least k elements before i:
// find the minimum of sums[0], sums[1], ..., sums[i - k]
// and check if sums[i] - min(sums[0], sums[1], ..., sums[i - k]) >= 0.
// If so, there exists a subarray of length >= k with sum greater
// than 0 in "sums", and we can return true.
for (int i = k; i < n; i++) {
sums += nums[i] - mid;
prev += nums[i - k] - mid;
prev_min = min(prev_min, prev);
if (sums >= prev_min)
return true;
}
return false;
}
};
### 778. Swim in Rising Water¶
• In this problem we are trying to find the path whose maximum element is minimum among all paths. That is, we look for a target value in the grid such that there exists a path from grid[0][0] to grid[n-1][n-1] that includes this value as its maximum.
class Solution {
int x[4] = {0, -1, 0, 1};
int y[4] = {-1, 0, 1, 0};
public:
int swimInWater(vector<vector<int>>& grid) {
int n = grid.size();
int begin = grid[0][0], end = n * n - 1;
// binary search find a path with mini elevation
while (begin < end) {
int mid = begin + (end - begin) / 2;
if (pathExist(grid, mid)) {
end = mid;
} else {
begin = mid + 1;
}
}
return begin;
}
bool pathExist(vector<vector<int>> & grid, int mid) {
int n = grid.size();
vector<vector<int>> visited(n, vector<int>(n, 0));
return dfs_helper(grid, visited, n, mid, 0, 0);
}
bool dfs_helper(vector<vector<int>> & grid, vector<vector<int>>& visited,
int n, int mid, int i, int j) {
visited[i][j] = 1;
for (int k = 0; k < 4; ++k) {
int a = i + x[k];
int b = j + y[k];
if (a < 0 || a >= n || b < 0 || b >= n || visited[a][b] == 1 || grid[a][b] > mid) continue;
if (a == n - 1 && b == n - 1) return true;
if (dfs_helper(grid, visited, n, mid, a, b)) return true;
}
return false;
}
};
### 483 Smallest Good Base¶
Solution 1 Binary search
1. This problem requires a bit of reasoning to reach the solution.
2. The starting point is to really understand what the problem is asking: the minimum base k in which the given number n is written with all 1s, analogous to a binary representation. For example: 13 = 3^0 + 3^1 + 3^2, so 13 can be represented as 111 (base 3).
3. First of all, note the special case n = (n-1)^0 + (n-1)^1, so base n-1 always works with two digits. With this special case in mind, we iterate over each representation length i from largest to smallest and binary search whether a corresponding k is a good base of the given value n. Because the largest i gives the smallest k, the first k found must be the smallest good base. If the search finds nothing, we simply return the special case n-1.
class Solution {
public:
string smallestGoodBase(string n) {
long long num = stoll(n);
/* for each length i of the potential representation,
* n = 1 + k + ... + k^{i-1} = (k^i-1)/(k-1), lower bound k is 2,
* we have 2^i-1 = n ==> upper bound i = log2(n+1). */
for (int i = log2(num + 1); i >= 2; --i) {
/* upper bound is obtained by n = 1 + k + k^2 ... + k^(i-1) > k^(i-1),
* n > k^(i-1) ==> k < n^(1/(i-1)); */
long long left = 2, right = pow(num, 1.0 / (i - 1)) + 1;
while (left < right) {
long long mid = left + (right - left) / 2;
long long sum = 0;
/* calculate i digits value with base "mid" */
for (int j = 0; j < i; ++j) {
sum = sum * mid + 1;
}
/* binary search for the mid (good base) */
if (sum == num)
return to_string(mid);
if (sum < num)
left = mid + 1;
else
right = mid;
}
}
/* nothing found: base n - 1 always represents n as "11" */
return to_string(num - 1);
}
};
### 378. Kth Smallest Element in a Sorted Matrix¶
Solution 1 Binary Search
1. The idea of using binary search for this problem may not be straightforward, but the method is very important. The idea is very similar to the problem Search in a rotated sorted array.
2. Because the matrix is sorted row-wise and column-wise, there is ordering information we can make use of.
3. Notice we are not searching over the matrix indices; we are searching over the matrix element values. Compare to the problem 287. Find the Duplicate Number.
4. The comparison if (count < k) does not include mid explicitly, but count is a function f(mid): for the current mid, the count value is unique and can be used to test a condition that decides which side to discard, shrinking the range containing the target value.
class Solution {
public:
int kthSmallest(vector<vector<int>>& matrix, int k) {
int m = matrix.size();
int n = m ? matrix[0].size() : 0;
int start = matrix[0][0], end = matrix[m - 1][n - 1];
while (start < end) {
int mid = start + (end - start) / 2;
int count = 0;
for (int i = 0; i < m; ++i) {
count += upper_bound(matrix[i].begin(), matrix[i].end(), mid) - matrix[i].begin();
}
if (count < k) { // notice no mid here, but count is a function of mid.
start = mid + 1;
} else {
end = mid;
}
}
return start;
}
};
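The value-space search above maps directly onto Python's bisect module, which plays the role of upper_bound (function name is mine):

```python
from bisect import bisect_right

def kth_smallest(matrix, k):
    lo, hi = matrix[0][0], matrix[-1][-1]
    while lo < hi:
        mid = lo + (hi - lo) // 2
        # bisect_right counts the elements <= mid in each sorted row.
        count = sum(bisect_right(row, mid) for row in matrix)
        if count < k:
            lo = mid + 1
        else:
            hi = mid
    return lo
```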
Solution 2 Priority Queue
1. Notice that when k <= n^2, using the index bound j < matrix.size() will also make it work.
class Solution {
public:
int kthSmallest(vector<vector<int>>& matrix, int k) {
priority_queue<int> pq;
for (int i = 0; i < matrix.size(); ++i) {
for (int j = 0; j < matrix[0].size(); ++j) {
pq.push(matrix[i][j]);
if (pq.size() > k)
pq.pop();
}
}
return pq.top();
}
};
### 668. Kth Smallest Number in Multiplication Table¶
Solution 1 Binary search
1. This problem looks simple, but it really isn't unless you observe the following.
2. The condition used for binary search is "whether there are at least k elements no greater than mid". You are looking for the smallest number that has at least k elements less than or equal to it. As in the problem Kth Smallest Element in a Sorted Matrix, we bisect the number, not the index.
3. We move start or end based on this condition: if there are at least k, we shrink the range with end = mid; if there are fewer than k numbers, we increase begin, making mid larger so the range [1, mid] holds close to k numbers.
4. When begin == end, we have located the kth number desired; the problem guarantees k is in [1, m*n], so the loop always lands on a valid answer.
5. In counting how many elements are no greater than mid, be a bit clever by using the fact that the matrix is a multiplication table: row i can have at most mid / i values no greater than mid (why?), capped at n.
6. Follow up: will the kth element always be in the range [1, m*n]?
class Solution {
public:
int findKthNumber(int m, int n, int k) {
int begin = 1, end = m * n;
while (begin < end) {
int mid = begin + (end - begin) / 2;
int count = 0;
for (int i = 1; i <= m; ++i) {
count += min(mid / i, n);
}
if (count < k)
begin = mid + 1;
else
end = mid;
}
return begin;
}
};
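The counting argument in point 5 becomes a one-line sum in Python (function name is mine):

```python
def find_kth_number(m, n, k):
    lo, hi = 1, m * n
    while lo < hi:
        mid = lo + (hi - lo) // 2
        # Row i holds i, 2i, ..., ni, so it has min(mid // i, n) values <= mid.
        count = sum(min(mid // i, n) for i in range(1, m + 1))
        if count < k:
            lo = mid + 1
        else:
            hi = mid
    return lo
```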
### 719. Find K-th Smallest Pair Distance¶
Solution 1 Priority Queue TLE
class Solution {
public:
int smallestDistancePair(vector<int>& nums, int k) {
priority_queue<int> pq;
for (int i = 0; i < nums.size(); ++i) {
for (int j = i + 1; j < nums.size(); ++j) {
int dist = abs(nums[i] - nums[j]);
if (pq.size() < k) {
pq.push(dist);
} else if (dist < pq.top()) {
pq.push(dist), pq.pop();
}
}
}
return pq.top();
}
};
Solution 2 Binary search
• Similar to Problem 668. Kth Smallest Number in Multiplication Table.
• The problem looks complicated at first glance. A brute-force solution generates all the absolute distances and then sorts them to find the kth smallest.
• It becomes a searchable scenario once we sort the elements. We have the range [min_distance, max_distance]. We search for a distance in this range such that exactly k pair distances are no greater than it. If the count of pair distances is less than k, we increase the guess with start = mid + 1, and vice versa.
• When the binary search loop stops, start points to the distance we are searching for. Since this problem guarantees a solution exists, we return start.
class Solution {
public:
int smallestDistancePair(vector<int>& nums, int k) {
sort(nums.begin(), nums.end());
int start = nums[1] - nums[0];
for (int i = 2; i < nums.size(); ++i) {
start = min(start, nums[i] - nums[i - 1]);
}
int end = nums.back() - nums[0];
while (start < end) {
int mid = start + (end - start) / 2;
// count how many absolute differences that <= mid;
int count = 0;
for (int i = 0; i < nums.size(); ++i) {
int j = i;
while (j < nums.size() && nums[j] - nums[i] <= mid) j++;
count += j - i - 1;
}
if (count < k) {
start = mid + 1;
} else {
end = mid;
}
}
return start;
}
};
Solution 3 Using binary search to optimize the counting
1. You can also write your own binary search routine upper_bound.
class Solution {
public:
int smallestDistancePair(vector<int>& nums, int k) {
sort(nums.begin(), nums.end());
int start = nums[1] - nums[0];
for (int i = 2; i < nums.size(); ++i) {
start = min(start, nums[i] - nums[i - 1]);
}
int end = nums.back() - nums[0];
while (start < end) {
int mid = start + (end - start) / 2;
// count how many absolute differences that <= mid;
int count = 0;
/*
for (int i = 0; i < nums.size(); ++i) {
int j = i;
while (j < nums.size() && nums[j] - nums[i] <= mid) j++;
count += j - i - 1;
}
*/
// optimize the counting use binary search (nested binary search)
for (int i = 0; i < nums.size(); ++i) {
auto iter = upper_bound(nums.begin() + i, nums.end(), nums[i] + mid);
count += iter - (nums.begin() + i) - 1;
}
if (count < k) {
start = mid + 1;
} else {
end = mid;
}
}
return start;
}
};
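The nested-binary-search counting in Solution 3 corresponds to bisect_right with a lo offset in Python (function name is mine):

```python
from bisect import bisect_right

def smallest_distance_pair(nums, k):
    nums.sort()
    lo, hi = 0, nums[-1] - nums[0]
    while lo < hi:
        mid = lo + (hi - lo) // 2
        # Count pairs (i, j), i < j, with nums[j] - nums[i] <= mid.
        count = sum(bisect_right(nums, nums[i] + mid, i) - i - 1
                    for i in range(len(nums)))
        if count < k:
            lo = mid + 1
        else:
            hi = mid
    return lo
```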
### 786. K-th Smallest Prime Fraction¶
• You should seek to find the monotonic pattern of the fractions and think about how to search effectively. To use binary search, it helps to draw an imaginary matrix of fractions A[i] / A[j] and consider how to search it effectively.
class Solution {
public:
vector<int> kthSmallestPrimeFraction(vector<int>& A, int K) {
int n = A.size();
double l = 0, r = 1.0;
while (l < r) {
double m = (l + r) / 2;
// calculate how many smaller on the right
int cnt = 0;
double mx = 0;
int p, q;
int j = 1;
for (int i = 0; i < n - 1; ++i) {
while (j < n && A[i] > A[j] * m) ++j;
// int j = upper_bound(A.begin() + i, A.end(), A[i] / m) - A.begin();
cnt += (n - j);
if (n == j) break;
double fraction = (double) A[i] / (double) A[j];
if (fraction > mx) {
p = A[i];
q = A[j];
mx = fraction;
}
}
if (cnt == K) {
return {p, q};
}
if (cnt > K) {
r = m;
} else if (cnt < K) {
l = m;
}
}
return {};
}
};
### 1631. Path With Minimum Effort¶
Solution 1 Binary search + BFS
• Because we are searching for the smallest effort over all paths: if a proposed effort is not feasible, namely every path requires an effort greater than the proposal (the proposed value is too small), we need to increase start. We are looking for the first feasible value (the "no less than" pattern) in the binary search.
Solution 2 Dijkstra
• If you can change the problem into searching a weighted graph with edge weights, which are the absolute differences (effort). Since the weights are all positives, using Dijkstra algorithm can find the shortest path in the measure of effort.
class Solution {
vector<int> dx={0, 1, 0, -1};
vector<int> dy={-1,0, 1, 0};
public:
int minimumEffortPath(vector<vector<int>>& heights) {
int m = heights.size();
int n = m == 0 ? 0 : heights[0].size();
int start = 0, end = 1000000; // heights[i][j] <= 1e6 bounds the effort
while (start < end) {
int mid = (start + end) / 2;
if (!pathPossible(heights, mid)) {
start = mid + 1;
} else {
end = mid;
}
}
return start;
}
bool pathPossible(vector<vector<int>>& heights, int val) {
int m = heights.size();
int n = m == 0 ? 0 : heights[0].size();
queue<vector<int>> q;
q.push({0, 0});
set<int> visited;
visited.insert(0);
while (!q.empty()) {
vector<int> t = q.front();
int x = t[0];
int y = t[1];
q.pop();
if (x == m - 1 && y == n - 1)
return true;
for (int k = 0; k < 4; k++) {
int a = x + dx[k];
int b = y + dy[k];
if (a < 0 || a >= m || b < 0 || b >= n) continue;
if (val < abs(heights[a][b] - heights[x][y])) continue;
if (visited.count(a * n + b) > 0) continue;
q.push({a, b});
visited.insert(a * n + b);
}
}
return false;
}
};
class Solution {
private int[] d = {0, 1, 0, -1, 0};
public int minimumEffortPath(int[][] heights) {
int lo = 0, hi = 1_000_000;
while (lo < hi) {
int effort = lo + (hi - lo) / 2;
if (isPath(heights, effort)) {
hi = effort;
}else {
lo = effort + 1;
}
}
return lo;
}
private boolean isPath(int[][] h, int effort) {
int m = h.length, n = h[0].length;
Queue<int[]> q = new LinkedList<>();
q.offer(new int[2]);
Set<Integer> seen = new HashSet<>();
while (!q.isEmpty()) {
int[] cur = q.poll();
int x = cur[0], y = cur[1];
if (x == m - 1 && y == n - 1) {
return true;
}
for (int k = 0; k < 4; ++k) {
int r = x + d[k], c = y + d[k + 1];
if (0 <= r && r < m && 0 <= c && c < n &&
effort >= Math.abs(h[r][c] - h[x][y]) && seen.add(r * n + c)) {
q.offer(new int[]{r, c});
}
}
}
return false;
}
}
class Solution {
public:
int minimumEffortPath(vector<vector<int>>& heights) {
int m = heights.size();
int n = heights[0].size();
vector<vector<int>> dist(m, vector<int>(n, INT_MAX)); // min distance found so far.
priority_queue<pair<int, int>, vector<pair<int, int>>, greater<pair<int, int>>> pq;
int d[5] = {0, 1, 0, -1, 0};
pq.push({0, 0}); // first: min effort, second: encoded (x, y) (=x * n + y);
while (!pq.empty()) {
pair<int, int> t = pq.top(); pq.pop();
int effort = t.first;
int x = t.second / n;
int y = t.second % n;
if (x == m - 1 && y == n - 1)
return effort;
for (int k = 0; k < 4; ++k) {
int a = x + d[k];
int b = y + d[k + 1];
if (a < 0 || a >= m || b < 0 || b >= n) continue;
// update neighboring node, effort=min effort before visit node(a,b)
int currEffort = max(effort, abs(heights[a][b] - heights[x][y]));
if (dist[a][b] > currEffort) {
dist[a][b] = currEffort;
pq.push({currEffort, a * n + b});
}
}
}
return -1;
}
};
class Solution:
def minimumEffortPath(self, heights: List[List[int]]) -> int:
m, n = map(len, [heights, heights[0]])
efforts = [[math.inf] * n for _ in range(m)]
efforts[0][0] = 0
heap = [(0, 0, 0)]
while heap:
effort, x, y = heapq.heappop(heap)
if (x, y) == (m - 1, n - 1):
return effort
for i, j in (x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y):
if i < 0 or i >= m or j < 0 or j >= n:
continue
currEffort = max(effort, abs(heights[x][y] - heights[i][j]))
if efforts[i][j] > currEffort:
efforts[i][j] = currEffort
heapq.heappush(heap, (currEffort, i, j))
### 1102. Path With Maximum Minimum Value¶
Solution 1 Binary Search + BFS
• Again, propose a possible value and use a isValid function to check the validity of the proposed solution.
Solution 2 BFS + PQ
• This solution can be thought of as a variant of Dijkstra, but it is not the same.
Solution 3 Union Find
• Sort all the vertices by their values in descending order, then add vertices one by one and use Union-Find to check the connectivity of A[0][0] and A[m - 1][n - 1].
class Solution {
public:
int maximumMinimumPath(vector<vector<int>>& A) {
int m = A.size();
int n = A[0].size();
int start = 0, end = min(A[0][0], A[m - 1][n - 1]); // every path contains both endpoints
int mid = 0;
while (start < end) {
mid = start + (end - start + 1) / 2; // upper mid, because start = mid below
if (pathPossible(A, mid)) {
start = mid; // mid is achievable; keep it as a candidate
} else {
end = mid - 1;
}
}
return start;
}
bool pathPossible(vector<vector<int>>& A, int mid) {
int m = A.size();
int n = A[0].size();
queue<pair<int, int>> q;
q.emplace(0, 0);
vector<vector<int>> v(m, vector<int>(n, 0));
v[0][0] = 1;
int d[5] = {0, 1, 0, -1, 0};
while (!q.empty()) {
int x = q.front().first;
int y = q.front().second;
q.pop();
if (x == m - 1 && y == n - 1)
return true;
for (int k = 0; k < 4; ++k) {
int a = x + d[k];
int b = y + d[k + 1];
if (a < 0 || a >= m || b < 0 || b >= n || v[a][b] == 1) continue;
if (mid > A[a][b]) continue;
q.emplace(a, b);
v[a][b] = 1;
}
}
return false;
}
};
class Solution {
public:
int maximumMinimumPath(vector<vector<int>>& A) {
int m = A.size();
int n = A[0].size();
int res = INT_MAX;
priority_queue<pair<int, int>, vector<pair<int, int>>> pq; // max heap.
pq.emplace(A[0][0], 0);
vector<vector<int>> visited(m, vector<int>(n, 0));
visited[0][0] = -1;
int d[5] = {0, 1, 0, -1, 0};
while (!pq.empty()) {
pair<int, int> t = pq.top(); pq.pop();
int cost = t.first;
int x = t.second / n;
int y = t.second % n;
res = min(res, cost);
if (x == m - 1 && y == n - 1)
break;
for (int k = 0; k < 4; k++) {
int r = x + d[k];
int c = y + d[k + 1];
if (r < 0 || r >= m || c < 0 || c >= n || visited[r][c] < 0) continue;
pq.emplace(A[r][c], r * n + c);
visited[r][c] = -1;
}
}
return res;
}
};
## Category 4 Binary search as an optimization routine¶
### 300 Longest Increasing Subsequence¶
Solution 1 DP
1. The base case is a single element. f[j] is the length of the LIS ending at index j.
class Solution {
public:
int lengthOfLIS(vector<int>& nums) {
int n = nums.size();
if (n == 0) return 0;
vector<int> f(n, 0);
int res = 0;
for (int j = 0; j < n; j++) {
f[j] = 1;
for (int i = 0; i < j; i++) {
if (nums[i] < nums[j] && f[i] + 1 > f[j])
f[j] = f[i] + 1;
}
res = max(res, f[j]);
}
return res;
}
};
Solution 2 Using binary search
1. The DP solution is O(n^2). Using binary search reduces it to O(n log n).
2. Binary search solution analysis: for each i, we are looking for the largest f value that has the smallest A value. For example, A[0] = 5 can be ignored because its f value is the same as that of A[1] = 1, which is smaller. In searching for the LIS, we prefer a smaller ending value when the length is the same.
3. The following solution uses a vector b to record the minimum ending value for each LIS length (f value). We binary search for the first value in b that is no less than the current value A[i]; if found, we replace it with A[i], otherwise we append A[i].
i 0 1 2 3 4 5 6 7
A 5 1 3 7 6 4 2 10
f 1 1 2 3 3 3 2 4
f[1] = 1, a[1] = 1
f[6] = 2, a[6] = 2
f[5] = 3, a[5] = 4
f[7] = 4, a[7] = 10
class Solution {
public:
int lengthOfLIS(vector<int>& nums) {
vector<int> b;
for (int i = 0; i < nums.size(); ++i) {
int l = 0, r = b.size();
while (l < r) {
int m = l + (r - l) / 2;
if (b[m] < nums[i]) { // nums[i] is the target
l = m + 1;
} else {
r = m;
}
}
if (l == b.size()) // nums[i] is greater than all elements in b
b.push_back(nums[i]);
else // l points to the first element no less than nums[i]
b[l] = nums[i];
}
return b.size();
}
};
Alternatively, we could use lower_bound.
class Solution {
public:
int lengthOfLIS(vector<int>& nums) {
vector<int> b;
for (int i = 0; i < nums.size(); ++i) {
int l = lower_bound(b.begin(), b.end(), nums[i]) - b.begin();
if (l == b.size()) // nums[i] is greater than all elements in b
b.push_back(nums[i]);
else // l points to the first element no less than nums[i]
b[l] = nums[i];
}
return b.size();
}
}; | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2448115348815918, "perplexity": 6040.556077842511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00448.warc.gz"} |
https://www.physicsforums.com/threads/please-help-me-to-complete-this-equations.759566/
1. Jun 26, 2014
### Micky raj
Sir the Question is this
[(-1+√3)^2]/[(1-i)^20] + [(-1-√3)^15]/[(1+i)^20]
and I could solve half of it using Euler's form
[(2e^(2πi/3))^15]/[(√2 e^(-πi/4))^20] + [(2e^(-2πi/3))^15]/[(√2 e^(πi/4))^20]
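For completeness (editorial note, not part of the thread): if the numerator is read as a 15th power, matching the Euler form the poster wrote, the remaining work is only evaluating the powers:

```latex
\left(2e^{2\pi i/3}\right)^{15} = 2^{15}e^{10\pi i} = 2^{15},
\qquad
\left(\sqrt{2}\,e^{-\pi i/4}\right)^{20} = 2^{10}e^{-5\pi i} = -2^{10},
```

so the first term is $2^{15}/(-2^{10}) = -32$; by conjugate symmetry the second term is also $-32$, giving $-64$ in total.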
2. Jun 26, 2014
### HallsofIvy
Staff Emeritus
That's NOT an equation and I have no idea what you are trying to do with it. What do you mean by "solve" it? Just do the indicated arithmetic? Yes, you can do high powers by changing to "Eulers form" (or "polar form") but you haven't done the powers yet. Why not?
It is not easy to add in that form so after you have done the powers, change back to the original "rectangular" form.
(In the original form you have the numerator of the first fraction to the second power. Below you have it to the 15th power. Which is correct?)
3. Jun 27, 2014
### Micky raj
Sorry Sir, first time post, that's why a bit nervous.
https://pbil.univ-lyon1.fr/CRAN/web/packages/matlib/vignettes/gramreg.html
# Gram-Schmidt Orthogonalization and Regression
#### 2022-12-08
This vignette illustrates the process of transforming a set of variables into a new set of uncorrelated (orthogonal) variables. It carries out the Gram-Schmidt process directly, successively projecting each variable on the previous ones and subtracting (taking residuals). This is equivalent to replacing each variable with its residuals from a least squares regression on the previous variables.
When this method is used on the predictors in a regression problem, the resulting orthogonal variables have exactly the same anova() summary (based on "Type I", sequential sums of squares) as the original variables do.
## Setup
We use the class data set, but convert the character factor sex to a dummy (0/1) variable male.
library(matlib)
data(class)
class$male <- as.numeric(class$sex=="M")
For later use in regression, we create a variable IQ as a response variable
class <- transform(class,
IQ = round(20 + height + 3*age -.1*weight -3*male + 10*rnorm(nrow(class))))
head(class)
## sex age height weight male IQ
## Alfred M 14 69.0 112.5 1 122
## Alice F 13 56.5 84.0 0 112
## Barbara F 13 65.3 98.0 0 112
## Carol F 14 62.8 102.5 0 111
## Henry M 14 63.5 102.5 1 126
## James M 12 57.3 83.0 1 100
Reorder the predictors we want, forming a numeric matrix, X.
X <- as.matrix(class[,c(3,4,2,5)])
head(X)
## height weight age male
## Alfred 69.0 112.5 14 1
## Alice 56.5 84.0 13 0
## Barbara 65.3 98.0 13 0
## Carol 62.8 102.5 14 0
## Henry 63.5 102.5 14 1
## James 57.3 83.0 12 1
## Orthogonalization by projections
The Gram-Schmidt process treats the variables in a given order, according to the columns in X. We start with a new matrix Z consisting of X[,1]. Then, find a new variable Z[,2] orthogonal to Z[,1] by subtracting the projection of X[,2] on Z[,1].
Z <- cbind(X[,1], 0, 0, 0)
Z[,2] <- X[,2] - Proj(X[,2], Z[,1])
crossprod(Z[,1], Z[,2]) # verify orthogonality
## [,1]
## [1,] 7.276e-12
Continue in the same way, subtracting the projections of X[,3] on the previous columns, and so forth
Z[,3] <- X[,3] - Proj(X[,3], Z[,1]) - Proj(X[,3], Z[,2])
Z[,4] <- X[,4] - Proj(X[,4], Z[,1]) - Proj(X[,4], Z[,2]) - Proj(X[,4], Z[,3])
Note that if any column of X is a linear combination of the previous columns, the corresponding column of Z will be all zeros.
These computations are similar to the following set of linear regressions:
z2 <- residuals(lm(X[,2] ~ X[,1]), type="response")
z3 <- residuals(lm(X[,3] ~ X[,1:2]), type="response")
z4 <- residuals(lm(X[,4] ~ X[,1:3]), type="response")
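The same projection-and-subtraction loop can be sketched in plain Python (an illustration added here, not matlib's implementation; the hypothetical helper proj mimics matlib's Proj):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(x, z):
    # Projection of x onto z: ((x . z) / (z . z)) * z
    c = dot(x, z) / dot(z, z)
    return [c * zi for zi in z]

def gram_schmidt(columns):
    # Each new column is the original minus its projections
    # on all previously orthogonalized columns.
    Z = []
    for x in columns:
        r = list(x)
        for z in Z:
            p = proj(x, z)
            r = [ri - pi for ri, pi in zip(r, p)]
        Z.append(r)
    return Z

Z = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(Z[1])  # [0.5, -0.5, 1.0]
```

All pairwise dot products of the resulting columns are (numerically) zero, mirroring the crossprod() checks above.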
The columns of Z are now orthogonal, but not of unit length,
zapsmall(crossprod(Z)) # check orthogonality
## [,1] [,2] [,3] [,4]
## [1,] 57888 0 0 0
## [2,] 0 3249 0 0
## [3,] 0 0 7 0
## [4,] 0 0 0 2
We standardize each column to unit length, giving Z as an orthonormal matrix, such that $$Z' Z = I$$.
Z <- Z %*% diag(1 / len(Z)) # make each column unit length
zapsmall(crossprod(Z)) # check orthonormal
## [,1] [,2] [,3] [,4]
## [1,] 1 0 0 0
## [2,] 0 1 0 0
## [3,] 0 0 1 0
## [4,] 0 0 0 1
colnames(Z) <- colnames(X)
### Relationship to QR factorization
The QR method uses essentially the same process, factoring the matrix $$\mathbf{X}$$ as $$\mathbf{X = Q R}$$, where $$\mathbf{Q}$$ is the orthonormal matrix corresponding to Z and $$\mathbf{R}$$ is an upper triangular matrix. However, the signs of the columns of $$\mathbf{Q}$$ are arbitrary, and QR() returns QR(X)$Q with signs reversed, compared to Z.
# same result as QR(X)$Q, but with signs reversed
head(Z, 5)
## height weight age male
## Alfred 0.2868 0.07545 -0.3687 0.12456
## Alice 0.2348 -0.08067 0.3569 -0.02177
## Barbara 0.2714 -0.07715 -0.3862 -0.45170
## Carol 0.2610 0.07058 0.1559 -0.20548
## Henry 0.2639 0.05132 0.1047 0.40538
head(-QR(X)$Q, 5)
##        [,1]     [,2]    [,3]     [,4]
## [1,] 0.2868  0.07545 -0.3687  0.12456
## [2,] 0.2348 -0.08067  0.3569 -0.02177
## [3,] 0.2714 -0.07715 -0.3862 -0.45170
## [4,] 0.2610  0.07058  0.1559 -0.20548
## [5,] 0.2639  0.05132  0.1047  0.40538
all.equal( unname(Z), -QR(X)$Q )
## [1] TRUE
## Regression with X and Z
We carry out two regressions of IQ on the variables in X and in Z. These are equivalent, in the sense that
• The $$R^2$$ and MSE are the same in both models
• Residuals are the same
• The Type I tests given by anova() are the same.
class2 <- data.frame(Z, IQ=class$IQ)
Regression of IQ on the original variables in X
mod1 <- lm(IQ ~ height + weight + age + male, data=class)
anova(mod1)
## Analysis of Variance Table
##
## Response: IQ
## Df Sum Sq Mean Sq F value Pr(>F)
## height 1 67 67.2 0.65 0.44
## weight 1 0 0.1 0.00 0.98
## age 1 8 8.4 0.08 0.78
## male 1 118 118.3 1.15 0.31
## Residuals 10 1033 103.3
Regression of IQ on the orthogonalized variables in Z
mod2 <- lm(IQ ~ height + weight + age + male, data=class2)
anova(mod2)
## Analysis of Variance Table
##
## Response: IQ
## Df Sum Sq Mean Sq F value Pr(>F)
## height 1 67 67.2 0.65 0.44
## weight 1 0 0.1 0.00 0.98
## age 1 8 8.4 0.08 0.78
## male 1 118 118.3 1.15 0.31
## Residuals 10 1033 103.3
This illustrates that anova() tests for linear models are sequential tests. They test hypotheses about the extra contribution of each variable over and above all previous ones, in a given order. These usually do not make substantive sense, except in testing ordered (“hierarchical”) models. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.564755916595459, "perplexity": 5188.662749793875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00780.warc.gz"} |
https://dev.goldbook.iupac.org/terms/view/B00754
## bulk mesophase
https://doi.org/10.1351/goldbook.B00754
A continuous anisotropic phase formed by coalescence of @M03849@ spheres. Bulk @M03849@ retains @F02450@ and is deformable in the temperature range up to about $$770\ \text{K}$$, and transforms into @G02697@ by further loss of hydrogen or low-molecular-@W06668@ species. This bulk @M03849@ can sometimes be formed directly from the @I03353@ @P04677@ without observation of intermediate spheres.
Source:
PAC, 1995, 67, 473. (Recommended terminology for the description of carbon as a solid (IUPAC Recommendations 1995)) on page 478
http://lambda-the-ultimate.org/taxonomy/term/17?from=10
## Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit
Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit. Makarius Wenzel, UITP 2010.
After several decades, most proof assistants are still centered around TTY-based interaction in a tight read-eval-print loop. Even well-known Emacs modes for such provers follow this synchronous model based on single commands with immediate response, meaning that the editor waits for the prover after each command. There have been some attempts to re-implement prover interfaces in big IDE frameworks, while keeping the old interaction model. Can we do better than that?
Ten years ago, the Isabelle/Isar proof language already emphasized the idea of proof document (structured text) instead of proof script (sequence of commands), although the implementation was still emulating TTY interaction in order to be able to work with the then emerging Proof General interface. After some recent reworking of Isabelle internals, to support parallel processing of theories and proofs, the original idea of structured document processing has surfaced again.
Isabelle versions from 2009 or later already provide some support for interactive proof documents with asynchronous checking, which awaits to be connected to a suitable editor framework or full-scale IDE. The remaining problem is how to do that systematically, without having to specify and implement complex protocols for prover interaction.
This is the point where we introduce the new Isabelle/Scala layer, which is meant to expose certain aspects of Isabelle/ML to the outside world. The Scala language (by Martin Odersky) is sufficiently close to ML in order to model well-known prover concepts conveniently, but Scala also runs on the JVM and can access existing Java libraries directly. By building more and more external system wrapping for Isabelle in Scala, we eventually reach the point where we can integrate the prover seamlessly into existing IDEs (say Netbeans).
To avoid getting side-tracked by IDE platform complexity, our current experiments are focused on jEdit, which is a powerful editor framework written in Java that can be easily extended by plugin modules. Our plugins are written again in Scala for our convenience, and to leverage the Scala actor library for parallel and interactive programming. Thanks to the Isabelle/Scala layer, the Isabelle/jEdit implementation is very small and simple.
I thought this was a nice paper on the pragmatics of incremental, interactive proof editing. I've suspected for a while that as programming languages and IDEs grow more sophisticated and do more computationally-intensive checks at compile time (including but not limited to theorem proving), it will become similarly important to design our languages to support modular and incremental analysis.
However, IDE designs also need more experimentation, and unfortunately the choice of IDEs to extend seem to be limited to archaic systems like Emacs or industrial behemoths like Eclipse or Visual Studio, both of which constrain the scope for new design -- Emacs is too limited, and the API surface of Eclipse/VS is just too big and irregular. (Agda-mode for Emacs is a heroic but somewhat terrifying piece of elisp.)
## Finding and Understanding Bugs in C Compilers
In Finding and Understanding Bugs in C Compilers Xuejun Yang, Yang Chen, Eric Eide, and John Regehr of University of Utah, School of Computing describe Csmith, a fuzzer for testing C compilers. The hard part was avoiding undefined behavior.
Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.
Two bits really stuck out for me. First, formal verification has a real positive impact
The striking thing about our CompCert results is that the middle-end bugs we found in all other compilers are absent. As of early 2011, the under-development version of CompCert is the only compiler we have tested for which Csmith cannot find wrong-code errors. This is not for lack of trying: we have devoted about six CPU-years to the task. The apparent unbreakability of CompCert supports a strong argument that developing compiler optimizations within a proof framework, where safety checks are explicit and machine-checked, has tangible benefits for compiler users.
And second, code coverage is inadequate for ensuring good test thoroughness for software as complex as a compiler.
Because we find many bugs, we hypothesized that randomly generated programs exercise large parts of the compilers that were not covered by existing test suites. To test this, we enabled code coverage monitoring in GCC and LLVM. We then used each compiler to build its own test suite, and also to build its test suite plus 10,000 Csmith-generated programs. Table 3 shows that the incremental coverage due to Csmith is so small as to be a negative result. Our best guess is that these metrics are too shallow to capture Csmith's effects, and that we would generate useful additional coverage in terms of deeper metrics such as path or value coverage.
## The Habit Programming Language: The Revised Preliminary Report
Habit is a systems programming dialect of Haskell from the High-Assurance Systems Programming (HASP) project at Portland State University. From The Habit Programming Language: The Revised Preliminary Report
This report presents a preliminary design for the programming language Habit, a dialect of Haskell that supports the development of high quality systems software. The primary commitments of the design are as follows:
* Systems programming: Unlike Haskell, which was intended to serve as a general purpose functional programming language, the design of Habit focusses on features that are needed in systems software development. These priorities are reflected fairly directly in the new features that Habit provides for describing bit-level and memory-based data representations, the introduction of new syntactic extensions to facilitate monadic programming, and, most significantly, the adoption of a call-by-value semantics to improve predictability of execution. The emphasis on systems programming also impacts the design in less direct ways, including assumptions about the expected use of whole program compilation and optimization strategies in a practical Habit implementation.
* High assurance: Although most details of Haskell's semantics have been formalized at some point in the research literature, there is no consolidated formal description of the whole language. There are also known differences in semantics, particularly with respect to operational behavior, between different Haskell implementations in areas where the Haskell report provides no guidance. Although it is not addressed in the current report, a high-priority for Habit is to provide a full, formal semantics for the complete language that can be used as a foundation for reasoning and formal verification, a mechanism for ensuring consistency between implementations, and a basis for reliably predicting details about memory allocation, asymptotic behavior, and resource utilization.
HASP has a couple of postdoc positions open to help with Habit.
## Ghosts of Unix Past: a historical search for design patterns
Not strictly PLT-related, but Neil Brown has contributed an amazing series of articles to Linux Weekly News:
For this series we try to look for patterns which become visible only over an extended time period. As development of a system proceeds, early decisions can have consequences that were not fully appreciated when they were made. If we can find patterns relating these decisions to their outcomes, it might be hoped that a review of these patterns while making new decisions will help to avoid old mistakes or to leverage established successes.
## Pure and Declarative Syntax Definition: Paradise Lost and Regained, Onward 2010
Pure and Declarative Syntax Definition: Paradise Lost and Regained by Lennart C. L. Kats, Eelco Visser, Guido Wachsmuth from Delft
Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.
I haven't compared this version with the Onward 2010 version, but they look essentially the same. It seems timely to post this paper, considering the other recent story Yacc is dead. There is not a whole lot to argue against in this paper, since we all "know" the other approaches aren't as elegant and only resort to them for specific reasons such as efficiency. Yet, this is the first paper I know of that tries to state the argument to software engineers.
For example, the Dragon Book, in every single edition, effectively brushes these topics aside. In particular, the Dragon Book does not even mention scannerless parsing as a technique, and instead only explains the "advantages" of using a scanner. Unfortunately, the authors of this paper don't consider other design proposals, either, such as Van Wyk's context-aware scanners from GPCE 2007. It is examples like these that made me wish the paper was a bit more robust in its analysis; the examples seem focused on the author's previous work.
If you are not familiar with the author's previous work in this area, the paper covers it in the references. It includes Martin Bravenboer's work on modular Eclipse IDE support for AspectJ.
## First-class modules: hidden power and tantalizing promises
Oleg just posted a new page, First-class modules: hidden power and tantalizing promises, related to new features in OCaml 3.12 (on LtU).
First-class modules introduced in OCaml 3.12 make type constructors first-class, permitting type constructor abstraction and polymorphism. It becomes possible to manipulate and quantify over types of higher kind. We demonstrate that as a consequence, full-scale, efficient generalized algebraic data types (GADTs) become expressible in OCaml 3.12 as it is, without any further extensions. Value-independent generic programming along the lines of Haskell's popular "Generics for the masses" become possible in OCaml for the first time. We discuss extensions such as a better implementation of polymorphic equality on modules, which can give us intensional type analysis (aka, type-case), permitting generic programming frameworks like SYB.
It includes a nice intro to first-class modules by Frisch and Garrigue: First-class modules and composable signatures in Objective Caml 3.12.
OCaml definitely just got even more interesting.
## Turning down the LAMP: Software specialization for the cloud
Several years ago, a reading group I was in read about the Flux OSKit Project, which aimed to provide a modular basis for operating systems. One of the topics of discussion was the possibility of, and possible benefits of, an application-specific OS. (For example, the fearful spectre of EmacsOS was raised.)
Today, I ran across "Turning down the LAMP: Software specialization for the cloud", which actually makes a pretty strong case for the idea on a virtual machine infrastructure,
...We instead view the cloud as a stable hardware platform, and present a programming framework which permits applications to be constructed to run directly on top of it without intervening software layers. Our prototype (dubbed Mirage) is unashamedly academic; it extends the Objective Caml language with storage extensions and a custom run-time to emit binaries that execute as a guest operating system under Xen. Mirage applications exhibit significant performance speedups for I/O and memory handling versus the same code running under Linux/Xen.
As one example,
Frameworks which currently use (for example) fork(2) on a host to spawn processes would benefit from using cloud management APIs to request resources and eliminate the distinction between cores and hosts.
On the other hand, I suspect that this "unashamedly academic" idea may already be advancing into the commercial arena, if I am correctly reading between the lines of the VMware vFabric tc ServerTM marketing material.
## Software Development with Code Maps
Robert DeLine, Gina Venolia, and Kael Rowan, "Software Development with Code Maps", Communications of the ACM, Vol. 53 No. 8, Pages 48-54, 10.1145/1787234.1787250
Getting lost in a large code base is altogether too easy. The code consists of many thousands of symbols, with few visual landmarks to guide the eye. As a developer navigates the code, she follows hyperlinks, such as jumping from a method caller to a callee, with no visual transition to show where the jump landed. ... Better support for code diagrams in the development environment could support code understanding and communication, and could serve as a "map" to help keep developers oriented. ... Our goal is to integrate maps into the development environment such that developers can carry out most tasks within the map.
Although the focus of this article is largely on "Code Map as UI", there are hints of the possibility that we might eventually see "Code Map as Language Element" (for example, the comment that "An important lesson from the Oahu research is that developers assign meaning to the spatial layout of the code. Code Canvas therefore takes a mixed initiative approach to layout. The user is able to place any box on the map through direct manipulation..."). The same ideas will of course be familiar to anyone who has worked with environments like Simulink, which provide a combination of diagrammatic structuring and textual definition of algorithms. But in the past such environments have only really been found in specific application domains -- control systems and signal processing in the case of Simulink -- while the Code Map idea seems targeted at more general-purpose software development. Is the complexity of large software systems pushing us towards a situation in which graphical structures like Code Maps will become a common part of the syntax of general-purpose programming languages?
## Is Transactional Programming Actually Easier?
Is Transactional Programming Actually Easier?, WDDD '09, Christopher J. Rossbach, Owen S. Hofmann, and Emmett Witchel.
Chip multi-processors (CMPs) have become ubiquitous, while tools that ease concurrent programming have not. The promise of increased performance for all applications through ever more parallel hardware requires good tools for concurrent programming, especially for average programmers. Transactional memory (TM) has enjoyed recent interest as a tool that can help programmers program concurrently.
The TM research community claims that programming with transactional memory is easier than alternatives (like locks), but evidence is scant. In this paper, we describe a user-study in which 147 undergraduate students in an operating systems course implemented the same programs using coarse and fine-grain locks, monitors, and transactions. We surveyed the students after the assignment, and examined their code to determine the types and frequency of programming errors for each synchronization technique. Inexperienced programmers found baroque syntax a barrier to entry for transactional programming. On average, subjective evaluation showed that students found transactions harder to use than coarse-grain locks, but slightly easier to use than fine-grained locks. Detailed examination of synchronization errors in the students’ code tells a rather different story. Overwhelmingly, the number and types of programming errors the students made was much lower for transactions than for locks. On a similar programming problem, over 70% of students made errors with fine-grained locking, while less than 10% made errors with transactions.
I've recently discovered the Workshop on Duplicating, Deconstructing, and Debunking (WDDD) and have found a handful of neat papers, and this one seemed especially relevant to LtU.
[Edit: Apparently, there is a PPoPP'10 version of this paper with 237 undergraduate students.]
Also, previously on LtU:
Transactional Memory versus Locks - A Comparative Case Study
Despite the fact Tommy McGuire's post mentions Dr. Victor Pankratius's talk was at UT-Austin and the authors of this WDDD'09 paper represent UT-Austin, these are two independent case studies with different programming assignments. The difference in assignments is interesting because it may indicate some statistical noise associated with problem domain complexity (as perceived by the test subjects) and could account for differences between the two studies.
Everyone always likes to talk about usability in programming languages without trying to do it. Some claim it can't even be done, despite the fact Horning and Gannon did work on the subject 3+ decades ago, assessing how one can Language Design to Enhance Program Reliability. This gives a glimpse both on (a) why it is hard (b) how you can still try to do usability testing, rather than determine the truthiness of a language design decision.
## Joe Duffy: A (brief) retrospective on transactional memory
A (brief) retrospective on transactional memory, by Joe Duffy, January 3rd, 2010. Although this is a blog post, don't expect to read it all on your lunch break...
The STM.NET incubator project was canceled May 11, 2010, after beginning public life July 27, 2009 at DevLabs. In this blog post, written 4 months prior to its cancellation, Joe Duffy discusses the practical engineering challenges around implementing Software Transactional Memory in .NET. Note: He starts off with a disclaimer that he was not engaged in the STM.NET project past its initial working group phase.
In short, Joe argues, "Throughout, it became abundantly clear that TM, much like generics, was a systemic and platform-wide technology shift. It didn’t require type theory, but the road ahead sure wasn’t going to be easy." The whole blog post deals with how many implementation challenges platform-wide support for STM would be in .NET, including what options were considered. He does not mention Maurice Herlihy's SXM library approach, but refers to Tim Harris's work several times.
There was plenty here that surprised me, especially when you compare Concurrent Haskell's STM implementation to STM.NET design decisions and interesting debates the team had. In Concurrent Haskell, issues Joe raises, like making Console.WriteLine transactional, are delegated to the type system by the very nature of the TVar monad, preventing programmers from writing such wishywashy code. To be honest, this is why I didn't understand what Joe meant by "it didn't require type theory" gambit, since some of the design concerns are mediated in Concurrent Haskell via type theory. On the other hand, based on the pragmatics Joe discusses, and the platform-wide integration with the CLR they were shooting for, reminds me of The Transactional Memory / Garbage Collection Analogy. Joe also wrote a briefer follow-up post, More thoughts on transactional memory, where he talks more about Barbara Liskov's Argus.
"s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122039674.71/warc/CC-MAIN-20150124175359-00122-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.gradesaver.com/aristotles-poetics/e-text/xxv-critical-objections-brought-against-poetry-and-the-principles-on-which-they-are-to-be-answered

# Aristotle's Poetics
## XXV Critical Objections brought against Poetry, and the principles on which they are to be answered
With respect to critical difficulties and their solutions, the number and nature of the sources from which they may be drawn may be thus exhibited.
The poet being an imitator, like a painter or any other artist, must of necessity imitate one of three objects,--things as they were or are, things as they are said or thought to be, or things as they ought to be. The vehicle of expression is language,--either current terms or, it may be, rare words or metaphors. There are also many modifications of language, which we concede to the poets. Add to this, that the standard of correctness is not the same in poetry and politics, any more than in poetry and any other art. Within the art of poetry itself there are two kinds of faults, those which touch its essence, and those which are accidental. If a poet has chosen to imitate something, <but has imitated it incorrectly> through want of capacity, the error is inherent in the poetry. But if the failure is due to a wrong choice--if he has represented a horse as throwing out both his off legs at once, or introduced technical inaccuracies in medicine, for example, or in any other art--the error is not essential to the poetry. These are the points of view from which we should consider and answer the objections raised by the critics.
First as to matters which concern the poet's own art. If he describes the impossible, he is guilty of an error; but the error may be justified, if the end of the art be thereby attained (the end being that already mentioned), if, that is, the effect of this or any other part of the poem is thus rendered more striking. A case in point is the pursuit of Hector. If, however, the end might have been as well, or better, attained without violating the special rules of the poetic art, the error is not justified: for every kind of error should, if possible, be avoided.
Again, does the error touch the essentials of the poetic art, or some accident of it? For example,--not to know that a hind has no horns is a less serious matter than to paint it inartistically.
Further, if it be objected that the description is not true to fact, the poet may perhaps reply,--'But the objects are as they ought to be': just as Sophocles said that he drew men as they ought to be; Euripides, as they are. In this way the objection may be met. If, however, the representation be of neither kind, the poet may answer,--'This is how men say the thing is.' This applies to tales about the gods. It may well be that these stories are not higher than fact nor yet true to fact: they are, very possibly, what Xenophanes says of them. But anyhow, 'this is what is said.' Again, a description may be no better than the fact: 'still, it was the fact'; as in the passage about the arms: 'Upright upon their butt-ends stood the spears.' This was the custom then, as it now is among the Illyrians.
Again, in examining whether what has been said or done by some one is poetically right or not, we must not look merely to the particular act or saying, and ask whether it is poetically good or bad. We must also consider by whom it is said or done, to whom, when, by what means, or for what end; whether, for instance, it be to secure a greater good, or avert a greater evil.
Other difficulties may be resolved by due regard to the usage of language. We may note a rare word, as in οὐρῆας μὲν πρῶτον, where the poet perhaps employs οὐρῆας not in the sense of mules, but of sentinels. So, again, of Dolon: 'ill-favoured indeed he was to look upon.' It is not meant that his body was ill-shaped, but that his face was ugly; for the Cretans use the word εὐειδές, 'well-favoured,' to denote a fair face. Again, ζωρότερον δὲ κέραιε, 'mix the drink livelier,' does not mean 'mix it stronger' as for hard drinkers, but 'mix it quicker.'
Sometimes an expression is metaphorical, as 'Now all gods and men were sleeping through the night,'--while at the same time the poet says: 'Often indeed as he turned his gaze to the Trojan plain, he marvelled at the sound of flutes and pipes.' 'All' is here used metaphorically for 'many,' all being a species of many. So in the verse,--'alone she hath no part . . ,' οἴη, 'alone,' is metaphorical; for the best known may be called the only one.
Again, the solution may depend upon accent or breathing. Thus Hippias of Thasos solved the difficulties in the lines,--δίδομεν (διδόμεν) δέ οἱ, and τὸ μὲν οὗ (οὐ) καταπύθεται ὄμβρῳ.
Or again, the question may be solved by punctuation, as in Empedocles,-- 'Of a sudden things became mortal that before had learnt to be immortal, and things unmixed before mixed.'
Or again, by ambiguity of meaning,--as παρῴχηκεν δὲ πλέω νύξ, where the word πλέω is ambiguous.
Or by the usage of language. Thus any mixed drink is called οἶνος, 'wine.' Hence Ganymede is said 'to pour the wine to Zeus,' though the gods do not drink wine. So too workers in iron are called χαλκέας, or workers in bronze. This, however, may also be taken as a metaphor.
Again, when a word seems to involve some inconsistency of meaning, we should consider how many senses it may bear in the particular passage. For example: 'there was stayed the spear of bronze'--we should ask in how many ways we may take 'being checked there.' The true mode of interpretation is the precise opposite of what Glaucon mentions. Critics, he says, jump at certain groundless conclusions; they pass adverse judgment and then proceed to reason on it; and, assuming that the poet has said whatever they happen to think, find fault if a thing is inconsistent with their own fancy. The question about Icarius has been treated in this fashion. The critics imagine he was a Lacedaemonian. They think it strange, therefore, that Telemachus should not have met him when he went to Lacedaemon. But the Cephallenian story may perhaps be the true one. They allege that Odysseus took a wife from among themselves, and that her father was Icadius not Icarius. It is merely a mistake, then, that gives plausibility to the objection.
In general, the impossible must be justified by reference to artistic requirements, or to the higher reality, or to received opinion. With respect to the requirements of art, a probable impossibility is to be preferred to a thing improbable and yet possible. Again, it may be impossible that there should be men such as Zeuxis painted. 'Yes,' we say, 'but the impossible is the higher thing; for the ideal type must surpass the reality.' To justify the irrational, we appeal to what is commonly said to be. In addition to which, we urge that the irrational sometimes does not violate reason; just as 'it is probable that a thing may happen contrary to probability.'
Things that sound contradictory should be examined by the same rules as in dialectical refutation--whether the same thing is meant, in the same relation, and in the same sense. We should therefore solve the question by reference to what the poet says himself, or to what is tacitly assumed by a person of intelligence.
The element of the irrational, and, similarly, depravity of character, are justly censured when there is no inner necessity for introducing them. Such is the irrational element in the introduction of Aegeus by Euripides and the badness of Menelaus in the Orestes.
Thus, there are five sources from which critical objections are drawn. Things are censured either as impossible, or irrational, or morally hurtful, or contradictory, or contrary to artistic correctness. The answers should be sought under the twelve heads above mentioned.
https://www.lazymaths.com/smart-math/arithmetic-problem-18/
# [Smart Math] Arithmetic Problem 18
Here’s an example of a SMART MATH problem for ARITHMETIC.
### Problem
The sum of cubes of three numbers is 8072 and the ratio of the first to the second as also the second to the third is 3 : 2. What is the second number?
1. 2
2. 4
3. 6
4. 9
5. 12
### The Usual Method
Let the three numbers be ‘a’, ‘b’ and ‘c’.
Hence a : b = 3 : 2 and b : c = 3 : 2
$\therefore$ a : b : c = (3 × 3) : (2 × 3) : (2 × 2) = 9 : 6 : 4
Hence, a = 9x, b = 6x and c = 4x.
Now, $(9x)^{3}+(6x)^{3}+(4x)^{3}=$ 8072
$\therefore 729x^{3}+216x^{3}+64x^{3}=8072$
$\therefore 1009x^{3}=8072$
$\therefore x^{3}=\frac{8072}{1009}=8$
$\therefore x=2$
Hence the second number is 6x = 6 × 2 = 12
(Ans: 5)
Estimated Time to arrive at the answer = 75 seconds.
### Using Technique
Simply by knowing that the ratios are in the form 3 : 2 between the first & second and 3 : 2 between the second & third number, we can know that the second number is a multiple of 6. (Since a : b : c = 9 : 6 : 4)
This means that the answer from amongst the options is either 6 or 12. All other options are eliminated.
Now assuming that b = 6, we will have a = 9 and c = 4 (Since a : b : c = 9 : 6 : 4)
Now simply add the last digits of the cubes of 9, 6 and 4, i.e. 9 + 6 + 4 = 19 (the last digits of the cubes of 9, 6 and 4 are 9, 6 and 4 respectively). Here the last digit is 9 (from 19). But the last digit of the actual sum of cubes is 2 (from 8072). Hence the middle number is not 6 but 12.
(Ans: 5)
Estimated Time to arrive at the answer = 30 seconds.
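The last-digit filter is easy to mechanize. The following Python sketch (illustrative, not from the original article) applies the technique to the two surviving options, using only the last digit of the sum of cubes:

```python
# Candidates for the middle number b, where a : b : c = 9 : 6 : 4,
# so a = 3b/2 and c = 2b/3. Only multiples of 6 make a, c integers.

def last_digit_of_cube_sum(b):
    a, c = 3 * b // 2, 2 * b // 3
    return (a**3 + b**3 + c**3) % 10

target = 8072 % 10  # the given sum of cubes ends in 2
candidates = [b for b in (6, 12) if last_digit_of_cube_sum(b) == target]
print(candidates)  # [12]
```

For b = 6 the cubes are 729 + 216 + 64 = 1009 (last digit 9), so only b = 12 survives, matching the article's answer.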
https://www.rndsystems.com/resources/articles/th1-th2

# Th1/Th2
First printed in R&D Systems' 2000 Catalog.
### Overview
Immunity is the result of interplay between two "immune" systems: the innate immune system that initially encounters antigen and the adaptive immune system of T and B cells, which responds to information provided by the innate system. A hallmark of the innate system is the non-specificity of antigen identification. By utilizing a relatively small number of "pattern recognition receptors" that recognize highly conserved native molecular patterns on microbes, a large number of diverse organisms can be detected by a limited number of cell types.1-3 Cells that contribute to this system include phagocytes (neutrophils and monocytes), macrophages, dendritic cells and dendritic cell precursors, NK cells, γδ T cells, and likely mast cells.1-8 Once antigen has been detected by an innate immune cell, this information is communicated to T and B cells of the adaptive immune system.1,3,4 Although T and B cells need to be instructed how to respond (i.e., what specific cytokines to secrete, what antibody isotype(s) to make), they demonstrate the remarkable facility of immunologic memory. The innate immune system provides signals for the activation of the adaptive immune system. It does so by providing signals related to the context and molecular nature of antigenic epitopes. The signals that are sent indicate whether antigen should be attacked and, if so, how. The adaptive immune system induces T cells to change from a naive phenotype to either an effector functional type or a memory phenotype. The Th1/Th2 phenotype reflects the outcome of naive T cell activation.1-3,7
Figure 1. Schematic representation of cytokines influencing the development of antigen-activated naive CD4+ T cells into Th1 and Th2 cells.
### Th, Naive, Effector, and Memory Cells
The abbreviations Th1 (T helper cell type 1) and Th2 (T helper cell type 2) refer to CD4+ T helper cells. If these cells are actively secreting their hallmark cytokines (Th1: IFN-gamma; Th2: IL-4 and IL-5), they can be considered Th1 or Th2 primary effector cells.21,22 If they are "resting" but polarized (i.e., committed to a Th type), they could be considered Th1 or Th2 memory cells21,22 which, when reactivated, form Th1 or Th2 memory effector cells.22 Memory cells, at least in the CD8+ lineage, are a distinct stage in T cell differentiation and not a transition state in effector cell development.23 Whether memory cells derive directly from antigen-stimulated naive cells21 or result from a down-activation of functioning effector cells has yet to be determined.21,24
The Th phenotypes are characterized by the cytokines they produce. The first Th cell types reported were mouse Th1 and Th2 cells. Mouse Th1 cells were found to secrete IFN-gamma, while Th2 cells secreted IL-4.25 In the human, Th1 cells were also identified that secrete IFN-gamma,9,12 along with cells reported to secrete both Th1- and Th2-type cytokines, termed Th0 cells, but the actual existence of Th0 cells as a distinct subset is controversial. Since cytokine production is usually measured in a heterogeneous population of cells, the Th0 cell may simply represent a mixture of Th1 and Th2 cells.17,19
In addition to cytokine production profiles, there are a number of cell surface markers proposed to differentiate Th1 vs. Th2 subtypes. For example, Th1 cells express both components of the IL-12 receptor (beta 1 and beta 2).32,33 Only Th2 cells appear to express a fully functional IL-1 receptor,34 and ST2L/T1, a newly discovered IL-1 RI-like molecule, is found on Th2 cells only.35 Chemokine receptors CXCR3 and CCR5 are characteristic of Th1 cells,36,37 while CXCR4, CCR3, CCR4, CCR7 and CCR8 are associated with Th2 cells.37,38 CD30, a member of the TNF superfamily, is associated with Th2 cells.15,39
### Factors Regulating Th Differentiation
A number of factors have been suggested to impact the development of Th1 and Th2 cells. All seem to be dependent, however, on a fundamental stepwise interaction between the antigen presenting cell (APC) and the naive/Thp cell. This interaction begins with the presentation of an antigen-MHC class II complex on the surface of an APC to the TCR/CD3/CD4 complex on naive T lymphocytes.40,41 This interplay activates the naive T cell, resulting in IL-2 receptor expression, IL-2 secretion, and CD40L upregulation. IL-2 interacts with IL-2R in an autocrine manner, while the appearance of CD40L allows the T cell to bind constitutively expressed CD40 on the surface of the APC. This interaction stimulates the APC to first express CD86/B7-2, and later CD80/B7-1. These molecules serve as membrane-bound ligands for T cell membrane CD28. The B7-CD28 interaction is a key connection, because CD28 ligation: 1) amplifies IL-2 secretion (and thus proliferation), 2) induces the appearance of the anti-apoptotic molecule Bcl-x (promoting survival), and 3) may contribute to future cytokine secretion.41-44
With ligation of CD28, the naive T cell may differentiate along more than one pathway, subject to a variety of inputs.16 One factor that may influence Th development is the MHC-TCR interaction itself. Very low and very high antigen doses have been suggested to promote a Th2 response, while moderate antigen levels predispose naive cells to become Th1 cells.45 Alternatively, when dose and affinity of antigen are considered concurrently, exact opposite results are reported. Low and high doses of high affinity antigens yield Th1 cells, while moderate doses of high affinity antigens yield Th2 cells.46 Antigen, at almost any dose, favors a Th0 phenotype and the key to subsequent differentiation is the level of available IL-2.47 Although CD4 is part of the TCR complex on naive T cells, it does not appear to be required for either Th1 or Th2 development.48,49
Co-stimulatory molecules have also been investigated for their effects on differentiation. Within the B7-CD28 system, CD86/B7-2 is associated with Th2 development9,50,51 while CD80/B7-1 delivers a neutral differentiation signal.52,53 These effects may be cytokine dependent.53 Other co-stimulatory molecules found on T cells include ICOS, a newly discovered CD28-like molecule that may contribute to Th2 development,54,55 and two TNF superfamily members, OX40 and 4-1BB, that may predispose to Th2 and Th1 development, respectively.55-57
The time of availability plus the relative ratio(s) of cytokines systematically drives naive T cells to one or more fundamental phenotypes.12,16 Aside from the issue of IL-2, whose role in differentiation may be restricted to select stages of Th2 development,58,59 three cytokines seem central to the initial stages of development of Th1 and Th2 cells. The first is IL-4, a 20 kDa monomer secreted by Th2 cells, mast cells, basophils, and eosinophils. The second is IFN-gamma, a 35 kDa noncovalent homodimer that is secreted by a variety of cells, including NK cells, Th1 cells, macrophages, and γδ T cells. The third is IL-12, a 70 kDa heterodimer that is secreted by APCs, neutrophils, and keratinocytes.
IL-12 and IL-4 have been considered the pivotal cytokines in influencing antigen-activated naive CD4+ T cells to develop into Th1 and Th2 cells, respectively.9,12,60-62 Not all cytokines are equal in their effects, however, and this is in part due to variability of receptor expression. When T cells are in the Thp/naive state they are IL-4 R+,18,63 IL-12 R beta 1-/beta 2-,64 and IFN-gamma R alpha+ beta+.65 Once the Thp cell is antigen-activated via interaction with an APC (as described above), its existing IL-4 R is upregulated,63 both IL-12 R beta 1 and beta 2 appear,62,64 while IFN-gamma R alpha and beta expression is maintained.65 In this transitory, Th0-like state, all relevant receptors appear to be expressed. From this point, the quantities and timing of appearance of various cytokines are determinative. IL-12, a Th1 growth factor, is secreted almost immediately by APCs through their antigen presentation and B7 ligation.60,66,67 IL-12 binds to NK cells and Th0 cells, inducing rapid synthesis of IFN-gamma.16,68,69 This initial induction of IFN-gamma in a Th0 cell leads, first, to an apparent reinforcement of IL-12 R subunits,17 and, second, to a downregulation of its own IFN-gamma R beta subunit.17,65 This, in theory, yields an IFN-gamma secreting cell (i.e., Th1 cell) that is now unresponsive to its own IFN-gamma, still responsive to IL-12, and potentially responsive to IL-4 via its IL-4R.18 The existence of the IL-4 R on fully differentiated Th1 cells must be emphasized, because this provides a mechanism for future functional modulation by IL-4.
IL-4 is considered dominant over IL-12. It upregulates expression of its own receptor, inhibits the secretion of IL-12 and downregulates the expression of the beta 2 subunit of the IL-12 receptor. It likely induces its own expression (in both naive and effector cells), and it is reported to induce a Th1 to Th2 switch, possibly through activation of its own receptor on Th1 cells.18,61,62,70,71 IL-12, in contrast, cannot block IL-4 production and cannot induce a Th2 switch to Th1. This may relate to the fact that Th2 cells are constantly making IL-4, and IL-4 downregulates the IL-12 R beta 2 subunit. Any lag in IL-4 appearance may not be important as long as Th1 cells can later be potentially converted to Th2 cells. In any event, IL-4 is dominant, and will prevail if it reaches a critical level.66
The discussion above is an oversimplification of a very complex system. For example, TGF-beta in the presence of IL-4 and high IL-2 may drive naive T cells to a Th1 phenotype.74 The Th0 stage may be bypassed entirely if cytokines are immediately available.75 It is suggested that IL-12 can promote Th2 responses in effector cells.60 Memory Th1 cells may not be converted to Th2 cells, while Th2 memory cells may be converted to Th1 cells.76 In summary, the extent of T cell diversity is not fully understood.9
### Th1/Th2 Biology
Th1 and Th2 cells have been associated with specific immune responses due to the cytokines they secrete. For pathogens that require internalization, the presence of Th1 cytokines (IFN-gamma and TNF-beta) is considered necessary. Conversely, for large extracellular parasites such as helminths, Th2-type cytokines (IL-4 and IL-5) have been considered most protective.13,78-80 In the case of Th1-type cytokines, IFN-gamma has a multitude of functions. It promotes phagocytosis and upregulates microbial killing. In particular, it induces IgG2A (in mice) which is known to opsonize bacteria. On phagocytes, IFN-gamma promotes the expression of Fc gamma RI receptors, which are used for phagocytosis. It further upregulates the availability of NO, hydrogen peroxide, and superoxide in cells actively participating in phagocytosis. IFN-gamma provides all the tools necessary to eliminate most external microbes.61,79-81 To guarantee that monocytes/macrophages and T cells get to the site of infection, IFN-gamma works in concert with TNF-beta/LT-alpha to induce endothelial cell expression of adhesion molecules specific to monocytes and T cells, and promotes the expression of chemokines that specifically attract mononuclear cells (i.e., IP-10, MIG, RANTES, and MCP-1).81,82
For the classic Th2 cytokine, IL-4, its secretion triggers a number of events that parallel those of IFN-gamma. IL-4 promotes production of neutralizing antibodies (IgG) and the mast cell/eosinophil degranulating antibody known as IgE.61,81 It also promotes upregulation of IgE receptors on mast cells, eosinophils and macrophages, and it induces membrane expression of macrophage MHC class II molecules and the IL-4 receptor.61 IL-4 and IFN-gamma often exist in an antagonistic relationship. IFN-gamma blocks IgE and IgG1 production, while IL-4 blocks IgG2A secretion.81 Although Th2 cells have been associated with helminth infections, the roles that IL-4 and IL-5 play in protective immunity are unclear. IL-4 has a strong association with the clearance of intestinal worms. This effect, however, may be on non-immune cells.78 IL-5 likely activates eosinophils against parasites present in tissue. This could be complemented by IgE, which would be directed against parasite antigens and may serve as an opsonizing factor that induces toxic granule secretion by eosinophils,13,78,82,83 and by IL-4 itself, which is known to induce the expression of endothelial adhesion molecules that draw eosinophils to the site of infection.84
### References
1. Medzhitov, R. and C.A. Janeway (1997) Curr. Opin. Immunol. 9:4.
2. Fearon, D.T. and R.M. Locksley (1996) Science 272:50.
3. Borghans, J.A.M. et al. (1999) J. Immunol. 163:569.
4. Palucka, K. and J. Banchereau (1999) Nature Med. 5:868.
5. Siegal, F.P. et al. (1999) Science 284:1835.
6. Cella, M. et al. (1999) Nature Med. 5:919.
7. Mak T.W. and D.A. Ferrick (1998) Nature Med. 4:764.
8. Welle, M. (1997) J. Leukoc. Biol. 61:233.
9. Mosmann, T.R. and S. Sad (1996) Immunol. Today 17:138.
10. Ferrick, D.A. et al. (1995) Nature 373:255.
11. Sad, S. et al. (1995) Immunity 2:271.
12. Seder, R.A. and W.E. Paul (1994) Annu. Rev. Immunol. 12:635.
13. Romagnani, S. (1995) J. Clin. Immunol. 15:121.
14. Letterio, J.J. and A.B. Roberts (1998) Annu. Rev. Immunol. 16:137.
15. Romagnani, S. (1997) Immunol. Today 18:263.
16. Delespesse, G. et al. (1997) Int. Arch. Allergy Immunol. 113:157.
17. Zhai, Y. et al. (1999) Crit. Rev. Immunol. 19:155.
18. Nakamura, T. et al. (1997) J. Immunol. 158:1085.
19. Bucy, R.P. et al. (1995) Proc. Natl. Acad. Sci. USA 92:7565.
20. Openshaw, P. et al. (1995) J. Exp. Med. 182:1357.
21. Ahmed, R. and D. Gray (1996) Science 272:54.
22. Dutton, R.W. et al. (1998) Annu. Rev. Immunol. 16:201.
23. Bachmann, M.F. et al. (1999) Eur. J. Immunol. 29:291.
24. Garcia, S. et al. (1999) Immunity 11:163.
25. Mosmann, T.R. et al. (1986) J. Immunol. 136:2348.
26. Del Prete, G.F. et al. (1991) J. Clin. Invest. 88:346.
27. Katsikis, P.D. et al. (1995) Int. Immunol. 7:1287.
28. Chen, Y. et al. (1994) Science 265:1237.
29. Weiner, H.L. (1997) Immunol. Today 18:335.
30. Sad, S. and T.R. Mosmann (1994) J. Immunol. 153:3514.
31. Ohshima, Y. et al. (1999) J. Immunol. 162:3790.
32. Pernis, A. et al. (1995) Science 269:245.
33. Groux, H. et al. (1997) J. Immunol. 158:5627.
34. Lichtman, A.H. et al. (1988) Proc. Natl. Acad. Sci. USA 85:9699.
35. Xu, D. et al. (1998) J. Exp. Med. 187:787.
36. Bonecchi, R. et al. (1998) J. Exp. Med. 187:129.
37. Jung, S. and D.R. Littman (1999) Curr. Opin. Immunol. 11:319.
38. Sallusto, F. et al. (1998) J. Exp. Med. 187:875.
39. Krampera, M. et al. (1999) Clin. Exp. Immunol. 117:291.
40. Kapsenberg, M.L. et al. (1999) Clin. Exp. Allergy 29 (Suppl 2):33.
41. Foy, T.M. et al. (1996) Annu. Rev. Immunol. 14:591.
42. Walunas, T.L. et al. (1996) J. Exp. Med. 183:2541.
43. Lenschow, D.J. et al. (1996) Annu. Rev. Immunol. 14:233.
44. Boise, L.H. et al. (1995) Immunity 3:87.
45. Murray, J.S. (1998) Immunol. Today 19:157.
46. Rogers, P.R. and M. Croft (1999) J. Immunol. 163:1205.
47. Rogers, P.R. et al. (1998) J. Immunol. 161:3844.
48. Locksley, R.M. et al. (1993) Science 261:1448.
49. Wack, A. et al. (1999) J. Immunol. 163:1162.
50. Rulifson, I.C. et al. (1997) J. Immunol. 158:658.
51. Freeman, G.J. et al. (1995) Immunity 2:523.
52. Xu, H. et al. (1997) J. Immunol. 159:4217.
53. De Becker, G. et al. (1998) Eur. J. Immunol. 28:3161.
54. Hutloff, A. et al. (1999) Nature 397:263.
55. Watts, T.H. and M.A. DeBenedette (1999) Curr. Opin. Immunol. 11:286.
56. Ohshima, Y. et al. (1998) Blood 92:3338.
57. Lane, P.J.L. and T. Brocker (1999) Curr. Opin. Immunol. 11:308.
58. Heinzel, F.P. et al. (1993) J. Immunol. 150:3924.
59. Seder, R.A. et al. (1994) J. Exp. Med. 179:299.
60. Muraille, E. and O. Leo (1998) Scand. J. Immunol. 47:1.
61. Paludan, S.R. (1998) Scand. J. Immunol. 48:459.
62. Murphy, K.M. (1998) Curr. Opin. Immunol. 10:226.
63. Kubo, M. et al. (1999) J. Immunol. 163:2434.
64. Igarashi, O. et al. (1998) J. Immunol. 160:1638.
65. Bach, E.A. et al. (1995) Science 270:1215.
66. O’Garra, A. (1998) Immunity 8:275.
67. Macatonia, S.E. et al. (1995) J. Immunol. 154:5071.
68. Lederer, J.A. et al. (1996) J. Exp. Med. 184:397.
69. Ohshima, Y. and G. Delespesse (1997) J. Immunol. 158:629.
70. Szabo, S.J. et al. (1995) Immunity 2:665.
71. Breit, S. et al. (1996) Eur. J. Immunol. 26:1860.
72. Nakamura, T. et al. (1997) J. Immunol. 158:2648.
73. Mountford, A.P. et al. (1999) Immunology 97:588.
74. Bird, J.J. et al. (1998) Immunity 9:229.
75. Lingnau, K. et al. (1998) J. Immunol. 161:4709.
76. Toellner, K-M. et al. (1998) J. Exp. Med. 187:1193.
77. Aarvak, T. et al. (1999) Scand. J. Immunol. 50:1.
78. Allen, J.E. and R.M. Maizels (1997) Immunol. Today 18:387.
79. Heinzel, F.P. (1995) Curr. Opin. Inf. Dis. 8:151.
80. Abbas, A.K. et al. (1996) Nature 383:787.
81. Boehm, U. et al. (1997) Annu. Rev. Immunol. 15:749.
82. Cuff, C.A. et al. (1998) J. Immunol. 161:6853.
83. Desreumaux, P. and M. Capron (1996) Curr. Opin. Immunol. 8:790.
84. Patel, K.D. (1998) Blood 92:3904.
https://deploybot.com/blog/chain-your-deployments

Today we're announcing a feature that will give you more power over the way your deployments are triggered. In some cases you may want to trigger a deployment in one environment only after another one has deployed successfully, or you may want to trigger deployments for multiple repositories one after another. Well, now you can.
Setting this up is very easy. As soon as you have two environments in your account, no matter if they are in different repositories or not, you can set up deployment triggers for them. On the environments page, here's how the trigger setup looks:
Once the trigger is set up, on every successful deployment to the main environment, the dependent environment will be deployed. This allows for some very interesting deployment pipelines and we can't wait to hear what you do with this feature. Once you set up a trigger, for your convenience you should be able to see the information about it in the environment you're triggering deployment in:
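DeployBot configures these triggers through its UI, but as a rough mental model of the behavior (the names and structure below are illustrative, not DeployBot's API), a chained trigger works like this: every successful deployment fires the deployments that depend on it, so chains of any length are possible.

```python
from collections import defaultdict

# Map each environment to the environments deployed after it succeeds.
triggers = defaultdict(list)

def chain(source_env, dependent_env):
    triggers[source_env].append(dependent_env)

deployed = []

def deploy(env, success=True):
    deployed.append(env)
    if success:                      # dependents fire only on success
        for dep in triggers[env]:
            deploy(dep)

# Pipeline: staging, then production, then a docs site.
chain("staging", "production")
chain("production", "docs-site")
deploy("staging")
print(deployed)  # ['staging', 'production', 'docs-site']
```

A failed deployment stops the chain, which is the property that makes triggers safe for promotion-style pipelines.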
I hope you find this feature useful. Please let us know if you have any concerns, questions, or ideas on how we can improve things.
https://www.solidot.org/translate/?nid=148213 | ## Continuous Breuer-Major theorem for vector valued fields. (arXiv:1901.02317v1 [math.PR])
Let $\xi : \Omega \times \mathbb{R}^n \to \mathbb{R}$ be a zero-mean, mean-square continuous, stationary, isotropic Gaussian random field with covariance function $r(x) = \mathbb{E}[\xi(0)\xi(x)]$ and let $G : \mathbb{R} \to \mathbb{R}$ be such that $G$ is square integrable with respect to the standard Gaussian measure and is of Hermite rank $d$. The Breuer-Major theorem in its continuous setting gives that, if $r \in L^d(\mathbb{R}^n)$ and $r(x) \to 0$ as $|x| \to \infty$, then the finite dimensional distributions of $Z_s(t) = \frac{1}{(2s)^{n/2}} \int_{[-st^{1/n},st^{1/n}]^n} \Big[G(\xi(x)) - \mathbb{E}[G(\xi(x))]\Big]dx$ converge to that of a scaled Brownian motion as $s \to \infty$. Here we give a proof for the case when $\xi : \Omega \times \mathbb{R}^n \to \mathbb{R}^m$ is a random vector field. We also give conditions for the functional convergence in $C([0,\infty))$ of $Z_s$ to hold, along with an expression for the asymptotic variance of the second chaos component in the Wiener chaos.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929203391075134, "perplexity": 133.98122350279633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203991.44/warc/CC-MAIN-20190325133117-20190325155117-00385.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/102800-statistics-probabolity-problems-please-help.html | I have these problems to turn in for tomorrow. I don't know the best way to solve them.
Please see if you can solve them and post your results along with calculations.
Problem 1:
In bowl A there are 4 red balls, 3 blue balls and 2 green balls.
In bowl B there are 2 red, 3 blue and 4 green.
One ball is taken from bowl A and put into bowl B.
After this is done one ball is taken from bowl B.
What are the odds that the ball taken from bowl B is red?
Problem 2:
Given are:
S1 = {1,2,3,4}
S2 = {1,2,3,4,5,6}
S3 = {1,2,3,4,5,6,7,8}
We pick one number randomly from S1, where there is an equal chance of picking any one number. We do the same with S2 and S3.
What are the odds that the sum of the numbers we picked is equal to 5?
2. Hello Lesarinn
Welcome to Math Help Forum!
Originally Posted by Lesarinn
I have these problems to turn in for tomorrow. I don't know the best way to solve them.
Please see if you can solve them and post your results along with calculations.
Problem 1:
In bowl A there are 4 red balls, 3 blue balls and 2 green balls.
In bowl B there are 2 red, 3 blue and 4 green.
One ball is taken from bowl A and put into bowl B.
After this is done one ball is taken from bowl B.
What are the odds that the ball taken from bowl B is red?
There are two different cases to consider:
• (i) The ball taken from bowl A is red; and then the ball taken from bowl B is red.
• (ii) The ball taken from bowl A is not red; and then the ball taken from bowl B is red.
Work out the probabilities $p_1$ and $p_2$ that the ball taken from A is (i) red, and (ii) not red. Then, by considering the number of red balls in bowl B in each case (i) and (ii), work out the probabilities $q_1$ and $q_2$ that the second ball is red.
Finally, multiply $p_1$ by $q_1$; multiply $p_2$ by $q_2$; then add your answers to get the final answer. I reckon this comes out as $\frac{11}{45}$.
Problem 2:
Given are:
S1 = {1,2,3,4}
S2 = {1,2,3,4,5,6}
S3 = {1,2,3,4,5,6,7,8}
We pick one number randomly from S1, where there is an equal chance of picking any one number. We do the same with S2 and S3.
What are the odds that the sum of the numbers we picked is equal to 5?
Work out all the ways in which the total could be 5; then work out the probability that each of these sequences of numbers occurs. Finally add all your answers together to get the overall probability.
I'll start you off. We could have:
• 1,1,3. The probability of this is $\tfrac14\times\tfrac16\times\tfrac18 = \tfrac{1}{192}$.
• 1,2,2. The probability of this is exactly the same: $\tfrac14\times\tfrac16\times\tfrac18 = \tfrac{1}{192}$
• ... and so on.
I reckon that the answer is $\tfrac{6}{192} = \tfrac{1}{32}$, since there are six such triples: $(1,1,3)$, $(1,2,2)$, $(1,3,1)$, $(2,1,2)$, $(2,2,1)$ and $(3,1,1)$.
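Both answers can be checked by brute force. The sketch below is my own (not part of the thread); it enumerates every equally likely outcome using exact fractions:

```python
from fractions import Fraction
from itertools import product

# Problem 1: move one ball from bowl A to bowl B, then draw from B.
bowl_a = ["r"] * 4 + ["b"] * 3 + ["g"] * 2          # 4 red, 3 blue, 2 green
p_red = Fraction(0)
for moved in bowl_a:                                # each ball equally likely to move
    bowl_b = ["r"] * 2 + ["b"] * 3 + ["g"] * 4 + [moved]
    p_red += Fraction(1, len(bowl_a)) * Fraction(bowl_b.count("r"), len(bowl_b))
print(p_red)  # 11/45

# Problem 2: count triples from S1 x S2 x S3 that sum to 5.
hits = sum(1 for a, b, c in product(range(1, 5), range(1, 7), range(1, 9))
           if a + b + c == 5)
print(hits, Fraction(hits, 4 * 6 * 8))  # 6 1/32
```

The exhaustive count confirms $\tfrac{11}{45}$ for Problem 1 and finds six triples summing to 5 in Problem 2.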
Can you complete these now? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584282398223877, "perplexity": 476.78640821688236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453656.76/warc/CC-MAIN-20151124205413-00004-ip-10-71-132-137.ec2.internal.warc.gz"} |
http://ccx.openaircraft.com/ccx-doc/cgx_2.17.1/doc/cgx/node141.html | ## plot
'plot' ['n'|'e'|'f'|'p'|'l'|'s'|'b'|'S'|'L'|'sh'|'si'] ['a'|'b'|'c'|'d'|'n'|'p'|'q'|'t'|'v'] <set> ['b'|'g'|'k'|'m'|'n'|'r'|'t'|'w'|'y'] [<width>|<transparency>]
This keyword is used to display the entities of a set. Entities already visible will be erased. The following types of entities are known:
Nodes n, Elements e, Faces f, Points p, Lines l, Surfaces s, Bodies b, Nurbs Surfaces S, Nurbs Lines L, Shapes sh and the shaded (illuminated) surfaces si
The entities can be displayed in the following colors:
White w, Black k, Red r, Green g, Blue b, Yellow y, Magenta m, Neutral n (metallic grey) and Turquoise t
To display the entities with attributes, use the type in combination with an attribute (second letter). For example
plot la all
will display all lines with their names. The attribute d works only for lines,
plot ld all
will display all lines with their division and bias (see bia). The division is given by the number (1-99) following the # sign, and the bias by the leading number. If the leading number has more than one digit, it has to be divided by a factor of ten to get the bias (101#30 means a bias of 10.1 and a division of 30).
plot ln all
shows potential node locations. The attribute p works only for lines. In this case the lines with their end-points are drawn:
plot lp all
This is useful for detecting the beginning and end of each line. If its end-points are deleted, the line is deleted as well, so special care with end-points is necessary. The attribute c combines the line attributes d, p and n:
plot lc all
The lines are drawn with their end-points, potential node positions and divisions. Shaded surfaces
plot si all
can only be displayed if the interior was previously calculated, which is done with the command “rep” or “mesh”. The attribute t applies only to nodes and will display only the ones which have attached texts:
plot nt all
will display only the nodes which have attached texts out of the set 'all'. They are created with ”qadd”, ”qenq” or ”qtxt”. The attribute “width” determines the number of pixels used for the thickness of the entity (points, nodes, lines):
plot l all 4
will display all lines with a width of 4 pixels. This works also for 2D faces and beams. The attribute n works for nodes only:
plot nn set1
will display the nodes in set set1 with their numerical values. The attribute v works for nodes, faces and elements. This attribute is used to display results with colors representing their values:
plot nv set1
plot fv set1
plot ev set1
This is what happens automatically when the user selects an “Entity” from “Datasets” in the menu. The faces can be displayed in a transparent manner with the attribute b:
plot fb set1 t 33
will display the faces in turquoise color with a transparency of 33%.
plot fvb set1 33
will display the faces with their colored values with a transparency of 33%. A default transparency is used if a number is not given.
The attribute q works only for elements. With this attribute, only elements which do not pass the element-quality check are displayed:
plot eq all
The threshold for the element-quality is defined with ”eqal”.
To plot additional entities, see plus. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3093951940536499, "perplexity": 4071.3513537814283}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00541.warc.gz"} |