Dataset columns: url (string, length 14–2.42k), text (string, length 100–1.02M), date (string, length 19), metadata (string, length 1.06k–1.1k)
https://raweb.inria.fr/rapportsactivite/RA2020/graphik/index.html
2020 Activity report Project-Team GRAPHIK RNSR: 201019618K Research center In partnership with: CNRS, INRAE, Université de Montpellier Team name: GRAPHs for Inferences and Knowledge representation In collaboration with: Laboratoire d'informatique, de robotique et de microélectronique de Montpellier (LIRMM) Domain Perception, Cognition and Interaction Theme Data and Knowledge Representation and Processing Creation of the Project-Team: 2010 January 01 # Keywords • A3.1.1. Modeling, representation • A3.2.1. Knowledge bases • A3.2.3. Inference • A3.2.5. Ontologies • A7.2. Logic in Computer Science • A9.1. Knowledge • A9.6. Decision support • A9.7. AI algorithmics • A9.8. Reasoning • B3.1. Sustainable development • B9.5.6. Data science • B9.7.2. Open data # 1 Team members, visitors, external collaborators ## Research Scientists • Jean-François Baget [Inria, Researcher] • Pierre Bisquert [Institut national de recherche pour l'agriculture, l'alimentation et l'environnement, Researcher] ## Faculty Members • Marie-Laure Mugnier [Team leader, Univ de Montpellier, Professor, HDR] • Michel Chein [Univ de Montpellier, Emeritus, HDR] • Madalina Croitoru [Univ de Montpellier, Associate Professor, HDR] • Jérôme Fortin [Univ de Montpellier, Associate Professor] • Michel Leclère [Univ de Montpellier, Associate Professor] • Federico Ulliana [Univ de Montpellier, Associate Professor] ## PhD Students • Martin Jedwabny [Univ de Montpellier] • Elie Najm [Inria] • Guillaume Perution Kihli [Inria, from Sep 2020] • Olivier Rodriguez [Inria] ## Technical Staff • Florent Tornil [Inria, Engineer, from Sep 2020] ## Interns and Apprentices • Guillaume Perution Kihli [Inria, from Feb 2020 until Jul 2020] • Noel Rodriguez [Inria, from Jul 2020 until Aug 2020] • Annie Aliaga [Inria] ## External Collaborators • Meghyn Bienvenu [CNRS, HDR] • Patrice Buche [Institut national de recherche pour l'agriculture, l'alimentation et l'environnement, HDR] • Alain Gutierrez [CNRS] • Rallou Thomopoulos [Institut national de recherche pour l'agriculture, l'alimentation et l'environnement, HDR] # 2 Overall objectives ## 2.1 Logic and Graph-based KR The main research domain of GraphIK is Knowledge Representation and Reasoning (KR), which studies paradigms and formalisms for representing knowledge and reasoning on these representations. A large part of our work is strongly related to data management and database theory. We develop logical languages, which mainly correspond to fragments of first-order logic. However, we also use graphs and hypergraphs (in the graph-theoretic sense) as basic objects. Indeed, we view labelled graphs as an abstract representation of knowledge that can be expressed in many KR languages: different kinds of conceptual graphs —historically our main focus—, the Semantic Web language RDFS, expressive rules equivalent to so-called tuple-generating-dependencies in databases, some description logics dedicated to query answering, etc. For these languages, reasoning can be based on the structure of objects (thus on graph-theoretic notions) while being sound and complete with respect to entailment in the associated logical fragments. An important issue is to study trade-offs between the expressivity and the computational tractability of (sound and complete) reasoning in these languages. 
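The correspondence between graph structure and logical entailment sketched above (and developed in Section 3.2) can be made concrete with a small example. The following Python fragment is an illustration written for this text, not code from the team's tools; the graph encoding (a node-labelling dictionary plus a set of (node, relation, node) triples) is an assumption of the example. A homomorphism from $G$ to $H$ witnesses that the formula assigned to $G$ is entailed by the formula assigned to $H$:

```python
from typing import Dict, Set, Tuple

Graph = Tuple[Dict[object, str], Set[Tuple[object, str, object]]]

def homomorphism_exists(G: Graph, H: Graph) -> bool:
    """True iff a label-preserving homomorphism maps G into H.

    Naive backtracking over candidate images; the problem is NP-complete,
    so this is exponential in the worst case.
    """
    g_labels, g_edges = G
    h_labels, h_edges = H
    g_nodes = list(g_labels)

    def extend(mapping, k):
        if k == len(g_nodes):
            return True
        u = g_nodes[k]
        for v, label in h_labels.items():
            if label != g_labels[u]:
                continue
            mapping[u] = v
            # every G-edge whose endpoints are both mapped must exist in H
            if all((mapping[a], r, mapping[b]) in h_edges
                   for (a, r, b) in g_edges
                   if a in mapping and b in mapping):
                if extend(mapping, k + 1):
                    return True
            del mapping[u]
        return False

    return extend({}, 0)

# A Boolean conjunctive query G evaluates to true on a factbase H
# exactly when G maps homomorphically into H:
H = ({1: "Person", 2: "Person"}, {(1, "knows", 2)})
G = ({"x": "Person", "y": "Person"}, {("x", "knows", "y")})
print(homomorphism_exists(G, H))  # True
```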
## 2.2 From Theory to Applications, and Vice-versa We study logic- and graph-based KR formalisms from three perspectives: • theoretical (structural properties, expressiveness, translations between languages, problem complexity, algorithm design), • software (developing tools to implement theoretical results), • applications (formalizing practical issues and solving them with our techniques, which also feeds back into theoretical work). ## 2.3 Main Challenges GraphIK focuses on some of the main challenges in KR: • ontological query answering: querying large, complex or heterogeneous datasets, provided with an ontological layer; • reasoning with rule-based languages; • reasoning in the presence of inconsistencies; and • decision making. ## 2.4 Scientific Directions Our research work is currently organized into two research lines, both with theoretical and applied sides: 1. Ontology-mediated query answering (OMQA). Modern information systems are often structured around an ontology, which provides a high-level vocabulary, as well as knowledge relevant to the target domain, and enables uniform access to possibly heterogeneous data sources. As many complex tasks can be recast in terms of query answering, the question of querying data while taking into account inferences enabled by ontological knowledge has become a fundamental issue. This gives rise to the notion of a knowledge base, composed of an ontology and a factbase, both described using a KR language. The factbase can be seen as an abstraction of several data sources, and may actually remain virtual. The topical ontology-mediated query answering (OMQA) problem asks for all answers to queries that are logically entailed by the given knowledge base. 2. Reasoning with imperfect knowledge and decision support. To solve real-world problems we often need to consider features that cannot be expressed purely (or naturally) in classical logic. Indeed, information is often “imperfect”: it can be partially contradictory, vague or uncertain, etc. In recent years, we mostly considered reasoning in the presence of conflicts, where contradictory information may come from the data or from the ontology. This requires defining appropriate semantics, able to provide meaningful answers to queries while taming the increase in computational complexity. Reasoning also becomes more complex from a conceptual viewpoint, hence how to explain results to an end-user is an important issue as well. Such questions are natural extensions of those studied in the first axis. On the other hand, the work of this axis is also motivated by applications provided by our INRAE partners, where the knowledge to be represented intrinsically features several viewpoints and involves different stakeholders with divergent priorities, while a decision has to be made. Beyond the representation of conflicting knowledge itself, this raises arbitration issues. The aim here is to support decision making with tools that help elicit and represent relevant knowledge, including the stakeholders' preferences and motivations, compute syntheses of compatible options, and propose justified decisions. # 3 Research program ## 3.1 Logic-based Knowledge Representation and Reasoning We follow the mainstream logic-based approach to knowledge representation (KR). First-order logic (FOL) is the reference logic in KR and most formalisms in this area can be translated into fragments (i.e., particular subsets) of FOL.
This is in particular the case for description logics and existential rules, two well-known KR formalisms studied in the team. A large part of research in this domain can be seen as studying trade-offs between the expressivity of languages and the complexity of (sound and complete) reasoning in these languages. The fundamental problem in KR languages is entailment checking: is a given piece of knowledge entailed by other pieces of knowledge, for instance from a knowledge base (KB)? Another important problem is consistency checking: is a set of knowledge pieces (for instance the knowledge base itself) consistent, i.e., is it guaranteed that nothing absurd can be entailed from it? The ontology-mediated query answering problem is a topical problem (see Section 3.3). It asks for the set of answers to a query in the KB. In the case of Boolean queries (i.e., queries with a yes/no answer), it can be recast as entailment checking. ## 3.2 Graph-based Knowledge Representation and Reasoning Besides logical foundations, we are interested in KR formalisms that comply, or aim at complying, with the following requirements: to have good computational properties and to allow users of knowledge-based systems maximal understanding of, and control over, each step of building and using the knowledge base. These two requirements are the core motivations for our graph-based approach to KR. We view labelled graphs as an abstract representation of knowledge that can be expressed in many KR languages (different kinds of conceptual graphs —historically our main focus— the Semantic Web language RDF (Resource Description Framework), its extension RDFS (RDF Schema), expressive rules equivalent to the so-called tuple-generating dependencies in databases, some description logics dedicated to query answering, etc.). For these languages, reasoning can be based on the structure of objects, thus on graph-theoretic notions, while staying logically founded. More precisely, our basic objects are labelled graphs (or hypergraphs) representing entities and relationships between these entities. These graphs have a natural translation into first-order logic. Our basic reasoning tool is graph homomorphism. The fundamental property is that graph homomorphism is sound and complete with respect to logical entailment, i.e., given two (labelled) graphs $G$ and $H$, there is a homomorphism from $G$ to $H$ if and only if the formula assigned to $G$ is entailed by the formula assigned to $H$. In other words, logical reasoning on these graphs can be performed by graph mechanisms. These knowledge constructs and the associated reasoning mechanisms can be extended (to represent rules for instance) while keeping this fundamental correspondence between graphs and logics. ## 3.3 Ontology-Mediated Query Answering Querying knowledge bases has become a central problem in knowledge representation and in databases. A knowledge base is classically composed of a terminological part (metadata, ontology) and an assertional part (facts, data). Queries are supposed to be at least as expressive as the basic queries in databases, i.e., conjunctive queries, which can be seen as existentially closed conjunctions of atoms or as labelled graphs. The challenge is to define good trade-offs between the expressivity of the ontological language and the complexity of querying data in the presence of ontological knowledge. Description logics have been so far the prominent family of formalisms for representing and reasoning with ontological knowledge.
However, classical description logics were not designed for efficient data querying. On the other hand, database languages are able to process complex queries on huge databases, but without taking the ontology into account. There is thus a need for new languages and mechanisms, able to cope with the ever-growing size of knowledge bases in the Semantic Web or in scientific domains. This problem is related to two other problems identified as fundamental in KR: • Query answering with incomplete information. Incomplete information means that it might be unknown whether a given assertion is true or false. Databases classically make the so-called closed-world assumption: every fact that cannot be retrieved or inferred from the base is assumed to be false. Knowledge bases classically make the open-world assumption: if something cannot be inferred from the base, and neither can its negation, then its truth status is unknown. The need to cope with incomplete information is a distinctive feature of querying knowledge bases with respect to querying classical databases (however, as explained above, this distinction tends to disappear). The presence of incomplete information makes the query answering task much more difficult. • Reasoning with rules. Researching types of rules and adequate ways to process them is a mainstream topic in the Semantic Web and, more generally, a crucial issue for knowledge-based systems. For several years, we have been studying rules, both in their logical and their graph form, which are syntactically very simple but also very expressive. These rules, known as existential rules or Datalog+/-, can be seen as an abstraction of ontological knowledge expressed in the main languages used in the context of KB querying. ## 3.4 Inconsistency and Decision Making While classical FOL is the kernel of many KR languages, to solve real-world problems we often need to consider features that cannot be expressed purely (or not naturally) in classical logic. The logic- and graph-based formalisms used for the previous points thus have to be extended with such features. The following requirements have been identified from scenarios in decision making, privileging the agronomy domain: • to cope with inconsistency; • to cope with defeasible knowledge; • to take into account different and potentially conflicting viewpoints; • to integrate decision notions (priorities, gravity, risk, benefit). Although the solutions we develop need to be validated on the applications that motivated them, we also want them to be sufficiently generic to be applied in other contexts. One angle of attack (but not the only possible one) consists in increasing the expressivity of our core languages, while trying to preserve their essential combinatorial properties, so that algorithmic optimizations can be transferred to these extensions. # 4 Application domains ## 4.1 Agronomy Agronomy is a strong expertise domain in the area of Montpellier. Some members of GraphIK are INRAE researchers (computer scientists). We closely collaborate with the Montpellier research laboratory IATE, a joint unit of INRAE and other organisations. A major issue for INRAE, and more specifically for IATE applications, is modeling agrifood chains (i.e., the chain of all processes leading from the plants to the final products, including waste treatment). This modeling has several objectives.
It provides a better understanding of the processes from beginning to end, which aids decision making, with the aim of improving the quality of the products and decreasing the environmental impact. It also facilitates knowledge sharing between researchers, as well as the capitalization of expert knowledge and “know-how”. This last point is particularly important in areas strongly tied to local know-how (as in cheese or wine making), where knowledge is transmitted by experience, with the risk that these specific skills are not sustained. An agrifood chain analysis is a highly complex procedure since it relies on numerous criteria of various types: environmental, economic, functional, sanitary, etc. Quality objectives involve different stakeholders: technicians, managers, professional organizations, end-users, public organizations, etc. Since the goals of the stakeholders involved may diverge, dedicated knowledge representation techniques have to be employed. ## 4.2 Data Journalism One of today’s major issues in data science is to design techniques and algorithms that allow analysts to efficiently infer useful information and knowledge by inspecting heterogeneous information sources, from structured data to unstructured content. We take data journalism as an emblematic use case, which stands at the crossroads of multiple research fields: content analysis, data management, knowledge representation and reasoning, visualization and human-machine interaction. We are particularly interested in issues raised by the design of data and knowledge management systems that will support data journalism. These systems include an ontology (which typically expresses domain knowledge), heterogeneous data sources (provided with their own vocabulary and querying capabilities), and mappings that relate these data sources to the ontological vocabulary. Ontologies play a central role as they act both as a mediation layer that glues together pieces of knowledge extracted from data sources, and as an inference layer that allows one to draw new knowledge. Beyond pure knowledge representation and reasoning issues, querying such systems raises issues at the crossroads of data and knowledge management. In particular, although mappings have been widely investigated in databases, they need to be revisited in the light of the reasoning capabilities enabled by the ontology. More generally, the consistency and the efficiency of the system cannot be ensured by considering its components in isolation (i.e., the ontology, data sources and mappings); this requires studying the interactions between these components and considering the system as a whole. # 5 Social and environmental responsibility Since January 2020, Pierre Bisquert has been a member of the national INRAE DigigrAL thinking group. This group aims at providing reflections, in the form of reports, on the technological, societal and ethical impacts of digital technologies in agriculture. Some questions of interest are, among others: In what way might digitalization redefine power relations between citizens, consumers and industries? Where does the responsibility lie when using a decision support tool? How can massive data production be sustained? This group meets monthly and is composed of 13 researchers, each representing a department of the INRAE institute.
# 6 Highlights of the year ## 6.1 Awards Maxime Buron, jointly supervised by François Goasdoué (IRISA/CEDAR), Ioana Manolescu (CEDAR) and Marie-Laure Mugnier (GraphIK), obtained the BDA PhD prize 2020 for his PhD thesis entitled “Efficient Reasoning on Large and Heterogeneous Graphs”. BDA is the conference of the French research community on data management. https://bda.lip6.fr/ # 7 New software and platforms ## 7.1 New software ### 7.1.1 GRAAL • Keywords: Knowledge database, Ontologies, Querying, Data management • Scientific Description: Graal is a Java toolkit dedicated to querying knowledge bases within the framework of existential rules, aka Datalog+/-. • Functional Description: Graal has been designed in a modular way, in order to facilitate software reuse and extension. It should make it easy to test new scenarios and techniques, in particular by combining algorithms. The main features of Graal are currently the following: (1) a data layer that provides generic interfaces to store various kinds of data and query them with (unions of) conjunctive queries, currently: MySQL, PostgreSQL, Sqlite, in-memory graph and linked-list structures, (2) an ontological layer, where an ontology is a set of existential rules, (3) a knowledge base layer, where a knowledge base is composed of a fact base (an abstraction of the data via generic interfaces) and an ontology, (4) algorithms to process ontology-mediated queries, based on query rewriting and/or forward chaining (or chase), (5) a rule analyzer, which performs a syntactic and structural analysis of an existential rule set, (6) several IO formats, including imports from OWL 2. • Release Contributions: The beta version (2020) provides improved chase algorithms. Available for internal use on gite.lirmm.fr. Previous versions: 1.3.1 (2018), 1.3.0 (2017). • News of the Year: 2020: beta version with improved chase algorithms. 2018: version 1.3.1, with small bug fixes and minor improvements. Several new functionalities were developed by interns in 2018 but the code is not integrated into Graal yet. 2017: new stable version (1.3.0) released. Moreover, the Graal website has been deeply restructured and enriched with new tools, available online or for download, and documentation including tutorials, examples of use, and technical documentation about all Graal modules. • URL: • Publications: • Authors: Clément Sipieter, Marie-Laure Mugnier, Jean-François Baget, Mélanie König, Michel Leclère, Swan Rocher, Guillaume Perution Kihli • Contacts: Marie-Laure Mugnier, Federico Ulliana • Participants: Marie-Laure Mugnier, Jean-François Baget, Michel Leclère, Federico Ulliana, Guillaume Perution Kihli, Olivier Rodriguez, Florent Tornil ### 7.1.2 Obi-Wan • Name: Obi-Wan • Keywords: RDF, Databases • Scientific Description: Obi-Wan provides query answering on heterogeneous data sources integrated through mappings into a (possibly virtual) RDFS factbase, provided with an RDFS ontology and RDFS entailment rules. • Functional Description: Integration system for heterogeneous databases with an RDFS ontology. • URL: • Publications: • Contact: Maxime Buron • Participants: Maxime Buron, François Goasdoué, Ioana Manolescu, Marie-Laure Mugnier # 8 New results Participants: Jean-François Baget, Meghyn Bienvenu, Michel Leclère, Marie-Laure Mugnier, Elie Najm, Guillaume Pérution-Kihli, Olivier Rodriguez, Federico Ulliana, Pierre Bourhis, Maxime Buron, François Goasdoué, Ioana Manolescu, Sophie Tison.
## 8.1 Ontology-Mediated Query Answering Ontology-mediated query answering (OMQA) is the issue of querying data while taking into account inferences enabled by ontological knowledge. From an abstract viewpoint, this gives rise to knowledge bases, composed of an ontology and a factbase (in database terms: a database instance under the incomplete data assumption). Answers to queries are logically entailed from the knowledge base. This year, we worked in two directions: • deepening the foundations of OMQA with existential rules, the main KR language developed by the team; • moving from OMQA to a more general framework with explicit management of data sources and mappings from data to knowledge. ### 8.1.1 Fundamental Issues on OMQA with Existential Rules Existential rules (a.k.a. datalog+, as this framework generalizes the deductive database language datalog) have emerged as an expressive ontological language, well-suited to OMQA. The basic techniques for query answering under existential rules rely on the two classical ways of processing rules, namely forward chaining and backward chaining. In forward chaining, known as the chase in databases, the rules are applied to enrich the factbase, and query answering can then be solved by evaluating the query against the saturated factbase (as in a classical database system, i.e., disregarding the ontological knowledge). The backward chaining process is divided into two steps: first, the query is rewritten using the rules into a first-order query (typically a union of conjunctive queries, but it can be a more compact form); then the rewritten query is evaluated against the factbase (again, as in a classical database system). Depending on the considered class of existential rules, the chase and/or query rewriting may terminate or not. In 2018 and 2019, we carried out the first studies on the boundedness problem for existential rules. This problem asks whether a given set of existential rules is bounded, i.e., whether there is a predefined bound on the “depth” of the chase, independently from any factbase. It has been deeply studied in the context of datalog, where it is key to query optimization, but barely considered for existential rules yet. The boundedness problem is already undecidable in the specific case of datalog rules. However, even for decidable subclasses, knowing that a set of rules is bounded does not help much in practice if the bound is unknown. Hence, as part of Stathis Delivorias's PhD thesis (defended in October 2019), we investigated the decidability and complexity of the k-boundedness problem, which asks whether a given set of rules is bounded by an integer $k$; we proved that k-boundedness is decidable for some main chase variants 32. We extended and deepened these results, which gave rise to a long paper version published in Theory and Practice of Logic Programming 11. For datalog rules, boundedness is equivalent to a desirable property, namely first-order rewritability: a set of rules is called first-order rewritable if any conjunctive query can be rewritten into a first-order query whose evaluation on any factbase yields the expected answers (i.e., the relevant part of the ontology can be compiled into the rewritten query, which allows one to reduce query answering to a simple query evaluation task). This equivalence does not hold for existential rules.
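To make the forward chaining (chase) described above concrete, here is a small self-contained Python sketch written for this text (it is not Graal's implementation; the atom encoding and the uppercase-strings-are-variables convention are assumptions of this example). One round finds every homomorphism of a rule body into the factbase and adds the corresponding head atoms, inventing a fresh labelled null for each existential variable; iterating rounds may not terminate, which is precisely what the boundedness analyses above address:

```python
import itertools

FRESH = itertools.count()  # supply of fresh labelled nulls

def body_homomorphisms(body, facts, sub=None):
    """Enumerate substitutions mapping the atoms of `body` into `facts`.
    Atoms are tuples (predicate, t1, ..., tk); by convention (for this toy),
    uppercase strings are variables and everything else is a constant."""
    sub = sub or {}
    if not body:
        yield dict(sub)
        return
    pred, *args = body[0]
    for fact in facts:
        if fact[0] != pred or len(fact) != len(body[0]):
            continue
        local, ok = dict(sub), True
        for term, const in zip(args, fact[1:]):
            if isinstance(term, str) and term.isupper():     # variable
                if local.setdefault(term, const) != const:
                    ok = False
                    break
            elif term != const:                               # constant mismatch
                ok = False
                break
        if ok:
            yield from body_homomorphisms(body[1:], facts, local)

def chase_round(facts, rules):
    """One breadth-first chase round: trigger every rule on every
    homomorphism of its body, creating fresh nulls for existentials."""
    new = set()
    for body, head, exist_vars in rules:
        for sub in body_homomorphisms(body, facts):
            ext = dict(sub)
            for z in exist_vars:
                ext[z] = f"_:n{next(FRESH)}"   # fresh labelled null
            for atom in head:
                new.add((atom[0],) + tuple(ext.get(t, t) for t in atom[1:]))
    return new - facts

# Example rule: human(X) -> ∃Y parent(Y, X)
facts = {("human", "alice")}
rules = [([("human", "X")], [("parent", "Y", "X")], ["Y"])]
print(chase_round(facts, rules))  # {('parent', '_:n0', 'alice')}
```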
Besides its potential practical use, the notion of boundedness is closely related to an interesting theoretical question on existential rules: what are the relationships between chase termination and first-order query rewritability? With respect to this question, we obtained the following salient result for two main chase variants (oblivious and skolem): a set of existential rules is bounded if and only if it ensures both chase termination for any factbase and first-order rewritability for any conjunctive query. This gave rise to a paper at IJCAI 2019. This year, we wrote an extended version with all proof details 29. In collaboration with Pierre Bourhis and Sophie Tison. Still on OMQA, we wrote an invited paper on the relationships between two prominent families of KR languages, namely existential rules and description logics, from the angle of data access. Generally speaking, existential rules and description logics are incomparable in terms of expressivity. However, existential rules generalize so-called Horn description logics, which are precisely those description logic dialects used in OMQA. In this paper, we compare salient Horn description logics and a decidable family of existential rules from semantic and complexity viewpoints 12 (KI - Künstliche Intelligenz). Finally, the collective book “A Guided Tour of Artificial Intelligence Research”, to which we contributed a chapter on “Reasoning with Ontologies”, has appeared 27. In collaboration with Meghyn Bienvenu and Marie-Christine Rousset. ### 8.1.2 Ontology-Based Data Access with RDFS As part of Maxime Buron's PhD thesis (defended in October 2020 31), co-supervised with the Inria CEDAR team (Ioana Manolescu and François Goasdoué) within the iCODA IPL (Inria Project Lab), we considered the so-called Ontology-Based Data Access (OBDA) framework, which is composed of three components: the data level, made of several independent data sources; the ontological level, made of a knowledge base; and mappings that relate queries on the data sources to facts described in the vocabulary of the ontology. Roughly, the OMQA problem mentioned previously (Section 8.1.1) can be seen as a special case of query answering in the OBDA setting, where all mappings have been triggered to produce a set of facts, which allows one to do query answering on the knowledge base and ignore the data sources that gave rise to the facts. Our work is in the context of the Semantic Web, where knowledge is described in the RDFS language. Specifically, our framework features heterogeneous data sources integrated through mappings into a (possibly virtual) RDFS factbase, provided with an RDFS ontology and RDFS entailment rules. The innovative aspects with respect to the state of the art are (i) SPARQL queries that extend classical conjunctive queries with the ability to query data and ontological triples together, namely Basic Graph Pattern Queries, and (ii) Global-Local-As-View (GLAV) mappings, which can be seen as source-to-target existential rules. GLAV mappings make it possible to create unknown entities (blank nodes), which increases the amount of information accessible through the integration system, e.g., to state the existence of some data whose values are not known in the sources. We devised and experimentally compared several query answering techniques in this setting. These techniques can be seen as different ways of distributing the reasoning effort between preprocessing and query time 16 (EDBT 2020).
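For illustration (with a hypothetical schema, not one from the project), a GLAV mapping can be written as a source-to-target existential rule: the body is a query over a relational source and the head describes RDF triples over the ontology's vocabulary, with the existential variable becoming a blank node in the integrated factbase:

$$\forall x\,\forall n\;\big(\mathrm{Authors}(x,n)\ \rightarrow\ \exists y\;\mathrm{triple}(x,\texttt{:wrote},y)\,\wedge\,\mathrm{triple}(y,\texttt{rdf:type},\texttt{:Document})\big)$$

Such a mapping asserts that each author listed in the source wrote some document, even though no identifier for that document exists in the sources.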
Moreover, the performance of query answering in an RDF database strongly depends on the data layout, that is, the way data is split into persistent data structures. We proposed a new layout (TCP), which combines two well-known layouts (T and CP). In exchange for occupying more storage space, e.g. on an inexpensive disk, TCP avoids the bad or even catastrophic performance that T and/or CP sometimes exhibit for queries. We also introduced summary-based pruning, a novel technique based on existing RDF quotient summaries, which improves query answering performance on the T, CP and the more robust TCP layouts 14 (SSWS 2020). The whole framework and associated algorithms have been implemented in a prototype called Obi-Wan, developed on top of CEDAR and GraphIK software (Tatooine, OntoSQL and Graal), which was demonstrated at VLDB 2020 15. ## 8.2 Reasoning with Conflicts and Decision Support Participants: Pierre Bisquert, Patrice Buche, Madalina Croitoru, Jérôme Fortin, Martin Jedwabny, Rallou Thomopoulos. In real-world applications, data is likely to generate inconsistencies in the presence of ontological knowledge, especially when it comes from several independent sources. In particular, data coming from different stakeholders, such as preferences and opinions, is generally conflicting. In order to use this data, for instance in a decision support setting, it is thus necessary to be able to reason in the presence of inconsistencies. In such a context, classical reasoning fails because any statement can be derived from a contradiction. Argumentation is one approach to this problem, where inference steps are represented as possibly conflicting arguments. A set of arguments is naturally associated with a graph in which arguments are nodes and conflicts are edges. One interest of the argumentation framework is that it makes it possible to define a variety of semantics for reasoning in the presence of inconsistencies, some of which have been shown to be semantically equivalent to repair-based approaches. Second, this framework naturally benefits from the explanatory potential of graphs, which is particularly interesting to help users better understand the results of the reasoning. This year, we investigated the following questions: • How to be expressive enough while controlling the computational cost of reasoning? • How to represent stakeholders' conflicting opinions and preferences to practically support a decision? ### 8.2.1 Argumentation Argumentation is an appealing reasoning tool in the presence of inconsistencies; however, a main concern lies in its real-world applicability. We basically face two challenges: 1. taming the combinatorial explosion of arguments associated with a knowledge base, 2. meeting the expressivity needs of applications. #### Combinatorial Aspects Regarding the combinatorial aspects, it is known that the number of arguments generated by argumentation-based methods can be prohibitively large, since these methods require the construction of one argument per inference step (i.e., per rule application). We started to investigate alternative methods, still based on argumentation, that avoid the combinatorial explosion at the graph construction phase. To that end, we focused on two main techniques: the use of argumentation hypergraphs and the deployment of backward chaining. Argumentation hypergraphs extend argumentation graphs by considering hyperedges (as opposed to the classically considered binary edges). They can encode in a much more compact form the inconsistencies arising from n-ary conflicts.
Please note that this is especially important for the work of GraphIK, as our main foundational rule language, existential rules, can easily encode n-ary conflicts (called n-ary constraints), as opposed to other ontological languages such as Description Logics (e.g., DL-Lite), which can only directly capture binary constraints. In 25 (COMMA 2020), we provided an argumentation framework that considers sets of attacking arguments (n-ary attacks) and possesses arguments that are recursively built upon other arguments and n-ary attacks. We proved that this new framework retains desirable properties with fewer arguments and attacks compared to existing frameworks. Based on this foundational work, we developed in 24 (AAAI 2020) the first ranking-based semantics applicable to n-ary attack relations. We generalised existing postulates for ranking-based semantics to fit this framework, proved that it converges for every argumentation framework, and studied the postulates it satisfies. In 23 (COMMA 2020), we addressed the problem of efficient generation of structured argumentation systems and considered a simplified variant of an ASPIC argumentation system. We provided a backward chaining mechanism for the generation of argumentation graphs and empirically compared the efficiency of this new approach with existing approaches (which are based on forward chaining). Finally, we studied the practical issue of computing repairs for inconsistent existential rule KBs, which is needed for certain tasks that require an enumeration of the repairs (such as inconsistency-based repair ranking frameworks or argumentation-based decision making). Indeed, the problem of computing all repairs is very costly in practice. In 22 (ICCS 2020), we proposed and evaluated an incremental approach providing an efficient computation of all repairs when the conflicts have a cardinality of at most three. #### Expressivity Regarding the expressivity needs of argumentation for real-world scenarios, bipolar argumentation graphs extend argumentation graphs by considering an additional binary relation, the support relation (translating the fact that an argument supports another). Hence, a bipolar argumentation graph has bi-colored edges: attack and support. The notion of support is largely debated in the literature, and in our work we considered two main semantics: support in defeasible logics and logical necessities. Defeasible Logics are a family of approaches to handle conflicts in situations where two types of rules are considered: strict rules expressing undeniable implications (if A is true then B is definitely true), and defeasible rules expressing weaker implications (if A is true then B is generally true). The use of Defeasible Logics allows for more expressivity, since contradictions may stem either from relying on incorrect facts or from having exceptions to the defeasible implications. Our work on defeasible logics was initiated during the PhD thesis of Abdelraouf Hecham, who graduated in 2018. The underlying bipolar structure of defeasible logics and their expressivity have continued to be investigated throughout his postdoc and beyond, as part of the PhD work of Martin Jedwabny: 19, 20, 18 (AAAI 2020, ECAI 2020, ICCS 2020).
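As background for the semantics discussed in this section, here is a minimal Python sketch (an illustration written for this text, not the team's software) of Dung's grounded semantics for classical binary-attack frameworks; the n-ary attack and support relations studied above generalize this basic setting and are not covered by the sketch:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    arguments: a set of arguments; attacks: a set of (attacker, target) pairs.
    Iteratively accept every argument all of whose attackers are defeated
    (i.e., attacked by an already accepted argument), until a fixpoint.
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:       # unattacked, or all attackers out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# a attacks b, b attacks c: a is accepted, so b is defeated, so c is accepted
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```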
Argumentation frameworks with necessities replace the classical deductive support relation between arguments (if argument A supports argument B, accepting A implies accepting B) with logical necessity support (if A supports B, accepting B implies accepting A), which allows one to express requirements between arguments. The role of necessities as a support relation in ranking semantics has been investigated in 17 (IJCAI 2020). To this end, we (1) introduced a set of postulates specifically designed for necessities and (2) proposed the first ranking-based semantics in the literature shown to respect these postulates and converge for certain argumentation graphs. ### 8.2.2 Decision Support This part of our work is concerned with the practical application of knowledge representation and argumentation to decision support. In particular, we applied our work in the context of Life Cycle Assessment (LCA). LCA is a family of multi-criteria analyses specific to the environmental impact of a product in its different stages (such as manufacturing or transporting), where the criteria relate to different dimensions of the environment (global warming, water ecotoxicity, etc.). LCA is however susceptible to collective disagreement or practitioner bias on the weighting of the different criteria. In 13 (Sustainability journal), we proposed a methodology using our software DAMN to represent arguments justifying preferences on impact criteria. Those preferences are then aggregated in order to produce a weighting profile that is used in the LCA analysis. We applied this approach in the context of the European project NoAW, in which polyphenol extraction technologies were compared. Complementary work on decision support was carried out by our associate collaborators at INRAE, see 21, 28, 26. # 9 Partnerships and cooperations ## 9.1 International research visitors Due to the health crisis, all visits were cancelled. In particular, Marie-Laure Mugnier had been invited for 3 months at Stanford University as part of a 6-month sabbatical. ## 9.2 European initiatives ### 9.2.1 FP7 & H2020 Projects #### NoAW (H2020, Oct. 2016-Sept. 2020) Participants: Patrice Buche, Pierre Bisquert, Madalina Croitoru, Rallou Thomopoulos. NoAW (No Agricultural Waste) is led by INRAE (IATE laboratory). Driven by a “near zero-waste” society requirement, the goal of the NoAW project is to generate innovative, efficient approaches to convert growing agricultural waste issues into eco-efficient bio-based product opportunities, with direct benefits for the environment, the economy and EU consumers. To achieve this goal, the NoAW concept relies on developing holistic life cycle thinking able to support environmentally responsible R&D innovations on agro-waste conversion at different TRLs, in the light of regional and seasonal specificities, not forgetting risks emerging from circular management of agro-wastes (e.g. contaminant accumulation). GraphIK contributes on two aspects. On the one hand, we participate in the annotation effort of knowledge bases (using the @Web tool). On the other hand, we further investigate the interplay of argumentation with logically instantiated frameworks and its relation with social choice in the context of decision making. #### GLOPACK (H2020, June 2018-July 2022) Participants: Patrice Buche, Pierre Bisquert, Madalina Croitoru. GLOPACK is led by the University of Montpellier (IATE laboratory).
It proposes a cutting-edge strategy addressing the technical and societal barriers to spreading, in our social system, innovative eco-efficient packaging able to reduce the environmental footprint of food. Focusing on accelerating the transition to a circular economy, GLOPACK aims to support users' and consumers' access to innovative packaging solutions enabling the reduction and circular management of agro-food wastes, including packaging. Validation of the solutions, including compliance with legal requirements, economic feasibility and environmental impact, will push the technologies tested and the related decision-making tool forward to TRL 7 for a rapid and easy market uptake, thereby contributing to strengthening European companies' competitiveness in an ever more globalised and connected world. ### 9.2.2 Collaborations in European programs, except FP7 and H2020 #### FoodMC (European COST action, 2016-2020) Participants: Patrice Buche, Madalina Croitoru, Rallou Thomopoulos. COST actions aim to develop European cooperation in science and technology. FoodMC (CA 15118) is a COST action on Mathematical and Computer Science Methods for Food Science and Industry. Rallou Thomopoulos is co-leader of this action for France and a member of the action's Management Committee, and other members of GraphIK (Patrice Buche, Madalina Croitoru) are participants. The action is organised in four working groups, dealing respectively with the modelling of food products and food processes, modelling for the eco-design of food processes, software tools for the food industry, and dissemination and knowledge transfer. ## 9.3 National initiatives #### CQFD (ANR PRC, Jan. 2019-Dec. 2024) Participants: Jean-François Baget, Michel Leclère, Marie-Laure Mugnier, Guillaume Pérution-Kihli, Olivier Rodriguez, Florent Tornil, Federico Ulliana. CQFD (Complex ontological Queries over Federated heterogeneous Data), coordinated by Federico Ulliana (GraphIK), involves participants from Inria Saclay (CEDAR team), Inria Paris (VALDA team), Inria Nord Europe (SPIRALS team), IRISA, LIG, LTCI, and LaBRI. The aim of this project is to tackle two crucial challenges in OMQA (Ontology-Mediated Query Answering), namely heterogeneity, that is, the possibility of dealing with multiple types of data sources and database management systems, and federation, that is, the possibility of cross-querying a collection of heterogeneous data sources. By featuring 8 different partners in France, this project aims at consolidating a national community of researchers around the OMQA issue. #### ICODA (Inria Project Lab, 2017-2021) Participants: Jean-François Baget, Michel Chein, Alain Gutierrez, Marie-Laure Mugnier. The iCODA project (Knowledge-mediated Content and Data Interactive Analytics—The case of data journalism), coordinated by Guillaume Gravier and Laurent Amsaleg (LINKMEDIA), brings together four Inria teams: LINKMEDIA, CEDAR, ILDA and GraphIK, as well as three press partners: Ouest France, Le Monde (les décodeurs) and AFP. Taking data journalism as an emblematic use case, the goal of the project is to develop the scientific and technological foundations for knowledge-mediated, user-in-the-loop big data analytics jointly exploiting data and content, and to demonstrate the effectiveness of the approach in realistic, high-visibility use cases. #### Docamex (CASDAR project, 2017-2020) Participants: Patrice Buche, Madalina Croitoru, Jérôme Fortin.
DOCaMEx (Développement de prOgiciels de Capitalisation et de Mobilisation du savoir-faire et de l'Expérience fromagers en filière valorisant leur terroir), led by CFTC (centre technique des fromages de Franche-Comté), involves 7 research units (including IATE and LIRMM), 8 technical centers and 3 dairy product schools. It covers five cheese-making chains (Comté, Reblochon, Emmental de Savoie, Salers, Cantal). Traditional cheese making requires a lot of knowledge, expertise and experience, which are usually acquired over a long time. This know-how is today mainly transmitted by apprenticeship, and the evolution of practices in the sector raises a concrete risk of this knowledge being forgotten. The main goal of the project is to develop a new approach for expert knowledge elicitation and capitalization, and dedicated software for decision making. The novelty of the decision-making tool lies in the representational power and reasoning efficiency of the logic used to describe the domain knowledge. ## 9.4 Regional initiatives #### Convergence Institute #DigitAg (2017-2023) Participants: Jean-François Baget, Patrice Buche, Madalina Croitoru, Marie-Laure Mugnier, Elie Najm, Rallou Thomopoulos, Federico Ulliana. Located in Montpellier, #DigitAg (for Digital Agriculture) gathers 17 founding members: research institutes, including Inria, the University of Montpellier and higher-education institutes in agronomy, transfer structures and companies. Its objective is to support the development of digital agriculture. GraphIK is involved in this project on the issues of designing data and knowledge management systems adapted to agricultural information systems, and of developing methods for integrating different types of information and knowledge (generated from data, experts, models). A PhD thesis started in 2019 (Elie Najm) is investigating knowledge representation and reasoning for agro-ecological systems, in collaboration with the research laboratory UMR SYSTEM (Tropical and mediterranean cropping system functioning and management). ## 9.5 Informal Partners We continue to work informally with the following partners: • Pierre Bourhis (SPIRALS Inria team) and Sophie Tison (LINKS Inria team) on Ontology-Mediated Query Answering 29. • Michael Thomazo (VALDA Inria team) on Ontology-Mediated Query Answering. • Maxime Buron (CEDAR Inria team), François Goasdoué (IRISA/CEDAR) and Ioana Manolescu (CEDAR) on Ontology-Based Data Access 15, 16, 14, 31. • Srdjan Vesic (CRIL) and Bruno Yun (University of Aberdeen) on Argumentation Systems 24, 22, 17, 25, 23. # 10 Dissemination ## 10.1 Promoting scientific activities ### 10.1.1 Scientific events: organisation #### Member of the conference program committees We regularly participate in the program committees of the top conferences in AI (IJCAI and ECAI for 2020) as senior PC members or PC members. We also regularly participate in the program committees of more focused international conferences and workshops, as well as national events. ### 10.1.3 Leadership within the scientific community • Madalina Croitoru was a member of the steering committee for ICCS 2020 (26th International Conference on Conceptual Structures), https://iccs-conference.org/ • Rallou Thomopoulos has been co-leader of the trans-unit program InCom (Knowledge and Model Integration) of the TRANSFORM Division of INRAE since 2016. • Since September 2019, Madalina Croitoru has been a deputy member of CNU section 27 (Computer Science).
• Rallou Thomopoulos is an elected member of the Scientific Committee of the INRAE-CEPIA research division (2016-2020). ## 10.2 Teaching - Supervision - Juries ### 10.2.1 Teaching The five faculty members do an average of 200 teaching hours per year at the Computer Science department of the Science Faculty. They are in charge of courses in Logic (Licence), Databases (Master), Artificial Intelligence (M), Knowledge Representation and Reasoning (M), Theory of Data and Knowledge Bases (M), Social and Semantic Web (M) and Multi-Agent Systems (M). Concerning full-time researchers in 2020, Jean-François Baget gave about 40 hours of master-level teaching. Moreover, some faculty members have specific teaching responsibilities: • Madalina Croitoru has been in charge of international relations for the Computer Science department of the Science Faculty, as well as of the management of industrial master internships (about 100 students each year) of the Master of Computer Science, since 2019. • Federico Ulliana has been the head of the curriculum “Data, Knowledge and Natural Language Processing” (DECOL, about 30 students), part of the Master of Computer Science, since 2017. ### 10.2.2 Involvement in University Structures • Marie-Laure Mugnier has been a member of the Council of the Scientific Department MIPS (Mathematics, Informatics, Physics and Systems) of the University of Montpellier since 2016. ### 10.2.3 Supervision • PhD defended: Maxime Buron (CEDAR Inria team), “Efficient reasoning on large heterogeneous graphs”. Supervisors: François Goasdoué (IRISA/CEDAR), Ioana Manolescu (CEDAR) and Marie-Laure Mugnier. Institut Polytechnique de Paris, October 2020 31. • PhD in progress: Olivier Rodriguez, “Querying key-value stores under semantic constraints”. Supervisors: Federico Ulliana and Marie-Laure Mugnier. Started February 2019. • PhD in progress: Elie Najm, “Knowledge Representation and Reasoning for innovating agroecological systems”. Supervisors: Marie-Laure Mugnier, Christian Gary (INRAE, UMR ABSys), Jean-François Baget and Raphaël Metral (Supagro, UMR ABSys). Started October 2019. • PhD in progress: Martin Jedwabny, “Argumentation and ethical decision making”. Supervisors: Madalina Croitoru and Pierre Bisquert. Started October 2019. • PhD in progress: Guillaume Pérution-Kihli, “Des données aux connaissances : un cadre unifié pour l'intégration sémantique de données hétérogènes et l'amélioration de leur qualité” (From data to knowledge: a unified framework for the semantic integration of heterogeneous data and the improvement of their quality). Supervisors: Michel Leclère and Marie-Laure Mugnier. Started September 2020. ### 10.2.4 Juries • Jury reviewer for the PhD defense of Mélanie Munch (November 2020, U. Paris Saclay) - Madalina Croitoru • Jury member for the PhD defense of Jose Luis Lozano (February 2020, U. Lille) - Federico Ulliana • Jury reviewer for the PhD defense of Adrian Robert (November 2020, U. Angers) - Patrice Buche Madalina Croitoru was vice-president of a recruitment jury for an assistant professor (MCF) position at the University of Montpellier. # 11 Scientific production ## 11.1 Major publications
• 1 Jean-François Baget, Meghyn Bienvenu, Marie-Laure Mugnier and Michaël Thomazo. 'Answering Conjunctive Regular Path Queries over Guarded Existential Rules'. IJCAI: International Joint Conference on Artificial Intelligence, Melbourne, Australia, August 2017.
• 2 Jean-François Baget, Michel Leclère, Marie-Laure Mugnier and Eric Salvat. 'On Rules with Existential Variables: Walking the Decidability Line'. Artificial Intelligence 175(9-10), March 2011, 1620-1654.
• 3 Meghyn Bienvenu, Pierre Bourhis, Marie-Laure Mugnier, Sophie Tison and Federico Ulliana. 'Ontology-Mediated Query Answering for Key-Value Stores'. IJCAI: International Joint Conference on Artificial Intelligence, Melbourne, Australia, August 2017.
• 4 Meghyn Bienvenu, Stanislav Kikot, Roman Kontchakov, Vladimir V. Podolskii and Michael Zakharyaschev. 'Ontology-Mediated Queries: Combined Complexity and Succinctness of Rewritings via Circuit Complexity'. Journal of the ACM 65(5), September 2018, 1-51.
• 5 Pierre Bourhis, Michel Leclère, Marie-Laure Mugnier, Sophie Tison, Federico Ulliana and Lily Gallois. 'Oblivious and Semi-Oblivious Boundedness for Existential Rules'. IJCAI 2019 - International Joint Conference on Artificial Intelligence, Macao, China, August 2019.
• 6 Maxime Buron, François Goasdoué, Ioana Manolescu and Marie-Laure Mugnier. 'Ontology-Based RDF Integration of Heterogeneous Data'. EDBT/ICDT 2020 - 23rd International Conference on Extending Database Technology, Copenhagen, Denmark, March 2020.
• 7 Abdelraouf Hecham, Pierre Bisquert and Madalina Croitoru. 'On a Flexible Representation for Defeasible Reasoning Variants'. AAMAS: Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, July 2018, 1123-1131.
• 8 Mélanie König, Michel Leclère, Marie-Laure Mugnier and Michaël Thomazo. 'Sound, Complete and Minimal UCQ-Rewriting for Existential Rules'. Semantic Web Journal 6(5), 2015, 451-475.
• 9 Bruno Yun, Pierre Bisquert, Patrice Buche, Madalina Croitoru, Valérie Guillard and Rallou Thomopoulos. 'Choice of environment-friendly food packagings through argumentation systems and preferences'. Ecological Informatics 48, November 2018, 24-36.
• 10 Bruno Yun, Srdjan Vesic, Madalina Croitoru and Pierre Bisquert. 'Inconsistency Measures for Repair Semantics in OBDA'. IJCAI: International Joint Conference on Artificial Intelligence, Stockholm, Sweden, July 2018, 1977-1983.
## 11.2 Publications of the year ### International journals
• 11 Stathis Delivorias, Michel Leclère, Marie-Laure Mugnier and Federico Ulliana. 'Characterizing Boundedness in Chase Variants'. Theory and Practice of Logic Programming 21(1), August 2020, 51-79.
• 12 Marie-Laure Mugnier. 'Data Access With Horn Ontologies: Where Description Logics Meet Existential Rules'. KI - Künstliche Intelligenz 34(4), 2020, 475-489.
• 13 Joshua Sohn, Pierre Bisquert, Patrice Buche, Abdelraouf Hecham, Pradip P. Kalbar, Ben Goldstein, Morten Birkved and Stig Irving Olsen. 'Argumentation Corrected Context Weighting-Life Cycle Assessment: A Practical Method of Including Stakeholder Perspectives in Multi-Criteria Decision Support for LCA'. Sustainability 12(6), March 2020, 2170.
### International peer-reviewed conferences
• 14 Maxime Buron, François Goasdoué, Ioana Manolescu, Tayeb Merabti and Marie-Laure Mugnier. 'Revisiting RDF storage layouts for efficient query answering'. SSWS 2020 - 13th International Workshop on Scalable Semantic Web Knowledge Base Systems, Athens, Greece, August 2020.
• 15 Maxime Buron, François Goasdoué, Ioana Manolescu and Marie-Laure Mugnier. 'Obi-Wan: Ontology-Based RDF Integration of Heterogeneous Data'. VLDB 2020 - 46th International Conference on Very Large Data Bases, Tokyo, Japan, August 2020.
• 16 Maxime Buron, François Goasdoué, Ioana Manolescu and Marie-Laure Mugnier. 'Ontology-Based RDF Integration of Heterogeneous Data'. EDBT/ICDT 2020 - 23rd International Conference on Extending Database Technology, Copenhagen, Denmark, March 2020.
• 17 Dragan Doder, Srdjan Vesic and Madalina Croitoru. 'Ranking Semantics for Argumentation Systems With Necessities'. 29th International Joint Conference on Artificial Intelligence (IJCAI), Yokohama, Japan, January 2021, 1912-1918.
• 18 Abdelraouf Hecham, Pierre Bisquert and Madalina Croitoru. 'A formalism unifying Defeasible Logics and Repair Semantics for existential rules'. ICCS 2020 - 25th International Conference on Conceptual Structures, Lecture Notes in Computer Science 12277, Bolzano / Virtual, Italy, September 2020, 3-17.
• 19 Abdelraouf Hecham, Madalina Croitoru and Pierre Bisquert. 'DAMN: Defeasible Reasoning Tool for Multi-Agent Reasoning'. AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, New York, United States, 2020.
• 20 Martin Jedwabny, Madalina Croitoru and Pierre Bisquert. 'Gradual Semantics for Logic-Based Bipolar Graphs Using T-(Co)norms'. ECAI 2020 - 24th European Conference on Artificial Intelligence, Frontiers in Artificial Intelligence and Applications 325, Santiago de Compostela (virtual), Spain, 2020, 777-783.
• 21 Rallou Thomopoulos, Julien Cufi and Maxime Le Breton. 'A Generic Software to Support Collective Decision in Food Chains and in Multi-Stakeholder Situations'. FoodSim 2020 - 11th Biennial FOODSIM Conference, Ghent, Belgium, September 2020.
• 22 Bruno Yun and Madalina Croitoru. 'An Incremental Algorithm for Computing All Repairs in Inconsistent Knowledge Bases'. ICCS 2020 - 25th International Conference on Conceptual Structures, Bolzano / Virtual, Italy, 2020.
• 23 Bruno Yun, Nir Oren and Madalina Croitoru. 'Efficient Construction of Structured Argumentation Systems'. COMMA 2020 - 8th International Conference on Computational Models of Argument, Frontiers in Artificial Intelligence and Applications 326, Perugia, Italy, 2020, 411-418.
• 24 Bruno Yun, Srdjan Vesic and Madalina Croitoru. 'Ranking-Based Semantics for Sets of Attacking Arguments'. AAAI 2020 - 34th AAAI Conference on Artificial Intelligence, New York, United States, April 2020, 3033-3040.
• 25 Bruno Yun, Srdjan Vesic and Madalina Croitoru. 'Sets of Attacking Arguments for Inconsistent Datalog Knowledge Bases'. COMMA 2020 - 8th International Conference on Computational Models of Argument, Frontiers in Artificial Intelligence and Applications 326, Perugia / Virtual, Italy, September 2020, 419-430.
### Conferences without proceedings
• 26 Patrice Buche, Julien Cufi, Stéphane Dervaux, Juliette Dibie, Liliana L. Ibanescu, Alrick Oudot and Magalie Weber. 'A new alignment method based on FoodOn as pivot ontology to integrate nutritional legacy data sources'. ICBO 2020 - IFOW Integrated Food Ontology Workshop, Bolzano / Virtual, Italy, September 2020, 1-2.
### Scientific book chapters
• 27 Meghyn Bienvenu, Michel Leclère, Marie-Laure Mugnier and Marie-Christine Rousset. 'Reasoning with Ontologies'. A Guided Tour of Artificial Intelligence Research, Volume I: Knowledge Representation, Reasoning and Learning, May 2020, 185-215.
• 28 Rallou Thomopoulos, Nicolas Salliou, Patrick Taillandier and Alberto Tonda. 'Consumers' Motivations towards Environment-Friendly Dietary Changes: An Assessment of Trends Related to the Consumption of Animal Products'. Handbook of Climate Change Across the Food Supply Chain, 2020.
### Reports & preprints
• 29 Pierre Bourhis, Michel Leclère, Marie-Laure Mugnier, Sophie Tison, Federico Ulliana and Lily Gallois. 'Oblivious and Semi-Oblivious Boundedness for Existential Rules'. LIRMM (UM, CNRS), June 2020.
## 11.3 Other ### Patents
• 30 R. Thomopoulos, J. Cufi, M. Le Breton and B. Thomas. 'MyChoice software'. 2020.
## 11.4 Cited publications
• 31 Maxime Buron. 'Efficient reasoning on large and heterogeneous graphs. (Raisonnement efficace sur des grands graphes hétérogènes)'. PhD thesis, École Polytechnique, Palaiseau, France, 2020.
• 32 Stathis Delivorias, Michel Leclère, Marie-Laure Mugnier and Federico Ulliana. 'On the k-Boundedness for Existential Rules'. Rules and Reasoning - Second International Joint Conference, RuleML+RR 2018, Luxembourg, September 18-21, 2018, Proceedings, Lecture Notes in Computer Science 11092, Springer, 2018, 48-64.
2022-05-24 04:00:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3624110817909241, "perplexity": 8186.365660976217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00552.warc.gz"}
https://physicsoverflow.org/6325/have-fundamental-hamiltonian-the-system-h%24_2%24o-molecules
# Do we have a fundamental Hamiltonian for the system of H$_2$O molecules?

From the quantum mechanics (QM) viewpoint, does there exist a Hamiltonian $H$ for the system of H$_2$O molecules? Assume that the number of H$_2$O molecules is fixed. Imagine that by calculating the free energy $F(T)$ from the Hamiltonian $H$, one can reproduce the three common phases ice, water, and steam at different temperatures $T$ (here, do we need order parameters to characterize these phases?). On the other hand, the Hamiltonian $H$ may depend on some parameters, i.e., $H=H(\lambda)$, and at $T=273\,\mathrm{K}$, varying the parameter $\lambda$ causes the phase transition between ice and water without the temperature changing. So can we write a microscopic Hamiltonian $H$ of this kind in terms of the QM language?

Answer (Ján Lalinský, Dec 27, 2013):

> From the quantum mechanics (QM) viewpoint, does there exist a Hamiltonian H for the system of H2O molecules? Assume that the number of H2O molecules is fixed.

Yes, the multi-particle Hamiltonian with the Coulomb potential energy. But calculating the free energy $F(T,V,N)$ from such a Hamiltonian, although straightforward in principle, would be mathematically *very* difficult. How much gaseous, liquid, and solid phase is present at the triple point is not determined by any parameter $\lambda$ in the microscopic Hamiltonian; it is rather part of the macroscopic description, similarly to the volume $V$ and the molar number $N$. I do not know of an approach that would describe water by a free energy function(al) that uses an order parameter (similar to the Landau theory of magnetism). The Landau theory was meant for second-order phase transitions; it does not seem to make sense for the first-order transitions that water may undergo at the triple point.
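For concreteness, the "multi-particle Hamiltonian with the Coulomb potential energy" mentioned in the answer can be written out explicitly. This is the standard molecular Coulomb Hamiltonian (my addition for illustration; spin-dependent and relativistic corrections omitted), with electrons indexed by $i, j$ (mass $m_e$) and nuclei by $A, B$ (masses $M_A$, charges $Z_A e$):

$$
H = -\sum_{i} \frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
-\sum_{A} \frac{\hbar^{2}}{2M_{A}}\nabla_{A}^{2}
+\sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}\,r_{ij}}
-\sum_{i,A}\frac{Z_{A}e^{2}}{4\pi\varepsilon_{0}\,r_{iA}}
+\sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{4\pi\varepsilon_{0}\,R_{AB}} .
$$

For a fixed number $N$ of water molecules, the sums simply run over the $10N$ electrons and $3N$ nuclei.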
https://mathoverflow.net/tags/polytopes/hot
# Tag Info

## Hot answers tagged polytopes

- **22** Philosophical questions deserve philosophical answers, so I am afraid no amount of references and specific results will probably satisfy you. Let me try to explain it in a somewhat generic way. Think about it this way - why care about sequences like $\{n!\}$, Fibonacci or Catalan numbers? The honest answer is "because they come up all the time". Now, …

- **21** Not all simple polytopes are inscribable, e.g. the dual of the cyclic polytope $C_4(8)$ is simple and not inscribable, as shown recently in Combinatorial Inscribability Obstructions for Higher-Dimensional Polytopes by Doolittle, Labbé, Lange, Sinn, Spreer and Ziegler. In dimension $3$, there is a combinatorial criterion by Rivin describing inscribability …

- **17** The origins of associahedra go back to the thesis work in homotopy theory of Jim Stasheff in the early 1960's. He did graduate work at Oxford, working with Ioan James, who in the mid 1950's had proved lots of beautiful theorems about the free associative topological monoid $JX$ generated by a based topological space $X$; in particular, showing that, when …

- **13** Pick's theorem says these two convex lattice polygons have area $$i+\frac{b}{2}-1 = 4 + 10/2 - 1 = 8\;,$$ and they both have perimeter $8 + 2\sqrt{2}$. You can see I've "bumped out" two corners of an underlying octagon. (I am interpreting the OP's phrase "the same area and perimeter" as "the same area and the same perimeter" as …

- **12** In my opinion there are two answers to this question. The first is that these particular classes of polytopes have fascinating combinatorial properties and structure. Presumably you're aware of the work of Postnikov and others in this direction. In my view, and the view of many others, these properties make the polytopes worth studying in their own right. …

- **10** There are remarkable combinatorial formulas for the face numbers and the volumes (of certain geometric realizations) of these polytopes and of a more general family ("generalized permutohedra" a.k.a. "polymatroids") to which they belong. These numbers include classical sequences like the Eulerian numbers, Catalan numbers, $(n+1)^{n-1}$, etc. This is a major …

- **9** I'm not an expert, but perhaps this could help: Hohlweg, Christophe. "Permutahedra and associahedra: Generalized associahedra from the geometry of finite reflection groups." arXiv:1112.3255 (2011): "Permutahedra are a class of convex polytopes arising naturally from the study of finite reflection groups, while generalized associahedra are a class of …

- **9** The answer is No, there are no other such polytopes. The proof is quite laborious in parts, and I wrote it down in this article. Theorem. In dimension $d\ge 4$, an edge-transitive polytope is vertex-transitive. The idea is as follows: first, show that every edge-transitive polytope $P$ that is not vertex-transitive has the following three properties: all …

- **8** There are other polytopes. To construct one let's do the following. Remember first that in hyperbolic $4$-space there exists a regular compact right-angled 120-cell. Here, right-angled means that any two adjacent faces intersect under the angle $\frac{\pi}{2}$. Regular means that all the faces are isometric, and the polytope has the same group of self-…

- **8** This is true in all dimensions, and can be proved by induction (on $d$) applied to the following (slightly stronger) hypothesis: Theorem: If $P$ is a convex $d$-polytope with $k$-in-spheres for all $k \in [0, d-1]$, then: $P$ is regular. $P$ is determined (up to an element of the orthogonal group $O(d)$) by the $d$-tuple $(r_0, r_1, \dots, r_{d-1})$ of $k$-…

- **6** It is not clear what you mean by tightest bound --- in which sense tightest? Also you did not say from above or from below. Anyway, let me say something, hope it will help. Let $$Q = \{x \in \mathbb{R}^n : Ax \leq \mathbb{1}\},$$ where $\mathbb{1}=(1,\dots,1)$. Note that $$P'=P+\varepsilon\cdot Q.$$ A. You can get lower bounds from Brunn–Minkowski …

- **6** If you're asking about the combinatorics of the polytope, then there is an easy answer. If you sample with a continuous distribution, then you will get a combinatorial permutohedron with probability 1. The only way to get something else is to take a point on one or more of the hyperplanes $x_i=x_j$. Then your orbit (i.e. the set of vertices of the polytope) …

- **6** You are looking for the edgewise subdivision: Edelsbrunner, H.; Grayson, D. R., Edgewise subdivision of a simplex, Discrete Comput. Geom. 24, No. 4, 707-719 (2000). ZBL0968.51016. The basic idea is to slice your simplex $k$ times along each coordinate hyperplane. Then you get some small uniform shapes, which are not simplices except at the corners (or in …

- **5** Using the unimodular scaling of the Leech lattice, the length of each minimal vector is $\sqrt{4}$. Fixing a particular minimal vector $u$, the remaining minimal vectors $v$ are: 1 vector $v$ with $\langle u, v \rangle = 4$ (namely $v = u$); 4600 vectors $v$ with $\langle u, v \rangle = 2$; 47104 vectors $v$ with $\langle u, v \rangle = 1$; 93150 vectors $v$ …

- **5** Edited to add: I now think the answer below is completely wrong. The three-dimensional cyclohedron has 12 facets, while the three-dimensional polytope the OP is looking for should have 14. This is the number of facets of the permutohedron, and none of them collapse. (In fact, if I have correctly understood things, the facets of the polytope the OP wants …

- **5** Newton polytopes and the polynomials they support: We will use the standard notation $\mathbf{x}^{\mathbf{a}} := \prod_{i=1}^{n} x_i^{a_i}$ to represent monomials in a multivariate (Laurent) polynomial ring. The Newton polytope of a polynomial is the convex hull of its exponent vectors. We say that a polytope $P$ supports a polynomial $p$ if $P$ is the Newton …

- **5** It is possible to have an abstract polytope where the automorphism group acts transitively on the faces of each rank (fully transitive) but does not act transitively on flags (not regular). For any flag $\Phi$ and any $j$ let $\Phi_j$ denote the face of rank $j$ in $\Phi$ and let $\Phi^j$ denote the flag adjacent to $\Phi$ which differs only in the rank $j$ …

- **5** This recent paper, Sartipizadeh, Hossein, and Tyrone L. Vincent, "Computing the Approximate Convex Hull in High Dimensions," arXiv:1603.04422 (2016), includes a summary of previous work on approximate convex hulls. The time complexity of their algorithm is independent of the dimension, and quadratic in the number of points. "The proposed algorithm uses a …

- **5** This is Theorem 1 (actually, Satz 1) of Roswitha Blind, Konvexe Polytope mit kongruenten regulären $(n-1)$-Seiten im $\Bbb{R}^n$ $(n \ge 4)$, Comment. Math. Helvetici 54 (1979) 304--308. The short proof follows from two short lemmas, one of which cites Coxeter's Regular Polytopes and an article by Shephard. You might also be interested in a related …

- **5** Note that $x+\varepsilon y\in K$ for small $\varepsilon$ if and only if $y$ satisfies the inequalities $y_i\leqslant y_{i+1}$ if $x_i=x_{i+1}$; $y_1\geqslant 0$ if $x_1=0$; $y_n\leqslant 0$ if $x_n=0$. We may think of the $y_i$ as i.i.d. Gaussian; then the probability of a chain of inequalities like $0\leqslant y_1\leqslant y_2\leqslant \ldots \leqslant$ …

- **5** This answers question 2 as well. But I think both questions are way more suitable for math.stackexchange. I add another example because, unlike the first, it has the property that multiplied by $I^{n-2}$ it gives further examples in $\mathbb{R}^n$ as well:

- **4** In fact, there are many to be found on Wikipedia under isogonal figures, even in three dimensions. Examples in dimension four are obtained as the dual polytope of the runcinated 4-simplex or the runcinated 24-cell. This works because both runcinations are edge-transitive, and each edge is contained in exactly four facets. Since in the dual edges become 2-…

- **4** If you consider a tiling of 3-space to be a 4-dimensional polytope, then the rhombic dodecahedral honeycomb would work. Other possibilities are limited by the potential 3-faces. Because every edge has one endpoint in each of two vertex orbits, the 2-faces must all have an even number of sides. If the edge-transitivity descends to the 3-faces, then the 3-faces …

- **4** Observe that your set of inequalities is $S_n$-invariant, hence your polytope is, hence your set of vertices is. So it's enough to understand the case $x_1 \leq \ldots \leq x_n$. Now you don't need to think about general $S$; for each $|S|$ the tightest inequality comes from $S = [n-|S|+1,n]$, a terminal interval. The $|S|=n$ inequality is actually your …

- **4** Now that I think of it, my answer was wrong. I was just reproducing examples mentioned by the question poser (Bing's, as seen in Ziegler's book). These examples cannot be reduced keeping the boundary a 2-sphere, but the question is whether there are examples that cannot be reduced keeping the boundary a 2-manifold.

- **4** The answer is yes. In other words, if the order of the orbit $x_n=T^n(x)$ of each point $x\in K$ is finite, then $T^n$ is the identity map for some $n$. Choose a simplex $\triangle$ of maximal dimension $m$. Set $$E_n=\{\,x\in\triangle\mid T^n(x)=x\,\}.$$ By the Baire theorem, $E_n$ has nonempty interior for some $n$; fix such a value $n_1$. Note that $E_{n_1}$ …

- **4** Maybe Lerman-Tolman's paper Hamiltonian torus action on symplectic orbifolds and toric varieties http://www.ams.org/journals/tran/1997-349-10/S0002-9947-97-01821-7/S0002-9947-97-01821-7.pdf is useful for your question. In that paper, a "labeled polytope" is defined to be a convex rational simple polytope, plus a positive integer attached to each open facet, …

- **4** In $R^3$, since the spheres are concentric, not only are all faces regular, but also all edges are of the same length, and all faces are inscribed in circles of the same radius, hence are congruent. Also, all dihedral angles between faces with a common edge are equal, which implies that all vertices are of the same valence. This makes the polytope regular. …

- **4** First, a simple remark: If a polytope with congruent facets is inscribed in a sphere, then it is circumscribed about a sphere as well, and the two spheres are concentric. Next, there is a series of examples described and pictured in my old question "Can the sphere be partitioned into small congruent cells?". Each of these examples is what you want in $R^3$. …

- **4** Multiple polytopes can have the same data, as pictured below. Take a pyramidal frustum, and twist it slightly clockwise or slightly counterclockwise. Make one polytope by gluing two identical versions, and make another polytope by gluing two opposite versions. These will have the same combinatorial type, the same edge lengths, and the same distances from the …
https://www.physicsforums.com/threads/abstract-algebra-cosets.351297/
# Abstract Algebra - Cosets

1. Nov 2, 2009 — vwishndaetr

Question: Prove the following properties of cosets. Given: Let H be a subgroup of G and let a and b be elements of G: $$H \leq G$$ Statement: $$aH = bH \text{ if and only if } a^{-1}b \in H$$ The statement is what I have to prove. My issue is I don't know how to start off the problem. When I first looked at the statement, I wanted to say that it is only true when a = b. But there is no talk of the group being abelian, so what I thought was a start to some thinking did not take me very far.

2. Nov 3, 2009 — HallsofIvy (Staff Emeritus)

First of all, you need to say "H is a subgroup of G" or the problem doesn't make sense. Now, what is the definition of "aH" and "bH"? Your remark that "I wanted to say that it is only true when a=b" indicates that you are not clear on that definition.

3. Nov 3, 2009 — vwishndaetr

$$H \leq G$$ That means H is a subgroup of G. So clearly stated. a and b are elements of G, and "aH" is a left coset with "a" and "bH" is a left coset with "b."

4. Nov 3, 2009 — foxjwill

But then what is the definition of a "left coset with a"?

5. Nov 3, 2009 — vwishndaetr

$$aH = \{\,a*h \mid h \in H\,\}$$ H is a subset, where h is an element of the set H.

6. Nov 3, 2009 — foxjwill

Right. So, what does it mean for the set aH to be equal to the set bH? Oh, and you can type the "in" symbol using "\in" and the "not in" symbol using "\notin". I think it formats better that way.

7. Nov 5, 2009 — Quantumpencil

Remember that the cosets aH and bH partition the group (since they are the equivalence classes of the relation $a \sim b$ iff $b = ah$ for some $h$ in $H$); if you had any two cosets with a nonempty intersection, then the transitivity of the equivalence relation would automatically make the two equal. So to begin with, you know that if c is in aH, then c equals ah for some h in H. If d is in bH, it equals d = bh' for h' in H. Your condition for the equivalence classes being equal is that there exists an element c in aH such that c = bh' for some h' in H.

8. Nov 5, 2009 — vwishndaetr

Yup thanks. I had one of my professors explain it to me. Forgot to post up. Thanks though! :)
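The thread stops short of the actual argument; for completeness, here is a short proof sketch in the thread's notation (my addition, not from the original posters). Suppose $aH = bH$. Since $b = be \in bH = aH$, we can write $b = ah$ for some $h \in H$, and hence $a^{-1}b = h \in H$. Conversely, suppose $a^{-1}b = h \in H$. Then $b = ah$, and since $hH = H$ (multiplying $H$ by one of its own elements merely permutes it), $$bH = (ah)H = a(hH) = aH.$$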
https://www.mustafacoban.de/part-2-data-management/
# Part II: Data Management

## 1  Stata Command Syntax

Having used a few Stata commands, it may be time to comment briefly on their structure. One of Stata's great strengths is the consistency of its command syntax. Most of Stata's commands share the following syntax, where square brackets mean that something is optional

[by varlist:] command [varlist] [if exp] [in range] [weight] [, options]

In this diagram, varlist denotes a list of variable names, command denotes a Stata command, exp denotes an algebraic expression, range denotes an observation range, weight denotes a weighting expression, and options denotes a list of options. Let's briefly describe each syntax element:

• varlist: If no varlist appears, these commands assume a varlist of _all, the Stata shorthand for indicating all the variables in the dataset. Some commands take a varname, rather than a varlist. A varname refers to exactly one variable.
• if exp: The if-qualifier restricts the scope of a command to observations for which the value of the expression is true (which is equivalent to the expression being nonzero).
• in range: The in-qualifier restricts the scope of the command to a specific observation range. A range specification takes the form $\#_1 [/ \#_2]$, where $\#_1$ and $\#_2$ are positive or negative integers. Negative integers are understood to mean "from the end of the data", with -1 referring to the last observation. The implied first observation must be less than or equal to the implied last observation. The first and last observations in the dataset may be denoted by f and l, respectively.
• weight: Some commands allow the use of weights. Use them if you want your sample-based analysis to be representative of the whole population.

### 1.1  The by-Prefix

The by varlist: prefix causes Stata to repeat a command for each subset of the data for which the values of the variables in varlist are equal. When prefixed with by varlist:, the result of the command will be the same as if you had formed separate datasets for each group of observations, saved them, and then given the command on each dataset separately. The data must already be sorted by varlist, although by has a sort option. The by-prefix is important for understanding data manipulation and working with subpopulations within Stata. Furthermore, the varlist in by varlist: may contain string variables, numeric variables, or both. Let's show how the by-prefix works with a small example. We reload the GSOEP dataset and calculate the average income for the women and men in the sample. First, the data are sorted by gender; two alternatives are available for this. On the one hand, we can sort the data before we use the by-prefix in combination with the sum command. . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . sort gender . by gender: sum income ------------------------------------------------------------------ -> gender = Female Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,458 13323 21290.77 0 612757 ------------------------------------------------------------------ -> gender = Male Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,320 28190.75 47868.24 0 897756
On the other hand, we can also sort directly within the by-prefix . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . bysort gender: sum income ------------------------------------------------------------------ -> gender = Female Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,458 13323 21290.77 0 612757 ------------------------------------------------------------------ -> gender = Male Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,320 28190.75 47868.24 0 897756 As we can see, both approaches lead to the same result. Now, which of the two alternatives is preferred? As already explained in Part 1 of this tutorial, the sort command sorts the dataset in ascending order of the respective variables. The first alternative, i.e. sorting outside the by-prefix, is therefore the one to use whenever we need a descending sort, which requires the separate gsort command. Since the workings of the by-prefix are not clearly visible from the command lines, let's go step by step through Stata's procedure. First, let's sort the data by the identification number pnr and view the selected variables in the Browser window. . sort pnr . br pnr gender income As you can see, the observations are sorted by pnr and each respondent has a different gender and income. Next, we sort the data by gender and view the data again in the Browser window. Since the gender variable is a string variable, the sort command sorts the observations alphabetically. The command that follows the by-prefix is then executed for each category of the variable gender. Since the variable gender only has two possible values, the sum command is only executed twice. Now, let's obtain average incomes by gender (gender) broken down by educational attainment (educ). Thus, we type . sort gender educ . by gender educ: sum income ------------------------------------------------------------------ -> gender = Female, educ = Elementary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 755 8132.087 24628.12 0 612757 ------------------------------------------------------------------ -> gender = Female, educ = Intermediate Secondary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 821 14754.12 13836.29 0 102111 ------------------------------------------------------------------ -> gender = Female, educ = Technical Secondary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 107 20141.42 17049.57 0 84958 ------------------------------------------------------------------ -> gender = Female, educ = Maturity Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 448 22042.61 27588.56 0 424107 ------------------------------------------------------------------ -> gender = Female, educ = . Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 327 7537.804 13151.93 0 96065 ------------------------------------------------------------------ -> gender = Male, educ = Elementary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 783 19431.59 25558.53 0 365076 ------------------------------------------------------------------ -> gender = Male, educ = Intermediate Secondary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 628 27109.07 21148.42 0 109121 ------------------------------------------------------------------ -> gender = Male, educ = Technical Secondary Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 138 40967.2 41181.82 0 287194 ------------------------------------------------------------------ -> gender = Male, educ = Maturity Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 449 51046.58 92057 0 897756 ------------------------------------------------------------------ -> gender = Male, educ = . Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 322 14253.83 18768.37 0 108143 The sorting works as follows: first, the observations are sorted by the gender variable, with no sorting by the second variable educ taking place yet. In the next step, the sorting by gender is preserved and the observations are sorted by the second variable educ within each category of the first variable gender. The command that follows the by-prefix is then executed for each combination of the two variables in the varlist of the by-prefix.
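As a footnote to this section, the sort option of by mentioned above lets you collapse the sorting and the grouped command into one line. A sketch of this third, equivalent variant (the output is identical to the one just shown):

. by gender educ, sort: sum income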
### 1.2  Wildcards and Ordering

Variable lists (or varlists) can be specified in a variety of ways, all designed to save typing and encourage good variable names. If you want to address several variables in a command, you can use different placeholders to save some time.

• If the variables differ only in a single character, you should use a question mark. . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . desc educ? storage display value variable name type format label variable label ------------------------------------------------------------------ educy float %9.0g Number of Years of Education
• The wildcard * can be used to address variables whose names are partially identical. .
desc hh* *nr storage display value variable name type format label variable label --------------------------------------------------------------------------------------------------------------------------- hhnr long %12.0g Houshold Number hhmem byte %8.0g Number of Persons in Household hhkids byte %8.0g Number of Kids (0-14 Years) in Household hhtyp byte %35.0g hhtyp Household Type hhinc long %10.0g Household Post-Government Income (in Euro) pnr long %12.0g Person Number hhnr long %12.0g Houshold Number • Variables that are arranged one after the other in the data set can be addressed together with a hyphen. • . desc task-rooms storage display value variable name type format label variable label --------------------------------------------------------------------------------------------------------------------------- state byte %22.0g state State of Residence health byte %32.0g health Satisfaction with Health satlif byte %32.0g satlif Overall Life Satisfaction polint byte %20.0g polint Political Interests party byte %15.0g party Political party supported suppar byte %20.0g suppar Supports political party worpea byte %20.0g worpea Worried about peace worter byte %20.0g worter Worried about global terrorism worcri byte %20.0g worcri Worried about crime in Germany worimm byte %20.0g worimm Worried about immigration to Germany worhfo byte %20.0g worhfo Worried about hostility to foreigners worjos byte %20.0g worjos Worried about job security size float %12.0g Size of Housing (in m^2) rent float %12.0g Rent Minus Heating Costs (in Euro) rooms byte %8.0g Number of Rooms > 6m^2 Since the use of hyphens in varlists depends on the order of the variables in the dataset, we briefly introduce the order command. This command enables changing the order of variables in the dataset. If we type . 
desc Contains data from https://www.mustafacoban.de/wp-content/stata/gsoep.dta obs: 5,410 SOEP 2009 (Kohler/Kreuter) vars: 36 23 Sep 2015 16:20 size: 384,110 --------------------------------------------------------------------------------------------------------------------------- storage display value variable name type format label variable label --------------------------------------------------------------------------------------------------------------------------- pnr long %12.0g Person Number hhnr long %12.0g Houshold Number gender str6 %9s Gender female byte %20.0g female Female - Dummy age float %9.0g Age marst byte %29.0g marst Marital Status of Individual marr float %11.0g marr Married / Not Married - Dummy hhmem byte %8.0g Number of Persons in Household hhkids byte %8.0g Number of Kids (0-14 Years) in Household hhtyp byte %35.0g hhtyp Household Type income long %10.0g Individual Labor Earnings (in Euro) hhinc long %10.0g Household Post-Government Income (in Euro) educ byte %28.0g educ Education educy float %9.0g Number of Years of Education ausb byte %40.0g ausb Ausbildungsabschluss emplst byte %44.0g emplst Employment Status lfp float %18.0g lfp Labor Force Participation state byte %22.0g state State of Residence health byte %32.0g health Satisfaction with Health satlif byte %32.0g satlif Overall Life Satisfaction polint byte %20.0g polint Political Interests party byte %15.0g party Political party supported suppar byte %20.0g suppar Supports political party worpea byte %20.0g worpea Worried about peace worter byte %20.0g worter Worried about global terrorism worcri byte %20.0g worcri Worried about crime in Germany worimm byte %20.0g worimm Worried about immigration to Germany worhfo byte %20.0g worhfo Worried about hostility to foreigners worjos byte %20.0g worjos Worried about job security size float %12.0g Size of Housing (in m^2) rent float %12.0g Rent Minus Heating Costs (in Euro) rooms byte %8.0g Number of Rooms > 6m^2 renttype byte %20.0g renttype Status of living condit byte %24.0g condit Condition of house satliv byte %45.0g satliv Satisfaction with Living/Habitation --------------------------------------------------------------------------------------------------------------------------- Sorted by: pnr . order wor* hh* female . 
desc Contains data from https://www.mustafacoban.de/wp-content/stata/gsoep.dta obs: 5,410 SOEP 2009 (Kohler/Kreuter) vars: 36 23 Sep 2015 16:20 size: 384,110 --------------------------------------------------------------------------------------------------------------------------- storage display value variable name type format label variable label --------------------------------------------------------------------------------------------------------------------------- worpea byte %20.0g worpea Worried about peace worter byte %20.0g worter Worried about global terrorism worcri byte %20.0g worcri Worried about crime in Germany worimm byte %20.0g worimm Worried about immigration to Germany worhfo byte %20.0g worhfo Worried about hostility to foreigners worjos byte %20.0g worjos Worried about job security hhnr long %12.0g Houshold Number hhmem byte %8.0g Number of Persons in Household hhkids byte %8.0g Number of Kids (0-14 Years) in Household hhtyp byte %35.0g hhtyp Household Type hhinc long %10.0g Household Post-Government Income (in Euro) female byte %20.0g female Female - Dummy pnr long %12.0g Person Number gender str6 %9s Gender age float %9.0g Age marst byte %29.0g marst Marital Status of Individual marr float %11.0g marr Married / Not Married - Dummy income long %10.0g Individual Labor Earnings (in Euro) educ byte %28.0g educ Education educy float %9.0g Number of Years of Education ausb byte %40.0g ausb Ausbildungsabschluss emplst byte %44.0g emplst Employment Status lfp float %18.0g lfp Labor Force Participation state byte %22.0g state State of Residence health byte %32.0g health Satisfaction with Health satlif byte %32.0g satlif Overall Life Satisfaction polint byte %20.0g polint Political Interests party byte %15.0g party Political party supported suppar byte %20.0g suppar Supports political party size float %12.0g Size of Housing (in m^2) rent float %12.0g Rent Minus Heating Costs (in Euro) rooms byte %8.0g Number of Rooms > 6m^2 renttype byte %20.0g renttype Status of living condit byte %24.0g condit Condition of house satliv byte %45.0g satliv Satisfaction with Living/Habitation --------------------------------------------------------------------------------------------------------------------------- Sorted by: pnr the specified variables are placed at the beginning of the data set. Now, variables beginning with a wor are at the beginning. They are followed by the variables beginning with an hh and finally by the gender variable. The remaining variables are appended with no change to their sorting. Further layout rules can be found typing help order. ## 2  Create New Variables One of the three commandments says that the original data should never be overwritten. However, this does not mean that the data must not be changed. For data analysis and generation of new knowledge from the existing raw data set, new variables often have to be generated from the existing variables and existing variables have to be modified. In addition, it is common for large datasets to delete variables that are not relevant to a particular application or project. This ensures a better overview for the user and shortens the calculation time of certain analysis methods and algorithms. ### 2.1  Creating and Modifying Variables The most common command to generate a new variable is generate, which is usually abbreviated to gen. This command can be used with the by-prefix as well as with the in– and if-qualifiers. 
The basic syntax for this command is

generate newvariable = expression

First we have to choose a variable name for our newvariable and then type a single equals sign to start the definition of the new variable. An expression is a formula made up of constants, existing variables, operators, and functions. In general we can distinguish between mathematical and logical expressions. The operators needed for these expressions are given below:

| Arithmetic | Logical | Relational |
|---|---|---|
| + addition | ! not | > greater than |
| - subtraction | \| or | < less than |
| * multiplication | & and | >= greater than or equal to |
| / division | | <= less than or equal to |
| ^ power | | == equal |
| + string concatenation | | != not equal |

First, let's generate a new variable using a mathematical expression. Since the GSOEP has data on a person's individual labor earnings, we want to take the logarithm of this income variable and call the new variable loginc to make our operation identifiable in the new variable's name. . gen loginc = log(income) (2,001 missing values generated) . sum loginc income Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- loginc | 3,409 9.770343 1.124561 3.828641 13.70765 income | 4,778 20542.17 37426.25 0 897756 The new variable has many more missings than the original income variable because log() returns a missing value both for the 632 missing incomes and for every zero income (the logarithm of zero is undefined); together these account for the 2,001 new missings. Stata has many mathematical, statistical, string, date, time-series, and programming functions. Just type help functions to see some basic functions. Now, let's generate a new variable using a logical expression. We want to generate a new variable called midage that takes the value 1 if a person is at least 25 and younger than 64 years old and otherwise takes the value 0 . gen midage = age >= 25 & age < 64 . tab midage midage | Freq. Percent Cum. ------------+----------------------------------- 0 | 1,913 35.36 35.36 1 | 3,497 64.64 100.00 ------------+----------------------------------- Total | 5,410 100.00 Thus, 3,497 persons are within the defined age range. In a next step we want to generate a new variable using a string variable within the logical expression. For this purpose, let's apply the string variable gender, which has two possible values, Male and Female. We want to create a new variable called male that takes the value 1 if a person is a man and the value 0 if a person is a woman . gen male = gender == "Male" . tab male male | Freq. Percent Cum. ------------+----------------------------------- 0 | 2,825 52.22 52.22 1 | 2,585 47.78 100.00 ------------+----------------------------------- Total | 5,410 100.00 . tab gender Gender | Freq. Percent Cum. ------------+----------------------------------- Female | 2,825 52.22 52.22 Male | 2,585 47.78 100.00 ------------+----------------------------------- Total | 5,410 100.00 The logical expression above says that a person is assigned the value 1 in the new variable if the expression is true for this person and otherwise is assigned the value 0, i.e. if the gender variable equals the string "Male" for a person, then the expression is true. Note that string comparisons in logical expressions are case-sensitive and sensitive to spaces.
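If you cannot rule out stray capitalization or whitespace in a string variable, one defensive sketch uses Stata's trim() and lower() string functions to normalize the value before comparing (male3 is just an illustrative name):

. gen male3 = lower(trim(gender)) == "male"    // 1 for men, 0 otherwise, robust to case and surrounding blanks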
The following procedure leads to a completely different and undesired result due to the extra space at the end of the string notation . gen male2 = gender == "Male " . tab male2 male2 | Freq. Percent Cum. ------------+----------------------------------- 0 | 5,410 100.00 100.00 ------------+----------------------------------- Total | 5,410 100.00 Using the generate command, we can also generate new string variables. We can combine two or more string variables into a new string variable. But we can also create a new string variable by attaching strings to an existing string variable. First, let's generate the new string variable gender2 by combining the gender variable with itself . gen gender2 = gender + gender . list gender2 gender in 1/5 +-----------------------+ | gender2 gender | |-----------------------| 1. | MaleMale Male | 2. | FemaleFemale Female | 3. | MaleMale Male | 4. | FemaleFemale Female | 5. | MaleMale Male | +-----------------------+ As you can see, using the operator "+" will concatenate the string variables - in our case the replication of the gender variable - without spaces, i.e. simply join them together. Further, the missing value for a string variable is nothing special - it is simply the empty string "". Second, let's create a new string variable gender3 by combining the gender variable with a constant string, e.g. the string " - Gender". . gen gender3 = gender + " - Gender" . list gender3 gender2 gender in 1/5 +-----------------------------------------+ | gender3 gender2 gender | |-----------------------------------------| 1. | Male - Gender MaleMale Male | 2. | Female - Gender FemaleFemale Female | 3. | Male - Gender MaleMale Male | 4. | Female - Gender FemaleFemale Female | 5. | Male - Gender MaleMale Male | +-----------------------------------------+ Stata shows a particularity if you want to change the values of an existing variable, because Stata will not let you overwrite an existing variable using the generate command. If you really want to replace the values of an old variable you have to use the replace command. Thus, Stata uses two different commands to prevent you from accidentally modifying your data. The syntax of the replace command is similar to the syntax of the generate command, although the former cannot be abbreviated. Now, let's change the values in the male variable to 2 if a person is a man and to 1 if a person is a woman by applying the gender variable within the if-qualifier. . replace male = 2 if gender == "Male" . replace male = 1 if gender == "Female" . tab male male | Freq. Percent Cum. ------------+----------------------------------- 1 | 2,825 52.22 52.22 2 | 2,585 47.78 100.00 ------------+----------------------------------- Total | 5,410 100.00

### 2.2  More Generating and Recoding Variables

There is another important command to create new variables. Let me introduce the more powerful egen command, which is useful for working across groups of variables or within groups of observations. There are plenty of functions that can be applied by the egen command. Just type help egen to explore some of them. For example, if we are interested in the number of missing values for an observation from a selected variable list, we can generate a new variable miss by applying the rowmiss() function to the egen command . egen miss = rowmiss(gender - educ) . tab miss miss | Freq. Percent Cum. ------------+----------------------------------- 0 | 4,129 76.32 76.32 1 | 1,169 21.61 97.93 2 | 112 2.07 100.00 ------------+----------------------------------- Total | 5,410 100.00 Thus, 4,129 persons have valid values for all five variables, while 112 persons have missing values for two out of the five variables.
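With miss at hand, restricting the dataset to complete cases on the selected variables is a one-liner. A sketch (keep in mind that this permanently drops observations from memory, so save your data first):

. keep if miss == 0    // keep only observations without any missing among the selected variables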
As the sketch shows, the rowmiss function is useful if you want a dataset containing only observations without any missings on your selected variables. Furthermore, you can use the egen command if you want to store summary statistics of a variable in a new variable by group membership. Let's generate a new variable that stores the mean income of men if a person is a man and the mean income of women otherwise. . bysort gender: egen incgen_av = mean(income) . bysort gender: sum income ------------------------------------------------------------------ -> gender = Female Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,458 13323 21290.77 0 612757 ------------------------------------------------------------------ -> gender = Male Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 2,320 28190.75 47868.24 0 897756 . sort pnr . list incgen_av gender in 1/5 +-------------------+ | incgen~v gender | |-------------------| 1. | 28190.75 Male | 2. | 13323 Female | 3. | 28190.75 Male | 4. | 13323 Female | 5. | 28190.75 Male | +-------------------+ There is another command to generate new variables and modify existing variables. The recode command is used to group numeric variables into categories or to easily change the values of existing categories in categorical variables. Now, let's generate a new variable agecat5 that divides persons into four age groups, whereby the first age group is assigned the value 1 and the last age group the value 4. . sum age Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- age | 5,410 49.50961 18.12717 17 100 . recode age (17/24 = 1) (25/44 = 2) (45/64 = 3) (65/100 = 4), gen(agecat5) (5410 differences between age and agecat5) . tab agecat5 RECODE of | age (Age) | Freq. Percent Cum. ------------+----------------------------------- 1 | 562 10.39 10.39 2 | 1,678 31.02 41.40 3 | 1,870 34.57 75.97 4 | 1,300 24.03 100.00 ------------+----------------------------------- Total | 5,410 100.00 Each expression in the parentheses is a recoding rule and consists of a list or range of values, followed by an equals sign and a new value. A range is specified by using a slash and includes the two boundaries, so 17/24 is 17 to 24. The gen() option guarantees that the new variable is created following the recoding rule, while the existing variable age remains unchanged. Moreover, you can use min to refer to the smallest value and max to refer to the largest value within the recoding rule, as in min/24 and 65/max. Values that are never assigned to a category are kept as they are. You can use else within the recoding rule to capture these values and assign them a specific category. The next example shows that the recode command can also be used to swap certain numeric values of a variable . recode female (0 = 1) (1 = 0) . tab female Female - | Dummy | Freq. Percent Cum. ------------+----------------------------------- Male | 2,825 52.22 52.22 Female | 2,585 47.78 100.00 ------------+----------------------------------- Total | 5,410 100.00 Since no option was applied, the existing variable female has been recoded. Now, all women take the value 0 and all men take the value 1. We simply swapped the values for this variable. Note in the output that the value labels did not move with the values: the 2,825 women now carry the value 0 and are therefore displayed with the label Male.
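A safer pattern is to keep an untouched copy before recoding a variable in place; a sketch (female_orig is a hypothetical name):

. gen female_orig = female        // untouched copy of the original coding
. recode female (0 = 1) (1 = 0)   // recode in place without losing information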
I recommend that you always use the gen() option or make a copy of the original variable before recoding it, as sketched above.

### 2.3  Variable Names Convention, Dropping Variables, and Missings

Variable names can have up to 32 characters, but many commands print only 12. Since shorter names are easier to type, I recommend a maximum length of 8 to 12 characters for variable names. A variable name is a sequence of 1 to 32 letters (A-Z, a-z, and any Unicode letter), digits (0-9), and underscores (_). Stata names are case-sensitive, which means that Age and age are two different variables. Furthermore, the first character of a variable name must be a letter or an underscore. I recommend, however, that you not begin your variable names with an underscore because all of Stata's built-in variables begin with an underscore. Moreover, Stata reserves the following names, which cannot be used as variable names: _all, _b, byte, _coef, _cons, double, float, if, in, int, long, _n, _N, _pi, _pred, _rc, _skip, str#, strL, using, with. It pays to develop a convention for naming variables and sticking to it. I prefer short lowercase names and tend to use single words or abbreviations rather than multi-word names with underscores; for example, I prefer hhinc to household_income, although both names are legal. There are two main commands for removing data and variables from memory: drop and keep. Remember that they affect only what is in memory. Neither of these commands alters anything that has been saved to the disk. The drop command is used to remove variables or observations from the dataset in memory. If you want to drop variables after you reload the GSOEP dataset, just type . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . drop rent size rooms If you want to drop observations, you have to use an if- or an in-qualifier. For example, we first drop the last ten observations by applying the in-qualifier, and then we delete all men from the dataset by applying the if-qualifier. . drop in -10/l (10 observations deleted) . drop if gender == "Male" (2,581 observations deleted) These changes apply only to the data in memory. If you want to make the changes permanent, you need to save the dataset. The keep command is a command for preserving specified variables or observations. Thus, it works inversely to the drop command. If you want to keep a specific list of variables and drop the rest, just type . keep pnr female age If you want to keep certain observations, the same syntax as for the drop command applies. For example, we first keep only the first 100 observations, and then keep all individuals younger than 45 years. . keep in 1/100 (2,719 observations deleted) . keep if age < 45 (52 observations deleted) You can use the browse and describe commands to take a look at your miniature dataset in memory. So far we have become acquainted with Stata without dealing with the topic of missings. For the rest of the tutorial, however, it is indispensable to understand how missings are coded and programmed in Stata. Like other statistical packages, Stata distinguishes several missing values. The basic missing value for numeric variables is represented by a dot . There are 26 additional missing-value codes denoted by .a to .z. These values are represented internally as incredibly large numbers, and the following ranking applies . < .a < .b < ... < .z Because missings internally take very large numbers, using the operators > and >= with an if-qualifier may produce erroneous results if this programming property is not taken into account.
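You can verify this ordering directly with display, which evaluates an expression and prints the result:

. display 64 < .     // prints 1: every missing value is larger than any number
. display . < .a     // prints 1: the system missing sorts before .a to .z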
Let's look at an example of this missing problem. We want to know the mean income of individuals who obtained more than 14 years of education (educy) . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . sum income if educy > 14 Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 1,119 32226.08 64576.71 0 897756 The summary statistics, however, may be incorrect if there are individuals with valid income values but missings for the education variable. Thus, the command calculates the mean income of individuals who obtained more than 14 years of education or have not indicated their years of education. Since we do not want to consider the latter group of individuals, the correct command must be . sum income if educy > 14 & educy < . Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 804 42125.13 73451.05 0 897756 As you can see, the two commands lead to a difference in mean incomes, which is due to the exclusion of individuals with no education information in the second command. Furthermore, Stata has the missing() function, which can be used within an if-qualifier to exclude observations with missings for a certain variable. . sum income if educy > 14 & !missing(educy) Variable | Obs Mean Std. Dev. Min Max -------------+--------------------------------------------------------- income | 804 42125.13 73451.05 0 897756 We get the same result as above. But why are there several missing values in Stata, and why is a simple dot not sufficient as a missing definition? Various missing values are useful for survey data. If a respondent has a missing for a particular question or variable, there may be different reasons behind that. The first would be that the respondent did not answer the question. The second possible reason is that the question did not apply to the respondent. For example, if there is a question about my spouse's income, a missing could occur due to one of these two reasons. In the first case, the reason would be that I don't want to disclose my spouse's income. The second case would be that I have no spouse. Of course, there could be many more reasons, but I think these two examples make my point clear. Thus, if there is only one missing value, we lose the information about the reasons for this missing observation. But having the possibility to assign different missing values enables us to account for the reasons. In our example, we can choose the missing values .a and .b to distinguish between the two reasons. Now, how can we detect missings in our data or variables? If we want to know the number of missings for a categorical variable with numeric values, we can use the missing option of the tab command . tab educy, missing Number of | Years of | Education | Freq. Percent Cum. ------------+----------------------------------- 8.7 | 640 11.83 11.83 10 | 1,321 24.42 36.25 11 | 1,168 21.59 57.84 12 | 487 9.00 66.84 13 | 357 6.60 73.44 14 | 191 3.53 76.97 15 | 243 4.49 81.46 16.1 | 166 3.07 84.53 18 | 465 8.60 93.12 . | 372 6.88 100.00 ------------+----------------------------------- Total | 5,410 100.00 If we want to know the number of missings for a continuous or quantitative variable, we can apply the misstable summarize command
. misstable sum income Obs<. +------------------------------ | | Unique Variable | Obs=. Obs>. Obs<. | values Min Max -------------+--------------------------------+------------------------------ income | 632 4,778 | >500 0 897756 ----------------------------------------------------------------------------- . misstable sum _all Obs<. +------------------------------ | | Unique Variable | Obs=. Obs>. Obs<. | values Min Max -------------+--------------------------------+------------------------------ income | 632 4,778 | >500 0 897756 hhinc | 4 5,406 | >500 583 507369 educ | 761 4,649 | 4 1 4 educy | 372 5,038 | 9 8.7 18 ausb | 1,309 4,101 | 4 1 4 emplst | 155 5,255 | 6 1 6 lfp | 23 5,387 | 4 1 4 task | 622 4,788 | 7 1 7 health | 77 5,333 | 11 0 10 satlif | 88 5,322 | 11 0 10 polint | 89 5,321 | 4 1 4 party | 3,309 2,101 | 7 1 7 suppar | 85 5,325 | 2 1 2 worpea | 85 5,325 | 3 1 3 worter | 92 5,318 | 3 1 3 worcri | 92 5,318 | 3 1 3 worimm | 101 5,309 | 3 1 3 worhfo | 111 5,299 | 3 1 3 worjos | 2,329 3,081 | 3 1 3 rent | 3,049 2,361 | >500 27.3 3003.7 condit | 10 5,400 | 4 1 4 satliv | 95 5,315 | 11 0 10 ----------------------------------------------------------------------------- Using the notation _all tells Stata to apply the command to all numeric variables of the dataset. If you want to recode a valid numeric value of a variable to a specific missing, you can use the recode command . recode female (2 = .a)

## 3  Data Documentation

Now, we will discuss, in brief, the labeling of the dataset, variables, and values. Such labeling is critical to the careful use of data. Labeling variables with descriptive names clarifies their meanings and their measurements. Labeling values of numerical categorical variables ensures that the real-world meanings of the encodings are not forgotten. These points are crucial when sharing data with others, including your future self. Labels are also used in the output of most Stata commands, so proper labeling of the dataset will produce much more readable results. Let's start with variable labels. Since we use abbreviations and short notations for variables in the dataset, labeling variables is essential. We can label a variable by using the command label variable. . use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear (SOEP 2009 (Kohler/Kreuter)) . label var ausb "Educational Attainment" . desc aus storage display value variable name type format label variable label ------------------------------------------------------------------ ausb byte %40.0g ausb Educational Attainment The command is followed by the variable to be labeled. Then we type our preferred label in quotation marks. Next, we will take a look at value labels. These are very important because the numerical values of the categorical variables would have no real-world meaning otherwise. Value labels allow numeric variables to have words associated with numeric codes. Stata has a two-step approach to defining labels. First, you define a named label set which associates integer codes with labels of up to 80 characters, using the label define command. Then you assign the set of labels to a variable, using the label values command. . recode female (1 = 0) (0 = 1), gen(male) (5410 differences between female and male) . label define male_lb 0 "Female" 1 "Male" . label values male male_lb First, we created a new variable male which takes a value of one for men and a zero for women. Then we defined the new value label male_lb and assigned the 0 to women and the 1 to men. Next, we associated our new value label with the male variable.
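As a quick check that the association worked, tab should now show the labels instead of the bare codes; a sketch of the expected output (frequencies taken from the gender tabulation earlier in this part):

. tab male
       male |      Freq.     Percent        Cum.
------------+-----------------------------------
     Female |      2,825       52.22       52.22
       Male |      2,585       47.78      100.00
------------+-----------------------------------
      Total |      5,410      100.00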
I highly recommend using the same name for the value label set and the variable, because then you don't have to remember a separate label-set name in the last step.

. label define male 0 "Female" 1 "Male"

. label values male male

One advantage of this two-step approach is that you can use the same set of value labels for several variables. The canonical example is

. label define yesno 1 "yes" 0 "no"

which can then be associated with all 0-1 variables in your dataset by simply stringing all the variables together after label values and putting the name of the value label yesno at the end of the command. Moreover, label sets can be modified using the options add or modify. Just check help label.

Since, as we have seen, you can define different missing values, you can also assign value labels to them. For example,

. label define party .a "No answer" .b "Not applicable", modify

Using the desc command, you can check whether your variables have value labels. If you want to take a look at one or several specific value labels, you can use the label list command.

. desc

Contains data from https://www.mustafacoban.de/wp-content/stata/gsoep.dta
  obs:         5,410                          SOEP 2009 (Kohler/Kreuter)
 vars:            37                          23 Sep 2015 16:20
 size:       389,520
---------------------------------------------------------------------------------------------------------------------------
              storage   display    value
variable name   type    format     label      variable label
---------------------------------------------------------------------------------------------------------------------------
pnr             long    %12.0g                Person Number
hhnr            long    %12.0g                Houshold Number
gender          str6    %9s                   Gender
female          byte    %20.0g     female     Female - Dummy
age             float   %9.0g                 Age
marst           byte    %29.0g     marst      Marital Status of Individual
marr            float   %11.0g     marr       Married / Not Married - Dummy
hhmem           byte    %8.0g                 Number of Persons in Household
hhkids          byte    %8.0g                 Number of Kids (0-14 Years) in Household
hhtyp           byte    %35.0g     hhtyp      Household Type
income          long    %10.0g                Individual Labor Earnings (in Euro)
hhinc           long    %10.0g                Household Post-Government Income (in Euro)
educ            byte    %28.0g     educ       Education
educy           float   %9.0g                 Number of Years of Education
ausb            byte    %40.0g     ausb       Educational Attainment
emplst          byte    %44.0g     emplst     Employment Status
lfp             float   %18.0g     lfp        Labor Force Participation
state           byte    %22.0g     state      State of Residence
health          byte    %32.0g     health     Satisfaction with Health
satlif          byte    %32.0g     satlif     Overall Life Satisfaction
polint          byte    %20.0g     polint     Political Interests
party           byte    %15.0g     party      Political party supported
suppar          byte    %20.0g     suppar     Supports political party
worpea          byte    %20.0g     worpea     Worried about peace
worter          byte    %20.0g     worter     Worried about global terrorism
worcri          byte    %20.0g     worcri     Worried about crime in Germany
worimm          byte    %20.0g     worimm     Worried about immigration to Germany
worhfo          byte    %20.0g     worhfo     Worried about hostility to foreigners
worjos          byte    %20.0g     worjos     Worried about job security
size            float   %12.0g                Size of Housing (in m^2)
rent            float   %12.0g                Rent Minus Heating Costs (in Euro)
rooms           byte    %8.0g                 Number of Rooms > 6m^2
renttype        byte    %20.0g     renttype   Status of living
condit          byte    %24.0g     condit     Condition of house
satliv          byte    %45.0g     satliv     Satisfaction with Living/Habitation
male            byte    %9.0g      male       RECODE of female (Female - Dummy)
---------------------------------------------------------------------------------------------------------------------------
Sorted by: pnr
     Note: Dataset has changed since last saved.
. label list emplst
emplst:
           1 Full-Time Employee
           2 Part-Time Employee
           3 Irregular Employee
           4 Unemployed
           5 Retired
           6 Not in Labor Force

## 4  Work Documentation

While it is fun to type commands interactively and see the results straightaway, serious work requires that you save your results and keep track of the commands that you have used, so that you can document your work and reproduce it later if needed.

### 4.1  Log-Files

When you work on an analysis, it is worthwhile to behave like a bench scientist and keep a lab notebook of your actions so that your work can be easily replicated. Everyone has a feeling of complete omniscience while working intensely - this feeling is wonderful but fleeting. By the next day, the exact little details needed for perfect duplication have become obscure. Stata has a lab notebook on hand: the log file. A log file is simply a record of your Results window. It records all commands and all textual output in real time; thus it keeps your lab notebook for you as you work. Because it writes to the file on disk at the same time as it writes to the Results window, it also protects you from disastrous failures, be they power failures or computer crashes. We recommend that you start a log file whenever you begin any serious work in Stata.

To open a log file, use the log using command and give your log file a meaningful filename.

. use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear
(SOEP 2009 (Kohler/Kreuter))

. log using project1, replace
---------------------------------------------------------------------------------------------------------------------------
      name:  <unnamed>
       log:  N:\Lehre\Stata\7.Homepage\3.Parts\Part 2\project1.smcl
  log type:  smcl
 opened on:  2 Aug 2018, 19:08:13

The replace option ensures that an existing log file with the name project1 will be overwritten. This will often be the case if you need to re-run your commands several times to get them right. By default, Stata will save the log file in its Stata Markup and Control Language (SMCL) format, which preserves all formatting and links from the Results window. If you want to temporarily suspend logging and then resume logging, just use the commands log off and log on

. tab emplst

 Employment Status |      Freq.     Percent        Cum.
-------------------+-----------------------------------
Full-Time Employee |      2,040       38.82       38.82
Part-Time Employee |        599       11.40       50.22
Irregular Employee |        288        5.48       55.70
        Unemployed |        312        5.94       61.64
           Retired |      1,389       26.43       88.07
Not in Labor Force |        627       11.93      100.00
-------------------+-----------------------------------
             Total |      5,255      100.00

. log off
      name:  <unnamed>
       log:  N:\Lehre\Stata\7.Homepage\3.Parts\Part 2\project1.smcl
  log type:  smcl
 paused on:  2 Aug 2018, 19:08:13
---------------------------------------------------------------------------------------------------------------------------

. drop party

. log on
---------------------------------------------------------------------------------------------------------------------------
      name:  <unnamed>
       log:  N:\Lehre\Stata\7.Homepage\3.Parts\Part 2\project1.smcl
  log type:  smcl
resumed on:  2 Aug 2018, 19:08:14

To finish your logging, close your log file using the log close command. Once the log file is closed and saved on the hard disk, you can use the view command to open the file in Stata's Viewer, or you can directly print the content of your log file using the print command.

. tab lfp
       Labor Force |
     Participation |      Freq.     Percent        Cum.
-------------------+-----------------------------------
Dependent Employee |      2,846       52.83       52.83
     Self-Employed |        213        3.95       56.78
        Unemployed |        312        5.79       62.58
Not in Labor Force |      2,016       37.42      100.00
-------------------+-----------------------------------
             Total |      5,387      100.00

. log close
      name:  <unnamed>
       log:  N:\Lehre\Stata\7.Homepage\3.Parts\Part 2\project1.smcl
  log type:  smcl
 closed on:  2 Aug 2018, 19:22:29
---------------------------------------------------------------------------------------------------------------------------

. view project1.smcl

. print project1.smcl

As log files in SMCL format can only be opened with Stata, a log file can alternatively be saved in plain-text format with the extension .log or .txt. However, I recommend that you use the default SMCL format, because SMCL files can be translated into a variety of formats, such as plain log, plain-text, PostScript, and PDF, using the translate command.

. translate project1.smcl project1.pdf, replace
(file project1.pdf written in PDF format)

Stata comes with an integrated text editor called the Do-file Editor, which can be used for many tasks. It gets its name from the term do-file, which is a file containing a list of commands for Stata to run (called a batch file or a script in other programs). Although the Do-file Editor has advanced features that can help in writing such files, it can also be used to build up a series of commands that can then be submitted to Stata all at once. This feature can be handy when writing a loop to process multiple variables in a similar fashion or when doing complex, repetitive tasks interactively. Thus, you can run your program directly from the editor without using the Command window anymore.

To access Stata's Do-file Editor, use the shortcut Ctrl+9 or type the command doedit in the Command window. Do-files have the extension .do, and existing do-files can be opened by typing

doedit dofilename.do

There are several useful shortcuts for handling your do-file while you're working within it:

Shortcut           Execution
Ctrl + s           Save the do-file
Ctrl + d           Execute all commands of the do-file, starting at the beginning
Ctrl + Shift + d   Execute all commands starting from the current cursor position

If you want to execute a do-file from your hard disk, you can use the do command by typing

do dofilename.do

You will notice that the color of the text changes as you type within a do-file. The different colors are examples of the Do-file Editor's syntax highlighting, which you can modify if you want to. Code that looks obvious to you may not be so obvious to a co-worker, or even to you a few months later. It is always a good idea to annotate your do-files with explanatory comments that provide the gist of what you are trying to do. If the default settings of highlighting within do-files have not been modified, comments are in green. There are three alternative ways of using comments in a do-file:

1. Single Comment: *
You can start a new line with a * to indicate that this line is a comment, not a command.

. sum income

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
      income |      4,778    20542.17    37426.25          0     897756

. * The sum command calculates the mean value of a single variable or several variables

2. Toggle Comment: //
A toggle comment // is placed at the end of a command and indicates that everything that follows to the end of the line is a comment and should be ignored by Stata.
. gen loginc = log(income)  // New variable with logarithm of income
(2,001 missing values generated)

3. Block Comment: /*[...]*/
A block comment /*[...]*/ is used to indicate that all text between the opening /* and the closing */, which may be a few characters or may span several lines, is a comment to be ignored by Stata. This type of comment can be used anywhere, even in the middle of a line, and is usually used to "comment out" temporarily unused commands.

. replace loginc = 0 if loginc >= .
. /*
> Missing values of loginc
> are assigned a zero value
> */

Often, commands can be very long, especially when it comes to graph commands. In a do-file you will probably want to break long commands into lines to improve readability. There are two alternative ways to tell Stata that a command continues on the next line or lines:

1. Triple Slashes: ///
Triple slashes say that everything after them to the end of the line is a comment and that the command itself continues on the next line.

. sum income ///
>     if educ == 1 & ///
>     female == 1

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
      income |        755    8132.087    24628.12          0     612757

2. Delimiter: ;
Alternatively, you can tell Stata to use a semi-colon instead of the carriage return at the end of the line to mark the end of a command by using #delimit ;. All commands then need to terminate with a semi-colon. To return to using the carriage return as the delimiter, use #delimit cr. Remember, the delimiter can only be changed in do-files.

. desc income

              storage   display    value
variable name   type    format     label      variable label
---------------------------------------------------------------------------------------------------------------------------
income          long    %10.0g                Individual Labor Earnings (in Euro)

. #delimit ;
delimiter now ;

. sum income
>     if educ == 1 &
>     female == 1 ;

    Variable |        Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
      income |        755    8132.087    24628.12          0     612757

. #delimit cr
delimiter now cr

. desc income

              storage   display    value
variable name   type    format     label      variable label
---------------------------------------------------------------------------------------------------------------------------
income          long    %10.0g                Individual Labor Earnings (in Euro)

Now, let's take a look at a sample do-file and what it should contain at minimum:

/*
An Introduction to Stata
Mustafa Coban
July 2018
*/

version 15
clear
set more off
capture log close
log using project1.smcl, replace

use "https://www.mustafacoban.de/wp-content/stata/gsoep.dta", clear

sum income ///
    if educ == 1 & ///
    female == 1

#delimit ;
#delimit cr

* Replace Missings with zero values

It is always a good idea to start every do-file with comments that include at least a title, the name of the programmer who wrote the file, and the date. Assumptions about required files should also be noted. Then we continue with specifying the version of Stata we are using, in this case 15. This ensures that future versions of Stata will continue to interpret the commands correctly, even if Stata has changed. The clear statement deletes the data currently held in memory and any value labels you might have. We need clear just in case we need to rerun the program. The set more off command ensures that the execution of the do-file is not interrupted if the Results window is not large enough. If an earlier run of the do-file has failed, it is likely that you still have a log file open, in which case the log using command will fail.
Thus, we first have to close any open logs. The problem with this solution is that it will not work if there is no log file open. The way out of this problem is the prefix capture. This prefix tells Stata to run the command that follows and ignore any errors. Use it judiciously. At the end of the do-file, we close the log file and exit the do-file.
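Putting these pieces together, the skeleton of such a do-file might look like the following minimal sketch (the analysis commands in the middle are placeholders):

/*
Project 1
Your Name
Date
*/

version 15
clear
set more off
capture log close                 // ignore the error if no log is open
log using project1.smcl, replace

* ... data preparation and analysis commands go here ...

log close                         // close the log at the end
exit                              // stop Stata from reading beyond this line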
https://gmatclub.com/forum/the-perimeter-of-a-flat-rectangular-lawn-is-42-meters-the-width-of-th-270357.html
# The perimeter of a flat rectangular lawn is 42 meters. The width of the lawn is 75 percent of its length.

The perimeter of a flat rectangular lawn is 42 meters. The width of the lawn is 75 percent of its length. What is the area of the lawn, in square meters?

A) 40.5
B) 96
C) 108
D) 192
E) 432

Reply 1:

Let's say:
Length = x
Width = 0.75x
Perimeter = 2x + 2*(0.75x) = 42
x = 12
Length = 12, width = 9
IMO Ans: C

Reply 2:

Given: a rectangle with perimeter 42 meters. We know that the perimeter of a rectangle is 2x + 2y, where x is the length and y is the width.
2x + 2y = 42
x + y = 21 ... (1)
Note: the width is 75% of the length. So,
x + 75% of x = 21
x + (3x/4) = 21
7x = 84
x = 12, so y = 9
We know that the area of a rectangle is width * length = 9 * 12 = 108.

Reply 3:

If the length of the lawn is 4x, the width will be 75% or $$\frac{3}{4}$$ of the length. Therefore, the width of the lawn = $$\frac{3}{4}$$(4x) = 3x. The perimeter is given to be 42 meters. Perimeter of a rectangle = 2(l + b) = 2(4x + 3x) = 14x.
42 = 14x
x = 3
Area = l x b = (4x)(3x) = 12$$x^2$$ = 12(9) = 108
Reply 4:

Let the length be 4, so the width is 3. Then the perimeter is 2*(4+3) = 14. Since these 14 units correspond to 42 meters, the scale factor is 3: the length is 12 meters and the breadth is 9 meters. Thus, the area is 108 square meters, and the answer must be (C).
https://www.wyzant.com/resources/answers/685419/finance-capital-cost
Mark W.

# Finance: Capital Cost

A new computer system that controls a manufacturing process for Cat Incorporated can be straight-line depreciated to zero over 7.00 years. The cost of the computer system is $300,000.00, and it will also cost $23,511.00 to install and deliver. What is the yearly depreciation for the computer? What is the book value of the computer after the fifth year?
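The question is left unanswered on the page. Under standard straight-line depreciation — and assuming, as is conventional, that the installation and delivery cost is capitalized into the depreciable basis — the computation would be:

$$\text{Yearly depreciation} = \frac{\$300{,}000.00 + \$23{,}511.00}{7} = \frac{\$323{,}511.00}{7} \approx \$46{,}215.86$$

$$\text{Book value after year 5} = \$323{,}511.00 - 5 \times \$46{,}215.86 \approx \$92{,}431.71$$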
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-6-section-6-5-factoring-by-special-products-exercise-set-page-448/90
## Algebra: A Combined Approach (4th Edition)

Published by Pearson

# Chapter 6 - Section 6.5 - Factoring by Special Products - Exercise Set - Page 448: 90

(y-6+z)(y-6-z)

#### Work Step by Step

$(y-6)^{2} - z^{2} = (y-6+z)(y-6-z)$
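The work step is terse; spelled out, it is the difference-of-squares pattern $a^{2} - b^{2} = (a+b)(a-b)$ applied with $a = y-6$ and $b = z$:

$$(y-6)^{2} - z^{2} = \big[(y-6)+z\big]\,\big[(y-6)-z\big] = (y-6+z)(y-6-z)$$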
http://forum.mackichan.com/node/297
## Space below a graphic

Hi, I'm having problems with the captions of the figures I'm using in my thesis. I don't know how to modify the space between the caption text and the figure: the gap is quite large, and I want to reduce it. How can I do that? My figures are inline, not floating. Any suggestion would be really appreciated. Another question: how do I justify or center the caption of a figure when the figure is inline, not floating? Thanks.

### The caption text should be

The caption text should be appearing on what would be the next typeset line after the graphics image. If you have a lot of blank space inside the graphics, then this might make the caption appear to be too far away from the graphics. Try turning on the frame to see how the caption text is positioned relative to the size of the graphics.

There won't be an easy way to change this spacing. The mechanism is handled differently depending on whether you are saving as the SWP/SW/SN file type or as Portable LaTeX. You could add vertical spacing objects inside the caption text, but this is messy and not really recommended.

The caption text is centered under the graphics. You could add \raggedright or \raggedleft in a TeX field at the beginning of the caption text to change this behavior.
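For readers outside Scientific WorkPlace, the two suggestions translate into plain LaTeX roughly as in this minimal sketch for an inline (non-floating) graphic — the file name is hypothetical, \captionof assumes \usepackage{caption}, and the negative space is something to tune by eye:

% minimal sketch, assuming \usepackage{caption} and \usepackage{graphicx}
\noindent\begin{minipage}{\linewidth}
  \centering
  \includegraphics{myplot}\par                       % hypothetical image file
  \vspace{-0.5\baselineskip}                         % negative space tightens the gap
  \captionof{figure}{\raggedright My caption text.}  % left-justified instead of centered
\end{minipage}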
https://codegolf.meta.stackexchange.com/questions/2140/sandbox-for-proposed-challenges?page=13&tab=votes
# What is the Sandbox?

This "Sandbox" is a place where Code Golf users can get feedback on prospective challenges they wish to post to the main page. This is useful because writing a clear and fully specified challenge on the first try can be difficult. There is a much better chance of your challenge being well received if you post it in the Sandbox first. See the Sandbox FAQ for more information on how to use the Sandbox.

## Get the Sandbox Viewer to view the sandbox more easily

To add an inline tag to a proposal use shortcut link syntax with a prefix: [tag:king-of-the-hill]

• How are tags added to questions? – guest271314 Jan 9 at 7:51
• @guest271314 You can use this markup to create a tag in a draft: [tag:code-golf] – DJMcMayhem Aug 29 at 15:19
• Why no featured anymore? Can't we have it auto-added or something? – JL2210 Sep 26 at 15:57
• @JL2210 We now have a permanent info box that links to the Sandbox, so the featured tag isn't necessary – caird coinheringaahing Sep 29 at 13:43
• I think the sentence 'replace the post here with a link to the challenge and delete it' may specify that the deletion should be done immediately. – AZTECCO Oct 5 at 19:39

# Fastest Code: checking if interval pairs overlap

Given an unsorted input of many interval pairs (50+), write the fastest algorithm to determine if they do not overlap. Two interval pairs a and b are said to overlap if a.x overlaps b.x and a.y overlaps b.y.

Example input 1 (interval x, interval y):
10-25, 50-60
10-15, 25-60

Output: Can be in any true/false format.
false (they overlap)
Reasoning: a.x overlaps b.x, and a.y overlaps b.y.

Example input 2:
10-25, 50-60
20-30, 25-30

Output:
true (they do not overlap)
Reasoning: a.x overlaps b.x, but a.y does not overlap b.y.

Scoring: [not sure...] Brute force gives a worst-case n^2 runtime.

• It's hard to understand what the program is supposed to do. It's better to give three separate self-contained test cases than to mix them together with extra identifiers which won't be in the actual input. But if I understand correctly, there's nothing difficult here at all. It's just interval overlap testing (two ifs) done twice for no obvious reason. – Peter Taylor Jul 5 '13 at 19:45
• The problem is that there will be a very large input. I'm thinking > 50 lines. – EAKAE Jul 5 '13 at 20:50
• I'm not sure whether or not to score it based on time, or worst case runtime. – EAKAE Jul 5 '13 at 20:59
• Instead of asking for overlap, ask for disjoint: "Check if a family of intervals is disjoint". I also think it would be more interesting if you give intervals in interval notation, but you should at least specify whether or not the endpoints are included. – Justin Dec 21 '13 at 7:41

Business Card Ray Tracer

I have no idea how to create a good code golf question! See this description of a ray tracer with source code that fits on a business card. The author stopped when the code size was 1337 bytes. Achieving identical output, optimise for minimum code size. Execution time is not relevant.

• I think what you have here is a straight ahead golf. All languages. You need only define the requirements. Do you want identical output or do you want "good output encompassing <list of features>"? – dmckee Oct 6 '13 at 17:22
• For a minimum feature list I'd suggest something like (1) it is a ray tracer (2) supports point-like lights and shadow + ambient light (3) supports mirrored (implies reflection) and matte surfaces (4) all objects are spheres and overlaps are allowed.
With no requirement for (a) anti-aliasing; (b) finite sized light sources; (c) atmosphere effect; (d) depth of field; or (e) tiling and gradients. Notice, however, that the example supports at least (b), (d) and (e). – dmckee Oct 6 '13 at 17:29
• BTW--The one you linked can get a little bit more with #define Q return (R was already taken for the rand wrapper) and #define O operator. – dmckee Oct 6 '13 at 17:33
• I suggest reading the Teapot question in the sandbox Mk IV and the comments - it's not the same question, but some of the same issues are relevant, and it might give you ideas for improvements to the spec. – Peter Taylor Oct 6 '13 at 22:48
• Yes. Read the teapot question for guidance. Ultimately I decided that one was too big, but we did get into some pertinent details. – luser droog Dec 1 '13 at 9:48
• This sandbox post has had little activity in a while and little positive reception from the community. Please improve / edit it or delete it to help us clean up the sandbox. – programmer5000 Jun 9 '17 at 15:32

# Countdown: Federal Holidays in the United States

Inspired by this question: Christmas Countdown

Write a program or script that will count down to the nearest U.S. federal holiday at any given time, and will switch the display to an appropriate greeting during each holiday. The following holidays must be tracked, and announced:

Holiday                       Date                  Greeting
==========================================================================================
New Year's Day                Jan. 1                Happy New Year!
Martin Luther King, Jr. Day   3rd Mon. in Jan.      Happy Martin Luther King, Jr. Day!
President's Day               3rd Mon. in Feb.      Happy President's Day!
Memorial Day                  Last Mon. in May      Happy Memorial Day!
Independence Day              Jul. 4                Happy Independence Day!
Labor Day                     First Mon. in Sept.   Happy Labor Day!
Columbus Day                  2nd Mon. in Oct.      Happy Columbus Day!
Veterans Day                  Nov. 11               Happy Veterans Day!
Thanksgiving                  4th Thu. in Nov.      Happy Thanksgiving!
Christmas                     Dec. 25               Merry Christmas!

The strings listed under "Holiday" and "Greeting" are all free. Shortcuts like "Merry X-mas!" or "Happy 4th of July" will count against you - the full and proper holiday names are free, so there's no good reason not to use them. The following strings are also free, only when used as a label for time units or in advertising the next upcoming holiday:

days
hours
minutes
seconds
milliseconds
until
time

On any given non-holiday, the program must show a count-down timer which displays time remaining at least down to the second, and updates the display with an accurate value (according to the system clock) at least once per second. Time remaining until a holiday must be counted as the time until midnight (00:00:00) on that day. How the days, hours, minutes, and seconds (and milliseconds, if you choose) are displayed is up to you, so long as all mandatory items are present and it is clear which numbers represent which value. Again, the strings defining units of time are free, so there's no really good reason not to use them. (Though you won't be penalized for not using these strings, so long as it is still unambiguous which time units are which.) The program should also make apparent which holiday is being counted down towards.

On any given holiday, the program must cease displaying the countdown timer and instead display the appropriate greeting for that holiday from 00:00:00 until 23:59:59. After a holiday is over, at 00:00:00 the next day, the holiday greeting must go away and be replaced with the countdown timer for the next holiday.

Each answer should include:
• Name of language
• Score (length of golfed code, minus free characters)
• Golfed code
• Total length of golfed code
• Total number of free characters used
• Un-golfed code, with descriptive comments

The program must be capable of running accurately (according to the system clock) at any time, and must be able to run indefinitely. The only limitations to this should be those imposed by the host computer or the nature of the programming language.

Are there any additions/deletions/modifications that should be made to these rules? I'm considering changing some of the greetings, but I'm not quite sure what to.
• "Happy Martin Luther King, Jr. Day!" is just a mouthful and feels awkward, but shortening it to "Happy MLK Day" feels weird too - any other suggestions?
• I'm not quite sure "Memorial Day" should really be preceded by "Happy" - thoughts?
• Any others?

• I think it would be more interesting if the strings were not free, but you still required exact match. I would like to see the compression scheme used by contestants. – John Dvorak Dec 7 '13 at 12:04
• @JanDvorak This is meant to be code-golf, not kolmogorov-complexity. – Iszi Dec 7 '13 at 22:11
• This challenge proposal has been inactive for over a month. I would like to take ownership of the challenge and make it ready for posting. Please let me know within the next 14 days if you have any objections and would still like to finish and post this challenge yourself. – Hosch250 Nov 3 '14 at 2:01

# Count unique characters in text

Given a string for input, output the unique non-whitespace characters in that string along with a count of their occurrences. The list should be sorted in ascending order of ASCII code.

Examples

Input:
Hello, World!

Output:
Character Count
! 1
, 1
H 1
W 1
d 1
e 1
l 3
o 2
r 1

Input:
The quick brown fox jumps over the lazy dog.

Output:
Character Count
. 1
T 1
a 1
b 1
c 1
d 1
e 3
f 1
g 1
h 2
i 1
j 1
k 1
l 1
m 1
n 1
o 4
p 1
q 1
r 2
s 1
t 1
u 2
v 1
w 1
x 1
y 1
z 1

The actual formatting (headers, spacing, etc.) of the on-screen output is up to you. The only conditions are that it must be sorted in ascending order by ASCII code, and it must be easy to tell what represents a character from the string and what represents a count of a given character. (For example, given a string of 99999999, the output should be explicit so that it is not confused as saying I have 9 8s.)
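Before the encoded test below, a straightforward (non-golfed) reference implementation may help with checking outputs; this sketch is in Python and assumes the string arrives on stdin, which the spec leaves open:

import sys
from collections import Counter

text = sys.stdin.read()
counts = Counter(c for c in text if not c.isspace())  # ignore whitespace
print("Character Count")
for ch in sorted(counts):                             # ascending code-point order
    print(ch, counts[ch])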
Ultimate challenge (taken from here): JKqdJg+oJgiowgyIJgkS+gyxJdeS+gyxJ4yoJdybJdioJdqIJ4kS+KwFJ4QS+gzYJg+ow4vIJ4yxvd+IJgy=+dv=JdQx+gzbJrzx24zYJgkxJ4qLJKQxJ4yxJKqx+KqdJKqdJg+oJgiowgyIJgkS+gyxJdeo24yxJm+xJdybJdioJdqIJKi=J4wF+dvS+gzYJg+ow4zYJ4yxvdy=J4i=+Kv=JdQo+KqxJrzdJKzYJgkxJ4qLJgkxJ4yxJKvSJ4qbJKqdJg+oJgiowgyIJgkdJgyxJdeo24yxJm+xJdybJd+oJd+S+dz=J4wF+dvS+g+SJg+ow4vIJ4yxJ4voJgy=+dv=+dzdJgqxJrzdJKzYJgkS+dweJKQxJ4yxJKvSJ4qbJKq=24yYJgiowgyIJgkdJgzdJryo24yxJm+d24zxJd+oJdqIJ4kS+KwFJ4QS+gzYJ4y=2gzYJ4yxJ4voJgy=+dv=+dzdJgqxJrzx24zYJgkS+dweJKQxJ4fK+dQSJ4qbJKq=24yYJgiowgyIJgkS+gzdJryS+gyxJ4yoJdybJd+oJd+S+dz=J4wF+dvS+gzYJ4y=2gvIJ4yxJ4voJgy=+dv=JdQo+KqxJrzx24zY+dzS+dweJKQxJ4yxJKqx+KqbJKq=24vbJdyowgyIJgkdJgzdJryS+gyxJm+d24zxJdioJd+S+dz=J4wF+dvS+gzYJg+ow4vIJ4yxJ4voJgy=+Kv=JdQx+gzbJrzx24zYJgkS+dweJgkxJ4yxJKvSJ4qdJKq=24yYJgiowgyIJgkdJgzdJryS+gyxJ4yoJdybJd+oJdqIJKi=J4wF+dvS+gzYJg+ow4vIJ4yxJ4v=J4i=+Kv=+dzdJgqxJrzx24zYJgkS+dweJgkxJ4fKJ4qx+KqdJKqdJg+SJdyowg+oJgkS+gyxJdeS+gyxJ4yoJdybJd+oJdqIJ4kS+KwFJ4QS+g+SJ4y=2gzYJ4yxJ4v=J4i=+Kv=JdQo+KqxJrzx24zY+dzS+dweJKQxJ4yxJKvSJ4qbJKqdJg+oJgiowg+oJgkS+gzdJryo24yxJ4yoJdybJdioJdqIJ4kS+KwFJ4QS+g+SJg+ow4vIJ4yxvd+IJgy=+dv=JdQo+KqxJrzdJKzY+dzxJ4qLJKQxJ4yxJKqx+KqdJKq=24vbJdyowg+oJgkS+gzdJryo24yxJ4yoJdybJdioJd+S+dz=J4wFJ4QS+gzYJg+ow4zYJ4yxvd+IJgy=+Kv=+dzdJgqxJrzdJKzYJgkxJ4qLJgkxJ4yxJKvSJ4qbJKq=24vbJdyowgyIJgkdJgyxJdeo24yxJm+xJdybJd+oJdqIJKi=J4wF+dvS+g+SJ4y=2gvIJ4yxvd+IJgy=+dv=+dzdJKzbJrzdJKzY+dzS+dweJgkxJ4yxJKvSJ4qbJKq=24yYJgiowg+oJgkS+gyxJdeo24yxJ4yoJKzxJd+oJdqIJKi=J4wF+dvS+gzYJg+ow4vIJ4yxJ4v=J4i=+dv=+dzdJgqxJrzx24zYJgkxJ4qLJKQxJ4fKJ4qx+KqdJKqdJg+oJgiowgyIJgkS+gzdJryS+gyxJm+d24zxJd+oJdqIJKi=J4wFJ4QS+gzYJ4y=2gzYJ4yxvdy=J4i=+Kv=+dzdJKzbJrzx24zY+dzxJ4qLJKQxJ4yxJKqx+KqdJKqdJg+SJdyowg+oJgkdJgzdJryo24yxJm+d24zxJd+5 • This isn't really an interesting problem. The shortest answer is almost certainly going to be fewer than 10 characters. – Peter Taylor Dec 11 '13 at 12:19 • @PeterTaylor While I mostly agree with your comment - already the header line may contain more than 10 characters. – Howard Dec 12 '13 at 6:15 • "The quick brown fox jumps over the lazy dog." contains "e" three times. – Howard Dec 12 '13 at 6:16 • @Howard Thanks. I must be blind - it took me about five times of reading your comment to find it. Also, do remember that the header is optional to a certain degree - you just need to make sure the output is unambiguous as to which items are characters from the string, and which are character counts. – Iszi Dec 12 '13 at 7:02 • My brain instantly went into bash mode. wc and uniq practically solve half of this, but not in any particularly short manner. – Rob Dec 17 '13 at 20:31 # Quine with syntax highlighting I don't really have much of an idea how to properly pose a quine challenge, or what the common syntax highlighting rules are (or aren't) for various languages. So, I figured I'd just toss this concept up here for consideration and let the community flesh it out if they think it's a good idea. • I'm pretty sure some languages don't even have syntax to highlight – John Dvorak Dec 13 '13 at 20:12 • @JanDvorak Perhaps this would not quite be an "all languages" challenge, then - only languages which naturally lend themselves to syntax highlighting would be eligible. – Iszi Dec 13 '13 at 20:19 • You also can't use a language that cannot render any decent GUI. Also, specifying the amount of syntax highlighting the program needs to generate will be hell. 
– John Dvorak Dec 13 '13 at 20:36
• I don't think this question is feasible, due to the output restrictions and due to the difficulty in defining the minimum required syntax highlighting. – John Dvorak Dec 13 '13 at 21:09
• I like this idea. I think you could specify an adequate level of highlighting with just keywords, strings or characters and numeric literals each having their own color. – Οurous Feb 28 '18 at 21:13

### PETSCII banner

In another world... I was using a PET 2001, which used its own particular PETSCII charset. The screen, green on black, with 40 columns and 25 lines, could only display characters from this charset. No way to draw dots or lines... But the charset contains characters such as ▝ and ▚, which (by the use of reverse video, in order to obtain 16 chars: ' ','▖','▗','▘','▝','▀','▄','▐','▌','▞','▚','▟','▛','▜','▙','█') make it possible to draw graphics on an 80x50-dot plane. Using an internal clock triggering IRQ, I made an animated prompter like this:

The goal here is to make a similar banner, with the same charset (but using the UTF-8 characters ' ','▖','▗','▘','▝','▀','▄','▐','▌','▞','▚','▟','▛','▜','▙','█'). Beware: this charset uses inverted lower/upper case.

• This implies the use of the PETSCII charset; I will post it here as a JSON string before taking this out of the sandbox, if there is interest...
• The tool has to change its position 20 times per second.
• The tool must accept the string to display as an argument.
• The tool must add the date and time in the form - WDay MDay Mnth Year, HH:MM:SS -
• Scrolling has to be done bit by bit, i.e. by half a character!
• Shortest code...
• -3 if the size of the console is not limited to 40 columns
• -5 if cpu usage stays below 90% (on my poor Core(TM)2 Duo CPU E8400 @ 3.00GHz, with 4G ram)
• -5 more if cpu usage stays below 50%
• -5 more if cpu usage stays under 5%

C.U.

• as for the CPU bonuses - what is the target environment, what is the smoothing factor, and what processes count against this measure? – John Dvorak Dec 15 '13 at 6:19
• This sandbox post has had little activity in a while and little positive reception from the community. Please improve / edit it or delete it to help us clean up the sandbox. – programmer5000 Jun 9 '17 at 15:32

# McDonald's Drive-Thru

Changes from original:
• Provided some clarification of requirements with regards to impossible ordering quantities.
• Added specification to include total cost of order.
• Added specification to prefer lowest cost in case of a tie for number of packages.

TODO:
• Verify package sizes and pricing to be used for this challenge.
• Add pricing to output samples.
• Edit or remove "not have any limitations" rule. As currently written, it may force otherwise unnecessary bloating of code in some languages. (e.g.: PowerShell can handle numbers as uint64 to work with extremely large quantities, but it defaults to int32.)

We want to write a program to help McDonald's Drive-Thru employees assist their customers in ordering Chicken McNuggets. Chicken McNuggets only come in packs of 4, 6, 9, or 20. However, customers may not always be considering this when they pull up to the speaker. For example, a customer might want to order 50 McNuggets but they really don't care what sort of packaging they come in - they just want to make sure they get 50 McNuggets one way or another. We want to help the customers get the best value out of their order - that is, to compose an order large enough to accommodate their needs in as few packages as possible with little to no excess.
Users will provide a request for n Chicken McNuggets. Your program's task is to provide the user with the sizes and numbers of McNugget packages needed to fulfill the order exactly. If the exact order cannot be fulfilled, the system must output an order which would meet the customer's needs with as little excess as possible. The system must also provide the total cost of the order.

## Rules

• For values of n which can be ordered exactly, output how many of each pack must be ordered to achieve the requested quantity.
• For impossible orders (1, 2, 3, 5, 7, 11), print "[requested quantity] is impossible. Have [nearest valid quantity >n]:" followed by the normal output for the nearest possible quantity >n.
• Impossible orders cannot be hard-coded. The program must be able to determine whether fulfilling an order exactly is possible without being explicitly told that 1, 2, 3, 5, 7, and 11 are impossible.
• Output must exclude any package sizes which do not need to be ordered.
• Output must be in descending order of package size.
• Output must include the sum total cost of all the packages. (Tax not included.)
• Further layout and formatting of the output is up to you, so long as it is unambiguous.
• Program must not have any limitations beyond those inherent to the system or programming language.
• If there are multiple ways to assemble the order in the least number of packages, output the method which has the lowest total price.

Examples:

Input: 8
Output: 2x4

Input: 43
Output: 1x20 1x9 1x6 2x4

Input: 11
Output: 11 is impossible. Have 12: 2x6

My main concern is that this problem may be too similar to this thread: Work out change

Otherwise, are there any changes that should be made to this?

• My recommendation is to minimize the total cost of the order, rather than the number of packages. Based on these prices: fastfoodmenuprices.com/mcdonalds-prices, the costs are $2.99, $3.89, $4.29, and $5.00. This website lists the "9 piece" as "10 piece", I think that might be an error. – PhiNotPi Dec 14 '13 at 0:01
• Why the restriction #3? – John Dvorak Dec 14 '13 at 6:12
• I agree that it's too similar to the existing question. In addition, "nearest valid quantity" isn't unique, and you don't give any hint as to how to break ties. – Peter Taylor Dec 14 '13 at 10:17
• @PeterTaylor Tiebreaker is specified as ">n", where "n" is the quantity requested by the user - that is, we want to give the user an option that will have at least as many nuggets as they want to order. – Iszi Dec 14 '13 at 23:37
• @JanDvorak Essentially, to up the difficulty a notch. I figure it's a little trickier to catch the invalid quantities in the process of figuring out the answer if you can't write a simple if statement to match against the known quantities. – Iszi Dec 14 '13 at 23:42
• @PhiNotPi Not sure if that's an error on the site, or a regional difference. The information I posted was based on the linked Numberphile video, which was made in the U.K.. It's also possible they may have changed the menu since then. Presuming that larger packages hold better value in terms of cost-per-nugget than smaller ones, the problem as stated should work itself out to the same goal as you've suggested. However, it might help to differentiate the challenge from the suggested duplicate if we add the total price into the expected output. – Iszi Dec 14 '13 at 23:45
• My question is: how are you going to measure that? How large a part of this knowledge are we disallowed from encoding? Can we memorize all but one? Can we special-case 1, 2, 3?
Or, is it that anything goes as long as it either can be generalised to other Frobenius problems, or is inclusive, not exclusive? – John Dvorak Dec 14 '13 at 23:47
• @JanDvorak The program should be able to work out for itself whether or not a given quantity is invalid - that's all there is to it. By its nature, I suppose that means solutions would be able to also handle other Frobenius problems. In fact, I was actually considering a separate "return the largest impossible quantity" problem, where users input several integers and the program outputs the largest quantity that cannot be achieved by adding multiples of those integers. – Iszi Dec 14 '13 at 23:53
• Provided some updates to address comments. – Iszi Dec 15 '13 at 2:34
• @Iszi Minimizing cost should serve as a tiebreaker for when there are multiple solutions with the minimum packaging. For example, look at N=36. The solution {0*4,0*6,4*9,0*20} works, but {1*4,2*6,0*9,1*20} is cheaper. (I used the costs {{4,2.99},{6,3.89},{9,4.29},{20,5.00}}) – PhiNotPi Dec 15 '13 at 3:07
• @PhiNotPi Ah, I think I misunderstood when Peter said there wasn't a specifier for the tiebreaker. For some reason, I was thinking it was not possible for there to be a tie of that sort. Adding the price aspect definitely helps sort that out, then. Thanks. – Iszi Dec 15 '13 at 3:15
• FWIW, it was my misreading. I failed to see the ">n". – Peter Taylor Dec 21 '13 at 12:01

# .... . .-.. .-.. --- .-- --- .-. .-.. -..

Another Hello World challenge, this time with Morse code! Taking no input, your program must output HELLO WORLD in audible Morse code, printing each letter as it is played. For the purpose of this challenge, the following Morse code guidelines will be followed:

Duration of sounds:
• Dits are one time-unit long.
• Dahs are three time-units long.
• The gap between elements within the same character is equal to one dit.
• The gap between characters within the same word is equal to one dah.
• The gap between words is seven time units long.
• The length of "one time unit" is up to the programmer, so long as it is consistent throughout the message.

Letters:
• H: ....
• E: .
• L: .-..
• O: ---
• W: .--
• R: .-.
• D: -..

I'm a little iffy on that last bullet regarding duration. Should I set a hard standard, or a minimum? If so, what to?

• Set a hard minimum for timing. Otherwise, a golfed solution might have 1 unit = 1 millisecond. – PhiNotPi Dec 16 '13 at 22:23
• Tasks which take input are normally more interesting. – Peter Taylor Dec 17 '13 at 0:09
• I guess that dahs need to be a continuous tone, not just two dits without a gap? – John Dvorak Dec 17 '13 at 6:32
• @JanDvorak Correct. – Iszi Dec 17 '13 at 6:33
• If you don't plan to post this, I would like to modify it and post it. (If you don't reply to this message within two weeks, by community standards, I am allowed to adopt the challenge.) – MD XF Dec 22 '17 at 2:41
• @MDXF What do you suggest for modifications? – Iszi Jan 2 '18 at 15:18

## Code Golf: counting all colors in an image

The goal of this Code Golf is to create a program that counts all the colors in an image.

### The input

The input will be a path to the image file.

### The output

The output should be a number that indicates how many different colors your program found in the image.
### The scoring

It's also important that your program supports as many image formats as possible, so I'll calculate the score based on this formula:

(character_count * 3) / (number_of_supported_image_formats * 2)

### Some other rules

• The lowest score wins
• You're not allowed to execute an external program
• No Internet access
• A color doesn't just count if it's present in the palette; there really should be pixels of that color in the image.
• You should also count pixels with 0% opacity.
• #FFFFFF with 100% opacity is not the same color as #FFFFFF with 50% (of course, this is the same for all other colors).
• In vector image formats, if there's a red square (for example) with 50% opacity that overlaps a blue square, then this should count as two colors: red and blue.
• In vector image formats, in case of a gradient, the number of colors depends on which colors are used in the gradient. For example, if there is a red/yellow gradient, then you should count this as two colors: red and yellow.
• A paletted image format counts as a different image format from the non-paletted variant.
• SVG 1.0 counts as a different image format from SVG 1.1 (the same goes for other image formats).

• What counts as a colour? Does a colour count as present if it's in the palette, even if there aren't any pixels of that colour? What about if it's present, but at 0% opacity? On the subject of opacity, are #ffffff at 100% opacity and #ffffff at 50% opacity the same colour? What about vector image formats: does a red square at 50% opacity partially overlapping a blue square count as two colours (red and blue) or three (red, magenta in the overlap, and blue)? What about gradients: does the number of colours depend on the size of the gradient-coloured object? – Peter Taylor Dec 20 '13 at 15:25
• Also, what counts as an image format? If a program supports paletted PNG but not non-paletted PNG, does that count as 0 formats, 0.5 formats, or 1 format? If a program supports SVG 1.0 and SVG 1.1 does that count as 1 format or 2 formats? Etc. – Peter Taylor Dec 20 '13 at 15:27
• @PeterTaylor: Thanks for your comments! I updated my question. – ProgramFOX Dec 20 '13 at 19:16
• I'm sorry, but I'm afraid the core of this challenge is to be as bold as possible when counting the amount of file formats my language's standard library can handle. – John Dvorak Dec 20 '13 at 19:33
• @JanDvorak: Of course, you should also consider whether it's really worth handling another image format, after you've made sure to handle some others. If your score doesn't get lower, then it's not really worth it. – ProgramFOX Dec 20 '13 at 19:35

Since this question is closed, I figured I'd post it here so further issues can be hammered out in Meta instead of the main site.

Known Issues:
• Some rules seem a bit unclear to some users.
• Clarification may be needed on what is needed to qualify for the "win percentage" bonus.
• Win percentage bonus may not be enough to be a real incentive. (This may just depend on the language or implementation.)
• Perhaps the win percentage bonus should be eliminated entirely, or maybe it should just be made a mandatory part of the spec.
• It's been suggested to use a simple 1-9 numbering system for the board positions, instead of any sort of X,Y coordinates.
• May want to allow some flexibility on the input format. (i.e.: Input must still specify the sequence of moves thus far, using whatever addressing scheme is specified in the spec, but leave the delimiters - or lack thereof - up to the developer.)
• Exactly what is expected of the program, such as how it can figure out whose turn it is or what the output should be, seems to need some clarification.
• Some test cases should probably be added.
• Clarification may be needed on the matter of what parts of the game we can assume have followed the guide already.
• Some flaws exist in the chart. (Two already mentioned in comments on the original post.) These should be identified and addressed so that proper expectations for those conditions are clearly set.
• Original post said we would not have to account for null input (i.e.: X asking what their first move should be) but this might be a good enhancement to add.

I personally think this is a great challenge. So far, I've had a very hard time finding a lot of room for optimization and got up to probably 400 characters in PowerShell before I gave up (not even half-way through the chart yet) due to some of the above issues. I'd really like to see what some more serious golfers could do with this, once the spec is properly hammered out.

## Overview

This is the XKCD tic-tac-toe cheatsheet:

It's rather big, I know. But it's also your most valuable resource in your challenge.

## The challenge

Create a program (in your language of choice) that uses the Optimal Move Cheatsheet (henceforth OMC) to output the optimal move when given a sequence of moves.

## The input

Your program will be fed a series of moves in the following fashion:

A3 A2 B2 C2 ...

Where the first combination is always X, the second O, and so on. The letter is the Y coordinate (A-C, A at the top) and the number is the X coordinate (1-3, 1 at the left). You may assume that the given combination follows the OMC's suggestion for each move, at least for the player asking for a recommendation. You can also assume that the input will never be null (at least one move has been made).

You must:
1. Figure out whether the next move is for X or O (you don't need to output this)
2. Use the OMC to decide the next move
3. Print the next move in the standard format A3

## Optional:

You may also include the player's chance of winning (as a percentage) for a 50-character discount on your score.

• I think a 1-9 system would be easier than any XY system, but not by too much. The biggest issue I think is that if you go by the chart (rather than formulating your own algorithm that plays the same way) you have a ton of data to enter (there are several hundred squares in the two charts). Perhaps limit the input to only sequences starting A1 B2 (or 1 5 if you use telephone keypad numbering)? That's the center square in the X chart and the top left square in the O chart. – Blckknght Dec 23 '13 at 5:14
• @Blckknght Limiting the scope of the challenge makes it less interesting. Part of the challenge (if not the entire challenge) here is to find ways to shortcut the flow while still putting out accurate results. As for the 1-9 system, the simplification may be relatively trivial but it does help clear out some otherwise unneeded bloat since everyone will probably build in some conversion to a 1-9 system anyway to shorten the code. It also enables some other shortcuts where the same move suggestion applies to multiple situations which are mathematically related. – Iszi Dec 23 '13 at 19:47
• My point is that the chart data so dominates the code size that winning answers will pretty much have to ignore the data in the chart and use an AI.
So the challenge becomes "write a Tic-Tac-Toe AI that plays exactly like this chart", which seems less interesting to me than "use (part of) this chart to make an AI with trivial code". I already have working code for the problem and bonus in about 200 non-golfed characters of Python, but it will require many 1000s of characters of data, even if I exploit some symmetries. Even if I was willing to type all that data, an AI will beat it, I'm sure. – Blckknght Dec 23 '13 at 20:55 • @Blckknght I'm pretty sure even a fairly straightforward implementation of the chart can be fit within about 5,000 characters - especially in a proper golfing language. IRRC, I'd finished the X portion of the chart in about 400 characters with PowerShell before I gave up on my first go at it. Even then, there was still plenty of room for optimization, and that's in a language which is far from optimal for golfing. Certainly, it's nice when you can bang out a quick answer in 15 minutes. But not every challenge has to fit in 500 characters or less. – Iszi Dec 23 '13 at 21:12 ## Implement addition using only division (code golf) Thought you could implement division using only addition? Well try it the other way around! Your job is to make a function or equivalent program that accepts 2 numbers and adds them using only division. ## Rules • No importing libraries • You can't use anything dealing with mathematics except / and /=, (and their equivalents) • No bitwise operations • No string operations except input, output, return, and string concatenation • Interesting. You might have to close some loopholes, though, as some people will just create a giant lookup table. Also, some people could use string operations use perform addition. Is it going to be code golf? – PhiNotPi Dec 24 '13 at 16:49 • @PhiNotPi I think so, thanks for the tip. – Timtech Dec 24 '13 at 18:42 • Does "no string operations" refer to I/O as well? It's hard to do I/O without string operations of any kind – John Dvorak Dec 24 '13 at 19:21 • @JanDvorak I want to allow I/O - how do I rephrase the question as to allow I/O without allowing math by executing strings? – Timtech Dec 25 '13 at 16:17 • "division using only division" looks like an error... – Peter Taylor Dec 28 '13 at 10:17 • Is this a code golf, code challenge or a popularity contest? – ProgramFOX Dec 28 '13 at 10:56 • @PeterTaylor Thanks :) And @ ProgramFOX, it's code golf. – Timtech Dec 28 '13 at 14:36 • @Timtech Not the number of divisions required? – Johannes Kuhn Dec 28 '13 at 15:03 • @JohannesKuhn What are you talking about? – Timtech Dec 28 '13 at 15:10 • Tried to calculate 0+0 - the only thing I accomplished was a division by zero ;-) – Howard Dec 28 '13 at 16:20 • How do you prevent solutions like Array(a).concat(Array(b)).concat([0,0]).length? – John Dvorak Dec 28 '13 at 19:53 • Is eliminating string concatenation too restrictive? Maybe only allow the built in conversions from strings to numeric types. – Tim Seguine Jan 14 '14 at 11:38 • @Tim I guess so, maybe just disallow eval/expr. – Timtech Jan 14 '14 at 11:50 • and would mod be allowed? – Tim Seguine Jan 14 '14 at 12:07 • @Tim As it currently stands, no. Do you think I should add it? – Timtech Jan 14 '14 at 15:22 ## Recognize spoken numbers of .wav file The goal of this code golf is to create a program or function that recognizes (and outputs) the spoken numbers of a Waveform Audio File (.wav). The rules are: 1. No network access and you are not allowed to run external programs. 2. 
The input will be the file path to the WAV file, and the spoken text will only be one of these digits: one, two, three, four or five. 3. The output must be the recognized spoken number of the WAV file. 4. You are not allowed to use third-party libraries. 5. This is a code golf, so the code with the smallest character count wins. • What do you mean by convert to text? Encode? Recognise spoken text? – Howard Jan 21 '14 at 18:36 • @Howard: Recognize spoken text. I updated my question. – ProgramFOX Jan 21 '14 at 18:38 • That makes it a very subjective challenge. It is quite debatable whether a wav file contains recognisable text or not. I can't think of a safe way to put restrictions on the input without making it a fixed-input kind of puzzle. – Howard Jan 21 '14 at 18:41 • @Howard: You mean, for example, ensuring that the input will only be spoken text without background music? – ProgramFOX Jan 21 '14 at 18:43 • This needs some explicit restrictions on input. I assume that you're assuming that the text will be English, but even then there is a lot of accent variety. Most speech-to-text programs which U.S. companies release can't handle many (if any) British accents in their first version or two. I think that the only way this can be reasonably objective is either to invert a TTS program (in which case it's boring - no errors to account for) or to specify a training text and a test text, where it gets to hear the training text read n times and then tries to interpret the test text. – Peter Taylor Jan 21 '14 at 18:58 • Maybe it is possible if you restrict the challenge to recognising the spoken digits one, two, three and four. Although it is still difficult to define clearly spoken, there may be small enough variation in the input. – Howard Jan 21 '14 at 19:09 • Maybe you can make a youtube video or something similar that contains all the sound that needs to be recognized; the programs just need to cater to those sounds. – Justin Jan 21 '14 at 19:32 • @Howard: That's a good suggestion. I updated my question. – ProgramFOX Jan 22 '14 at 14:14 • What's a "third-party library"? Can C# programs use MS libraries, Obj-C programs use Apple libraries, etc? – Peter Taylor Jan 22 '14 at 16:36 • @PeterTaylor: Yes, they can. – ProgramFOX Jan 22 '14 at 17:29 • Golfing msdn.microsoft.com/en-us/library/… would make for a rather short and boring answer. – Peter Taylor Jan 26 '14 at 15:37 • @PeterTaylor, ProgramFOX: It would make sense to forbid any libraries or programs designed for speech recognition, whether third-party or not. You might want to take a look at my earlier speech synthesis challenge for some ideas on how to word such challenges (and in the comments for some issues I should've thought of in advance). – Ilmari Karonen Feb 9 '14 at 10:08 ## Print Lorem ipsum The goal of this code golf is to write a program that prints EXACTLY this text: Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. The rules: 1. No external resources 2. The shortest code in bytes wins.
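For a sense of what the comments below predict the winning approach will look like, here is a rough Python illustration of shipping the text compressed (lorem.txt is a hypothetical file standing in for the exact target text; the byte counts are whatever zlib achieves, not a claimed score):

```python
import base64, zlib

# Hedged sketch: store the text compressed, inflate it at run time.
text = open('lorem.txt', 'rb').read()            # the exact target text
packed = base64.b85encode(zlib.compress(text, 9))
print(len(text), '->', len(packed), 'bytes')     # rough size comparison
assert zlib.decompress(base64.b85decode(packed)) == text
```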
• Is there any reason to expect the answers to be fundamentally different to those to existing kolmogorov-complexity questions? – Peter Taylor Jan 26 '14 at 15:33 • Won't the winner just post something like cout<<"/*text here*/";? This will probably be pretty boring, as the text needs to be hardcoded in. – Hosch250 Feb 6 '14 at 1:29 • @user2509848: No, I'd expect the winner to be something that packs the text in base 29 or 32 into a raw byte string and decodes it in GolfScript or some similar language. Or possibly some PHP code that starts with <?=gzinflate(. – Ilmari Karonen Feb 9 '14 at 10:00 • OK, but you will need to specify that in the rules. – Hosch250 Feb 9 '14 at 15:35 Here's my first proposal. It just occurred to me that it might be a bit difficult testing submissions without a functioning server, but maybe we can manage without? What do you people think? The web hosting company I use has a jobs page that looks a bit like this: If you want to work for them, you have to calculate the correct answer and submit it through this form. But you only have a few seconds in which to do this, so you need a script to do it for you. If you submit the correct answer in time, you're then given a hash code and an email address, and are asked to email your source code to this address, using the hash code as the subject line. Using any language you like, write a script to download and submit this application form with the correct answer and hidden id field, and then email your source code with the hash code provided as the subject line. You can assume that the HTML source of the two pages is as follows:

# 1. http://jobs.example.com/

<!DOCTYPE html>
<html>
<title>Job Application</title>
<body>
<p>Evaluate 943 + 376 - 394 * 573 * 983 , and submit the answer with the following form.</p>
<form method="POST" action="apply.pl">
<input type="text" name="answer" value="" />
<input type="hidden" name="id" value="5d41402abc4b2a76b9719d911017c592" />
<input type="submit" name="submit" value="submit" />
</form>
</body>
</html>

# 2. http://jobs.example.com/apply.pl

<!DOCTYPE html>
<html>
<title>Job Application</title>
<body>
<p>Well done, that was the correct answer. Now email your source code to jobs@example.com with the following text in the Subject</p>
<p><code>1a79a4d60de6718e8e5b326e338ae533</code></p>
<p>But hurry, you only have five seconds!</p>
</body>
</html>

The only variable parts of these pages are: 1. The sum (up to 6 numbers separated by any combination of +, - and * with spaces on both sides) 2. The hidden id field that must be submitted with the form. 3. The hash code on the second page You may assume that the sum can be calculated without overflow using 32-bit integer arithmetic.
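To make the moving parts concrete, here is a hedged Python sketch of the scripted half (the e-mail step is omitted; the URLs and field names are taken from the sample pages above, and ordinary operator precedence is assumed for the sum):

```python
import re, sys, urllib.parse, urllib.request

url = sys.argv[1]                    # e.g. http://jobs.example.com/
page = urllib.request.urlopen(url).read().decode()

# Pull out the arithmetic expression and the hidden id field.
expr = re.search(r'Evaluate (.+?) , and submit', page).group(1)
answer = eval(re.sub(r'[^0-9+*/ -]', '', expr))   # digits/operators only
hidden = re.search(r'name="id" value="([0-9a-f]+)"', page).group(1)

# Submit the form and print the hash code from the reply page.
form = urllib.parse.urlencode({'answer': answer, 'id': hidden,
                               'submit': 'submit'}).encode()
reply = urllib.request.urlopen(url + 'apply.pl', form).read().decode()
print(re.search(r'<code>([0-9a-f]+)</code>', reply).group(1))
```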
• What's the difficulty with having a functioning server? – Peter Taylor Jan 28 '14 at 15:53 • @PeterTaylor It won't be possible to actually test anyone's script without a server that can process these applications. This probably isn't a problem for sensible languages, but if someone submitted an answer written in Golfscript or Whitespace then I'd have no idea if it would actually work or not. – squeamish ossifrage Jan 28 '14 at 16:07 • It won't be possible to test them anyway if you send the e-mail to jobs@example.com. I note that you haven't specified that you're after a program: I would specify that answers should be a full program which takes the HTTP URL and e-mail address as command-line arguments or as separate lines on stdin; then each person can test with an e-mail address they control. If you're willing to change the URLs a bit then I can host a couple of PHP pages somewhere under cheddarmonk.org. – Peter Taylor Jan 28 '14 at 16:34 • @PeterTaylor Ah, of course! It didn't occur to me that the email address could be separated out as input data. We'd have to change the background story a bit though. Emailing a job application to yourself seems a bit daft. – squeamish ossifrage Jan 28 '14 at 16:52 • I don't see why. If you're allowing people advance knowledge of the full HTML structure, you can assume that they have advance knowledge of the target e-mail, and then it's just a case of promoting testable code. – Peter Taylor Jan 28 '14 at 17:15 This is my first try at writing a challenge. Please let me know how I can improve it. # Roman Calculator Create a basic calculator for Roman numerals. ### Requirements • Supports +, -, *, / • Input and output should expect only one prefix per symbol (i.e. 3 can't be IIV because there are two I's before the V) • Input and output should be left to right in order of value, starting with the largest (i.e. 19 = XIX not IXX, since 10 is larger than 9) • Left to right, no operator precedence, as if you were using a hand calculator. • Supports whole positive numbers input/output between 1-4999 (no need for V̅) • No libraries that do Roman numeral conversion for you ### For you to decide • Case sensitivity • Spaces or no spaces on input • What happens if you get a decimal output: truncate, no answer, error, etc. • What to do for output that you can't handle: negatives or numbers too large to be printed. ### Extra Credit • -20 - Handle up to 99999 or larger (numbers with a vinculum) Sample input/output XIX + LXXX (19 + 80) gives XCIX; XCIX + I / L * D + IV (99 + 1, / 50, * 500, + 4, evaluated left to right) gives MIV. The shortest code wins. • You might want to be explicit about which variants of Roman numerals need to be supported. For example, do I have to understand IV as 4, or can I require that it be written as IIII? And what about, say, writing 8 as IIX instead of VIII, 19 as IXX or XVIV instead of XIX, or 99 as IC instead of XCIX? (All these variants have, AFAIK, been used classically.) – Ilmari Karonen Feb 9 '14 at 22:36 • @IlmariKaronen thanks. I modified the question to be slightly more specific about that. – Danny Feb 10 '14 at 14:09 • I think that using IV, IX, IC, XC, etc. should be alright, but only allow one prefix. Also, 19 should be written XIX, not IXX. One other thing, can we assume that the operators will be separated by a space, or no? – Hosch250 Feb 12 '14 at 0:32 • Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) – programmer5000 Jun 9 '17 at 16:06 • 1. I don't need to handle I/III but need to handle I/III+II/III? 2. For the extra credit, can I output maybe [V] for 5000? – l4m2 Apr 12 '18 at 15:05 • @programmer5000 it was posted to main a while ago. codegolf.stackexchange.com/questions/20670/… – Danny Apr 26 '18 at 11:58
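For reference while the spec gets hammered out, one half of the Roman Calculator - integer to numeral, with the single-prefix subtractive forms the requirements ask for - fits in a few lines of (ungolfed) Python; this is a sketch of the conversion only, not of the calculator loop:

```python
# Greedy conversion using one-prefix subtractive pairs (IV, IX, XL, ...).
PAIRS = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def to_roman(n):
    out = ''
    for value, sym in PAIRS:
        while n >= value:
            out += sym
            n -= value
    return out

assert to_roman(19) == 'XIX'     # not IXX
assert to_roman(1004) == 'MIV'   # the second sample's result
```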
The Poet's Quine: Write a quine with one or more rhyme schemes from http://en.wikipedia.org/wiki/Rhyme_scheme when read. The non-alphanumeric symbols aren't used for rhyming in this scheme (apart from the basic arithmetic signs like plus, minus, times and divided by), and neither are comments. Words may be pronounced in any dialect, but it needs to be consistent within the same stanza (no having the first word be pronounced in a Scottish way and the second in a Welsh way). Contest type would be popularity contest. Thoughts on this proposal? • I'm not sure I understand the last two points. Examples? – John Dvorak Feb 19 '14 at 14:50 • "words are pronounced without a heavy accent or dialect" seems to me to be incompatible with "worked and borked rhyme". – Peter Taylor Feb 19 '14 at 17:32 • That was more intended to be an example of a rhyme in general, rather than a "no heavy accent" example. I'm not a native speaker, so my pronunciation might not be totally accurate. I'll drop that rule (it also could make for more interesting interpretations). – Nzall Feb 20 '14 at 14:53 • It seems the scoring scheme actively encourages bad poetry. Maintaining a consistent rhyme scheme throughout is more difficult and better poetically, yet you penalize adjacent repetition of a scheme and give bonuses to unique schemes. Using syllables instead of feet is odd, too. A line of 12 syllables and a line of 8 can work perfectly together if one is anapests and the other iambs. I realize this is a programming site, but if you're going to call it "The Poet's Quine", let's have some real poetry!! – Jonathan Van Matre Feb 21 '14 at 23:05 • I'm not really someone who knows a lot about poetry, but those suggestions seem good. I didn't want to make it too complicated, though. You say yourself that this site is for programmers, and I doubt there are many programmers out there who know the different di-, tri- and tetrasyllable feet. Maybe having a properly footed poem can be a bonus objective? – Nzall Feb 22 '14 at 20:21 • The biggest challenge will be finding a proper scoring system which makes sense both poetically and programmatically. It's definitely possible, but it won't be easy. Poetry is such a wide art and relies just as much on format as on content. And I don't want to force a specific kind of meter on the participants, because that's part of the challenge. – Nzall Feb 22 '14 at 20:35 • We could also make it a popularity contest, since poetry is not about the format and content, but about evoking emotions and feelings. A popularity contest might be better suited for such a puzzle. – Nzall Feb 22 '14 at 20:37 • Yeah, I think popularity contest solves a lot of the issues here. Of course, it also creates issues of its own, like the inexplicable number of "To be or not to be" entries on the aphorism challenge. But... lesser of two evils. :) – Jonathan Van Matre Feb 28 '14 at 19:32 • What issues are you thinking about? Maybe some extra rules can make this work. – Nzall Mar 4 '14 at 8:09 # The shortest C program which generates the most instructions Write a very short C program (length being defined by character count) which generates the most instructions when compiled. Of course, indicate your compiler, the version, and your operating system, and say what your program does. Linked libraries do not count! ### Score • Base score: 1/(characters) * (instructions) • Bonus: if it computes something "useful," +20% I'm fascinated by C challenges and compiler oddities, but I'm not sure about this question because of the variance you'll get between different compiler versions. Would it be acceptable to ask users to use an online resource which will compile C to assembly? I found two after a cursory search: • With the chars/instructions formula, the score can approach 0 (e.g. use C macros that, when nested N times, generate 2^N instructions). Also, make it clear that linked libraries don't count. – ugoren Feb 25 '14 at 14:57 • @ugoren I'm confused about what you mean by chars/instructions; maybe I should have written instructions/characters instead of 1/characters * instructions? Noted about the linked libraries.
– 2rs2ts Feb 25 '14 at 15:03 • #define DUP(x) x x and DUP(DUP(DUP(DUP(DUP(DUP(x++;)))))) - this duplicates x++ 64 times. Add another DUP and you get 128 times. – ugoren Feb 25 '14 at 15:20 • I caught my mistake. The score can approach infinity, not zero. Still, I think, a problem. – ugoren Feb 25 '14 at 15:22 • @ugoren Probably too many straightforward abuses to bar them all, eh? – 2rs2ts Feb 25 '14 at 15:29 # How many pizzas do I need Write a program that figures out the minimum number of pizzas I need to order and the amount of leftovers I will have. ### Requirements • Each pizza is 8 slices • Each person gets one choice of pizza topping, represented by a letter A-Z • Input is in the format PVBC 2, where each letter represents the choice of 1 person (e.g. P=Plain, V=Veggie, etc.) and the number is the amount of slices each person is allowed to eat. Letters can be in any order and do not need to be grouped. • If I don't need a full pizza I must be able to do half one topping and half another topping; the output for a half and half pizza will be denoted by X/Y where X and Y are different toppings • If I need multiple of a certain type of pizza they must be shown on one line (e.g. 2 x V Pizza). If there are different combinations that both result in the same, least, number of pizzas, either output works • Output must match the format below of one type of pizza per line and a comma-separated list of leftovers. The output must show the minimum number of pizzas and leftovers possible. ### Extra Credit • -20 - Take a 3rd argument that allows you to input the number of slices in a pizza; assume it will be an even number such that you can split it in half Sample Input/Output PCPVCB 3 (6 slices P, 3 slices V, 6 slices C, 3 slices B) 1 x P Pizza 1 x V/B Pizza 1 x C Pizza 2 slices P, 2 slices C, 1 slice V, 1 slice B left over VBBCBBB 2 (10 slices B, 2 slices C, 2 slices V) 1 x C/V Pizza 2 x B Pizza 6 slices B, 2 slices C, 2 slices V left over The 2nd example has many other combinations that could result in only 3 pizzas; this is just an example of what an output might be. The shortest code wins.
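The counting half of this is straightforward; as a hedged Python sketch (function names are mine), here is the slices-per-topping bookkeeping for the first sample, leaving open the interesting part - pairing the remainders into half-and-half pizzas:

```python
from collections import Counter

# Slices needed per topping, then (whole pizzas, spare slices) per topping.
def demand(order, per_person):
    return {t: n * per_person for t, n in Counter(order).items()}

need = demand('PCPVCB', 3)
print(need)                             # {'P': 6, 'C': 6, 'V': 3, 'B': 3}
print({t: divmod(s, 8) for t, s in need.items()})
# {'P': (0, 6), ...} -- every topping here needs a partial pizza,
# which is where the half-and-half matching problem begins.
```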
• I can't say for sure, but I may have seen a similar challenge before. If not, it seems good to me. – Hosch250 Feb 18 '14 at 19:20 • The use of the word "preferences" is confusing to me, because it suggests some kind of optimisation problem where people might get their second preference and you have to optimise for overall satisfaction. In addition, I don't find either the input or the output specification sufficiently clear. For the input, is there any guarantee that the letters are grouped (i.e. that PVP 1 will never be given as input)? And are the 4 letters given the only ones which may be used, or could there potentially be 26 different preferences? How much flexibility is there in the output? – Peter Taylor Feb 18 '14 at 21:12 • @Danny, the one problem with this question is that because of my voracious appetite, there would be no leftover pizza... ;) – WallyWest Feb 18 '14 at 21:15 • @PeterTaylor I made edits to hopefully address all of the parts you saw that were possibly confusing. Can you look at the question again and let me know what you think? – Danny Feb 19 '14 at 13:44 • @Danny You might want to add that you want the minimum amount of ordered pizzas/leftovers - otherwise there exists a trivial solution where each person gets his own pizza (provided slices<=8). – Howard Feb 26 '14 at 8:19 • Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) – programmer5000 Jun 9 '17 at 16:07 ## Golf a random Human Genome fragment with non-random features A totally random genome fragment is easy enough: just spit out the letters ATCG in random order, and you're done. So let's try something a little less random and more useful to science. • Accept an argument from the user for the number of base pairs (20bp-10000bp must be supported, more if you wish) • Accept an argument from the user for GC content. This indicates how frequently the generated sequence should contain the G and C bases, as a percentage of total sequence length. • Include at least one complete gene in every request of 500bp or more, where a gene is defined as an otherwise random sequence that begins with a start codon triplet (ATG) and ends with the first stop codon triplet it encounters (TAG, TGA, or TAA). The distance between the start codon and the stop codon does not have to be a multiple of 3. • Vary gene content (the portion of the fragment that is "gene", inclusive of the gene's start and stop codons) linearly with respect to GC content (when sequence >= 500bp). At the extremes, when GC content is 0%, gene content is 10%; when GC content is 100%, gene content is 60%. • Output a single-strand sequence that complies with the above specs and the user's given parameters (i.e. a single row of letters will suffice, since it is trivial to deduce the complementary strand of the DNA given the sequence of one strand). • Calculate the actual GC content %, actual number of genes, and actual gene content % in the resulting fragment, and output a status line below the sequence conforming to the example format below. Percentages may be rounded to one decimal place. Actual values may deviate by +/- 3% from the expected outcome based on the user's input. GC content: 42.1% | Genes: 3 | Gene content: 32.1% Your program will not: • Use any Internet, library, or built-in gene sequence generation functions or databases. Roll your own. Sufficient randomness: • For the purposes of this challenge, any built-in random/pseudo-random number generator function, GUID generator, well-seeded cryptographic hash function, etc. is considered an acceptable source of randomness. What-ifs: • What if another start codon occurs before the stop codon? E.g. ATGXXXATGXXXXXXXXXXXXTAG. This is acceptable, but the "gene" length in this case is calculated from the most proximal start codon to the stop codon. • What if another stop codon occurs after a stop codon? E.g. ATGXXXXXXXXXXXXXTAGXXXXXXTAG. This is also acceptable, but likewise the "gene" length is calculated from the start to the most proximal stop. • What if both of these things happen? E.g. ATGXXXATGXXXXXXXXXXXXTAGXXXTGA. Here again, the "most proximal" principle applies and the gene content is demarcated by the innermost start and the innermost stop. • Do "orphaned" start and stop codons that do not demarcate a gene count as gene content? No. This challenge is code golf, so shortest valid code wins. Post example output from a 500-bp request with GC content between 35% and 65%, and have fun!
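Of the requirements above, the GC-content one is the easiest to pin down; a minimal Python sketch of weighted base sampling (gene insertion and the gene-content target are deliberately omitted):

```python
import random

# Draw bases so that G and C together appear with probability `gc`.
def random_seq(n, gc):
    w = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return ''.join(random.choices('ACGT', weights=w, k=n))

s = random_seq(500, 0.42)
print('GC content: %.1f%%' % (100 * sum(c in 'GC' for c in s) / len(s)))
```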
• "Use hardcoded fragments for anything other than the start and stop codons." - why not? Specifying criteria for what counts as enough randomness should make these useless in any case. Speaking of which, you need to specify criteria for what counts as enough randomness. – John Dvorak Feb 28 '14 at 5:54 • The only partial output example given flagrantly violates the spec. If the GC content is 42.1%, the gene content should be 31.05%, not 22.0%. The definition of "gene" is also imprecise: in the sequence AUGCCAUGCCUAGCUAA, which is the gene? – Peter Taylor Feb 28 '14 at 12:02 • @PeterTaylor AUG starts the gene, then come the CCA, UGC, CUA and GCU triplets, none of which terminate the gene. Now if there were three C's instead of two, then UAA would be the terminating triplet and the whole sequence would form a gene. I agree the definition is imprecise, though. – John Dvorak Feb 28 '14 at 12:11 • @JanDvorak, (part of) the point of that example is that there are two AUG substrings. – Peter Taylor Feb 28 '14 at 12:30 • Good points. I was hoping to avoid having too much text, but that came at the expense of less clarity than the challenge demands. Edit forthcoming. – Jonathan Van Matre Feb 28 '14 at 13:58 • Also, I've muddied the waters with RNA encoding and DNA encoding (U vs T), which we can chalk up to a late night. – Jonathan Van Matre Feb 28 '14 at 14:00 • Revised accordingly, although I remain open to suggestions on how best to frame the standards for acceptable randomness. I want something that won't be exploited by answers making no effort at randomness, but that doesn't have the pain-in-the-butt factor of generating 10mb+ of data and running a Diehard test battery. – Jonathan Van Matre Feb 28 '14 at 17:20 • "This is acceptable, but the "gene" length in this case is calculated from the most proximal start codon to the stop codon." - wait, what? In nature, the first one is the start codon, and the rest encode methionine. Under your scheme, methionine (which is an essential amino acid) would be impossible to include in proteins. Your scheme would also be much harder to splice. Also, what happens to AUG substrings that are not triplet-aligned to previous AUG substrings? – John Dvorak Mar 1 '14 at 9:25 • In nature, the first ATG encodes the start of a protein coding region and defines a reading frame (triplet boundary), the rest encode methionine, and the first triplet-aligned stop codon encodes the end of the protein coding region (and no amino acid). – John Dvorak Mar 1 '14 at 9:29 • As for the randomness, I'm not worried about the source of randomness (whatever native library is available is assumed to be good enough) but rather how the source of randomness is used (can we just start the sequence with a start codon and insert an end codon at just the right spot if it doesn't occur naturally sooner, then fill in with more random codons while avoiding ATG subsequences?). Your "sufficient randomness" places constraints on the RNG (useless) but no constraints on how it's used (or that it needs to be used at all). – John Dvorak Mar 1 '14 at 9:34 • My true random number sequence generator was sitting there watching silently as I typed away the sequence ACACACACACACAC.... It's all okay. The TRNG was capable of producing something better - it just didn't really get to it. – John Dvorak Mar 1 '14 at 9:38 • In fact, the 3% tolerance for the CG content leaves no room for randomness when there are only 20 base pairs. I can shuffle the pairs and turn some A<->T or C<->G, but that's it. In fact, if the CG content is set to zero, the task is impossible: we want a gene content of 2 base pairs (which is itself impossible), but the start codon contains a G, and a single G in a 20bp sequence means a 5% CG content, 2% over the limit. Not including a gene means that we are 7% under the gene content lower limit.
Similarly, it's not possible to start or stop a gene with nothing but Cs and Gs. – John Dvorak Mar 1 '14 at 9:45 • Yeah, the 20bp starting point is a bad idea. The problem with start codons is that I considered introducing the idea of promoters and decided that would make the whole thing too complex. So in the absence of promoters there has to be some way to determine which Met is the start codon vs an amino acid, and the easiest simplification is to have no Mets in the gene. Likewise, for "not triplet aligned", I'm trying to avoid having to go into explanations of frameshift mutations (even though a Frameshift% would be a cool parameter). – Jonathan Van Matre Mar 1 '14 at 14:29 • I am starting to think that all of these complexities should be included (this proposal stems from me noticing that most of the extant random DNA generators are pretty weak) and this should just be a popularity contest instead of a golf. Link a couple of good articles on the structure of the genetic code and let people add as many features as they wish. Making it a golf seems to be a catch-22 between too many compromises or a too-impenetrable wall of rules and conditions that will dissuade participation. – Jonathan Van Matre Mar 1 '14 at 14:33 • Perhaps a code-challenge where people earn x points for each complexity implemented? – Hosch250 Mar 2 '14 at 5:52 # Resurrect Adobe SubScript In an obscure conference proceedings volume of forgotten lore, there's a quaint little paper which describes an early effort to implement a published subset of Adobe Postscript. There's a line in the bibliography! :) But it cannot be found. Nobody's ever heard of it. :( Adobe Systems, "SubScript Specification", 1984. But there's obvious utility in such a thing. So this is a hypothetical Micro-Manual Postscript, and its name shall be ASS[*]. :) ASS is a dynamically-typed stack-based programming language with powerful graphics primitives. It has support for floating-point arithmetic, arrays and dictionaries. The scanner reads white-space delimited tokens and attempts to interpret each token as a decimal floating-point number with optional sign (+/-). The program may (but is not required to) support exponential notation. Failing to recognize a valid number, the scanner makes the token a name object, an atomic symbol type which is identified by its name (an "interned" string). # Types As suggested by the scanner behavior and the operator list, there are the following object types: • floating-point numbers (coerced to integer where appropriate) • names (usually an index into a string table, for easy comparisons) • arrays (an indexable sequence of objects) • dictionaries (a key-value map of objects) • operators (a pointer to a built-in function) # Operators Operators are the basic actions predefined in the dynamic name space. ## Stack Manipulation • any   pop   - pop an object from the operand stack • any1 any2   exch   any2 any1 exchange the top two elements • anyN anyN-1 ... any0 N   index   anyN anyN-1 ... any0 anyN retrieve an object from the stack by position, where N is treated as an integer. ## Arrays • N   array   array create a new array of length N • any0 any1 ... anyN-1 array   astore   array fill the array with objects from the stack • array   aload   any0 any1 ... anyN-1 array spill the contents of the array onto the stack • [   any0 any1 ... anyN-1   ]   array construct an array • array index any   put   - put a value into the array • array index   get   any retrieve a value from the array, where index is treated as an integer.
The typical way to implement the array syntax is using an auxiliary type, the marktype object, and an operator counttomark. This is an implementation detail and is not strictly required, but may be found to be convenient. • -   [   mark produce a marktype object as a sentinel on the stack • mark anyN anyN-1 ... any1   counttomark   mark anyN anyN-1 ... any1 N count objects up to the mark Then the ] operator may be implemented in terms of the other array operators. • mark anyN anyN-1 ... any1   ]   array { counttomark array astore exch pop } ## Dictionaries • N   dict   dict create a new dictionary, an associative container with room for N name-value pairs • dict   begin   - push the dictionary on the dictionary stack, making its names part of the dynamic name space • -   currentdict   dict push a copy of the topmost dictionary on the dictionary stack to the operand stack • -   end   - pop and discard the topmost dictionary on the dictionary stack • name any   def   - associate name with any value in the topmost dictionary • name   load   any look up name in each dictionary in the dictionary stack from the top down, returning the first match, or error if not found ## Matrices and transformations A matrix is a 6-number array [a b c d e f] which represents a left-multiplying affine transformation matrix with the constant right-most column omitted:

[a b 0]
[c d 0]   =>   [a b c d e f]
[e f 1]

• -   matrix   matrix returns a new identity matrix [1 0 0 1 0 0] • matrix   setmatrix   - make matrix the current transform in the graphics state • -   currentmatrix   matrix return the current transform from the graphics state • x y   transform   x′ y′ transform the (x,y) pair by the current transformation matrix Transforming a point involves multiplying the homogeneous vector through the transformation matrix:

            [a b 0]
[x y 1]  *  [c d 0]   =>   [x′ y′ 1]
            [e f 1]

or, equivalently:

x′ = a*x + c*y + e
y′ = b*x + d*y + f

## Path description • -   newpath   - • x y   moveto   - • x y   lineto   - • -   closepath   - ## Clipping • -   clip   - • -   clippath   - ## Painting • -   erasepage   - • -   fill   - • -   showpage   - The fill operator is where the magic happens. This operator is responsible for performing all of the graphics algorithms in sequence: • Shape Mapping: transform the coordinates of the path from user space to device space using the current transformation matrix. • Shape Clipping: clip the portions of the path that lie outside the clipping path. • Filling: perform a scan-line rasterization of the (assumed closed) polygon described by the path into the output frame buffer. And showpage copies the contents of the framebuffer to the actual output mechanism (window or file as described below). ... need to fill this out a little more. Math, graphics state, errors. Describing stroked lines is too much, I think. I'm not sure if it needs the forall operators for iterating through arrays and dicts. I'd like to avoid any need for overloading different types under the same operator name, and calling back to user code from an operator. Output may be to a window, or to a file in a simple format, like pgm, or even a text file of hashes and spaces for rough bitmaps. No half-toning. Only bi-level filling of convex polygons will be required. But a program may handle more colors if desired. This is CW in case anybody wants to help me type in the basic operators. # Questions Does it need anything more? Should something be removed as unnecessary? Does anyone have the spec??
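As a quick sanity check of the matrix conventions above (a hedged Python sketch, not part of the spec), the transform operator reduces to exactly the two expanded equations:

```python
# Apply the 6-number matrix [a b c d e f] to a point (x, y).
def transform(m, x, y):
    a, b, c, d, e, f = m
    return a * x + c * y + e, b * x + d * y + f

identity = [1, 0, 0, 1, 0, 0]      # what `matrix` returns
translate = [1, 0, 0, 1, 10, 20]   # e and f shift the point
print(transform(identity, 3, 4))   # (3, 4)
print(transform(translate, 3, 4))  # (13, 24)
```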
Perusing my ps implementations of the graphics portions linked in the comments, I've noticed the following needed operators: length sub roll eq array copy mul div ne I think it needs loops, too. It's possible to do with just recursion, of course, but loops are nice. And length, I think, needs to be polymorphic, operating on array or dict to retrieve the size for making copies and calculating indices. Add sin and cos, too. And this would be a . [*] The moniker "ASS" is not intended as a disparagement of Adobe Systems nor any of their stupendous intellectual property. Rather, it is merely intended to express frustration at the difficulty encountered in locating this document. • So this is intended to be a subset of PostScript: are you going to point people at a PS spec for the nitty-gritty details about things like the precise implementation of path filling? Also, if the idea is to be minimalistic, why have both mark and [? – Peter Taylor Jan 11 '14 at 13:19 • I'm hoping I can concisely specify everything so it's self-contained and doesn't need to refer to a PS spec. ... Good point about mark. I suppose I can require [ and ] and suggest mark ... counttomark as a possible way to implement it. – luser droog Jan 11 '14 at 13:22 • Oh, I see what you mean now. Removed mark as a separate entity. It isn't needed. – luser droog Jan 11 '14 at 13:32 • My idea is to follow the most basic part of the original Warnock paper which is the basis of the Adobe Image Model. I've got some excerpts here. – luser droog Jan 11 '14 at 13:46 • I don't see any way to create a non-identity matrix. – Peter Taylor Jan 24 '14 at 9:20 • You can construct any matrix using the array notation. There should also be user space transforms: rotate, scale and translate. They're usually part of the graphics state, so I didn't put them under matrices. – luser droog Jan 24 '14 at 9:25 • This spec from the 80s would be gold for implementing postscript, offering a glimpse at the intermediate stage between the Warnock/Wyatt paper (which describes the image model in the syntax of the Xerox Mesa language) and the PLRM 1st ed. Warnock/Wyatt has been described as "smuggling" the ideas out of Xerox. ... Ugh. I forgot to add some control structures. – luser droog Mar 1 '14 at 10:22 • I've got implementations of paths, matrices, clipping, and filling in postscript. Perhaps I should wrap these up and just require the data structures and scanning to load and use them. – luser droog Mar 1 '14 at 11:33 • Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) – programmer5000 Jun 9 '17 at 16:08 • Thanks! ... Done. – luser droog Jun 11 '17 at 10:00 ## Off-topic bullshit detector You run a blog about astronomy, and each post has an area where people can leave comments. So, when you post news about the discovery of a new exoplanet, quickly there are some people commenting about its habitability or about the methods used for its detection, and you do answer those comments; very nice. You already have a very good spam detector that handles people who try to post links to viagra-selling sites, so you do not worry about these. But there are always people whom you really hate and who make you very tired: people who insist on posting the comments that every astronomer is tired and angry of seeing: • Comments about religion arguing that instead of looking to the sky, people should look for God.
• Comments claiming that this is all a big lie made up by governments around the world, and that in fact man never went to the Moon and the Earth is flat. • Comments about planet Nibiru, planet Hercolobus, planet X, planet Nemesis and the like. • Comments about Mayan, Sumerian, or Nostradamus prophecies about the end of the world on some particular date. • Comments about the CIA hiding ETs captured from Roswell in Area 51, and similar stuff. • Comments about conspiracies by secret groups controlling or willing to control the world, like the Illuminati, Freemasonry, the New World Order, and the like. • Useless flamewar comments that happen when people from two different groups in the previous categories disagree with each other, posting comments that sometimes make you doubt that intelligent life exists on Earth. Your task is: create a complete program that receives as input a text comment limited to 300 characters and outputs Yes/No, 0/1, Approve/Reject or something similar, rejecting the bullshit comments and accepting the valid ones. Further, we have a few restrictions: • As a policy of your company, everyone may comment on any post at will, without the need of prior registration, so you can't build some sort of reputation barrier system for this. • You also can't make comments be approved by other frequent commenters based on some reputation system. This is because your competitor did that, and the result was that the people you want to avoid managed to take over the site as the ones with the most reputation, completely ruining your competitor's site. So, your boss decided that you should not build a reputation system. • No use of external resources on the internet. • You are allowed to save files to disk or to use a database (please do not abuse this rule). • If you need to, you can add a training program to pre-populate the program data. • Your algorithm must be deterministic and consistent, i.e., in a given state for a given input, it always produces the same output. So, do not make it randomized, nor use as input something like the colors of the pixels on the screen, the system clock or similar sources of entropy. • [Lacking a rule to avoid exploiting the score system by overfitting the test data]. This still lacks a winning criterion. Don't know if it should be , some sort of or something else. is surely out of the question for this. What do you think? Further, to make it testable, this will need some sort of corpus which falls into those bullshit categories, and some perfectly valid comments as well. If you do have some suggestion on this, please drop a comment.
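To make the determinism restriction concrete, here is a deliberately naive Python sketch of the kind of filter the rules allow - a fixed keyword score, with the word list invented for illustration; a serious entry would learn weights from the still-to-be-assembled corpus:

```python
# Deterministic by construction: same input, same output, no entropy used.
BAD = {'nibiru', 'hercolobus', 'illuminati', 'roswell', 'nostradamus',
       'prophecy', 'conspiracy', 'flat'}

def verdict(comment):
    words = {w.strip('.,!?').lower() for w in comment.split()}
    return 'Reject' if words & BAD else 'Approve'

print(verdict('Planet Nibiru will end the world!'))    # Reject
print(verdict('Great post on exoplanet detection.'))   # Approve
```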
• I could say that people who post anti-creationist comments are just as annoying... – Hosch250 Mar 3 '14 at 3:35 • Are comments about creationism OK when they are replying to comments about extraterrestrial life? – John Dvorak Mar 3 '14 at 7:23 • Why no database access? Having to reimplement a database makes this challenge harder, but not more interesting. Speaking of which, code-golf requires hard criteria for accuracy (and absolute accuracy is impossible to achieve here). The usual solution is to use the popularity metric while telling people to strive for accuracy / accuracy and conciseness / accuracy and opacity / ... – John Dvorak Mar 3 '14 at 7:26 • What makes you think the Illuminati won't use their moon-based supercomputer to figure out how to get around your filter? – Geobits Mar 3 '14 at 15:21 • @Geobits That is easy: man never went to the Moon, so the Illuminati couldn't do it either. In fact, it is impossible to go to the Moon because God made the Earth flat, and you can infer from Genesis that ETs do not exist. – Victor Stafusa Mar 3 '14 at 15:28 • @JanDvorak Ok, relaxed the databases requirement. – Victor Stafusa Mar 3 '14 at 15:31 • @user2509848 Ok, added this: "Useless flamewar comments that happen when people from two different groups in the previous categories disagree with each other, posting comments that sometimes make you doubt that intelligent life exists on Earth." – Victor Stafusa Mar 3 '14 at 15:32 • What? No, Genesis clearly states that aliens are among us. How do you think the Illuminati got started in the first place? I'm pretty sure the "boss" in this scenario is a member anyway. He's clearly going to use your program to figure out the limits of automated bullshit detection. On topic, I like the spirit of this question, and I'd label it a code-challenge. – Geobits Mar 3 '14 at 15:36 • @Geobits Yes, I too think that it should be a code-challenge, but I don't know yet how to score that. If I don't figure out a good scoring system, it will default to popularity-contest. – Victor Stafusa Mar 3 '14 at 15:41 • If you had a corpus, a basic points system seems easiest. +x for each correct reject, -y for each incorrect, something like that. If entries tie on base score, default to either code length or popularity. – Geobits Mar 3 '14 at 15:45 • This is an interesting idea, but it does seem tough to come up with objective scoring unless there are known inputs (i.e. not "Comments about..." but "These 3 sample strings that comment about..."). But then people will just optimize to those inputs, so you'll probably get better & more interesting results if you go the popularity route and leave the detection categories open-ended as they are. – Jonathan Van Matre Mar 3 '14 at 15:46 • +1ed Geobits for simultaneously having the same idea I did. A testing corpus is the way to go if you want it objective. – Jonathan Van Matre Mar 3 '14 at 15:47 • OK, seems better. – Hosch250 Mar 3 '14 at 16:16 • Basically you're asking for a Bayesian spam filter. The tricky thing is to write a spec for a Bayesian filter which isn't so restrictive that there's no freedom to be creative, isn't so loose that people can cheat, and doesn't require you to keep the test data secret. – Peter Taylor Mar 3 '14 at 16:17 • @PeterTaylor Yes, the solution would probably be a Bayesian spam filter, but it does not need to be. Yes, that spec is somewhat tricky to fine-tune. Further, I still need a corpus. – Victor Stafusa Mar 3 '14 at 16:52 Repost from the previous sandbox; I realize this is somewhat similar to the Limerick program a bit higher up, but this was made before that. The Poet's Quine: Write a quine with one or more rhyme schemes from http://en.wikipedia.org/wiki/Rhyme_scheme when read. The non-alphanumeric symbols aren't used for rhyming in this scheme (apart from the basic arithmetic signs like plus, minus, times and divided by), and neither are comments. Words may be pronounced in any dialect, but it needs to be consistent within the same stanza (no having the first word be pronounced in a Scottish way and the second in a Welsh way). Contest type would be popularity contest. Thoughts on this proposal? • Do you guys think this is ready for posting? – Nzall Mar 5 '14 at 14:12 # Convert input to ASCII Semaphore With monitor resolutions getting higher and font sizes getting lower, a good programmer has to make efforts to ensure that output is accessible to the visually impaired.
This can be problematic when the only display is in text. Toward this end, your assignment (should you choose to accept it) is to write a program that converts text input into ASCII art flag semaphore. ## Input 1. Your program must accept any letter in the ASCII character set from A to Z (case insensitive) and spaces. 2. The program can accept input in any way that is convenient for the language it is written in (stdin, command line, file, etc.). ## Output 1. The program should output an ASCII art representation of the input string in flag semaphore. Follow this link to see the expected encoding. 2. Line feeds and carriage returns should be interpreted as spaces. 3. Numbers and other non-letters in the input may be ignored. 4. You may use whatever ASCII art representation of the semaphore sender you like, but it must contain a person holding two flags and have distinct arms, legs, head, and flags. It must be at least 10x10 characters. 5. Output may be either horizontal or vertical. ## Example Input: Hello Output: (a row of five fixed-width ASCII-art figures signalling H, E, L, L, O, drawn with #, /, \, | and _ characters; the original multi-line layout did not survive extraction) ## Scoring This is code golf. Shortest code wins. • Define "easily recognisable". Would a simple 3x3 compass (say, with a head if not covered) do? Say: .o. -|. /|. ; or even: ... xx. x.. (read by lines, dots represent spaces) – John Dvorak Mar 6 '14 at 20:16 • @JanDvorak Good catch. Edited to include distinct items that must be present and a minimum size. I'm not exactly sure how to make that rule more clear. – Comintern Mar 6 '14 at 20:34 • Define "person holding two flags". Is what I drew a person? Is this a (lying, due to formatting issues) person: o--? Are three x's on a vertical line a person? – John Dvorak Mar 6 '14 at 20:43 • @JanDvorak Ack! I had too many tabs open and forgot to save my edit. I think number 4 for output should cover that. – Comintern Mar 6 '14 at 20:47 • Define "distinct arms, legs, head, and flags." But I suggest allowing very small figures as well, otherwise this will turn into a kolmogorov-complexity-like question with very little of the code actually involving generating a pair of directions. – John Dvorak Mar 6 '14 at 20:51 • Very similar to this question. The ascii art is more complex here, so perhaps it's not close enough to be called a duplicate... – Gareth Mar 6 '14 at 22:20 • I disagree with @JanDvorak: I think this would be better with a fixed output spec which must be followed exactly. That way people can golf their code rather than the output. – Peter Taylor Mar 6 '14 at 23:59 • Standard figures seem best to me as well. If you demonstrate a full "clock" of hand positions for the standard figure, then you can require those as output. That's easier to assess than free rein for variations. – Jonathan Van Matre Mar 7 '14 at 0:14 # Unified format patcher Write the shortest program that will take a patch file in the unified format from stdin and apply that patch. No external tools that do the process for you can be used.
### Clarifications • Extra documentation about the unified format can be found here • All file paths will be relative • Only one file will be modified per patch • Timestamps can be ignored • The patch file will be valid and will apply cleanly to the file specified (it will not lie about line numbers, etc.) • Assume all files being patched already exist; you don't need to create/delete files ### Extra • -35 - Take an argument that allows you to unpatch a patch ### Example /test/a.cpp

#include <iostream>

using namespace std;
int main() {
    cout << "Hello world!";
    return 0;
}

patch.txt

--- a/test/a.cpp
+++ b/test/a.cpp
@@ -1,7 +1,8 @@
 #include <iostream>
+#include <vector>
 
 using namespace std;
 int main() {
-    cout << "Hello world!";
+    cout << "Goodbye world!";
     return 0;
 }

Run the patch: patch.exe patch.txt

/test/a.cpp

#include <iostream>
#include <vector>

using namespace std;
int main() {
    cout << "Goodbye world!";
    return 0;
}

• Can the program assume that the @@ lines contain the correct line numbers? – ugoren Mar 6 '14 at 17:52 • A good explanation of the patch file format is needed. If not too long, include it in the question. Else, provide a link. – ugoren Mar 6 '14 at 17:53 • You forgot the obvious "no external tools" disclaimer. You don't want the patch $1 answer. – ugoren Mar 6 '14 at 17:55 • @ugoren thanks for the comments, I added some further clarifications. – Danny Mar 6 '14 at 18:38 • Does "The patch file will be valid (it will not lie about line numbers)" also mean that it will apply cleanly? – Peter Taylor Mar 6 '14 at 19:24 • @PeterTaylor yes, updated question. – Danny Mar 6 '14 at 19:51 • "The shorted program" should say "the shortest program", but other than that I think it's ready to go. Of course, no-one's actually going to do more than filter out the lines starting -, remove the first char from each line, and parse the line-numbers to work out how to splice the resulting text in. – Peter Taylor Mar 7 '14 at 0:01 • This sandbox post has had little activity in a while. Please improve / edit it or delete it to help us clean up the sandbox. Due to community guidelines, if you don't respond to this comment in 7 days I have permission to vote to delete this. – programmer5000 Jun 9 '17 at 16:10
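The comment thread above already contains the whole algorithm in one sentence; spelled out as a hedged Python sketch (assuming, as the clarifications do, a single valid hunk-based patch that applies cleanly):

```python
import re

# Splice approach: copy untouched lines, keep context and '+' lines,
# skip '-' lines, using the @@ header to find each hunk's position.
def apply_patch(src_lines, patch_lines):
    out, i = [], 0                        # i indexes src_lines
    for line in patch_lines:
        if line.startswith(('---', '+++')):
            continue
        m = re.match(r'@@ -(\d+)', line)
        if m:
            start = int(m.group(1)) - 1
            out += src_lines[i:start]     # lines before the hunk
            i = start
        elif line.startswith('+'):
            out.append(line[1:])
        elif line.startswith('-'):
            i += 1                        # removed from the source
        else:
            out.append(line[1:])          # context line
            i += 1
    return out + src_lines[i:]

src = open('a.cpp').read().splitlines()
patch = open('patch.txt').read().splitlines()
print('\n'.join(apply_patch(src, patch)))
```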
Your task, should you choose to accept it, is to convert an image into ASCII art. Essentially, your program has to do precisely what picascii.com does. Rules: • You must take the image from stdin or read it from a file specified on the command line. • You must output to stdout or to a filename specified on the command line. • Your program must take input in a format that ImageMagick supports. You can choose any format you want, however. If you want to read ppm images and we have to pass a jpeg through ImageMagick first to use your program, that's fine. • Given the above, your program itself must use only standard libraries, even for loading the image. • You must only output printable ASCII characters (that's 32-126, plus CR and LF). • You can choose in which font or setting your image should be viewed, e.g. it might look good in a terminal but awful in a stackexchange code block, or vice versa, or maybe it only looks good with Courier New size 12, etc. • The largest edge of your output must be at least 25 characters and at most 200 characters long. • Aspect ratio must be preserved as much as possible within one fixed-width character size, e.g. if you have a 400x320 pixel image, and the fixed-width font you're outputting for is 8x13 including spacing, your output must be at least 25x12 characters, or it can be 50x25, 125x62, etc., with a maximum size of 200x98. • Provide at least two sample inputs & outputs with your submission. Outputs can be stackexchange code blocks, or links to paste bins, or screenshots of the output viewed in the environment you intended it for, etc. • Your score is the byte count of your source code. Lowest score wins. However, I want the output to bear some reasonable resemblance to the input. I don't want this to be subjective; I'd rather have a hard limit that people can hack around. Opening suggestion: maybe something like: given a font size of 8x13, if the image is converted to grayscale and quantized to 8x13 blocks, and your solution is converted to an image, scaled to fit, and also quantized to homogeneous 8x13 blocks where the value of each block is the percentage of filled-in pixels for that block, then the average distance between the image blocks and your output blocks must be less than X. • You should delete it from the main site for now because you can't really change the rules once somebody posts an answer. You can repost it when you think it is ready. – Hosch250 Mar 11 '14 at 18:53 • @hosch250: Good idea, just deleted it. Gotta make it a good one! – Claudiu Mar 11 '14 at 19:02 • @hosch250: The link isn't broken, it's just a deleted question, and can only be viewed by me and the mods. I wanted to not lose the link, but it's there in the edit history I guess. – Claudiu Mar 11 '14 at 20:26 • It is still on your account page too. – Hosch250 Mar 11 '14 at 21:55 • I think this would be quite dull as a code-golf challenge. The optimal solution is to simply read every other line of a PGM file and convert each number into ASCII 32 (space) or 33 ('!') based on some threshold value. Without a code length restriction, we could add more interesting features like Floyd-Steinberg dithering and matching letter shapes to image features (e.g., using / and \ in places where diagonal lines are detected). – squeamish ossifrage Mar 14 '14 at 0:15 • @squeamishossifrage: Hmm, interesting. I was going for making an objective criterion that would make that not the optimal solution, so you'd have to use more than a few characters, but that'd be awkward. Making it a popularity contest would definitely lead to more creative solutions... I will consider it. – Claudiu Mar 14 '14 at 0:53 • Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:19 # Create the perfect CSS reset stylesheet Your job is to create a CSS reset stylesheet. That is, a stylesheet that you can apply to any HTML file such that the result will look the same in all web browsers. Because we all know that cross-browser interoperability is very important these days, and you want to make your website look pixel perfect everywhere. The rules: 1. You must be able to throw any valid HTML5 document at it and the result will look the same in the main browsers. For simplicity, you can assume that the HTML document does not contain any styles of its own or Javascript that changes anything. Just pure, static HTML that is valid HTML5. 2. The main browsers are Firefox >= 22, Chrome >= 28 and IE >= 10. 3.
To avoid solutions like *{display:none} (which do indeed make all documents look the same in all browsers, yes), the result must be identical to the document without the stylesheet in one of the browsers. In other words, take your browser of choice and make the document look like that in the other browsers. The winner is the stylesheet that works the best, again, on any HTML file that is valid HTML5 and uses no other styles. I'm not looking at efficiency. If you come up with a 100K stylesheet or one that slows the site down considerably, that doesn't matter, as long as the end result looks good. That's the question so far. Now I have a bit of a problem with "any HTML5 document"; I know I could provide a test document that people can work with, but then you'll get answers that cater only to that particular test case, and that's not what I want. Not sure how to handle this. Ideas? Also, I want to include Safari as a main browser, but as I don't have a Mac, I can't test the results on it. Not sure how to handle that. • – Peter Taylor Mar 14 '14 at 12:59 • @PeterTaylor That breaks rule #3. – Mr Lister Mar 14 '14 at 13:05 • The result must be identical to the document without the stylesheet in one of the browsers. I assume you have loaded a webpage without a stylesheet before? If you mean that it can have the main stylesheets, and we just need to create a modification stylesheet, you should specify that. – Hosch250 Mar 14 '14 at 16:17 • @hosch250 What I mean is that I want the document to retain its basic HTML-ness, so it shouldn't look like plain text. Take this fiddle for example; open it in all browsers, and then add CSS to it so that it looks like (your favourite browser) in all other browsers. If the name for such a thing is "modification stylesheet" rather than "reset stylesheet", I apologise. – Mr Lister Mar 14 '14 at 19:32 • OK, I was thinking about how most HTML pages rely on CSS stylesheets to even be legible. If you took the CSS sheet off any webpage, it would not look the same; in fact, if the HTML wasn't laid out well using accessibility techniques, it wouldn't be legible. – Hosch250 Mar 14 '14 at 20:01 • Pixel perfect isn't going to happen because of issues around anti-aliasing: CSS doesn't let you do things like enable ClearType on Safari/OS X or disable it on IE/Win. So the best anyone can do is somehow obtain the default stylesheets for the listed browsers (e.g. iecss.com but updated) and then find a minimal diff. – Peter Taylor Mar 14 '14 at 20:09 • Guys, I'm not interested in solutions to the question right now. I want to know if the question is OK! Specifically, whether I can get away with not posting a testcase like the fiddle above. – Mr Lister Mar 14 '14 at 20:38 # music theory challenge Create a program that takes some input in the form of frequency, waveform, and duration, and generates an audio stream based on that input. You can take the input parameters however you choose, but if I input (translated to your method) 440Hz, sin(x), 3 seconds, your program should play or create a file for a sound 3 seconds long at 440 hertz on a sine wave. Also, any output should be musically correct as far as frequency is concerned. See http://www.phy.mtu.edu/~suits/notefreqs.html for example frequencies. Since this is a popularity contest, the rest is up to you. I bid you good programming! Oh, and any use of external functions or APIs is OK, as long as they weren't developed specifically for this contest.
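Generating the 440 Hz / sine / 3 seconds example from the post needs nothing beyond a standard library; a rough Python sketch (mono, 16-bit, 44.1 kHz - all parameter choices mine):

```python
import math, struct, wave

# Write `seconds` of a sine tone at `freq` Hz to a WAV file.
def tone(path, freq=440.0, seconds=3.0, rate=44100):
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        n = int(rate * seconds)
        w.writeframes(b''.join(
            struct.pack('<h', int(32000 * math.sin(2 * math.pi * freq * i / rate)))
            for i in range(n)))

tone('a440.wav')
```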
• If the program takes "input in the form of frequency, waveform and duration" then where do linear functions fit? What do you mean "output should be musically correct as far as frequency is concerned", given that the input is frequency? Is it supposed to correct the input: "You said 494Hz but you must mean 493.88Hz"? And simple synth has been done before in various guises: see music. To differentiate this and make it non-trivial you could perhaps specify a set of basic synth operations which need to be configurable (e.g. input specifies generators, envelopes, filters, mixers). – Peter Taylor Mar 14 '14 at 8:39 • On second thoughts, that would probably work better as a Code Review Code Challenge – Peter Taylor Mar 14 '14 at 9:23 • @PeterTaylor I didn't even know about Code Review Code Challenges <intrigued>. Linear isn't the right word... and I think that statement is redundant anyway, so I'll nix it. – David Wilkins Mar 14 '14 at 12:44 • Actually, I'm going to re-write this challenge... I don't know yet whether it'll be here or on CR. – David Wilkins Mar 14 '14 at 13:07 # Tic-tac-toe Tic-tac-toe is a paper-and-pencil game for two players, played on a 3×3 grid. ## Rules of Tic-tac-toe • The first player uses X, the second O, as a symbol. • Tic-tac-toe is round-based. So in the first round, X has to place his symbol on any free grid cell, then O has to place his symbol on any free grid cell, and so on. • The game is over when any player manages to get three of his symbols in any row, column, or diagonal. That player has won. So every game has at least 5 turns. • The game is over when all cells are full. This is a draw. So every game has at most 9 turns. ## Rules of this codegolf • This is a code golf, so shortest code wins. • Every entry has to be playable. This means that first the user has to be able to decide which player he wants to be (first/second, or X/O). The other player will be the computer. • Optimality: The computer has to use the optimal strategy. This means the computer can never lose (see Wikipedia). • Bonus: If you can play it via a GUI, you get -200 characters. • You should provide an ungolfed version. ## Background I've just seen this question on StackOverflow where somebody seems to have hard-coded an optimal player and wants to know how to reduce the size of his program. Let's see how far we can get :-) ## Related questions • What is the difference between this question and this proposed question? I am not saying that I think that people are wrong in +1ing this and -1ing mine; I just hope to find out how I could have improved it. – kitcar2000 Mar 27 '14 at 16:14 • @kitcar2000 it may simply be because yours was posted later. It would have been useful if the downvoter had posted a constructive comment... – trichoplax Apr 16 '14 at 9:23 • @kitcar2000 I also think that time is important. Why did you add another proposal? – Martin Thoma Apr 16 '14 at 9:33 • @moose Sorry, I accidentally undeleted it. – kitcar2000 Apr 19 '14 at 14:09 # A Brief Mystery of Time Given a cron schedule, when will the job next run? The schedule is supplied as the usual 5-part schedule (to be fleshed out with the full spec). Support for JAN-DEC, SUN-SAT is not required - just numeric schedules - however support for ',', '/' and '*' is required. You are not allowed to use the network, or external libraries/programs that implement scheduling - e.g. using cron itself to schedule a job to return the answer. Your answer should return the result before the time in question. E.g. the schedule 3-59/15 * * * 0,7
should return 3 minutes past midnight next Sunday. Output should be expressed as a human-readable date (not just seconds since the epoch, or fractions of a Julian day).

Notes: I had a look, we don't seem to have had cron as a puzzle before. There's going to be some choice of implementations, I think - certainly between the Kernighan cron way of iterating over every minute, or nested loops. A valid answer would be to convert the cron spec to a regexp, iterate over the next few years' worth of minutes and match the output of date, for example. The code in Quartz shows that you can be amazingly verbose writing this algorithm, but it's not that hard. As specified, cron will fire at least once every 40 years if the days of the month are valid (28 years if there's no intervening century year). Unsure whether to say that the input will always produce an event, since validation is easy.

Another variant might be to ensure the solution works for the entire 40-year cycle, by saying the start date/time is input (in some format) and then providing example output. This would save me having to debug the entries, because I could pose the edge cases as tests.

My first try at posing a question.

• In order to make this properly testable, it would probably be sensible to make it take the "current" datetime as an input rather than reading it from the clock. E.g. your example won't return 3 minutes past midnight next Sunday if run on a Sunday before 23:03. – Peter Taylor Apr 1 '14 at 14:58
• Yep @PeterTaylor, that's the variant question at the bottom. I agree - it's not only easier to test, but easier to judge the answers because I can tell you the cases I want answered. – bazzargh Apr 1 '14 at 16:33

# Find words in word square solver

On social media I often see images with letters, and in them are some positive words for people to find. I challenge you to write a program that finds all words in the puzzle that match an input dictionary. An example of such a puzzle is this one:

An ASCII representation I made of this:

XCUALOVEYKBWSNG
DUAWKCBEAUTYRJV
YOUTHFSMGNEZLPR
MHJREYWDKZLUSTJ
FSUCCESSDHEALTH
ENMQXPTIMELMSAQ
VEXPERIENCEGHBW
GHUMOURLOYMONEY
SYZPOPULARITYNA
AMKCFUNBXHUZYIX
CWIHYSHAPPINESS
HONESTYCFRIENDS
KPYJAETWPOWERQC
BTYACFREEDOMJMO
RIWINTELLIGENCE

Now I imagine we can find words horizontally, vertically and diagonally, and all of the mentioned in reverse. The program must be able to take a square and a dictionary like this one and print all the matching words. As a test case I give a custom dictionary:

bar
bid
dir
dog
fed
foo
god
man
mod
set
sun

And a test square:

OGFIR
DOMAN
ODBID
OPGES
OGFIR

Your code should be able to print all but the last two words in the dictionary. For diversity you should specify how the cube and the dictionary are to be entered. This is code-golf, so shortest code wins.

• What should be output? Only the matched words? Their positions? And directions? – John Dvorak Apr 3 '14 at 15:47
• @JanDvorak Just print the words found. Do you think coordinates and direction can be given a bonus? – Sylwester Apr 3 '14 at 15:51
• Cube? I'm only seeing two dimensions. On a more general note, perhaps for questions of this sort it would be OK to assume the availability of a standard dictionary file like /usr/share/dict, and discount the characters used to access this file? What do people think? – squeamish ossifrage Apr 3 '14 at 15:55
• @squeamishossifrage OMG You're right. I meant square of course :-) I think people can choose. E.g.
the question is open for diversity, like cat square.txt dic.txt | solver now, but I'm open to change that does not discriminate. – Sylwester Apr 3 '14 at 16:03
• How does the program know where the wordsearch ends and the dictionary starts? – Peter Taylor Apr 3 '14 at 21:39
• @PeterTaylor By mistake I made the test a rectangle, but I fixed that. The length of the first line would be the number of lines in the square. Anyway, how the input is done I thought should be up to the solver, so that they can choose to open files, read stdin or maybe more disturbing ways to get input in... – Sylwester Apr 3 '14 at 21:47
• Hello! This looks like a good but abandoned meta post, would you be willing to offer it for adoption? (If you want to, you can still post to main.) Due to community guidelines, if you don't respond to this comment in 7 days I have permission to adopt this. – programmer5000 Jun 9 '17 at 16:30
• @programmer5000 It only got two upvotes so I let it be. Feel free to post it if you'd like. – Sylwester Jun 12 '17 at 15:27
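For what it's worth, an ungolfed sketch of a solver in the cat square.txt dic.txt | solver spirit discussed above (the file arguments are illustrative; it searches all eight directions, forward and reverse included, and prints the matches):

```python
# Sketch: find dictionary words in a letter square, in all 8 directions.
import sys

def solve(square, words):
    rows, cols = len(square), len(square[0])
    directions = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    found = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                for w in words:
                    cells = [(r + i * dr, c + i * dc) for i in range(len(w))]
                    inside = all(0 <= rr < rows and 0 <= cc < cols for rr, cc in cells)
                    if inside and all(square[rr][cc] == ch
                                      for (rr, cc), ch in zip(cells, w.upper())):
                        found.add(w)
    return found

if __name__ == "__main__":
    square = [line.strip() for line in open(sys.argv[1]) if line.strip()]
    words = [line.strip() for line in open(sys.argv[2]) if line.strip()]
    print("\n".join(sorted(solve(square, words))))
```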
2019-10-20 12:01:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3508455455303192, "perplexity": 2007.3031381761425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00258.warc.gz"}
https://memote.readthedocs.io/en/latest/autoapi/test_matrix/index.html
# test_matrix

Tests for matrix properties performed on an instance of cobra.Model.

## Module Contents

test_matrix.test_absolute_extreme_coefficient_ratio(model, threshold=1000000000.0)

Show the ratio of the absolute largest and smallest non-zero coefficients.

This test will return the absolute largest and smallest, non-zero coefficients of the stoichiometric matrix. A large ratio of these values may point to potential numerical issues when trying to solve different mathematical optimization problems such as flux-balance analysis. To pass this test the ratio should not exceed 10^9. This threshold has been selected based on experience, and is likely to be adapted when more data on solver performance becomes available.

Implementation: Compose the stoichiometric matrix, then calculate the absolute coefficients and lastly use the maximal value and minimal non-zero value to calculate the ratio.

test_matrix.test_number_independent_conservation_relations(model)

Show the number of independent conservation relations in the model.

This test will return the number of conservation relations, i.e., conservation pools through the left null space of the stoichiometric matrix. This test is not scored, as the dimension of the left null space is system-specific.

Implementation: Calculate the left null space, i.e., the null space of the transposed stoichiometric matrix, using an algorithm based on the singular value decomposition adapted from https://scipy.github.io/old-wiki/pages/Cookbook/RankNullspace.html Then, return the estimated dimension of that null space.

test_matrix.test_matrix_rank(model)

Show the rank of the stoichiometric matrix.

The rank of the stoichiometric matrix is system-specific. It is calculated using singular value decomposition (SVD).

Implementation: Compose the stoichiometric matrix, then estimate the rank, i.e. the dimension of the column space, of the matrix. The algorithm used by this function is based on the singular value decomposition of the matrix.

test_matrix.test_degrees_of_freedom(model)

Show the degrees of freedom of the stoichiometric matrix.

The degrees of freedom of the stoichiometric matrix, i.e., the number of 'free variables', is system-specific and corresponds to the dimension of the (right) null space of the matrix.

Implementation: Compose the stoichiometric matrix, then calculate the dimensionality of the null space using the rank-nullity theorem outlined by Alama, J. The Rank+Nullity Theorem. Formalized Mathematics 15, (2007).
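All four quantities above follow from the stoichiometric matrix and the rank-nullity theorem, so they can be reproduced outside the test suite. A minimal sketch - assuming cobrapy's bundled "textbook" example model and its create_stoichiometric_matrix helper, and not claiming to be memote's actual implementation:

```python
import numpy as np
from cobra.io import load_model
from cobra.util.array import create_stoichiometric_matrix  # cobrapy helper (assumed available)

model = load_model("textbook")            # small E. coli example shipped with cobrapy
S = create_stoichiometric_matrix(model)   # dense array: metabolites x reactions

# Ratio of the absolute largest to smallest non-zero coefficient (pass: <= 1e9).
coeffs = np.abs(S[S != 0])
print("extreme coefficient ratio:", coeffs.max() / coeffs.min())

# Rank via SVD, then rank-nullity for the two null-space dimensions.
rank = np.linalg.matrix_rank(S)
print("rank:", rank)
print("degrees of freedom (right null space):", S.shape[1] - rank)
print("independent conservation relations (left null space):", S.shape[0] - rank)
```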
2019-08-20 05:33:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7058418393135071, "perplexity": 615.3524356228053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.56/warc/CC-MAIN-20190820045314-20190820071314-00060.warc.gz"}
http://nrich.maths.org/public/leg.php?code=-68&cl=2&cldcmpid=5015
# Search by Topic

#### Resources tagged with Visualising similar to Street Lamps:

### Buses
##### Stage: 3 Challenge Level:
A bus route has a total duration of 40 minutes. Every 10 minutes, two buses set out, one from each end. How many buses will one bus meet on its way from one end to the other end?

### Crossing the Atlantic
##### Stage: 3 Challenge Level:
Every day at noon a boat leaves Le Havre for New York while another boat leaves New York for Le Havre. The ocean crossing takes seven days. How many boats will each boat cross during its journey?

### Counter Roundup
##### Stage: 2 Challenge Level:
A game for 1 or 2 people. Use the interactive version, or play with friends. Try to round up as many counters as possible.

### World of Tan 2 - Little Ming
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of Little Ming?

### Clocking Off
##### Stage: 2, 3 and 4 Challenge Level:
I found these clocks in the Arts Centre at the University of Warwick intriguing - do they really need four clocks and what times would be ambiguous with only two or three of them?

### Diagonal Dodge
##### Stage: 2 and 3 Challenge Level:
A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.

### Prime Magic
##### Stage: 2, 3 and 4 Challenge Level:
Place the numbers 1, 2, 3, ..., 9 one on each square of a 3 by 3 grid so that all the rows and columns add up to a prime number. How many different solutions can you find?

### Instant Insanity
##### Stage: 3, 4 and 5 Challenge Level:
Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear.

### World of Tan 16 - Time Flies
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the candle and sundial?

### Endless Noughts and Crosses
##### Stage: 2 Challenge Level:
An extension of noughts and crosses in which the grid is enlarged and the length of the winning line can be altered to 3, 4 or 5.

### Rolling Around
##### Stage: 3 Challenge Level:
A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?

### John's Train Is on Time
##### Stage: 3 Challenge Level:
A train leaves on time. After it has gone 8 miles (at 33mph) the driver looks at his watch and sees that the hour hand is exactly over the minute hand. When did the train leave the station?

### Clocked
##### Stage: 3 Challenge Level:
Is it possible to rearrange the numbers 1, 2, ..., 12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours?

### Picturing Triangle Numbers
##### Stage: 3 Challenge Level:
Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?

### Tetrahedra Tester
##### Stage: 3 Challenge Level:
An irregular tetrahedron is composed of four different triangles. Can such a tetrahedron be constructed where the side lengths are 4, 5, 6, 7, 8 and 9 units of length?

### Konigsberg Plus
##### Stage: 3 Challenge Level:
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.

### Right or Left?
##### Stage: 2 Challenge Level:
Which of these dice are right-handed and which are left-handed?

### Twice as Big?
##### Stage: 2 Challenge Level:
Investigate how the four L-shapes fit together to make an enlarged L-shape. You could explore this idea with other shapes too.

### Penta Play
##### Stage: 2 Challenge Level:
A shape and space game for 2, 3 or 4 players. Be the last person to be able to place a pentomino piece on the playing board. Play with card, or on the computer.

### World of Tan 18 - Soup
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of Mai Ling and Chi Wing?

### World of Tan 17 - Weather
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the watering can and man in a boat?

### World of Tan 19 - Working Men
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this shape. How would you describe it?

### You Owe Me Five Farthings, Say the Bells of St Martin's
##### Stage: 3 Challenge Level:
Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?

### World of Tan 29 - the Telephone
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this telephone?

### World of Tan 28 - Concentrating on Coordinates
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of Little Ming playing the board game?

### Ding Dong Bell
##### Stage: 3, 4 and 5
The reader is invited to investigate changes (or permutations) in the ringing of church bells, illustrated by braid diagrams showing the order in which the bells are rung.

### Makeover
##### Stage: 1 and 2 Challenge Level:
Exchange the positions of the two sets of counters in the least possible number of moves.

### Right Time
##### Stage: 3 Challenge Level:
At the time of writing the hour and minute hands of my clock are at right angles. How long will it be before they are at right angles again?

### Nine-pin Triangles
##### Stage: 2 Challenge Level:
How many different triangles can you make on a circular pegboard that has nine pegs?

### Isosceles Triangles
##### Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?

### World of Tan 27 - Sharing
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of Little Fung at the table?

### World of Tan 26 - Old Chestnut
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of this brazier for roasting chestnuts?

### World of Tan 20 - Fractions
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the chairs?

### Put Yourself in a Box
##### Stage: 2 Challenge Level:
A game for 2 players. Given a board of dots in a grid pattern, players take turns drawing a line by connecting 2 adjacent dots. Your goal is to complete more squares than your opponent.

### Flight of the Flibbins
##### Stage: 3 Challenge Level:
Blue Flibbins are so jealous of their red partners that they will not leave them on their own with any other blue Flibbin. What is the quickest way of getting the five pairs of Flibbins safely to. . . .

### World of Tan 21 - Almost There Now
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of the lobster, yacht and cyclist?
### World of Tan 22 - an Appealing Stroll
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of the child walking home from school?

### World of Tan 25 - Pentominoes
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of these people?

### World of Tan 24 - Clocks
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outlines of these clocks?

##### Stage: 2 Challenge Level:
How can the same pieces of the tangram make this bowl before and after it was chipped? Use the interactivity to try and work out what is going on!

### Khun Phaen Escapes to Freedom
##### Stage: 3 Challenge Level:
Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.

### There and Back Again
##### Stage: 3 Challenge Level:
Bilbo goes on an adventure, before arriving back home. Using the information given about his journey, can you work out where Bilbo lives?

### Making Maths: Rolypoly
##### Stage: 1 and 2 Challenge Level:
Paint a stripe on a cardboard roll. Can you predict what will happen when it is rolled across a sheet of paper?

### Baravelle
##### Stage: 2, 3 and 4 Challenge Level:
What can you see? What do you notice? What questions can you ask?

### World of Tan 1 - Granma T
##### Stage: 2 Challenge Level:
Can you fit the tangram pieces into the outline of Granma T?

### Coin Cogs
##### Stage: 2 Challenge Level:
Can you work out what is wrong with the cogs on a UK 2 pound coin?

### Sprouts
##### Stage: 2, 3, 4 and 5 Challenge Level:
A game for 2 people. Take turns joining two dots, until your opponent is unable to move.

### LOGO Challenge - Triangles-squares-stars
##### Stage: 3 and 4 Challenge Level:
Can you recreate these designs? What are the basic units? What movement is required between each unit? Some elegant use of procedures will help - variables not essential.

### Triangles in the Middle
##### Stage: 3, 4 and 5 Challenge Level:
This task depends on groups working collaboratively, discussing and reasoning to agree a final product.

### Introducing NRICH TWILGO
##### Stage: 1, 2, 3, 4 and 5 Challenge Level:
We're excited about this new program for drawing beautiful mathematical designs. Can you work out how we made our first few pictures and, even better, share your most elegant solutions with us?
2016-06-28 17:09:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17091435194015503, "perplexity": 3564.51202946062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396949.33/warc/CC-MAIN-20160624154956-00173-ip-10-164-35-72.ec2.internal.warc.gz"}
https://indico.ihep.ac.cn/event/7197/
PKU HEP Seminar and Workshop (北京大学高能物理组)

# Charmed hadron decays at BESIII

## by Prof. Xiao-Rui Lyu (University of CAS)

Thursday, September 21, 2017 (Asia/Shanghai) at CHEP (B105), School of Physics, PKU

Description

The BESIII Experiment at the Beijing Electron Positron Collider (BEPCII) has accumulated the world's largest samples of $e^+e^-$ collisions in the tau-charm region. Based on the samples taken at the $\psi(3770)$ peak, around the $\psi(4040)$ nominal mass, and at the $\Lambda^+_c\Lambda^-_c$ mass threshold of 4.6 GeV, we can study charmed hadron decays with a uniquely clean background. In this talk, we will review the recent results on $D$, $D_s$ and $\Lambda_c$ decays, such as the analyses of the purely leptonic and semi-leptonic decays of the $D$ meson, the measurements of the strong phase and $D^0-\bar{D}{}^0$ mixing parameters using quantum coherence at threshold, $D$ Dalitz analyses, the determination of the absolute branching fractions of $\Lambda_c$ hadronic decays, and the first model-independent study of the semi-leptonic $\Lambda_c$ decay rate $\mathcal{B}(\Lambda_c^+ \to \Lambda e^+ \nu)$. In the final remarks, the future prospects will be discussed.

About the speaker: Dr. Xiao-Rui Lyu completed his undergraduate and master's studies at Peking University before moving to the Tokyo Institute of Technology, Japan, where he obtained his doctorate in fundamental physics in 2008. Afterward, he joined the experimental high energy group at the University of Chinese Academy of Sciences. His research focuses on understanding the origin of the universe through experimental probes of the fundamental structure and interactions of particles. He works on the BESIII project at the BEPCII facility in China, and on the LHCb project at CERN in Switzerland. He is now physics coordinator of the BESIII collaboration. He also serves on many high-profile academic groups and committees, such as the working group of HFLAV (previously HFAG) and a series of CKM conferences.

Participants: Wei Ankang; Qing-Hong Cao; Junjie Cao; ning chen; Xu Feng; chang gong; Qiang Li; Fengyun Li; Yandong Liu; Yunfei Long; Xudong Lv; Xiao-Rui Lyu; Bo-Qiang Ma; Qiaorong Shen; Yuxuan Wang; Mengzhen WANG; Dayong Wang; Teng Xiang; liye xiao; Liu Xiaonan; kepan xie; Lvcheng Xie; Ling-Xiao Xu; yongqi xu; Jiao Zhang; Jue Zhang; Rui Zhang; Ce Zhang; Junjie Zhao; Yi Zheng; S L Zhu; 律 吕; 世平 和; 国晋 曾; 李林 杨; 雪丽 缪; 浩然 蒋
2020-02-19 05:16:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49971675872802734, "perplexity": 12623.396586973864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00542.warc.gz"}
https://www.sparrho.com/item/on-the-local-extension-of-killing-vector-fields-in-electrovacuum-spacetimes/11287e1/
On the local extension of Killing vector fields in electrovacuum spacetimes

Research paper by Elena Giorgi

Indexed on: 21 Mar '17. Published on: 21 Mar '17. Published in: arXiv - General Relativity and Quantum Cosmology

Abstract

We revisit the problem of the extension of a Killing vector field in a spacetime which is a solution to the Einstein equation with electromagnetic stress/energy tensor. This extension has been proven by Yu to be unique in the case of a Killing vector field which is normal to a bifurcate horizon. Here we generalize the extension of the vector field to a strong null convex domain in an electrovacuum spacetime, inspired by the same technique used by Ionescu and Klainerman in the setting of Ricci flat manifolds. Using their construction, we also prove a result concerning non-extendibility: we show that one can find a local, stationary electrovacuum extension of a Kerr-Newman solution in a full neighborhood of a point of the horizon which admits no extension of the Hawking vector field.
2021-06-25 01:29:17
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8044881224632263, "perplexity": 483.7639023781902}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00000.warc.gz"}
https://math.stackexchange.com/questions/2968829/lagrangian-relaxation
# Lagrangian relaxation

I have an assignment where I need to solve a small problem using Lagrangian relaxation. $$\min 3x_1-x_2$$ $$x_1-x_2 \ge -1$$ $$-x_1+2x_2\le 5$$ $$3x_1+2x_2 \ge 3$$ $$6x_1+x_2 \le 15$$ $$x_1,x_2 \ge 0$$ $$x_1,x_2 \in \mathbb{Z}$$ I have formulated the Lagrangian relaxation as: $$3x_1-x_2+v_1(-1-x_1+x_2)+v_2(5+x_1-2x_2)+v_3(3-3x_1-x_2)+v_4(15-6x_1-x_2)$$ Initially I have set all $v$ to zero and want to obtain a lower bound. BUT! The problem is unbounded? An initial solution for the lower bound would be infinity for variable $x_2$, since that is the value that minimizes the objective function if $v = 0$. Setting $x_2$ to infinity seems like a bad idea, and I wonder how I should continue.

Now in red the integer lattice, in black the minimum of $3x_1-x_2$, which is equal to $1$.
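A worked step that clarifies the unboundedness (the box bounds below are implied by the original constraints, not part of the post itself): with all multipliers at zero, the relaxed subproblem is

$$g(0)=\min_{x_1,x_2\ge 0,\; x\in\mathbb{Z}^2}\left(3x_1-x_2\right)=-\infty,$$

since $x_2$ has a negative cost and no upper bound was kept in the subproblem, so the dual value at $v=0$ is $-\infty$ rather than a useful lower bound. A standard remedy is to keep valid box bounds together with the nonnegativity constraints: $6x_1+x_2\le 15$ and $x\ge 0$ give $x_1\le 2$ over the integers, and then $-x_1+2x_2\le 5$ gives $x_2\le 3$. With $0\le x_1\le 2$ and $0\le x_2\le 3$ kept inside the subproblem, $g(v)$ is finite for every multiplier vector and the multiplier updates can proceed.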
2019-07-22 07:45:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8963073492050171, "perplexity": 188.09398511667442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527828.69/warc/CC-MAIN-20190722072309-20190722094309-00324.warc.gz"}
http://astronomy.stackexchange.com/questions/418/what-practical-considerations-are-there-for-amateur-observations-of-transiting-e
# What practical considerations are there for amateur observations of transiting exoplanets?

Obviously, I am not referring to actual viewing of the exoplanets themselves, but detecting their effects on the brightness of the light emitted from the parent star (as in the diagram below from the Institute of Astronomy, University of Hawaii). I would imagine that a good quality telescope would be able to detect and record the effects of the exoplanet transiting the parent star.

What practical considerations in terms of equipment, software etc. would be required for 'backyard' amateur observations of the effects of transiting exoplanets? If anyone has actually tried this, what has been your experience?

This is actually quite straightforward with digital CCDs (it used to be quite tricky with film cameras, as you'd have to carefully develop film that moved past the lens and assess the width of the trail).

Get yourself a good telescope - a 12" Dobsonian or above if you want to give yourself a good chance of picking out the fluctuations against the noise background. Then select a decent CCD. Five hundred pounds gets a reasonable one, but expect to pay a couple of thousand pounds for a cooled CCD, which will also help to reduce noise. (Buying in US dollars? A reasonable CCD runs about $1000. A cooled CCD will cost you at least $1500.)

You'll want a good quality equatorial mount, with computer controlled servos to track the target smoothly over long periods of time. Ideally you will also slave a second telescope and CCD, pointed along the same path but slightly off - this will help you cancel out cloud and other fluctuations from our own atmosphere. Oh, and get as far away from a city as possible - up into the mountains can be a good plan :-)

Then arrange your viewing for a series of full nights. The more data points you can get, the better the noise reduction. Imagine the exoplanet is orbiting every 100 days: in order to get any useful data, you will need to track it over some multiple of 100 days. So assume you set up to track your target star for 2 years, and plan for 3 or 4 star shots per night to give you a range of data points. These 600+ days of 4 data points per night give you a reasonable stack of data - the challenge now is to work out whether there are any cyclic variations. Various data analysis tools can do this for you. As a first step, if you find a cycle around 365 days, it probably isn't the target, so try and normalise for this (of course this will make it very difficult to discover exoplanets with a period of exactly 1 year).
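To make that last step concrete, here is a minimal sketch of such a period search (assuming astropy is installed; the two data files of observation times and relative fluxes are hypothetical placeholders for your own measurements):

```python
# Sketch: search nightly brightness measurements for periodic dips.
import numpy as np
from astropy.timeseries import LombScargle

t = np.loadtxt("times_days.txt")   # hypothetical file: observation times in days
flux = np.loadtxt("flux.txt")      # hypothetical file: relative fluxes

# Lomb-Scargle handles the unevenly sampled data you get from real nights.
frequency, power = LombScargle(t, flux).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"Strongest periodicity: {best_period:.2f} days")
# A peak near 365 days is probably seasonal/systematic, as noted above.
```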
2014-04-24 02:36:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18822063505649567, "perplexity": 838.2804526515982}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.degruyter.com/view/j/nanoph.2019.8.issue-10/nanoph-2019-0203/nanoph-2019-0203.xml
# Nanophotonics

Open Access | Online ISSN 2192-8614

# Progresses in the practical metasurface for holography and lens

Jangwoon Sung, Gun-Yeal Lee and Byoungho Lee
School of Electrical and Computer Engineering and Inter-University Semiconductor Research Center, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul 08826, Republic of Korea

Published Online: 2019-08-30 | DOI: https://doi.org/10.1515/nanoph-2019-0203

## Abstract

Metasurfaces have received enormous attention thanks to their unique ability to modulate the electromagnetic properties of light in various frequency regimes. Recently, exploiting their ease of fabrication and strength of modulation, unprecedented and unique ways of controlling light that surpass conventional optical devices have been suggested and extensively studied. Here, in this paper, we discuss some parts of this trend, including holography, imaging applications, dispersion control, and multiplexing, mostly in the optical frequency regime. Finally, we offer an outlook on the future of these devices in view of their recent applications.

Keywords: flat optics; metamaterial; metasurface

## 1 Introduction

The prefix meta- implies transcending what it is attached to, so one can think of an optical metasurface as a surface that surpasses a normal one. Metasurfaces are different from normal surfaces because light reacts anomalously, in an unexpected manner, at their boundary [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Practically, they are composed of artificial subwavelength structures that exploit light-matter interactions for specific purposes. As with metamaterials, the building blocks of metasurfaces are dubbed meta-atoms, and meta-atoms are engineered to have anomalous electric permittivity and magnetic permeability that are not found in natural materials. In addition, metasurfaces attract enormous attention because their two-dimensional shape makes them easier to fabricate than bulk metamaterials [9], [14]. As one might expect, the scale of meta-atoms should be smaller than the operating wavelength, so early research on metamaterials and metasurfaces was performed in the microwave range, where wavelengths are on the centimeter scale [11], [15], [16]. Recent nanofabrication technology makes it possible to manufacture building blocks of tens of nanometers on a wafer, and this enables metasurfaces in the optical regime by scaling down the unit cell of the metasurface, well known as the meta-atom. In this regard, this review concentrates on recent metasurface research that operates from the near infrared to the visible range.

Metasurfaces, regarded as future optical components over the recent decade, have been widely studied thanks to their capability of manipulating light within a subwavelength thickness. Each meta-atom is engineered to produce the desired electromagnetic response independently, in a spatially varying manner.
Notably, light manipulation through meta-atoms has surpassed conventional optical components in both compactness and performance [8], [17]. Mostly, this capability has been utilized for abrupt phase discontinuities, generation of desired surface waves, sweeping polarization states together with intensity, and creation of desired dispersive properties [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28]. Metasurfaces were first widely exploited for wavefront control, substituting optical components with high-performance, ultrathin counterparts for hologram generation, lensing, and beam routing [6], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38]. These capabilities have since developed beyond conventional devices, covering polarization multiplexing, on-chip plasmonic polarimetry, multiwavelength functionality, and dispersion engineering [19], [22], [39], [40], [41], [42], [43], [44].

In this review, we will briefly discuss the principle of optical metasurfaces and review metasurface holography, which benefits from the elimination of unwanted diffraction orders and a wide viewing angle for three-dimensional (3D) holograms. We will review work from the initial stages of hologram generation to the latest developments. Next, we will review a practical metasurface application, the metalens, which is a lens implemented with a metasurface. This covers a wide range of research, from early studies of focusing light with metasurfaces to recent applications of metalenses. Along the way, we will also briefly discuss how metasurfaces enable unprecedented light control and are applied in real optical devices.

## 2 Metasurface holography

Holography, in its broadest sense, refers to the optical technique of controlling the wavefront of light as desired through spatially varying changes of phase and amplitude [45], [46]. Previously, reproducing a hologram through a spatial light modulator required a pixel pitch on the micrometer scale, with corresponding sampling problems that limit image quality [47], [48]. However, in the case of a metahologram, which denotes holography produced by a metasurface, the problems caused by a large pixel pitch vanish thanks to the subwavelength period. This is because each meta-atom contains information on the phase or amplitude of light [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20]. In this section, we discuss how light reacts with meta-atoms and review some representative metahologram studies.

## 2.1 Early metasurface holography

Early metahologram studies exploited the Pancharatnam-Berry (PB) phase control method, also called the geometric phase [29], [31], [32], [39], [49], [50], [51]. The PB phase is based on the principle that the phase retardation of the scattered cross-polarized component is determined by the in-plane orientation angle of the meta-atom. As shown in Figure 1A, the incident light should be circularly polarized, and the PB phase applies to the cross-polarized components of both reflection and transmission. For the transmitted light, it can be simply expressed by the Jones matrix as follows:

Figure 1: Phase modulation methods with meta-atoms and metahologram examples. (A) Schematic diagram showing the mechanism of the PB phase method (bottom) and the graph of the resultant phase value of the scattered cross-polarized component.
(B) Transmissive-type three-dimensional optical holography by plasmonic metasurface [29]. (C) Reflective-type hologram reaching 80% efficiency with plasmonic metasurface [49]. (D) Holographic multiplexing via plasmonic metasurfaces [50]. (E) Reflective-type helicity multiplexed meta-hologram [39].

$$T=\Lambda R(-\theta)\begin{bmatrix} t_{xx} & 0 \\ 0 & t_{yy} \end{bmatrix}R(\theta)\,\Lambda^{-1}=\frac{1}{2}\begin{bmatrix} t_{xx}+t_{yy} & \left(t_{xx}-t_{yy}\right)e^{i2\theta} \\ \left(t_{xx}-t_{yy}\right)e^{-i2\theta} & t_{xx}+t_{yy} \end{bmatrix}\quad(1)$$

where $\Lambda$ is the transformation matrix from the linear polarization basis to the circular one and $R(\theta)$ is a rotation matrix. $t_{xx}$ and $t_{yy}$ are the transmission coefficients of a meta-atom: $t_{xx}$ is the x-polarized transmission coefficient when the incident light is x-polarized, and $t_{yy}$ is the y-polarized one when y-polarized light is incident. As seen from Equation (1), unlike the other phase modulation methods to be described, this method has the advantage of not being wavelength selective.

The initial metahologram studies can be seen in Figure 1B and C [29], [49]. Here, the meta-atoms consist of metal scatterers, and metasurfaces composed of metal meta-atoms are called plasmonic metasurfaces. To avoid low efficiency, they are typically designed to operate in the infrared or longer-wavelength regime. The hologram generation methods used in Figure 1B and C differ from each other. In the case of Figure 1B, the holographic image is reproduced in the Fresnel range from the position of the metasurface. That is, it can contain information along the depth of the image, so it is also called a 3D holographic image. Conversely, the holographic image shown in Figure 1C is reproduced in the Fraunhofer region. This can be obtained through the Gerchberg-Saxton algorithm, which is a method of generating the phase information that reproduces a hologram at the Fourier distance. The resultant phase mask is called a computer-generated hologram (CGH). As a result, there are two ways to reproduce a hologram through a metasurface.

In addition, as shown in Equation (1), the sign of the PB phase is flipped when the handedness, or helicity, of the circularly polarized incident light is reversed. Figure 1D and E show two applications of this property to holographic multiplexing [39], [50]. Figure 1D shows a case where the CGH is designed for holographic multiplexing. Four different images are recorded in one CGH with various imaging depths. Note that the images whose imaging depth has negative sign are virtual images, and this can be applied to holographic multiplexing as follows: if the helicity of the light is altered, the real image of the rabbit is replaced by the bear image, which was a virtual image at the same distance from the metasurface before the helicity reversal. Therefore, if the image is measured at z=500 μm, two different holographic images can be obtained by varying the handedness of the polarization. Figure 1E shows an example of holographic multiplexing through the Fourier hologram. When the helicity of the incident light is reversed, the Fourier image is inverted in terms of the image coordinates, as shown in Figure 1E. It is notable that both methods shown in Figure 1D and E implement multiplexing with a single CGH, which means recording more than one CGH is not achievable with these methods.

## 2.2 Metasurface holography with more-than-one information

The study of metasurface holography has been carried out to increase the amount of information on one metasurface.
This has been made possible by using new structures, controlling the properties of the incident light, or using the reflection space as a modulation range together with the transmission space [21], [52], [53], [54], [55], [56], [57], [58], [59]. Phase control via the PB phase is well known to have a broadband characteristic [21], [29], [51]. However, if a resonant characteristic is added to the meta-atom to provide auxiliary functionality or subsidiary information, broadband operation becomes impossible [19], [52], [54]. Consequently, broadband-operating metasurfaces that can function beyond single phase control have also been studied [21], [60], [61], [62].

Just as anisotropy is induced by asymmetric molecular structures in nature, the asymmetric geometry of a meta-atom can also produce anisotropy, as shown in Figure 2A. Accordingly, anisotropic meta-atoms have been developed to apply different phases to orthogonal linear polarizations. Unlike the PB phase, which is a method using structures of the same shape and size, here the shapes of the meta-atoms should differ in order to elicit different responses to two mutually orthogonal polarizations [20], [63]. This was made possible by changing the shape of the plasmonic structures (Figure 2B) [63]. This method is based on the fact that an abrupt phase discontinuity occurs depending on the resonant characteristics of the plasmonic structure. As the shape of the meta-atom can be freely adjusted within the constraint of fixed thickness, the metasurface can have spatially varying anisotropy. This allows polarization multiplexing in terms of phase control.

Figure 2: Meta-hologram multiplexing. (A) Schematic diagram of an anisotropic resonant plasmonic meta-atom. The light scattered by this meta-atom acquires distinct phase values according to the polarization state of the incident light. (B) Polarization-controlled dual holographic images by anisotropic plasmonic metasurface [63]. The left images show the experimental results by polarization, and scanning electron microscope (SEM) images are shown on the right side. Scale bar is 2 μm. (C) Schematic illustration for the explanation of the detoured phase. As mentioned in the text, phase values are determined by the displacement parameter d of each unit cell. (D) Broadband and chiral metaholograms dependent on the handedness of incident light [64]. On the left are the camera-captured images by polarization handedness (right for the upper, left for the lower image). On the right is the SEM image of the fabricated sample. The scale bar is 1 μm. (E) Schematic diagram of a dielectric meta-atom, which imparts a distinct phase in accordance with the polarization state. (F) Dielectric metasurfaces for the control of phase and polarization with high transmissive efficiency [20]. On the left, the resultant simulation and experimental results are shown for two polarizations. On the right, the SEM images and the captured image by optical microscope are shown. (G) Independent phase control of arbitrary orthogonal states of polarization by TiO2 metasurface [19]. The figures show one example in which the basis is circular polarization. The left image shows the captured holographic images. The sign depicted on the corner of the figures indicates the polarization of the incident light. The SEM image of the fabricated sample is shown on the right. (H) Angle-multiplexed metasurface for independent phase profiles under different incident angles [55]. The two images on the right are captured holographic images at different illumination angles.
(I) The X-shaped metasurface for complete amplitude and phase control of light [21]. The right three images are captured by a charge-coupled device camera at varying longitudinal positions.

Another phase modulation method usable for polarization multiplexing is to detour the phase delay through the arrangement of the meta-atoms [64]. Assume that the meta-atoms are arranged with a sufficiently long periodicity to generate a diffraction order, and that the image is captured in the plane perpendicular to the diffracted direction; then the phase difference between two adjacent meta-atoms will be zero. When one of the meta-atoms is moved horizontally, as shown in Figure 2C, a phase difference of $kd\sin\theta$ arises, where $k$ is the wavenumber of the incident light and $\theta$ is the diffraction angle. As a result, it is possible to introduce an additional degree of freedom in phase modulation. Figure 2D shows the polarization multiplexing result when phase detouring is combined with the PB phase [54]. As shown in Figure 2D, when the helicity of the incident light is altered, the reproduced CGH is flipped into the other one.

In the case of holograms through plasmonic metasurfaces, there is an inherent problem in that the hologram conversion efficiency becomes low in a transmission type. This can be solved with a metasurface made of a dielectric with a high refractive index and a low extinction coefficient. The principle is as follows: for a meta-atom of sufficiently large thickness, the effective refractive index varies with the size it occupies in a unit cell, and through this index the phase can be modulated over the full $2\pi$ range [2], [7], [20], [65]. These all-dielectric metasurfaces have merits in transmission efficiency but are less advantageous in thickness than plasmonic metasurfaces. Usually, all-dielectric metasurfaces have a thickness similar to the operating wavelength, while plasmonic metasurfaces have thicknesses much smaller than that.

As shown in Figure 2E, the dielectric meta-atom has an anisotropic shape, which in turn contributes birefringent characteristics. This is exploited for birefringent phase control in Figure 2F, which achieves near 100% diffraction efficiency in two different polarization states at near-infrared wavelengths. Each silicon meta-atom has spatially varying size parameters in both the x- and y-directions, and this enables full phase modulation of both polarization states. With the two degrees of freedom of the ellipse size, it is possible to produce two independent phase profiles through $t_{xx}$ and $t_{yy}$. Until recently, this method has influenced almost all studies of polarization-multiplexing metaholograms, including the study in Figure 2G [20]. As it is possible to realize independent phase control with unity efficiency, $t_{xx}$ and $t_{yy}$ can be freely expressed as pure phase factors $e^{i\phi_x}$ and $e^{i\phi_y}$ by simple tuning of the size parameters. Therefore, in the circular polarization basis, the cross-polarized components can be combined with the PB phase to become dependent on the helicity, which can result in the imparting of two independent phase masks. Capasso's group showed, through the study in Figure 2G, that fine-tuning the size and in-plane rotation angle of the meta-atom makes it possible to create two independent phase profiles on any orthonormal basis of polarization [20]. As this mechanism employs titanium dioxide meta-atoms whose efficiency reaches up to 100% at visible frequencies, no polarizer is needed at the output terminal.
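As a quick aside, the geometric-phase relation in Equation (1) is easy to verify numerically. The sketch below (ours, for illustration only; it assumes an ideal half-wave-plate meta-atom with $t_{xx}=1$, $t_{yy}=-1$) confirms that the cross-polarized circular component acquires exactly the phase $2\theta$, while the co-polarized terms vanish:

```python
# Numeric check of the PB (geometric) phase relation in Eq. (1).
import numpy as np

def circular_basis_jones(theta, txx=1.0, tyy=-1.0):
    """Jones matrix of a rotated anisotropic meta-atom, in the circular basis."""
    R = lambda a: np.array([[np.cos(a), -np.sin(a)],
                            [np.sin(a),  np.cos(a)]])
    J = R(-theta) @ np.diag([txx, tyy]) @ R(theta)    # linear basis
    Lam = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)  # circular -> linear map
    return np.linalg.inv(Lam) @ J @ Lam               # circular basis

for theta in (0.1, 0.4, 1.0):
    T = circular_basis_jones(theta)
    # For a perfect half-wave plate the diagonal (co-polarized) terms are zero
    # and the off-diagonal (cross-polarized) term carries the phase 2*theta.
    print(theta, np.angle(T[1, 0]))  # prints 2*theta
```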
Thanks to the efficiency of dielectric metasurfaces, meta-atoms have also been designed to impart distinct phase retardations depending on the incident angle, that is, the direction of the incident light [55], [56], [66], [67], [68]. In Figure 2H, the shape of the structure is carefully adjusted to impart a different phase according to the incident angle [55]. Moreover, while conventional metaholograms mostly concentrate on phase-only holograms, a hologram reconstructed from phase and amplitude together is desirable. In this regard, Figure 2I shows independent and continuous modulation of amplitude and phase over a broad band through the X-shaped meta-atom [21]. This enables broadband characteristics, by extending the PB phase, and a remarkably low noise-to-signal ratio compared with conventional phase-only holograms, thanks to the capacity of the X-shaped meta-atom to carry amplitude and phase information together.

## 2.3 Full color metahologram

Wavelength-multiplexed or full-color metaholograms have been studied in various ways [40], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75]. When recording the CGH, fixing the image plane while assigning an incident angle to each wavelength enables full-color holography. For example, as shown in Figure 3A, for green light the CGH is generated assuming the letter G is put at the center of the image plane [67], [68], [69]. Next, for red and blue light, the angles of incidence are set in advance, and the CGHs are calculated so that the letters (or images) are generated at the predetermined image plane. In this way, the desired color image is retrieved only when each wavelength arrives at its predetermined incident angle. This scheme has been applied to other color holograms as well, including full-space color holography, which shows independent colored images simultaneously in the transmission and reflection spaces [66].

Figure 3: Multicolor metaholograms. (A) Multicolor 3D metaholography by broadband plasmonic metasurface [69]. The left shows the conceptual illustration for understanding the scheme to achieve a multicolor hologram. On the right is the measured holographic image, which accomplishes a seven-color holographic image. (B) Dielectric metasurfaces for a multiwavelength achromatic hologram at three wavelengths [40]. The SEM image is shown on the top left, and the scale bar is 1 μm. The bottom left shows the unit cell of the proposed structure, and the graph indicates the mechanism of this metasurface: each colored meta-atom contributes to the phase modulation of the corresponding color. The right image is the experimental result. (C) Noninterleaved metasurface for $2^6-1$ spin- and wavelength-encoded holograms [70]. The top left image shows the operational principle of the proposed scheme. On the right, the top and side views of the unit structure are shown as well as the SEM image of the fabricated sample. The bottom six images are the measured results of the achieved wavelength- and spin-encoded holographic images.

The method used in Figure 3B is to generate wavelength-selective properties by adjusting the size of the structures and then to intersperse all the structures within one unit cell [40], [76], [77], [78], [79]. Each meta-atom responds best at a wavelength in the visible region set by its size. This makes it possible to construct three independent CGHs for the three structures; consequently, three CGHs, which carry wavelength information as well, can be imprinted on one metasurface.
The result can be seen in Figure 3B; crosstalk is inevitable because each meta-atom is not completely wavelength selective [40]. Full-color holograms can also be implemented by recording many holographic images of different depths in a single CGH [70]. As shown in Figure 3C, this scheme is realized by the PB phase and utilizes the fact that virtual images, which can be measured by looking into the metasurface, can be changed into real images simply by altering the helicity of the polarization [70]. In addition, this scheme employs the diffractive characteristic of the metasurface whereby the position at which the holographic image is generated moves further away as the wavelength is blue-shifted. As a result, with the image plane fixed and the wavelength and polarization varied, $2^6-1$ images are multiplexed on a non-interleaved metasurface.

## 2.4 Tunable metasurface holography

In the metaholograms discussed so far, the phase mask recorded in the metasurface cannot be changed into other information once it has been fabricated. To address this limitation, some recently published studies have proposed dynamic metaholograms using various methods [60], [80], [81], [82], [83]. When a meta-atom is designed using a material whose permittivity changes under an external bias, the metasurface can contain more than one piece of information without changing the properties of the incident light. In the optical frequency domain, well-known active materials such as vanadium oxide, liquid crystals, indium tin oxide (ITO), and GeSbTe (GST)-based phase-change materials have been actively employed in dynamic nanophotonic devices so far [60], [80], [84], [85], [86], [87], [88]. In particular, GST is a material whose state changes from amorphous to crystalline and vice versa according to external stimuli such as thermal or electrical bias [89], [90].

In the study shown in Figure 4A, GST is used as an ultrathin substrate that reacts resonantly with a plasmonic metasurface, yielding a dynamic metadevice [80]. This is applied to the metahologram in such a way that the holographic image is reproduced only in the amorphous state; in the crystalline state, only meaningless information is retrieved. In Figure 4B, the GST-nanostructured metasurface is designed in a C-shape to impart two CGHs according to the two states [60]. Two sizes of meta-atom are used to realize amplitude extinction in the respective states. This scheme is also used in other studies, including the research in Figure 4C (not in chronological order) [81], [82]. In addition, novel external biases using hydrogen and oxygen have also been presented. As can be seen in the figure, this study, which uses the state change of magnesium under hydrogen and oxygen exposure, is applied to holographic image encryption and a dynamic Janus hologram. A tunable metahologram can also be realized using materials changed by mechanical stimuli. As shown in Figure 4D, one study used PDMS, a stretchable material, as a substrate for the realization of a tunable hologram [91]. This study utilizes the fact that the image plane changes as the periodicity is tuned.

Figure 4: Tunable and dynamic metahologram. (A) Plasmonic metasurface for switchable photonic spin-orbit interactions based on GST [80]. The left image shows the material and schematic diagram of the proposed structure. The right two images are captured holographic images from the fabricated sample, which show operation only in the amorphous state of GST. (B) GST-nanostructured metasurface for wavefront switching [60].
The left image shows the C-shaped meta-atom constituting the unit cell of the proposed structure. The middle image shows the SEM image of the sample. Scale bar is 500 nm. The right images are measured results according to the states of the GST meta-atoms and the selected wavelengths. (C) Addressable metasurfaces for dynamic holograms and optical information encryption [81]. The left image shows the mechanism of the proposed plasmonic metasurface. The right shows how the holographic image from the proposed structure can be dynamically tuned by hydrogen and oxygen. (D) Strain-multiplexed metasurface hologram on a stretchable substrate [91]. The left and middle images show the schematic of the proposed scheme. The two images on the right are captured holographic images obtained by varying the pulling force on the substrate.

## 2.5 Recent metasurface holography technology

Recently, metasurface holography technology has been developed further to achieve unprecedented light control through novel meta-atom designs. Here, we introduce six representative examples, which accomplish improved versatility. The structure in Figure 5A is designed in the same way as the one proposed in Figure 2F [20], [52]. However, in addition to recording two different CGHs using the PB phase and the fact that $t_{xx}$ and $t_{yy}$ can be controlled in Equation (1), one more CGH is recorded through a novel algorithm. On the basis of linear polarization, the additional CGH is recorded in $t_{xy}$ (or $t_{yx}$). On the basis of circular polarization, two CGHs are recorded in the cross-polarized components according to the handedness of the incident light, and the other one is added to the co-polarized component. As a result, three different CGHs are imparted with a single metasurface at optical frequencies.

Figure 5: Recent advances in metasurface-based holography. Each figure is arranged in the order of schematic diagram of the proposed structure, SEM image, and experimental result. (A) Multichannel vectorial holographic image and encryption [52]. Regarding the experimental results, the upper two images are captured under varying incident polarizations with no polarizer at the output terminal. The lower four images are captured with a polarizer: the left indications give the incident polarization, and the right ones the filter used before measurement. (B) Simultaneous circular asymmetric transmission and wavefront shaping by dielectric metasurfaces [53]. The left figure shows the unit cell structure. (C) Bifacial metasurface for independent phase control in the transmission and reflection spaces [54]. The right figure shows the generated holographic image, which is reproduced differently depending on the viewing direction. (D) Coherent pixel design of metasurfaces for multidimensional optical control of multiple printing-image switching and encoding [56]. The right three resultant images are obtained at different wavelengths, polarizations, and incident angles. (E) Diatomic metasurface for vectorial holography [57]. The top middle figure is the desired vectorial holographic image, which possesses intensity and polarization information as well. The lower image is the SEM image of the sample. The right figures are measured images, which show a great match with the simulation results. (F) Plasmonic helical nanoapertures for 3D Janus polarization-encoded images [58]. The right figures are measured images from different illumination directions with distinct polarizations.

Some studies have been made to control light in the transmission and reflection spaces together [42], [53], [54], [66], [92], [93], [94].
Some studies have aimed to control light in the transmission and reflection spaces together [42], [53], [54], [66], [92], [93], [94]. For instance, the structure shown in Figure 5B takes advantage of the fact that incident light is reflected when the two transmitted components are out of phase, in the regime where no higher diffraction order occurs [53]. Both meta-atoms used in this work are designed to act as nearly perfect half-wave plates for high efficiency. Through a combination of the two engineered meta-atoms, asymmetric transmission controlled by handedness is implemented, and phase modulation is achieved as well through the in-plane rotation angle of the whole structure. Furthermore, as shown in Figure 5C, a bifacial metasurface has been designed to impart different CGHs in the transmission and reflection spaces simultaneously [54]. In addition to creating the same PB phase in the two spaces, independent phase control is achieved by spatially varying the meta-atom size, which controls the phase difference between the two spaces. Unlike studies that accomplish full-space control in the gigahertz region, this bifacial metasurface is composed of single-layer silicon meta-atoms and operates in the visible range. As shown in Figure 5D, a coherent meta-pixel has been introduced in which the observed image changes with the angle, wavelength, and polarization of the incident light [56]. This coherent-pixel scheme is significant because the multiplexed images can be verified while freely choosing the area that reproduces each image, as long as the measurement is not performed with a high-NA system. Similarly, schemes have been proposed that exploit the fact that meta-atoms differing in spectral characteristics, phase, and polarization state can be distinguished when observed with the naked eye or an optical microscope [73]. As a reflective plasmonic metasurface, a novel concept of holography based on two orthogonal meta-atoms is proposed in Figure 5E [57]. This method enables vectorial holography, i.e., holograms with spatially varying polarization state and amplitude. In addition, as shown in Figure 5F, a plasmonic metasurface can be obtained by milling a metal plate while controlling the dose of the focused ion beam [58], [62]. The resultant meta-atom breaks mirror symmetry, which is not achievable with fixed-height, planar meta-atoms [95], [96], [97], [98]. This leads to a chiral response to circular polarization, and some studies employ this scheme to realize polarization-dependent image generation. The study shown in Figure 5F achieves a polarization-controlled Janus image by controlling the extinction ratio of the amplitude through the incident light direction and polarization [58].

## 3.1 Metalens: lens made of metasurface

Recently, the commercial market for head-mounted devices such as augmented reality (AR) and virtual reality (VR) headsets has been growing, and as lightness and compactness become increasingly important for cameras and optical sensors in smartphones, the demand for a new optical platform is rising [99], [100], [101]. A Fresnel lens, or echelette-type lens, reduces the thickness of the lens, but in spite of its potential it suffers from problems such as reduced light efficiency caused by shadowing effects [6], [102]. By contrast, using a metasurface as a planar lens achieves ultrathin form factors with no shadowing effect [30], [31], [103], [104], [105], [106].
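For reference, the design target of an aberration-free flat lens at design wavelength $\lambda$ and focal length $f$ is the standard hyperbolic phase profile used, e.g., in ref. [30]; the review does not write it out, so the notation here is ours.

```latex
% Required phase at position (x, y) so that all rays arrive at the focal
% point (0, 0, f) with equal optical path length:
\varphi(x,y) = \frac{2\pi}{\lambda}\left( f-\sqrt{x^{2}+y^{2}+f^{2}} \right).

% In the paraxial limit x^2 + y^2 \ll f^2 this reduces to the parabolic
% profile \varphi \approx -\pi (x^{2}+y^{2})/(\lambda f), which is how the
% text below describes it; a PB-phase implementation simply rotates each
% meta-atom in-plane by \theta(x,y) = \varphi(x,y)/2.
```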
Given this potential, research has been conducted in various directions to exploit the unprecedented light-control ability of the metasurface as a lens. Metalenses are usually made of dielectric materials because efficiency is crucial for imaging through the lens [30], [105]. As with holography, the early metalenses operating in the visible band were plasmonic structures and therefore suffered from conversion efficiencies far too low for practical imaging applications, where efficiency is a highly important factor [107], [108], [109], [110], [111]. To mitigate this efficiency issue, plasmonic metalenses have often been designed as reflective types or in a sparsely arranged manner similar to a Fresnel zone plate [112], [113], [114], [115]. As in holographic reproduction, designing a lens with a metasurface requires phase control at each subwavelength pixel: a metalens is created by imparting a focusing phase profile, parabolic in the paraxial approximation (see the profile above) [30]. The phase modulation methods are essentially those described earlier. The metalenses shown in Figure 6A and B were designed through the PB phase [30], [31]. A PB-phase metalens uses structures of identical size and shape throughout, and the phase can be controlled continuously by adjusting the in-plane rotation angle. The lens made with diffraction gratings in Figure 6A is designed to operate in the mid-infrared region, and its diffraction efficiency reaches almost 100% [31]. The meta-atom in Figure 6B, a tall titanium dioxide nanopillar, is designed to operate in the visible range, with a maximum focusing efficiency of 86% at an NA of 0.8 and an operating wavelength of 405 nm [30]. The metalenses shown in Figure 6C and D instead use the phenomenon that the effective index increases with the meta-atom size [65], [116], [117]. These nanoposts are made of amorphous silicon, designed to operate in the near-infrared range, with a focusing efficiency of 82%. Unlike PB phase-based metalenses, such a metalens operates independently of polarization because the phase is set by the size of each meta-atom.

Figure 6: (A–D) Early proposed dielectric metalenses [30], [31], [65], [116]. (A) Dielectric gradient optical elements [31]. High-index dielectric gratings using the PB phase are arranged to achieve the lens phase profile. The inset shows the intensity profile at the focus. (B) Visible-wavelength metalens for diffraction-limited focusing and subwavelength-resolution imaging [30]. The left image is an SEM image of the TiO2 metalens. The two images on the right are measured intensity profiles near the focal point: the upper one is captured in a plane containing the propagation direction, the lower one in a plane perpendicular to it. (C) All-dielectric subwavelength focusing lens [116]. The upper images are SEM images of the fabricated sample; the lower images show schematics and intensity profiles measured at various positions. (D) Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays [65]. The upper left image shows a tilted SEM image of the transmitarray; the images on the right show schematics of the proposed transmitarray; the lower image shows the intensity profile of the proposed structure. (E–G) Metalenses for multiwavelength operation [72], [76], [77].
(E) Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with metamolecules [77]. The two images on the left are top and tilted views of the metasurfaces captured by SEM. The intensity profiles on the right show simulations (top) and experimental results (bottom), separated by wavelength. (F) Gallium nitride metalens for color routing [76]. The left images describe how the building blocks are constructed, together with SEM images. The right figure shows the experimental result, in which the focal spots are separated in the transverse direction. (G) Titanium dioxide metasurface for multiwavelength functions [72]. The upper left image is a tilted view of the fabricated sample; the remaining images show measurement results for the multiwavelength lens.

Mirroring the progress of metaholograms, metalenses have also been developed toward multifunctionality [118], [119], [120], [121], [122], [123]. For instance, a plasmonic metamirror has been proposed that focuses light according to its linear polarization [118]. This scheme utilizes the birefringent resonant characteristics of plasmonic meta-atoms of spatially varying size. Also, by making the phase mask helicity-dependent through PB-phase meta-atoms and spatial multiplexing, several studies have achieved multifocus metalenses [119], [121], [124].

## 3.2 Metalens research progress: from multiwavelength to tunability

While the metalens has the advantage of being extremely thin compared with conventional optical elements, the effective index-based metalens has the disadvantages that its operating wavelength is limited to a single frequency and that coma aberration is severe with respect to the incident angle [65], [116], [117]. Metalenses have been studied in various ways to solve these problems. For example, compensating chromatic aberration with conventional refractive optics requires stacking lenses of various shapes, which results in a bulky system. Early on, this was instead addressed through the proper design of the metalens, as was done for full-color metaholograms [72], [76], [77], [78], [79]. In the metalens of Figure 6E, the two meta-atoms are designed to operate at different wavelengths of 915 and 1550 nm, respectively, and are spatially multiplexed into one unit cell, dubbed a metamolecule [77]. Different phase masks can be designed for the two wavelengths, and the proposed metalenses are made to share the same focal point. The metalens shown in Figure 6F uses a scheme similar to the metamolecule, with three meta-atoms each responding to one of three wavelengths, as in Figure 3B [40], [76]. Here the material is gallium nitride; since its absorption coefficient in the visible band is essentially zero, like that of titanium dioxide or silicon nitride, high-efficiency metalenses can be designed. In this study, color routing is achieved by steering the light of each wavelength into a different transverse direction. The metalens shown in Figure 6G, unlike the previous studies, is not based on spatial multiplexing of meta-atoms [72]. This metasurface utilizes guided-mode resonances, which depend on the meta-atom size, to achieve independent phase modulation at the three different wavelengths.
If aberration is corrected at just the three primary wavelengths of red, green, and blue, such a lens can already be applied in optical devices based on tricolor wavelengths, for example optical elements in VR or RGB-type displays. However, to use the metalens as a practical lens for real life or AR, the phase imparted by each meta-atom must behave nondispersively over a continuous spectral band. This can be achieved through dispersion engineering of the meta-atom design [18], [22], [112], [125], [126], [127]. The method shown in Figure 7A is a reflection-type metalens that adjusts the diffractive characteristics of the metasurface, enabling control not only of the phase but also of its derivative with respect to wavelength in the near-infrared band [22]. This yields a dispersionless focusing mirror over a wide bandwidth of 140 nm. Subsequently, an achromatic metalens with a bandwidth of about 500 nm, built by designing several reflective plasmonic meta-atoms, was proposed in the infrared region [112]. In this case, each meta-atom has a distinct frequency derivative (group delay) that does the main work of correcting the chromatic aberration, while the PB phase imparts the lens phase profile. This scheme was brought to the visible band with dielectric meta-atoms, as shown in Figure 7B and C [125], [126]. In both studies, the group delay and group-delay dispersion were controlled by changing the shape or size of the meta-atom, and the phase was controlled via the PB phase. This provides a basis for eliminating chromatic aberration for circularly polarized light, opening up the possibility of practically applicable metalenses with an appreciable numerical aperture. Recently, a metalens that operates independently of polarization has been designed, as shown in Figure 7D [128]. It basically uses the PB phase, but by combining meta-atoms oriented at 0° and 90°, the handedness dependence vanishes [128]. The dispersion characteristics and phase are then adjusted through the size and shape of each meta-atom, which consists of more than one nanopillar.

Figure 7: Metalenses for correction of intrinsic aberrations (A–D). For each, one image shows the fabricated sample and the other a representative result of the achromatic metalens, i.e., intensity profiles at distinct wavelengths. (A) Dielectric metasurfaces for control of the chromatic dispersion in the near-infrared region [22]. (B) TiO2 and (C) GaN metalenses for achromatic focusing in the visible spectrum using the PB phase with designer meta-atoms [125], [126]. (D) Polarization-insensitive metalens operating at visible frequencies [128]. (E–F) Metalens doublets corrected for monochromatic aberrations [129], [130]. (E) The two figures on the left (schematics) and the corresponding figures on the right illustrate the difference in aberrations, which are severe for the singlet (top) but much improved for the doublet (bottom). (F) The left figures show the mechanism, and the right ones show results measured as a function of illumination angle. (G–H) Representative tunable metalenses [131], [132]. (G) Adaptive metalens that can tune its focal point as well as astigmatism [131]. The upper image shows the basic mechanism of the proposed structure, and the lower graphs show its performance. (H) Metalens integrated with a microelectromechanical system (MEMS) for focal-length control [132].
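To make the dispersion-engineering idea above concrete, here is the standard decomposition of the required phase around the design frequency $\omega_d$, the picture used in refs. [22], [125], [126]; the exact notation is our paraphrase.

```latex
% Taylor expansion of the required phase at radial position r:
\varphi(r,\omega) = \varphi(r,\omega_d)
 + \underbrace{\frac{\partial\varphi}{\partial\omega}\Big|_{\omega_d}}_{\text{group delay}} (\omega-\omega_d)
 + \underbrace{\frac{1}{2}\frac{\partial^{2}\varphi}{\partial\omega^{2}}\Big|_{\omega_d}}_{\text{group-delay dispersion}} (\omega-\omega_d)^{2} + \cdots

% For an achromatic lens the target profile is
\varphi(r,\omega) = -\frac{\omega}{c}\left( \sqrt{r^{2}+f^{2}} - f \right),
% which is linear in \omega: each meta-atom must supply the correct
% relative group delay (in the cited works, controlled by its size and
% shape) on top of the phase at \omega_d (controlled, e.g., by its PB
% rotation angle).
```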
The metalens also exhibits coma aberration, which is unsuitable for imaging and was considered a major obstacle to its real-life application. Figure 7E and F show doublet metalenses as a basis for solving this problem [129], [130], [133]. The two doublet metalenses differ in two ways. First, they operate in different wavelength bands, one in the near-infrared (Figure 7E) and the other in the visible (Figure 7F) [129], [130]. Second, the former is based on size-based phase control, while the latter utilizes the PB phase. With a conventional metalens, the point spread function is greatly distorted even at an incident angle of about 2.5°, indicating serious coma aberration. With the doublet metalens, however, the aberration is greatly reduced even when light is incident at an angle of 30°. As in the metahologram studies, several works have been introduced that control the characteristics of the metalens through an external bias [131], [132], [134], [135], [136], [137]. Figure 7G and H show two representative examples of tunable metalenses [131], [132]. The metalens in Figure 7G is controlled by a total of five electrical voltage biases, through which transparent stretchable electrodes tune the focal point and astigmatism [131]. Notably, the focus can be adjusted not only in the longitudinal direction but also in the transverse direction, along with the astigmatism. In the case of Figure 7H, the basic principle is similar to that of a classic zoom lens [132]: the device is composed of one metalens on a substrate and another moving on a membrane. In theory, the optical tunability of the proposed design can exceed 300 diopters. Considering this performance, the research demonstrates the future potential of the metalens both numerically and experimentally.

## 3.3 Research on the applications of metalenses

The metalens is expected to contribute to the miniaturization of optical instruments, as it can carry much more functionality than conventional lenses or optical components [99], [133], [138], [139], [140], [141], [142], [143], [144], [145], [146], [147], [148], [149], [150]. Figure 8A shows an example that uses a key property of the PB phase: its sign is reversed according to the handedness of the incident light [122], [144]. For instance, if a PB phase-based metalens is made to focus left-handed circularly polarized light, it operates as a diverging lens for right-handed circularly polarized light. Thus, if one metasurface carries phase masks for both left and right circular polarization, with the two focal positions made different from each other, two images can be obtained that focus at different positions and also contain chiral information. In Figure 8A, PB phase-based metalenses are used for chiral imaging, yielding different images when imaging chiral objects [144]. Realizing such chiral imaging with conventional optics would require setting up more than two kinds of polarizers and lenses, which shows the strength of the metalens for compact systems.

Figure 8: Recent applications of metalenses in devices. Each figure is arranged in the order: mechanism, SEM image, and experimental result. (A) Multispectral chiral imaging with a metalens [144]. (B) Ultracompact visible chiral spectrometer with metalenses [145]. (C) Compact folded metasurface spectrometer [146].
(D) Two-photon microscopy with a double-wavelength metasurface objective lens [147]. (E) Achromatic metalens array for full-color light-field imaging [148]. (F) Metasurface eyepiece for AR [99]. (G) Nano-optic endoscope for high-resolution optical coherence tomography in vivo [149]. (H) Planar metasurface retroreflector [150].

Metalenses can contribute to compact optical systems because they offer characteristics not provided by conventional lenses. Figure 8B and C show metalens spectrometers that exploit the dispersive nature of the metasurface [133], [145], [146]. The metalens spectrometer introduced by the Capasso group combines a commercial complementary metal-oxide-semiconductor camera with metalenses whose phase profiles produce off-axis focal spots [145]. In particular, since it is implemented through the PB phase, it can resolve the helicity of the incident light in a single measurement. This metaspectrometer is encouraging above all because it is very compact compared with existing spectrometers. Figure 8C shows a compact, folded metasurface spectrometer recently introduced by the Faraon group [146]. This spectrometer is likewise characterized by an extremely compact design. One of the most important requirements of spectroscopic measurement is sufficient propagation length for the dispersive properties to unfold, which is achieved here by folding the optical path within a compact metasurface device. Such studies further strengthen the case for metasurfaces, because conventional devices are significantly reduced in size; with further development, this compactness and performance could drive the integration of metasurfaces into real applications.

Metalenses are also applied in optical systems by replacing an existing lens. In the case shown in Figure 8D, two-photon microscopy is implemented with a metalens [147]. This birefringent dual-wavelength metalens is designed to operate at a wavelength of 605 nm for x-polarized light and at 820 nm for y-polarized light, with a different focal point at each wavelength [79], [147]. Because of the dispersive nature and off-axis coma aberration of the metalens, its performance in two-photon microscopy was slightly lower than that of conventional lenses. However, as the paper itself discusses, this limitation can be improved with recent metasurface designs; considering those developments, the study expands the possibilities of metalens-integrated systems. As shown in Figure 8E, a microlens array fabricated from the previously proposed gallium nitride achromatic metalens is used for light-field imaging [126], [148]. This study shows that an achromatic metalens can serve as a compact planar optical element in a real-life imaging device; given that existing light-field imaging systems consist of bulky lenses, the overall system size can be greatly reduced with metalenses. In turn, the system presented in Figure 8F uses the metalens as an eyepiece to greatly improve performance while reducing the size of the entire system [99]. In this metalens-based AR system, the study succeeded in greatly enlarging the field of view, one of the persistent problems of conventional AR systems, which is difficult to solve with existing lenses.
Figure 8G shows an example in which the ball lens (or graded-index lens) of an existing endoscope is replaced with a metalens to improve performance [149]. Measurements with the conventional system suffer from aberrations, including astigmatism, which degrade the captured image. In this study, diffraction-limited focusing with the metalens is achieved, and improvements in both depth of focus and transverse resolution are obtained simultaneously, which is also difficult with a conventional ball lens or graded-index lens. These two studies ease design trade-offs thanks to the light-control capability of the metasurface itself, and they demonstrate the compactness gained when metalenses are applied to real devices. Figure 8H shows a planar retroreflector realized as a metasurface doublet [150]. In place of the bulky design of existing retroreflectors, it provides a compact planar retroreflector built from meta-atoms with freely adjustable phase.

## 4 Outlook

In this review, metasurfaces have been discussed with a focus on metaholograms and metalenses. Tables 1 and 2 briefly summarize representative metasurface holography and lens studies by their efficiency, modulation features, and operating wavelength. The functionality has developed to the point of applicability in real-life devices, and some prototypes of metasurface-based optical devices have been presented as well. Beyond the metasurfaces discussed here, numerous optical components have been replaced with metasurfaces that have merits in compactness and performance; representative examples are polarizers, diffusers, beam deflectors, beam splitters, and color filters [33], [34], [151], [152], [153], [154], [155], [156], [157], [158], [159], [160], [161], [162], [163]. With conventional bulk optical elements, it is difficult to produce ultracompact devices, because a device that must house electronic, mechanical, and optical parts as a whole cannot avoid being large. Accordingly, there is a need for a compact, small-volume platform such as the metasurface. In this regard, this review has discussed the potential of metasurfaces as ultrathin elements that can freely control the properties of light, and studies on enhancing the functionality of metasurfaces and on their practical application to devices are continuing. In addition, most recent metasurface studies have been conducted on silicon, gallium nitride, and similar materials that are also used in semiconductor processing, so fabrication does not seem to be a major obstacle to pragmatic application.

Table 1: Summary table on metaholograms.

Table 2: Summary table on metalenses.

However, challenges remain in actually applying metalenses and metasurfaces to practical devices. Typically, for the achromatic metalens in the visible band, considered the most actively pursued case, it is difficult to realize meta-atoms that satisfy both a large diameter and a high numerical aperture. In this regard, a dispersion-engineered metasurface has been used as a dispersion corrector added to a refractive optics system [164]. The result is desirable in performance, but not sufficiently attractive in compactness. The metasurface, often considered universally applicable to optical devices, also has limitations in that it is a passive device and its fabrication cost is not negligible.
As mentioned above, many problems remain to be solved before metasurfaces on stretchable substrates or based on dynamic materials can be applied to real-life devices. Studies have explored metasurface color filters and microlens arrays as alternatives, but significant improvements in performance and process cost are still needed. Nonetheless, large-area metasurface fabrication should become feasible in the near term, thanks to the development of EUV processes and to the mechanical improvement of future stretchable devices. Comparing early metasurface hologram studies with recent meta-atom design methods, the modulation capability has improved beyond comparison in terms of efficiency and controllability. The number of studies applied to actual systems keeps increasing, and most of them achieve a huge reduction in system size. In addition, recent metasurface studies have shown the possibility of overcoming the limits of existing systems. Considering all this, we expect that real-life devices composed largely of metasurfaces are not a remote prospect.

## References

• [1] Chen H-T, Taylor AJ, Yu N. A review of metasurfaces: physics and applications. Rep Prog Phys 2016;79:076401. • [2] Zhang L, Mei S, Huang K, Qiu C-W. Advances in full control of electromagnetic waves with metasurfaces. Adv Opt Mater 2016;4:818–33. • [3] Hsiao H-H, Chu CH, Tsai DP. Fundamentals and applications of metasurfaces. Small Methods 2017;1:1600064. • [4] Kruk S, Kivshar Y. Functional meta-optics and nanophotonics governed by Mie resonances. ACS Photonics 2017;4:2638–49. • [5] Wan W, Gao J, Yang X. Metasurface holograms for holographic imaging. Adv Opt Mater 2017;5:1700541. • [6] Khorasaninejad M, Capasso F. Metalenses: versatile multifunctional photonic components. Science 2017;358:eaam8100. • [7] Genevet P, Capasso F, Aieta F, Khorasaninejad M, Devlin R. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica 2017;4:139–52. • [8] Kamali SM, Arbabi E, Arbabi A, Faraon A. A review of dielectric optical metasurfaces for wavefront control. Nanophotonics 2018;7:1041–68. • [9] Su V-C, Chu CH, Sun G, Tsai DP. Advances in optical metasurfaces: fabrication and applications [Invited]. Opt Express 2018;26:13148–82. • [10] He Q, Sun S, Xiao S, Zhou L. High-efficiency metasurfaces: principles, realizations, and applications. Adv Opt Mater 2018;6:1800415. • [11] Chen M, Kim M, Wong AMH, Eleftheriades GV. Huygens’ metasurfaces from microwaves to optics: a review. Nanophotonics 2018;7:1207–31. • [12] Lee G, Sung J, Lee B. Recent advances in metasurface hologram technologies. ETRI J 2019;41:10–22. • [13] Sun S, He Q, Hao J, Xiao S, Zhou L. Electromagnetic metasurfaces: physics and applications. Adv Opt Photonics 2019;11:380–479. • [14] She A, Zhang S, Shian S, Clarke DR, Capasso F. Large area metalenses: design, characterization, and mass manufacturing. Opt Express 2018;26:1573–83. • [15] Glybovski SB, Tretyakov SA, Belov PA, Kivshar YS, Simovski CR. Metasurfaces: from microwaves to visible. Phys Rep 2016;634:1–72. • [16] Yu N, Capasso F. Flat optics with designer metasurfaces. Nat Mater 2014;13:139–50. • [17] Kim I, Yoon G, Jang J, Genevet P, Nam KT, Rho J. Outfitting next generation displays with optical metasurfaces. ACS Photonics 2018;5:3876–95. • [18] Khorasaninejad M, Shi Z, Zhu AY, et al. Achromatic metalens over 60 nm bandwidth in the visible and metalens with reverse chromatic dispersion.
Nano Lett 2017;17:1819–24. • [19] Balthasar Mueller JP, Rubin NA, Devlin RC, Groever B, Capasso F. Metasurface polarization optics: independent phase control of arbitrary orthogonal states of polarization. Phys Rev Lett 2017;118:113901. • [20] Arbabi A, Horie Y, Bagheri M, Faraon A. Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission. Nat Nanotechnol 2015;10:937–43. • [21] Lee G-Y, Yoon G, Lee S-Y, et al. Complete amplitude and phase control of light using broadband holographic metasurfaces. Nanoscale 2018;10:4237–45. • [22] Arbabi E, Arbabi A, Kamali SM, Horie Y, Faraon A. Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces. Optica 2017;4:625–32. • [23] Mun S-E, Yun H, Choi C, Kim S-J, Lee B. Enhancement and switching of Fano resonance in metamaterial. Adv Opt Mater 2018;6:1800545. • [24] Sun S, He Q, Xiao S, Xu Q, Li X, Zhou L. Gradient-index meta-surfaces as a bridge linking propagating waves and surface waves. Nat Mater 2012;11:426–31. • [25] Ni X, Emani NK, Kildishev AV, Boltasseva A, Shalaev VM. Broadband light bending with plasmonic nanoantennas. Science 2012;335:427. • [26] Sun S, Yang K-Y, Wang C-M, et al. High-efficiency broadband anomalous reflection by gradient meta-surfaces. Nano Lett 2012;12:6223–9. • [27] Sun W, He Q, Sun S, Zhou L. High-efficiency surface plasmon meta-couplers: concept and microwave-regime realizations. Light Sci Appl 2016;5:e16003. • [28] Luo W, Xiao S, He Q, Sun S, Zhou L. Photonic spin Hall effect with nearly 100% efficiency. Adv Opt Mater 2015;3:1102–8. • [29] Huang L, Chen X, Mühlenbernd H, et al. Three-dimensional optical holography using a plasmonic metasurface. Nat Commun 2013;4:2808. • [30] Khorasaninejad M, Chen WT, Devlin RC, Oh J, Zhu AY, Capasso F. Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 2016;352:1190–4. • [31] Lin D, Fan P, Hasman E, Brongersma ML. Dielectric gradient metasurface optical elements. Science 2014;345:298–302. • [32] Ni X, Kildishev AV, Shalaev VM. Metasurface holograms for visible light. Nat Commun 2013;4:2807. • [33] Li Z, Palacios E, Butun S, Aydin K. Visible-frequency metasurfaces for broadband anomalous reflection and high-efficiency spectrum splitting. Nano Lett 2015;15:1615–21. • [34] Khorasaninejad M, Zhu W, Crozier KB. Efficient polarization beam splitter pixels based on a dielectric metasurface. Optica 2015;2:376–82. • [35] Lee S-Y, Lee I-M, Park J, et al. Role of magnetic induction currents in nanoslit excitation of surface plasmon polaritons. Phys Rev Lett 2012;108:213907. • [36] Song E-Y, Lee G-Y, Park H, et al. Compact generation of Airy beams with C-aperture metasurface. Adv Opt Mater 2017;5:1601028. • [37] Lee S-Y, Kim K, Kim S-J, Park H, Kim K-Y, Lee B. Plasmonic meta-slit: shaping and controlling near-field focus. Optica 2015;2:6–13. • [38] Lee G-Y, Lee S-Y, Yun H, et al. Near-field focus steering along arbitrary trajectory via multi-lined distributed nanoslits. Sci Rep 2016;6:33317. • [39] Wen D, Yue F, Li G, et al. Helicity multiplexed broadband metasurface holograms. Nat Commun 2015;6:8241. • [40] Wang B, Dong F, Li Q-T, et al. Visible-frequency dielectric metasurfaces for multiwavelength achromatic and highly dispersive holograms. Nano Lett 2016;16:5235–40. • [41] Liu S, Vabishchevich PP, Vaskin A, et al. An all-dielectric metasurface as a broadband optical frequency mixer. Nat Commun 2018;9:2507. 
• [42] Li Z, Dai Q, Mehmood MQ, et al. Full-space cloud of random points with a scrambling metasurface. Light Sci Appl 2018;7:63. • [43] Jang M, Horie Y, Shibukawa A, et al. Wavefront shaping with disorder-engineered metasurfaces. Nat Photonics 2018;12:84–90. • [44] Maguid E, Yulevich I, Yannai M, Kleiner V, Brongersma ML, Hasman E. Multifunctional interleaved geometric-phase dielectric metasurfaces. Light Sci Appl 2017;6:e17027. • [45] Hariharan P. Optical holography: principles, techniques, and applications, 2nd ed. New York, NY: Cambridge University Press, 1996. • [46] Goodman JW. Introduction to Fourier optics, 2nd ed. New York, NY: McGraw-Hill, 1996. • [47] Günter P. Holography, coherent light amplification and optical phase conjugation with photorefractive materials. Phys Rep 1982;93:199–299. • [48] Blanche P-A, Bablumian A, Voorakaranam R, et al. Holographic three-dimensional telepresence using large-area photorefractive polymer. Nature 2010;468:80–3. • [49] Zheng G, Mühlenbernd H, Kenney M, Li G, Zentgraf T, Zhang S. Metasurface holograms reaching 80% efficiency. Nat Nanotechnol 2015;10:308–12. • [50] Huang L, Mühlenbernd H, Li X, et al. Broadband hybrid holographic multiplexing with geometric metasurfaces. Adv Mater 2015;27:6444–9. • [51] Xiao D, Chang M-C, Niu Q. Berry phase effects on electronic properties. Rev Mod Phys 2010;82:1959–2007. • [52] Zhao R, Sain B, Wei Q, et al. Multichannel vectorial holographic display and encryption. Light Sci Appl 2018;7:95. • [53] Zhang F, Pu M, Li X, et al. All-dielectric metasurfaces for simultaneous giant circular asymmetric transmission and wavefront shaping based on asymmetric photonic spin-orbit interactions. Adv Funct Mater 2017;27:1704295. • [54] Sung J, Lee G, Choi C, Hong J, Lee B. Single-layer bifacial metasurface: full-space visible light control. Adv Opt Mater 2019;7:1801748. • [55] Kamali SM, Arbabi E, Arbabi A, Horie Y, Faraji-Dana M, Faraon A. Angle-multiplexed metasurfaces: encoding independent wavefronts in a single metasurface under different illumination angles. Phys Rev X 2017;7:041056. • [56] Bao Y, Yu Y, Xu H, et al. Coherent pixel design of metasurfaces for multidimensional optical control of multiple printing-image switching and encoding. Adv Funct Mater 2018;28:1805306. • [57] Deng Z-L, Deng J, Zhuang X, et al. Diatomic metasurface for vectorial holography. Nano Lett 2018;18:2885–92. • [58] Chen Y, Yang X, Gao J. 3D Janus plasmonic helical nanoapertures for polarization-encrypted data storage. Light Sci Appl 2019;8:45. • [59] Guo Y, Huang Y, Li X, et al. Polarization-controlled broadband accelerating beams generation by single catenary-shaped metasurface. Adv Opt Mater 2019. doi:10.1002/adom.201900503. • [60] Choi C, Lee S, Mun S, et al. Metasurface with nanostructured Ge2Sb2Te5 as a platform for broadband-operating wavefront switch. Adv Opt Mater 2019;7:1900171. • [61] Kruk S, Hopkins B, Kravchenko II, Miroshnichenko A, Neshev DN, Kivshar YS. Invited Article: Broadband highly efficient dielectric metadevices for polarization control. APL Photonics 2016;1:030801. • [62] Chen Y, Yang X, Gao J. Spin-controlled wavefront shaping with plasmonic chiral geometric metasurfaces. Light Sci Appl 2018;7:84. • [63] Chen WT, Yang K-Y, Wang C-M, et al. High-efficiency broadband meta-hologram with polarization-controlled dual images. Nano Lett 2014;14:225–30. • [64] Khorasaninejad M, Ambrosio A, Kanhaiya P, Capasso F. Broadband and chiral binary dielectric meta-holograms.
Sci Adv 2016;2:e1501258. • [65] Arbabi A, Horie Y, Ball AJ, Bagheri M, Faraon A. Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays. Nat Commun 2015;6:7069. • [66] Zhang X, Pu M, Guo Y, et al. Colorful metahologram with independently controlled images in transmission and reflection spaces. Adv Funct Mater 2019;29:1809145. • [67] Wan W, Gao J, Yang X. Full-color plasmonic metasurface holograms. ACS Nano 2016;10:10671–80. • [68] Qiu M, Jia M, Ma S, Sun S, He Q, Zhou L. Angular dispersions in terahertz metasurfaces: physics and applications. Phys Rev Appl 2018;9:054050. • [69] Li X, Chen L, Li Y, et al. Multicolor 3D meta-holography by broadband plasmonic modulation. Sci Adv 2016;2:e1601102. • [70] Jin L, Dong Z, Mei S, et al. Noninterleaved metasurface for (2 6 −1) spin- and wavelength-encoded holograms. Nano Lett 2018;18:8016–24. • [71] Wu PC, Tsai W-Y, Chen WT, et al. Versatile polarization generation with an aluminum plasmonic metasurface. Nano Lett 2017;17:445–52. • [72] Shi Z, Khorasaninejad M, Huang Y-W, et al. Single-layer metasurface with controllable multiwavelength functions. Nano Lett 2018;18:2420–7. • [73] Zang X, Dong F, Yue F, et al. Polarization encoded color image embedded in a dielectric metasurface. Adv Mater 2018;30:1707499. • [74] Huang Y-W, Chen WT, Tsai W-Y, et al. Aluminum plasmonic multicolor meta-hologram. Nano Lett 2015;15:3122–7. • [75] Dong F, Feng H, Xu L, et al. Information encoding with optical dielectric metasurface via independent multichannels. ACS Photonics 2019;6:230–7. • [76] Chen BH, Wu PC, Su V-C, et al. GaN metalens for pixel-level full-color routing at visible light. Nano Lett 2017;17:6345–52. • [77] Arbabi E, Arbabi A, Kamali SM, Horie Y, Faraon A. Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with meta-molecules. Optica 2016;3:628–33. • [78] Arbabi E, Arbabi A, Kamali SM, Horie Y, Faraon A. Multiwavelength metasurfaces through spatial multiplexing. Sci Rep 2016;6:32803. • [79] Arbabi E, Arbabi A, Kamali SM, Horie Y, Faraon A. High efficiency double-wavelength dielectric metasurface lenses with dichroic birefringent meta-atoms. Opt Express 2016;24:18468–77. • [80] Zhang M, Pu M, Zhang F, et al. Plasmonic metasurfaces for switchable photonic spin-orbit interactions based on phase change materials. Adv Sci 2018;5:1800835. • [81] Li J, Kamin S, Zheng G, Neubrech F, Zhang S, Liu N. Addressable metasurfaces for dynamic holography and optical information encryption. Sci Adv 2018;4:eaar6768. • [82] Yu P, Li J, Zhang S, et al. Dynamic Janus metasurfaces in the visible spectral region. Nano Lett 2018;18:4584–9. • [83] Forouzmand A, Salary MM, Inampudi S, Mosallaei H. A tunable multigate indium-tin-oxide-assisted all-dielectric metasurface. Adv Opt Mater 2018;6:1701275. • [84] Hashemi MRM, Yang S-H, Wang T, Sepúlveda N, Jarrahi M. Electronically-controlled beam-steering through vanadium dioxide metasurfaces. Sci Rep 2016;6:35439. • [85] Huang Y-W, Lee HWH, Sokhoyan R, et al. Gate-tunable conducting oxide metasurfaces. Nano Lett 2016;16:5319–25. • [86] Kafaie Shirmanesh G, Sokhoyan R, Pala RA, Atwater HA. Dual-gated active metasurface at 1550 nm with wide (>300°) phase tunability. Nano Lett 2018;18:2957–63. • [87] Komar A, Paniagua-Domínguez R, Miroshnichenko A, et al. Dynamic beam switching by liquid crystal tunable dielectric metasurfaces. ACS Photonics 2018;5:1742–8. • [88] Zou C, Komar A, Fasold S, et al. 
Electrically tunable transparent displays for visible light based on dielectric metasurfaces. ACS Photonics 2019;6:1533–40. • [89] Park J-W, Eom SH, Lee H, et al. Optical properties of pseudobinary GeTe, Ge2Sb2Te5, GeSb2Te4, GeSb4Te7, and Sb2Te3 from ellipsometry and density functional theory. Phys Rev B 2009;80:115209. • [90] Raoux S. Phase change materials. Annu Rev Mater Res 2009;39:25–48. • [91] Malek SC, Ee H-S, Agarwal R. Strain multiplexed metasurface holograms on a stretchable substrate. Nano Lett 2017;17:3641–5. • [92] Cai T, Tang S, Wang G, et al. High-performance bifunctional metasurfaces in transmission and reflection geometries. Adv Opt Mater 2017;5:1600506. • [93] Cai T, Wang G, Tang S, et al. High-efficiency and full-space manipulation of electromagnetic wave fronts with metasurfaces. Phys Rev Appl 2017;8:034033. • [94] Zhang L, Wu RY, Bai GD, et al. Transmission-reflection-integrated multifunctional coding metasurface for full-space controls of electromagnetic waves. Adv Funct Mater 2018;28:1802205. • [95] Zhu AY, Chen WT, Zaidi A, et al. Giant intrinsic chiro-optical activity in planar dielectric nanostructures. Light Sci Appl 2018;7:17158. • [96] Li Z, Liu W, Cheng H, Chen S, Tian J. Spin-selective transmission and devisable chirality in two-layer metasurfaces. Sci Rep 2017;7:8204. • [97] Yun J-G, Kim S-J, Yun H, et al. Broadband ultrathin circular polarizer at visible and near-infrared wavelengths using a non-resonant characteristic in helically stacked nano-gratings. Opt Express 2017;25:14260. • [98] Mun S-E, Hong J, Yun J-G, Lee B. Broadband circular polarizer for randomly polarized light in few-layer metasurface. Sci Rep 2019;9:2543. • [99] Lee G-Y, Hong J-Y, Hwang S, et al. Metasurface eyepiece for augmented reality. Nat Commun 2018;9:4562. • [100] Jang C, Bang K, Li G, Lee B. Holographic near-eye display with expanded eye-box. ACM Trans Graph 2018;37:1–14. • [101] Lee S, Jo Y, Yoo D, Cho J, Lee D, Lee B. Tomographic near-eye displays. Nat Commun 2019;10:2497. • [102] Lalanne P, Chavel P. Metalenses at visible wavelengths: past, present, perspectives. Laser Photonics Rev 2017;11:1600295. • [103] Hasman E, Kleiner V, Biener G, Niv A. Polarization dependent focusing lens by use of quantized Pancharatnam–Berry phase diffractive optics. Appl Phys Lett 2003;82:328–30. • [104] Paniagua-Domínguez R, Yu YF, Khaidarov E, et al. A metalens with a near-unity numerical aperture. Nano Lett 2018;18:2124–32. • [105] Zuo H, Choi D-Y, Gai X, et al. High-efficiency all-dielectric metalenses for mid-infrared imaging. Adv Opt Mater 2017;5:1700585. • [106] Tseng ML, Hsiao H-H, Chu CH, et al. Metalenses: advances and applications. Adv Opt Mater 2018;6:1800554. • [107] Wang W, Guo Z, Li R, et al. Ultra-thin, planar, broadband, dual-polarity plasmonic metalens. Photonics Res 2015;3:68–71. • [108] Wang W, Guo Z, Li R, et al. Plasmonics metalens independent from the incident polarizations. Opt Express 2015;23:16782–91. • [109] Ni X, Ishii S, Kildishev AV, Shalaev VM. Ultra-thin, planar, Babinet-inverted plasmonic metalenses. Light Sci Appl 2013;2:e72. • [110] Chen X, Huang L, Mühlenbernd H, et al. Dual-polarity plasmonic metalens for visible light. Nat Commun 2012;3:1198. • [111] Chen X, Huang L, Mühlenbernd H, et al. Reversible three-dimensional focusing of visible light with ultrathin plasmonic flat lens. Adv Opt Mater 2013;1:517–21. • [112] Wang S, Wu PC, Su V-C, et al. Broadband achromatic optical metasurface devices. Nat Commun 2017;8:187.
• [113] Hu J, Liu C-H, Ren X, Lauhon LJ, Odom TW. Plasmonic lattice lenses for multiwavelength achromatic focusing. ACS Nano 2016;10:10275–82. • [114] Williams C, Montelongo Y, Wilkinson TD. Plasmonic metalens for narrowband dual-focus imaging. Adv Opt Mater 2017;5:1700811. • [115] Li X, Xiao S, Cai B, He Q, Cui TJ, Zhou L. Flat metasurfaces to focus electromagnetic waves in reflection geometry. Opt Lett 2012;37:4940–2. • [116] West PR, Stewart JL, Kildishev AV, et al. All-dielectric subwavelength metasurface focusing lens. Opt Express 2014;22:26212. • [117] Khorasaninejad M, Zhu AY, Roques-Carmes C, et al. Polarization-insensitive metalenses at visible wavelengths. Nano Lett 2016;16:7229–34. • [118] Boroviks S, Deshpande RA, Mortensen NA, Bozhevolnyi SI. Multifunctional metamirror: polarization splitting and focusing. ACS Photonics 2018;5:1648–53. • [119] Wen D, Yue F, Ardron M, Chen X. Multifunctional metasurface lens for imaging and Fourier transform. Sci Rep 2016;6:27628. • [120] Chen X, Chen M, Mehmood MQ, et al. Longitudinal multifoci metalens for circularly polarized light. Adv Opt Mater 2015;3:1201–6. • [121] Zhang Z, Wen D, Zhang C, et al. Multifunctional light sword metasurface lens. ACS Photonics 2018;5:1794–9. • [122] Groever B, Rubin NA, Mueller JPB, Devlin RC, Capasso F. High-efficiency chiral meta-lens. Sci Rep 2018;8:7240. • [123] Hu J, Wang D, Bhowmik D, et al. Lattice-resonance metalenses for fully reconfigurable imaging. ACS Nano 2019;13:4613–20. • [124] Colburn S, Zhan A, Majumdar A. Metasurface optics for full-color computational imaging. Sci Adv 2018;4:eaar2114. • [125] Chen WT, Zhu AY, Sanjeev V, et al. A broadband achromatic metalens for focusing and imaging in the visible. Nat Nanotechnol 2018;13:220–6. • [126] Wang S, Wu PC, Su V-C, et al. A broadband achromatic metalens in the visible. Nat Nanotechnol 2018;13:227–32. • [127] Shrestha S, Overvig AC, Lu M, Stein A, Yu N. Broadband achromatic dielectric metalenses. Light Sci Appl 2018;7:85. • [128] Chen WT, Zhu AY, Sisler J, Bharwani Z, Capasso F. A broadband achromatic polarization-insensitive metalens consisting of anisotropic nanostructures. Nat Commun 2019;10:355. • [129] Arbabi A, Arbabi E, Kamali SM, Horie Y, Han S, Faraon A. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations. Nat Commun 2016;7:13682. • [130] Groever B, Chen WT, Capasso F. Meta-lens doublet in the visible region. Nano Lett 2017;17:4902–7. • [131] She A, Zhang S, Shian S, Clarke DR, Capasso F. Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift. Sci Adv 2018;4:eaap9957. • [132] Arbabi E, Arbabi A, Kamali SM, Horie Y, Faraji-Dana M, Faraon A. MEMS-tunable dielectric metasurface lens. Nat Commun 2018;9:812. • [133] Zhu AY, Chen WT, Sisler J, et al. Compact aberration-corrected spectrometers in the visible using dispersion-tailored metasurfaces. Adv Opt Mater 2018;7:1801144. • [134] Ee H-S, Agarwal R. Tunable metasurface and flat optical zoom lens on a stretchable substrate. Nano Lett 2016;16:2818–23. • [135] Afridi A, Canet-Ferrer J, Philippet L, Osmond J, Berto P, Quidant R. Electrically driven varifocal silicon metalens. ACS Photonics 2018;5:4497–503. • [136] Roy T, Zhang S, Jung IW, Troccoli M, Capasso F, Lopez D. Dynamic metasurface lens based on MEMS technology. APL Photonics 2018;3:021302. • [137] Papaioannou M, Plum E, Rogers ET, Zheludev NI.
All-optical dynamic focusing of light via coherent absorption in a plasmonic metasurface. Light Sci Appl 2018;7:17157. • [138] Liu X, Deng J, Li KF, et al. Optical metasurfaces for designing planar Cassegrain-Schwarzschild objectives. Phys Rev Appl 2019;11:054055. • [139] Yesilkoy F, Arvelo ER, Jahani Y, et al. Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces. Nat Photonics 2019;13:390–6. • [140] Li L, Ruan H, Liu C, et al. Machine-learning reprogrammable metasurface imager. Nat Commun 2019;10:1082. • [141] Holsteen AL, Lin D, Kauvar I, Wetzstein G, Brongersma ML. A light-field metasurface for high-resolution single-particle tracking. Nano Lett 2019;19:2267–71. • [142] Yang J, Ghimire I, Wu PC, et al. Photonic crystal fiber metalens. Nanophotonics 2019;8:443–9. • [143] Lan S, Zhang X, Taghinejad M, et al. Metasurfaces for near-eye augmented reality. ACS Photonics 2019;6:864–70. • [144] Khorasaninejad M, Chen WT, Zhu AY, et al. Multispectral chiral imaging with a metalens. Nano Lett 2016;16:4595–600. • [145] Zhu AY, Chen W-T, Khorasaninejad M, et al. Ultra-compact visible chiral spectrometer with meta-lenses. APL Photonics 2017;2:036103. • [146] Faraji-Dana M, Arbabi E, Arbabi A, Kamali SM, Kwon H, Faraon A. Compact folded metasurface spectrometer. Nat Commun 2018;9:4196. • [147] Arbabi E, Li J, Hutchins RJ, et al. Two-photon microscopy with a double-wavelength metasurface objective lens. Nano Lett 2018;18:4943–8. • [148] Lin RJ, Su V-C, Wang S, et al. Achromatic metalens array for full-colour light-field imaging. Nat Nanotechnol 2019;14:227–31. • [149] Pahlevaninezhad H, Khorasaninejad M, Huang Y-W, et al. Nano-optic endoscope for high-resolution optical coherence tomography in vivo. Nat Photonics 2018;12:540–7. • [150] Arbabi A, Arbabi E, Horie Y, Kamali SM, Faraon A. Planar metasurface retroreflector. Nat Photonics 2017;11:415–20. • [151] Balthasar Mueller JP, Leosson K, Capasso F. Ultracompact metasurface in-line polarimeter. Optica 2016;3:42. • [152] Chen WT, Török P, Foreman MR, et al. Integrated plasmonic metasurfaces for spectropolarimetry. Nanotechnology 2016;27:224002. • [153] Arbabi E, Kamali SM, Arbabi A, Faraon A. Full-Stokes imaging polarimetry using dielectric metasurfaces. ACS Photonics 2018;5:3132–40. • [154] Lee K, Yun H, Mun S-E, Lee G-Y, Sung J, Lee B. Ultracompact broadband plasmonic polarimeter. Laser Photonics Rev 2018;12:1700297. • [155] Wu PC, Chen J-W, Yin C-W, et al. Visible metasurfaces for on-chip polarimetry. ACS Photonics 2018;5:2568–73. • [156] Yang Z, Wang Z, Wang Y, et al. Generalized Hartmann-Shack array of dielectric metalens sub-arrays for polarimetric beam profiling. Nat Commun 2018;9:4607. • [157] Shalaev MI, Sun J, Tsukernik A, Pandey A, Nikolskiy K, Litchinitser NM. High-efficiency all-dielectric metasurfaces for ultracompact beam manipulation in transmission mode. Nano Lett 2015;15:6261–6. • [158] Yu YF, Zhu AY, Paniagua-Domínguez R, Fu YH, Luk’yanchuk B, Kuznetsov AI. High-transmission dielectric metasurface with 2π phase control at visible wavelengths. Laser Photonics Rev 2015;9:412–8. • [159] Hong J, Kim S-J, Kim I, et al. Plasmonic metasurface cavity for simultaneous enhancement of optical electric and magnetic fields in deep subwavelength volume. Opt Express 2018;26:13340–8. • [160] Qin F, Ding L, Zhang L, et al. Hybrid bilayer plasmonic metasurface efficiently manipulates visible light. Sci Adv 2016;2:e1501168. • [161] Kita S, Takata K, Ono M, et al. 
Coherent control of high efficiency metasurface beam deflectors with a back partial reflector. APL Photonics 2017;2:046104. • [162] Zhou Z, Li J, Su R, et al. Efficient silicon metasurfaces for visible light. ACS Photonics 2017;4:544–51. • [163] Wang B, Dong F, Feng H, et al. Rochon-prism-like planar circularly polarized beam splitters based on dielectric metasurfaces. ACS Photonics 2018;5:1660–4. • [164] Chen WT, Zhu AY, Sisler J, et al. Broadband achromatic metasurface-refractive optics. Nano Lett 2018;18:7801–8.

Revised: 2019-08-08. Accepted: 2019-08-09. Published Online: 2019-08-30.

Funding Source: National Research Foundation of Korea. Award identifier / Grant number: 2017R1A2B2006676. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017R1A2B2006676).

Citation Information: Nanophotonics, Volume 8, Issue 10, Pages 1701–1718, ISSN (Online) 2192-8614.
https://www.physicsforums.com/threads/electromagnetism-the-distance-from-point-a-to-point-b.916278/
# Electromagnetism - The distance from point a to point b

1. May 31, 2017

### Jon Blind

1. The problem statement, all variables and given/known data

I want to know the distance from point 1 to point 2. The proton is at rest ($v=0$) at point 1. We know that: $q=1.602\times10^{-19}$ C, $L=1$ mm at point 1, $v=1.1\times10^{6}$ m/s at point 2, and $F=1.44\times10^{-12}$ N at point 1.

2. Relevant equations

$E=\frac{1}{4\pi\epsilon_0}\frac{q}{r^{2}}$

$\Delta V=\int E\,dr=\frac{q}{4\pi\epsilon_0}\int\frac{dr}{r^{2}}=\frac{q}{4\pi\epsilon_0}\left(\frac{1}{r_2}-\frac{1}{r_1}\right)$

$\Delta U=\Delta K=\frac{1}{2}mv^{2}$

$\Delta K=\frac{1}{2}mv^{2}=q\,\Delta V=\frac{q^{2}}{4\pi\epsilon_0}\left(\frac{1}{r_2}-\frac{1}{r_1}\right)$

3. The attempt at a solution

I can't seem to calculate the distance. I don't know where I've gone wrong.

Last edited: May 31, 2017

2. May 31, 2017

$L=1$ mm. Can you see that this is also $r_1$?

3. Jun 1, 2017

### Jon Blind

Exactly, and I'm trying to find out $r_2$. According to my calculations $r_2=2.28\times10^{-13}$, but that seems way too little?

4. Jun 1, 2017

What did you use for the mass of the proton? Also, did you convert $L$ to meters? Also, you need to solve for $Q$. You can do that because it tells you the force $F$ at point 1.

5. Jun 1, 2017

### Jon Blind

Yes, the mass of the proton is $1.673\times10^{-27}$ kg, $\epsilon_0=8.854\times10^{-12}$, and $q=1.602\times10^{-19}$ C.

6. Jun 1, 2017

I see one mistake. You assumed the two q's were equal. See also my edited previous post. You need to solve for $Q$.

7. Jun 1, 2017

### Jon Blind

So $Q=F/E$?? I'll give it a try and calculate it now, thank you very much.

8. Jun 1, 2017

$F=\frac{Qq}{4\pi\epsilon_0 r^{2}}$. They give you $F$, $q$, and $r$. You need to compute $Q$.

9. Jun 1, 2017

### Jon Blind

$5.87\times10^{-4}$ m. THANK YOU! Freaking hell, I was so confused.

10. Jun 1, 2017

Compute $Q$ in Coulombs. You need this number for the remainder of the calculations. The answer you gave is incorrect.

11. Jun 1, 2017

### Jon Blind

How is that possible?

$\Delta K=\frac{1}{2}mv^{2}=q\,\Delta V=\frac{qQ}{4\pi\epsilon_0}\left(\frac{1}{r_2}-\frac{1}{r_1}\right)$ with $Q=1.00\times10^{-9}$ C, so

$\frac{4\pi\epsilon_0\, m v^{2}}{2qQ}=\frac{1}{r_2}-\frac{1}{r_1}$

$\frac{(1.673\times10^{-27})\,(1.1\times10^{6})^{2}\cdot 4\pi\cdot(8.854\times10^{-12})}{2\,(1.602\times10^{-19})\,(1.00\times10^{-9})}=\frac{1}{r_2}-1000$

$\frac{1}{r_2}=1702.97$, so $r_2=5.872\times10^{-4}$.

12. Jun 1, 2017

Close, but your final term should read $\frac{1}{r_1}-\frac{1}{r_2}=1000-\frac{1}{r_2}$ (since $r_2>r_1$). Once you correctly solve for $r_2$, you then need to compute the distance $D=r_2-r_1$.

13. Jun 1, 2017

### Jon Blind

In that case $r_2$ should be $0.00337$ m, and $r_2-r_1=0.00237$ m.

14. Jun 1, 2017

That's what I got also. :) :)

15. Jun 1, 2017

Now solve for $D$. See my edited post #12.

16. Jun 1, 2017

### Jon Blind

Yep, I saw it, and I edited my post and did it ;) $r_2-r_1=0.00237$ m.
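Since the arithmetic in this thread is easy to slip on, here is a short numeric check of the accepted solution (a sketch; the variable names are ours, the values are the thread's):

```python
import math

# Given data from the thread
q = 1.602e-19     # proton charge [C]
m = 1.673e-27     # proton mass [kg]
F = 1.44e-12      # force on the proton at point 1 [N]
r1 = 1e-3         # L = 1 mm, distance of point 1 from the source charge [m]
v = 1.1e6         # proton speed at point 2 [m/s]
eps0 = 8.854e-12  # vacuum permittivity [F/m]
k = 1 / (4 * math.pi * eps0)  # Coulomb constant [N m^2 / C^2]

# Step 1: solve F = k Q q / r1^2 for the source charge Q
Q = F * r1**2 / (k * q)

# Step 2: energy conservation, (1/2) m v^2 = k Q q (1/r1 - 1/r2),
# solved for r2 (note the order 1/r1 - 1/r2, since r2 > r1)
r2 = 1 / (1 / r1 - 0.5 * m * v**2 / (k * Q * q))

print(f"Q  = {Q:.3e} C")        # ~1.00e-9 C
print(f"r2 = {r2:.5f} m")       # ~0.00337 m
print(f"D  = {r2 - r1:.5f} m")  # ~0.00237 m, matching the thread
```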
https://library.kiwix.org/philosophy.stackexchange.com_en_all_2021-04/A/question/23038.html
## Is there an alternative to the scientific method?

Intro

The scientific method is a key process of how we acquire knowledge and may shape our understanding of the world. If I am not mistaken, this method has been defined several times during our history.

The scientific method is not free from disadvantages. I don't want to talk about practical issues such as the statistical bias caused by the fact that work demonstrating an effect gets published more easily than work demonstrating an absence of effect. I am asking about the fundamentals of the scientific method. I am asking whether anything better than the current scientific method could theoretically be achieved. Is there any reason why the current method has to be the best method for acquiring knowledge, or can we imagine anything better?

Going into the specifics

Typically, I am curious about the process of proving hypotheses wrong. If I am not mistaken, no hypothesis can be proven correct; we can only prove hypotheses wrong. Such methodology very much follows the method of hypothesis testing in statistics. From my memory of the work I read by Elliot Sober a long time ago, this method leads to the important issue that one can only discard the hypotheses that one can imagine, but can never be sure of not having missed some hypotheses. One can never prove anything right; one can only prove things wrong.

Can't we imagine a method that is not based on rejecting hypotheses? Is there a fundamental restriction on how one can acquire knowledge that forces us to use this seemingly sub-optimal method?

"we can only prove hypotheses wrong" — Doubtful. There is only evidence for and evidence against. Applicability. Usefulness. Limitations. These are properties of theories (and algorithms in general). – rus9384 – 2018-10-10T06:46:07.133

One disadvantage of the scientific method is evidence-based medicine. In order to be approved, a medicine must demonstrate that it works in a statistically significant number of patients. If a medicine only works for one patient, it would not be approved. But what if it saves the life of that patient? Morally, that patient is entitled to receive a non-approved medicine. To them, morality and the scientific method are contradictory. – Dr Jonathan Kimmitt – 2020-01-23T15:09:58.203

@Dr Jonathan Kimmitt If a medicine only worked for one patient and it saved his life, how would you know? Without repeatable tests, how could you infer anything? – D. Halsey – 2020-01-23T23:44:58.223

At the risk of sounding self-serving, and for an alteration to the scientific method which attempts to solidify the axiomatic principles that support the hypotheses which instigate the research, visit academia.edu and see my 'Deductive Theory, Inductive Method'. Just enter my full name in the search link in the upper left corner. One note: so many scientific achievements, often in medical research, happen purely by accident. While investigating one element they discover its applicability for another need. Nothing wrong with it, just unpredictable. – None – 2020-09-23T18:39:37.217

Why is it sub-optimal? You note only, from Sober, that the scientific method can't reach certainty, because nothing can. Is there some other reason it's sub-optimal? If not, then the answer is very short: no.
– ChristopherE – 2015-04-16T11:34:59.197

In the common law legal systems, an adversarial system, along with the principle of adhering to precedent, is the mechanism by which knowledge about the interpretation and application of law is generated (or discovered). Just to provide what I think is an "alternative" to science, in the sense that for a different domain of application a different approach is applied. – Dave – 2015-04-16T19:55:33.673

@ChristopherE No, I had nothing else in mind than what I report from Sober. It is the only criticism I could think of. But I am curious about whether another method may exist that may work better or at least as well. While you say the answer is very short ("no"), the most upvoted answer so far seems to say that there are alternative methods (such as Traditional Chinese Medicine) that may work just as well. Reading comments and answers, it sounds to me that the question is debated. Thanks! – Remi.b – 2015-04-16T20:06:50.527

Traditional Chinese Medicine, I assure you, does not do a better job than scientific methods do on that particular scale: achieving certainty. Science does not achieve certain knowledge, but nothing else does better. This is because scientific methods ARE the methods that achieve such certainty. If TCM were able to generate accurate knowledge, its methods would BE scientific (and are to the extent that it does). – ChristopherE – 2015-04-16T20:36:46.690

If you can argue why nothing else can do a better job, I'd love to read it in an answer. I can't think of any theoretical reason why nothing could be better than the currently used scientific method for increasing knowledge about natural phenomena. (Disclaimer: I'm a PhD student in theoretical population genetics. I know about the scientific method but know pretty much nothing in philosophy.) – Remi.b – 2015-04-16T20:43:55.497

"The scientific method is a key process of how we acquire knowledge..", um, well, no. Technically, it's how we acquire *empirical* knowledge. The scientific method is nigh-useless for the acquisition of non-empirical knowledge and, as a consequence, we use other methods/approaches for those. For instance: Philosophy. – RBarryYoung – 2015-04-16T20:45:03.647

Is there such a thing as empirical knowledge? "Empirical" or "theoretical" sounds more like an adjective describing a method of acquiring knowledge than a category of knowledge. – Remi.b – 2015-04-16T22:44:29.867

Actually, the scientific method cannot even prove something wrong. It can only quantify the probability with which a theory agrees with observation. There are some agreed-upon probabilities after which people call a theory false (the 5 σ from particle physics, for example), but that's not an absolute falseness either. – Turion – 2015-04-16T23:15:09.600

Just as an observation, it would probably help to focus your question on the "precise" aspect of it and simply not try at this stage to treat your "precise" problem as a more general problem just yet. Your question is "Falsificationism does not yield definitive knowledge because it always only resolves logically formed hypotheses negatively. Is there any kind of epistemology that avoids this problem?" – Paul Ross – 2015-04-17T10:39:20.270

Short answer: No – rpax – 2015-04-17T16:49:18.403

The alternative is called evolution. For example, pre-scientific knowledge about topics critical to survival came from the fact that populations who guessed wrong died off. Of course this is nowhere near as efficient as science. – R..
GitHub STOP HELPING ICE – 2015-04-19T16:02:10.663

While I think there are some merit-worthy questions related to the methods of science in the philosophy of science, I worry about this question since "scientific method" turns out to be quite the rabbit hole in terms of what it supposedly is and supposedly does. – virmaior – 2015-04-19T16:02:15.510

(Edit: this answer is now split into two parts, thanks to a lengthy discussion with Rex Kerr. I made my original answer on a very specific reading of the scientific method. He had a very different reading, which came to a different but very related outcome. I've tried to capture that in the first part. The second part is my original answer, for those who wish to use the stricter reading.)

There are at least two extremes as to how one can define the scientific method. One is a process, one is more of a set of principles and a goal. The process is well defined as:

• Observe something interesting.
• Formulate a hypothesis that you think would model this interesting thing better than the existing model.
• Run a series of independent tests of the hypothesis.
• Statistically demonstrate that the original model (often called the null hypothesis) predicts the outcomes of the tests to be highly unlikely.
• Reject the null hypothesis (assuming the data backs your claim).
• Demonstrate that your new model does a better job of predicting the statistical results.

This is what I was taught the scientific method was in high school. If that is the version you are after, skip ahead to the second part, which explicitly targets that reading. (A small code sketch of these statistical steps is given a few paragraphs below.)

However, there is another, more fluid reading which also exists. The statistical requirement is relaxed, because it can cause trouble. However, there is a focus on both the elimination of hypotheses through testing and the preference towards hypotheses which are testable. This reading of the scientific method is a very general direction, so the alternatives are equally general. Science is a very deductive learning approach. It depends upon one writing a hypothesis in highly objective terms and then testing it. There are many situations where deductive learning does not work. Procedural learning is often viewed as an alternative approach. Consider the case of an athlete. They collect large amounts of information from scientific approaches, but the final bit that takes them from a "good athlete" to a "great athlete" is all "feel." There may be no written hypotheses. There may be no statistical testing. Yet the mind absolutely learns in this way. Thus, procedural learning like this would be a valid alternative method. In fact, many Chinese martial arts focus almost entirely on procedural learning because the material is so hard to learn deductively.

Which reading of the scientific method you want to use is up to you. What follows is written entirely from the perspective of a strict, statistically valid approach to the scientific method.

Much of what has made science great is its ability to build upon previous hypotheses. While statistical rigor is a nicety for rejecting hypotheses, it becomes essential for building hypotheses which can support others. Finding an alternative to the scientific method depends on you deciding what you want out of a method. You will never find a better tool than the scientific method at its game. However, if that game is not what you really want it to be, there are alternatives.
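To make the strict statistical game concrete, here is a minimal Python sketch of steps 3 through 6 of the process listed above. Everything in it is invented for illustration: the null mean of 10.0, the simulated measurements, the 0.05 threshold, and the use of numpy/scipy are assumptions of this sketch, not anything from the original answer.

```python
# Minimal sketch of the strict reading: test a new model against a null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Step 3: a series of independent tests (here, 50 simulated measurements).
measurements = rng.normal(loc=10.4, scale=1.0, size=50)

# Steps 4-5: show the null model (mean = 10.0) makes the observed data
# highly unlikely, then reject it at a conventional threshold.
t_stat, p_value = stats.ttest_1samp(measurements, popmean=10.0)
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")

# Step 6 and the asker's point: the new model is only 'not yet rejected';
# nothing here proves it true.
```

Note that the sketch never proves the alternative model correct, which is exactly the asymmetry the question is about.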
The most visible example of this I have seen is western medicine compared to Traditional Chinese Medicine. They developed very different approaches, and yet both appear to yield results. TCM actually does work along the lines of your question: it is not fundamentally built on rejecting hypotheses.

Let's look at the scientific method and see if we can make some headway. There are three major features of the scientific method which stand out as "interesting" for this line of thinking:

• The scientific method is highly steeped in the language of statistics.
• The scientific method seeks objective theories.
• The scientific method tests theories.

These are traditionally seen as strengths. However, they can also be seen as weaknesses (like all good superheroes, their strength is their weakness... that's what makes them interesting).

The scientific method is completely and utterly useless without statistics. This means any singular event is completely beyond its reasoning. It cannot provide answers to topics such as "the purpose of your life" because there is only one you, and N=1 means there are no statistics.

Related to this, the scientific method strives to be objective. It always tries to remove the observer from the picture. This is very valuable, because it ensures that your discoveries are applicable to others. However, it also proves to be tricky in many situations. Social studies in particular have great difficulties with the scientific method because it is so very difficult to make good tests that keep the observer out of the loop.

As an example, TCM claims that acupuncture works. Those who have tried it claim it works with uncanny success. However, science has had fits trying to find any effect of acupuncture beyond the infamous cop-out "the placebo effect." The issue is that it is almost impossible to develop an effective control to measure against, because the acupuncture practitioner knows whether they are doing it right or not. Whether you believe acupuncture works or not depends heavily on whether you accept results which lack a solid control to ensure objectivity.

Finally, science tests its theories. This sounds absurd, because it seems so obvious that you should test them. However, a theory is not accepted at all until it is tested. The result is that anyone with a theory must expend the resources to do the testing before science will do anything with it. Other approaches get away with a different style: you use a theory once you have it, and you test it when you get an opportunity to do so. The tests can also be dangerous.

(Edit: I had a reference to the LHC and its potential to create black holes here, but it was too contentious. Instead, it has been replaced with a hypothetical example.)

Consider a hypothetical particle physics experiment. The scientist is rather confident that their theory is correct. They begin experimenting, after calculating that they would like 100 samples to do statistics on. Generally speaking, they find their theory holds up test after test. However, on tests which disagree with their hypothesis (which happens in the scientific method due to noise), the observer notices a burst of energy from the test apparatus. That burst becomes stronger and more dangerous with every data point that disagrees with their hypothesis. At some point, the scientist decides to cut the experiment short, because they are uncomfortable putting their life at risk to finish the test. By the strictest reading of the scientific method, that data cannot be analyzed, because it is tainted by the scientist's choice to cut the tests off early.
This might induce biases, because the scientist is more likely to cut the tests off faster if the results look good for their theory. Other methodologies are capable of using this data (including the intuition of that scientist, who will not try the exact same experiment again).

Seeing that the strengths and weaknesses of science are so confounded, it is up to each individual to decide if those are ideal for them. There are many other approaches, none so visibly different from the scientific method as that of TCM. As described to me in a lecture, the difference is in the approach towards healing the human body:

• Western medicine tears the body apart into components, develops hypotheses about these components, then builds them up. At each step, it develops testable hypotheses, and tests them. From there, it finds things which may provide results, and tests those.
• TCM starts with the body as a whole, finds things that cause good results, then develops testable theories about why the results occurred.

The end result is that much of TCM is doctor-centric. A doctor finds out what works well for them, and suggests it to others. The focus is less on rejecting bad hypotheses, and more on finding new good hypotheses. TCM relies more on natural attrition to weed out hypotheses, rather than actively trying to disprove them.

Can I claim one is better than the other? I'm not sure if I can. However, I do feel comfortable claiming that they are different, and that a remarkably large number of individuals consider one better than the other, in both directions. It's simply another way to approach things.

An example would be the balance between explainability and accuracy of predictions in statistics. Traditional statistics focuses on building a predictive model that one can understand, and is similar in approach to the scientific method: the goal is the model. Machine learning throws explainability away, and considers accurate prediction more valuable than the model. So there, at least, there are alternatives to the scientific method of trying to explain a phenomenon and refute that understanding. – ptyx – 2020-09-22T19:07:13.953

Western medicine is not a good example of scientific methodology. I say this because you should differentiate between biology, pharmacology, testing, and how medicine is practiced by doctors on a day-to-day basis. Doctors diagnose symptoms in a patient and often misdiagnose them. They rely on their experience and 'best guess' more often than not. – Swami Vishwananda – 2015-04-16T08:22:51.547

There is a difference between western medicine and biology. Bayesian statistics plays a large role in guiding the selection and ordering of diagnostic tests. (Sensitive tests before specific ones.) Large enough to make the point of the tension of relying on statistics. – mac389 – 2015-04-16T12:43:02.723

Only minor criticism: "It always removes the observer from the picture.", which I think is downright wrong, because it can never remove the observer from the picture, despite that indeed being an (unachievable) goal of the scientific method. If anything, the belief that it could be objective is probably one of the greatest flaws of the scientific method, which other methods have to a far lesser extent (e.g. TCM has no issues accepting that two people disagree, whilst a western scientist is convinced that whatever he finds out is the absolute singular truth unaffected by his opinions and ideas). – David Mulder – 2015-04-16T13:55:15.100

@DavidMulder You're right.
I say it removes the observer from the picture because of the form of the hypotheses it tests. The effect of the observer is always confounded with the effect of a "random variable." The theories exclude the observer, and the reality of that being unachievable shows up as noise. However, what I have found is that there are a few classes of theories science wishes it could explore where bundling the observer and randomness yields experimental results too weak to be used. – Cort Ammon – 2015-04-16T14:59:56.787

@SwamiVishwananda I use western medicine as an example because it is in that fuzzy region where things break down, thus there is a useful comparison to other methods (like TCM). If I were to concentrate on a region where the scientific methodology is at its best (such as physics), there would be little to no competition to the scientific methodology, so it would be hard to find a methodology good enough to be deemed "an alternative to the scientific method." – Cort Ammon – 2015-04-16T15:03:16.683

As an analogy, one could look at sports stars and say "is there an alternative for claiming Michael Jordan is the best athlete in the world?" To find an answer to this, we're probably not going to get to look at basketball stats. Instead we'll look at less clear-cut regions, like his golf game, and show that there are athletic things Tiger Woods can do better than Michael Jordan. It is then up to the individual to decide if those are athletic things they want to focus on for their answer to the question. – Cort Ammon – 2015-04-16T15:07:26.813

@CortAmmon Yeah, I don't think it's a big deal, I just wish scientists were more aware of it. I mean, the history of science teaches us that it even applies to the 'hard sciences', and yet even in the 'soft sciences' a lot of scientists forget it entirely. So that's why I criticize statements like "it is objective" or it "always removes", instead of "it tries to be objective" and it "attempts removing". But either way, it seems you're aware of those weaknesses, so we probably agree xD – David Mulder – 2015-04-16T15:26:07.917

@DavidMulder I like those word choices. I've incorporated them into the answer. Thank you! – Cort Ammon – 2015-04-16T15:40:21.993

One fun fact about the LHC. Scientists have known since long before the LHC was designed that cosmic rays produce explosions in the upper atmosphere of Earth that are millions of times stronger than those produced in the LHC. Thus, we knew before the LHC was built that if it were capable of destroying the Earth, the Earth would have already been destroyed a long time ago. That sounds like a pretty good safeguard. – user3294068 – 2015-04-16T16:28:47.567

@user3294068 That's a neat bit of trivia! I'd known about the debate, but I hadn't heard that side of it! – Cort Ammon – 2015-04-16T19:30:05.587

This is somewhat right, but the misconceptions are stated so strongly that it does a disservice to anyone trying to understand the actual practice of the scientific method, as opposed to a formalization of it that sounds kind of right but wouldn't be accepted by those who do science. In particular, the "science has nothing to say" points are almost all uniformly badly wrong, because you have theory based on other data which is used to generate your most likely hypotheses, which are by definition your best guess of what will happen. The LHC concern was dealt with totally scientifically.
– Rex Kerr – 2015-04-16T21:06:22.473

@RexKerr Having data and a hypothesis does not mean either the data or the hypothesis is phrased in a form which is testable within the confines of the scientific method. If this is the best guess of what will happen, it suggests the data and/or hypothesis are phrased in the form of an alternative to the scientific method, does it not? – Cort Ammon – 2015-04-16T21:22:06.867

@RexKerr I have removed the LHC example and replaced it with a more hypothetical situation. Does it seem more amenable to your senses? – Cort Ammon – 2015-04-16T21:56:16.343

@CortAmmon - No, that's not really any better. There are plenty of statistical measures that can be used to quantify the degree of certainty that the bursts are getting stronger, that the hypothesis is true despite the experiment being ended early, etc. You've just replaced an incorrect interpretation with a straw man. – Rex Kerr – 2015-04-16T22:55:22.557

@CortAmmon - Also, the acupuncture point is almost equally bad. You can of course test the hypothesis that being attended by an acupuncturist will help your pain, and compare that against being attended by various other people doing various other things. If TCM says "acupuncture provides relief from XYZ" and science says "acupuncture, delivered in the traditional TCM way, provides relief from XYZ but we can't find strong support for the hypothesis that it works for the reasons the practitioners say it does", that is hardly showing that the scientific method has limitations. – Rex Kerr – 2015-04-16T23:00:42.140

Is "procedural learning" different in kind from "deductive learning"? Does deduction require an explicit, verbal formulation? Learning how to, e.g., perform a certain kick in a martial art might be said to involve the formulation of a series of sub-conscious hypotheses -- "the kick will be performed correctly given this set of neural activations" -- that are proved false. Is that too much of a metaphor to be useful? Can it be definitively said that it is not deductive? – jscs – 2015-04-17T18:43:46.747

Due to science, modern TCM practitioners have largely abandoned qi, meridians, yin and yang, energy flow etc. as explanatory frameworks, so it seems odd to use TCM as an example of a realistic alternative to science. There is also little agreement, even amongst TCM practitioners, on diagnoses or treatments. "A doctor finds out what works well for them, and suggests it to others" is, funnily enough, also the method of witch doctors and faith healers... which is not a credible alternative to science. – bain – 2015-04-17T19:49:27.510

@joshcaswell I think that comes down to definitions, and if you define it that way, I'm not positive you can separate the scientific method from any process, reducing the power of any statement to nil. This is actually why I started with the strict definition, but widened it to fit Rex's definition after a long conversation. However, I think the wider definition gets uncomfortably close to the definition of optimization. – Cort Ammon – 2015-04-18T02:31:29.940

If I am not mistaken, no hypothesis can be proven correct, we can only prove hypotheses wrong.

This is known as "falsificationism". It is viewed with much scepticism by today's philosophers of science. The author you mention, Elliott Sober, has suggested that it be retired, deriding it as "Popper's f-word" (referring to Karl Popper, whose own views on this subject shifted somewhat over time).
It is untrue in the exact way you described it, because if a hypothesis A is wrong, then there is another hypothesis, called A-is-wrong, that is correct. So if it is possible to prove some hypotheses wrong, it must also be possible to prove some hypotheses correct.

But the real problem is that you are speaking in terms of absolute certainty, so you are crediting the scientific method with powers no method could possess. All evidence has multiple (perhaps infinitely many) possible explanations. One interpretation is always available: the evidence might be flawed, in which case it can be ignored. So all knowledge is conditional. Outside of mathematics, knowledge is conditional on unreliable evidence. But even mathematical deduction produces theorems that are true only because other theorems on which they depend have been shown to be true, and ultimately the whole edifice rests on basic assumptions (axioms) that are simply assumed to be true. Or rather, all mathematical truth is conditional on the truth of the axioms.

So we need a way to compute the certainty of a deduction based on the certainty of the facts that deduction relies on. That's what probability theory is. Not for nothing did Laplace observe that the whole system of human knowledge is tied up with probability.

One more important thing is that the scientific method aims to generate useful information. That means that if you test theory A and it is wrong, theory A-is-wrong is useless in the sense that it can't be used for narrow predictions. If "the ball is blue" is wrong, then the ball can be green, yellow, red or any other color. Someone reading the statement "the ball is not blue" still knows almost nothing about this ball. – ivbc – 2017-06-11T01:20:54.747

@ivbc First, blue might be the dangerous colour, in which case not-blue would be a very useful thing to know. There is no correlation between the usefulness of a theorem and the appearance of the not operator in that theorem (this is the first point I made in the answer). Cont... – Daniel Earwicker – 2017-06-11T07:00:49.840

@ivbc Second, you are implying that because there is an infinity of colours, eliminating a precise colour is only infinitesimally small progress. But blue is in fact a range of colours, within which is an infinity of different blues! All probability is conditional. We can only say that, given the colour is in the range B, its colour is in the narrower range A, that is, P(A|B). Here B is "it has a colour in the visible spectrum", A is "it has a colour in the visible spectrum excluding the range we perceive as blue". All our knowledge is of this form (the 2nd point I made above). – Daniel Earwicker – 2017-06-11T07:07:22.960

@robertbristow-johnson "of course, A-is-true and A-is-wrong cannot both be the case" Only if you use the law of excluded middle! – JAB – 2018-02-26T18:49:19.117

@CortAmmon Although you've no doubt worked it out, Daniel's example is wholly analogous to the proof (which you probably know) that there exist irrational numbers $a, b$ such that $a^b$ is rational. It assumes the law of excluded middle, so it's no good for intuitionists, but it's the only proof I know that actually derives a practical result in the odd way of Daniel's example, which would otherwise seem of theoretical importance. – Selene Routley – 2018-03-01T00:18:49.290

Daniel, this is the proof that is a concrete example of your first paragraphs. There exist irrational numbers $a, b$ such that $a^b$ is rational. Suppose $a=\sqrt{2}^{\sqrt{2}}$ and $b=\sqrt{2}$.
If $a$ is irrational, then we've found an example of what we seek with $a$ and $b$, since $a^b = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^2 = 2$ is rational. But if $a$ is rational, then it alone is an example of what we seek if we write $a=b=\sqrt{2}$. There's no way to know which is the rational one, and I don't believe anyone yet knows. – Selene Routley – 2018-03-01T00:21:25.857

Also, another comment about Euclid. There were many "proofs" around of the parallel postulate before the work of Bolyai and Lobachevsky uncovered non-Euclidean geometry. – Selene Routley – 2018-03-01T00:24:12.653

Comments from downvoters always welcome! – Daniel Earwicker – 2015-04-16T16:22:27.557

Your paragraph with A-is-wrong gets tricky in situations where there can be infinitely many hypotheses. It may require infinitely many rejections of hypotheses A, A2, A3, etc. before proving A-is-wrong is correct. Some bounded problems can show that there are not infinitely many hypotheses, but in other cases it is remarkably hard to bound them as such. – Cort Ammon – 2015-04-16T21:30:46.897

Even in mathematics, there's also the possibility that everyone examining the proof so far has missed a mistake. – Dan Bryant – 2015-04-16T21:32:54.447

@CortAmmon - How so? When some component hypothesis A2 is rejected, A2-is-wrong can be formed from it. – Daniel Earwicker – 2015-04-16T22:11:35.327

@DanBryant - Indeed, and that situation can last a long time; Euclid wrote the textbook on geometrical proofs that was required reading for all educated people for about 2300 years, but in the late 1800s people were still discovering axioms that he had inadvertently relied on without stating. – Daniel Earwicker – 2015-04-16T22:17:37.120

@DanielEarwicker Ahh, I think I see what I was getting wrong. I was not looking at the trivial version, where the mere act of rejecting A is accepting anything-but-A. Often the hypotheses constructed in such a way are less valuable than one would want (arbitrarily valueless in some cases). I was looking at the case of "if you reject enough hypotheses, you eventually run out of false hypotheses," which is also phrased as "when you have proven all that is impossible, what is left, no matter how improbable, must be the truth." – Cort Ammon – 2015-04-16T22:21:22.167

@DanielEarwicker I'm not sure that I see the implication that because we can prove A is not true, then we must have been able to prove that A-is-not-true is true. It seems that you would still be required to design an experiment to falsify A. You couldn't start from the beginning to prove the truth of A-is-wrong without this middle step (the experiment). – Dylan Williams – 2015-04-22T00:39:55.990

Of course, A-is-true and A-is-wrong cannot both be the case, but we can live with we-don't-know-for-sure-about-A for a long period where A-is-believed-true is the operational norm. Newtonian mechanics was that until Einstein. And A-is-indistinguishable-from-true-for-nearly-all-of-what-we-observe is the case even now, when we know that A-is-not-fully-true. – robert bristow-johnson – 2015-11-22T01:27:15.427

You are starting from the hypothesis that your understanding of the scientific method is correct and complete, and that everyone else has the same understanding. Neither assumption is sustainable from the evidence here. There are no 'weaknesses' in the scientific method; publishing and grants are issues of personality, not science, as even scientists have personalities. You are correct in suggesting a hypothesis cannot be 'proven' true, but that is its primary strength, not an inherent weakness as you imply.
At its simplest, the scientific method is a process for producing useful explanations of how the universe works. It is a stunningly simple process with only four steps:

1. observe a phenomenon
2. produce a hypothesis to explain it
3. use the hypothesis to predict previously unseen phenomena
4. experiment to observe the unseen phenomena.

If the phenomena match the predictions, then the hypothesis is useful, because it describes what is there AND it led to new knowledge. If the phenomena are not observed or don't match the predictions, then either the experiment is insufficient or the hypothesis is not useful. It has been suggested that the criterion should be "matches reality", but this is not correct (or at least is misleading), as demonstrated by both relativistic and quantum physics, where the theories were completely outlandish when proposed and it was years before experimentation could confirm their predictions.

It does not matter who comes up with the idea, nor does it matter who performs the experiment, so it is free from bias in that respect. Also, anybody can show that any idea is incomplete if they come up with the appropriate experiment that shows the predicted results are not there or not correct, but this does not make the hypothesis false (since it was never 'true' to begin with); it simply puts limits on its usefulness. Newton's laws of motion will put a satellite in space, so they are clearly useful, but if you want accurate GPS then you need to use the improvements and refinements proposed by Einstein's theories of relativity.

Footnote - Please notice that there is no mention of the 'S' word anywhere in this answer. Contrary to popular belief, it is not a core feature; it is just a very useful tool, one of many.

What do you mean by statistics not being a core feature? (Is that the S word?) I'm not claiming it is or isn't, I'm just curious about the wording of the footnote. – Sebastialonso – 2016-08-29T16:36:52.337

The OP was drawing the inference that because something was a certain way in statistics, so it must be in science. I was pointing out that science would happen in the same way even if statistics were never discovered. – Paul Smith – 2016-08-30T13:59:26.850

The scientific method is simply a method for ranking theories. Logic and Theology are other methods. When using Logic to rank theories, we are asking the question "Which theory makes the most sense?" When using Theology to rank theories, we are asking the question "Which theory most closely matches my Holy Scripture?" When using Science to rank theories, we are asking the question "Which theory most closely matches reality?"

The way science performs this ranking is simply by comparing the predictions of the theory to reality, i.e. performing experiments. Theories that match reality better are considered better than those that do not. People talk about "falsification", but that is really just an extreme form of ranking. Falsification is not at all necessary to the scientific method; all you need are multiple theories that experiments can distinguish. Science can rank two theories as "better" and "not as good" without necessarily throwing either one out. For example, the theory "Newton's Laws of Motion" has been superseded by "Einstein's Theory of Special Relativity", since Einstein's theory more closely matches reality. But that doesn't mean we throw out Newton's Laws. They are fine approximations, and in most practical situations the two theories are indistinguishable.
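To see the ranking idea mechanically, here is a toy sketch; every observation and both candidate "theories" below are invented for this illustration, and the mean-squared-error scoring is an assumption of the sketch, not anything stated in the answer:

```python
# Rank two made-up theories by how well their predictions match
# (simulated) observations, scored with mean squared error.
import numpy as np

x = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(seed=1)
observed = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.05, size=20)

def theory_a(x):  # simpler model: a straight line
    return 1.0 + 2.0 * x

def theory_b(x):  # refined model: adds a quadratic correction
    return 1.0 + 2.0 * x + 0.5 * x**2

for name, theory in [("A", theory_a), ("B", theory_b)]:
    mse = np.mean((theory(x) - observed) ** 2)
    print(f"theory {name}: mean squared error = {mse:.5f}")

# The smaller error ranks higher, but the loser is not discarded:
# like Newton next to Einstein, it may remain a fine approximation.
```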
Science cannot answer the question "Is the Earth 4.5 billion years old, or is it much newer than that but created to appear exactly as if it were 4.5 billion years old?" By definition, those two theories will match any reality equally well, so comparing them to reality cannot rank them differently. However, the question "Mr. Smith has kidney stones. Will acupuncture be an effective means of treating him?" is one that science can help with. The two theories "Acupuncture is effective in treating kidney stones" and "Acupuncture is not effective in treating kidney stones" are distinct and will match reality with different degrees of effectiveness.

Furthermore, theories have inherent suppositions: the theory "Acupuncture is effective in treating kidney stones" presupposes the theory "There exists some mechanism by which acupuncture could affect kidney stones." We can test this theory by searching for such a mechanism. We can test it further by categorizing types of mechanisms that could exist and exploring other effects the existence of such mechanisms would have. Any such mechanism would have other effects which could be detected with other sorts of experiments.

One twist of the scientific method is that it sometimes does not provide the same ranking that Logic or Theology would. For example, Logic tells us that Newtonian Relativity is the most sensible theory, and Einstein's Theory of Relativity is much less logical. Logic would also tell you that Quantum Mechanics is one of the silliest and least logical theories ever devised. And yet Science tells us that these two illogical theories match reality better than their more logical counterparts.

If you want to rank theories because you want to make predictions about what will happen in reality, then Science would be the way to go. If that is not your goal, then you should pick the method that best matches your goal.

I think that you are confusing some things. Although statistics is a tool, among a wide choice of tools, used in the scientific method, it is not in itself the scientific method. It has also been my understanding that you use the scientific method to prove - not disprove - a hypothesis. In science, proof lies in the assertion, not the negation. Stephen Jay Gould, in his book Hen's Teeth and Horse's Toes: Further Reflections on Natural History, specifically Chapter 19: Evolution as Fact and Theory, says:

In the American vernacular, 'theory' often means 'imperfect fact'--part of a hierarchy of confidence running downhill from fact to theory to hypothesis to guess...In science, 'fact' can only mean 'confirmed to such a degree that it would be perverse to withhold provisional assent.' I suppose apples might start to rise tomorrow, but the possibility does not merit equal time in a physics classroom...Evolutionists have been clear about this distinction between fact and theory from the very beginning, if only because we have always acknowledged how far we are from completely understanding the mechanisms (theory) by which evolution (fact) occurred. Darwin continually emphasized the difference between his two great and separate accomplishments: establishing the fact of evolution, and proposing a theory--natural selection--to explain the mechanisms of evolution.

So, first you have to define what the scientific method is. The first step in the scientific method is to observe and collect facts in the natural world. For instance, using the classical myth, Newton observed an apple falling from a tree.
He then observed that all the apples he watched fell from the tree (none went up in space, none hovered). He then observed that other items also fell to the earth. He then collected data as to rates of falling speed, etc. He then developed a hypothesis - gravitation - that would explain the mechanism as to how things always fell. He then developed a mathematical formula that showed how gravitation could be applied to all objects. He then confirmed that his hypothesis was valid in new events (subsequent objects that fell did so at the same rate predicted by his formula). As his hypothesis could be tested repeatedly in the real world and became generally accepted, it became a theory. If his hypothesis had not been able to predict future events, it would have remained a hypothesis and never become a theory.

Note: Newton never tested the hypothesis that objects fell because the gods pushed them down. He never tried to disprove the theory that objects fell because the gods pushed them down; he only tried to prove his theory.

Your answer would be better if you removed the incorrect notion of science proving things. Newton did not prove his theory of gravity correct; indeed that would have been impossible given that it is wrong (false). Instead he (and others) collected evidence (empirical data) supporting his theory. – hkBst – 2018-03-01T14:55:24.930

@hkBst Before he came up with the theory, he had a hypothesis based on observation (what goes up must come down); after testing his hypothesis against more observations he came up with the theory. He then tested his theory to see if he could predict future observations... We now know that his theory is not completely correct, based on the fact that mathematics has its limits in explaining the physical world, and also on Einstein's theory. But it's a good approximation. His theory did not 'prove' anything. It only explained in a better way how we can measure and predict events. – Swami Vishwananda – 2018-03-03T07:53:50.400

@hkBst We do not know what the physical, sensual universe 'is'. That is the realm of metaphysics, not physics. – Swami Vishwananda – 2018-03-03T07:55:30.763

What would being a serious alternative to the scientific method imply? To be useful, it would need to make precise and observable predictions about the material world. However, if you make predictions like that, common sense implies that it CAN be observed whether the predictions actually come true, and that it SHOULD be observed, at least to check whether you fooled yourself or not. But with that, you pretty much have the basic idea of the scientific method.

Sure, you could question whether a prediction needs to be observable and precise to be useful. If I understand Cort Ammon's answer correctly, that's what he suggests about Traditional Chinese Medicine. He makes the valid point that predictions concerning human beings can hardly be precise, and that's certainly a huge handicap for sociology, psychology and medicine. But any form of medicine that claims to be useful implicitly makes the prediction that it is able to help the majority of patients. Is that a precise prediction? (By precise I mean that it is possible to check objectively whether a prediction came true.) Probably not, because it is also very hard to predict precisely how the illness would have progressed without interference. And if the patient feels somewhat better, that might only be a result of the placebo effect.
You might be inclined to trust in a controversial theory like acupuncture (or homeopathy), but intellectual honesty would demand that you at least consider the possibility of being wrong. Even if acupuncture were all right in principle, the therapist you choose might not know what he's doing, or even be a devious fraud. Moreover, only a part of a traditional form of medicine might be "the real thing", while the other part might still be the result of superstition and wishful thinking. How can you distinguish between those?

If the criterion of "somehow feeling satisfied" after an intervention that promises benevolent yet imprecise consequences is not enough to evaluate a non-scientific claim, what is? I see three possible answers. (They also apply to magical rituals and similar stuff.)

a) You blindly trust in tradition and authority. The history of mankind tells us that might not always be a good idea - not in politics, and not in science.

b) You trust what feels emotionally most appealing to you. That is not exactly a promising strategy to avoid wishful thinking.

c) You apply philosophical or poetical criteria. Sadly, history has shown that this can be quite misleading. For example, the ancient Greek astronomers were convinced that the planets moved in circles because they considered circles to be the most perfect shapes in geometry. But modern observation has discovered that nature dared to choose a less beautiful shape: ellipses.

The amazing thing about science is that it provides an objective way to check its own claims. You suspect that a scientist is a fool who is telling bullshit? You don't have to trust his authority; you can go and check for yourself if he is telling the truth. (Admittedly, you'd better not bother trying to repeat elaborate experiments like the LHC in your garage.)

In logic and mathematics claims can be checked objectively as well. But they do not provide direct knowledge about the material world. (For example, with mathematics alone you cannot decide whether Euclidean or non-Euclidean geometry is the correct description of the universe, although both are "true" in a mathematical sense, i.e. logically developed out of different sets of basic assumptions.)

To conclude, I cannot see an alternative to the scientific method that has the same ability to filter out incorrect or misleading descriptions of physical reality.

Psst, I hate to break it to you, but they're very slightly wiggly ellipses due to the gravitational influence of other planets, etc. – Rex Kerr – 2015-04-17T01:42:23.703

Oh dear! Please don't shatter my beautiful delusions by mentioning crude reality! ;) – elias_d – 2015-04-17T05:04:23.760

I am asking whether anything better than the current scientific method could theoretically be achieved? Is there any reason why the current method has to be the best method for acquiring knowledge or can we imagine anything better?

Why certainly! Imagine an omniscient authority OA. OA could be a person imbued with omniscience, a deity available for questioning, an Oracle that is never wrong, Deep Thought (the supercomputer from the Hitchhiker's Guide), etc. What would be better in terms of 'correctness' would certainly be the Consult-the-OA Method! Experiments are difficult and time-consuming; science takes so much work. If OA is accurate, reliable, and indeed omniscient, then surely this is a 'better' method than the possibly-mistaken Scientific Method.
Various supposed OAs have arisen through history, and you will typically see some parallels with the scientific process among their followers (reliance on it, using it as a way to know truths about reality, trusting in it). Examples include the Oracle of Delphi, Moses, Jesus, Mohammad, "The Bible", Joseph Smith, L. Ron Hubbard, etc. These methods have so far fallen short for a number of reasons: dubious omniscience, falsified statements/claims, or the limited availability of the source. However, I could in theory imagine an Omniscient Authority that could just immediately answer all of our questions with full knowledge and truth, which would be 'better' at determining the truth and also easier than the Scientific Method.

I am asking whether anything better than the current scientific method could theoretically be achieved? Is there any reason why the current method has to be the best method for acquiring knowledge or can we imagine anything better?

This is hard to answer until you define "better at WHAT?". But perhaps you mean something like "better at finding useful intellectual models of the real world". (Versus, say, finding more emotionally engaging explanations of the events around us; there are definitely approaches which are better at that than the scientific method, like religion and sometimes philosophy.)

The answer I offer is: for practical purposes, it's rather unlikely. But before this answer makes sense, we have to knock the Scientific Method off any pedestal of purity, any idea that it's a pure logic-based procedure for discovering truth. The practice of the scientific method is really a collection of pragmatic heuristics which have proven to be fairly efficient at winnowing hypotheses and reducing human errors through filtering what scientists accept and reject. No one heuristic, or even fixed set of heuristics, IS the scientific method which has proven itself effective (despite textbooks trying to simplify and regularize) - they are just tools. (Heuristics: give more weight and trust to theories and observations which have been reproduced by other scientists; which explain more existing data; which require the fewest new assumptions; which successfully predict a new observation, especially when the prediction differs from previous theories; which come from a respected researcher with a demonstrated track record; which have been published in a quality peer-reviewed journal; etc. - some of these are hallowed as closer to the 'pure' scientific method, but all of them inform the social process of collectively building ever more accurate and useful models of the real world, i.e., science.) Not every tool is used in every case (try reproducing the Big Bang 10 times and see how far you get), and there is some real fuzziness. The set of tools can be and often is tweaked, while staying within the practice of science. So let's look deeper.

The core behind the most important of those pragmatic heuristics is this underlying meta-heuristic: clearly observe the discrepancies between your models (theories) and the real world, and continuously adjust and expand your models to reduce that discrepancy. Put another way, science uses the measurable and observable aspects of the real world as THE standard against which all theories must be tested, and uses a negative feedback loop to successively minimize divergences.
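As a bare-bones illustration of that meta-heuristic, here is a sketch of the feedback idea in Python; every number in it (the "true" value, the gain, the iteration count) is invented, and this is a sketch of the general idea rather than anyone's actual procedure:

```python
# Negative feedback in miniature: observe the discrepancy between a
# one-parameter 'model' and 'reality', and correct against it each step.
true_value = 9.81   # stands in for the measurable real world
estimate = 5.0      # our current model of it
gain = 0.3          # how aggressively each correction is applied

for step in range(20):
    discrepancy = true_value - estimate   # observe model-vs-world error
    estimate += gain * discrepancy        # adjust the model to shrink it
    print(f"step {step:2d}: estimate = {estimate:.4f}")

# The estimate converges on 'reality' precisely because the observed
# divergence is fed back with a corrective sign; remove the feedback
# (never look at the discrepancy) and the model stops improving.
```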
When you use an approach to understanding and modeling the real world (or something else) without this negative feedback loop, THEN you fall outside the realm of science. We all do that - poets, lovers, even a businessman trusting their (unexamined, much less unproven) intuition.

In this light, your question could be: Is there an approach to creating more accurate models of the real world than the scientific method of paying attention to the differences between theories (or beliefs) and the real world and working systematically (using various heuristics as tools) to minimize that discrepancy?

And like science, the pragmatic (and not purely logically derived) answer is: Not likely. It's like asking whether there's a better way to land more darts on the bull's eye than the approach of noticing how far darts are landing from it and correcting for the factors that cause the inaccuracies (negative feedback). While one might occasionally hit the bull's eye by tossing darts blindfolded and unaware of how close the darts are landing, it's not likely to be a good strategy in the long run.

And this is a falsifiable hypothesis (another of those heuristics): just come up with a method which does a better job than the fuzzy but much-practiced scientific method :-)

The scientific method allows us to make a model of reality which appears to work to a great degree of accuracy. There are, however, two major limitations: we can only model what occurs when there are many observable occurrences; and we can only precisely model what will occur when there are finite possibilities. This means we cannot answer questions such as: "What am I?"; "Was there existence before time?"; "Where, exactly, is a given particle in phase-space?"; and "Is this painting good?".

The first limitation seems straightforward, and generally the methods humanity follows here are alternative belief systems: from the religious belief "God was before time", through the subjective belief "I think this painting is one of the best I've ever seen", to the logical belief "I am a complex system of self-references that happens to produce an entity capable of pondering such things".

The second limitation seems at first to pose little threat; after all, when are there infinite possibilities? However, it seems that quantum physics runs into limitations here (exposed as the uncertainty principle). A mathematical interpretation of the likelihood of unobservable states is that of a negative probability; this is akin to defining a region of phase-space in which the particle in question could be anywhere-when.

All our experimental methodology is based on the principle of the statistical stabilization (the law of large numbers). All experiments are prepared in such circumstances that relative frequencies must stabilize. This is the result of our cognitive evolution. In the process of evolution the brain extracted from the chaotic and (lawless) reality phenomena which satisfy the principle of the statistical stabilization (repeatability in the average). These and only these phenomena are considered by the brain as real physical phenomena. Negative probabilities give the possibility to extend the range of physical phenomena by considering phenomena which violate the principle of the statistical stabilization. "Interpretations of probability." A. Khrennikov (1999)

A nice illustration of negative probabilities in the quantum world is described by Johannes Koelman in his blog post "Quantum Casino - Less Than Zero Chance".
Even with these limitations there is still the question of whether we can know anything (Scepticism), or even whether we can know that (Pyrrhonian scepticism). However, it is also often forgotten that the scientific method acknowledges that its results may be incorrect, and it may revise what it has previously asserted upon observations of previously unobserved phenomena (hence adhering to Fallibilism). Lastly, there is always the possibility that there is actually no such thing as truth, or that any assertion of truth is shorthand for reporting an infinite regress, or even just syntactic sugar (Deflationism).

Therefore, it is my opinion that we cannot answer the question posed - it could be the case that superior knowledge-acquisition methods are theoretically possible, it may even be the case that the scientific method is a stepping stone toward such methods, or it may be that what the culmination of human experience has produced is merely a reflective mask over the persona of reality, or even that the search itself is what makes reality manifest as it does for us. Another couple of questions the scientific method cannot answer may enlighten us: "Does reality exist when unobserved?", and "Does reality abide by the same laws when unobserved as when observed?".

I mostly agree, but why can we only model what will occur when there are finite possibilities? Even the simplest real-valued one-variable model has, formally, infinite possibilities, but that's no problem whatsoever. You can test an infinite family of models, as long as you have some way to describe how well your data supports or undermines the model which doesn't require you, the scientist, to do infinite work. In fact, this is standard fare in Bayesian statistics. – Rex Kerr – 2015-04-17T17:21:28.223

Thanks, I have qualified the statement with the word "precisely" as this is the limitation, not that we cannot model in such cases. To be honest I'm not entirely sure this is still quite correct... I may edit further later if I can think it through clearly. In relation to the uncertainty principle, the lack of precision is a mathematical result of the fact that the pair of properties in question are conjugate variables; furthermore, since they are conjugate variables, any model for one must implicitly model the other. – Jonathan Allan – 2015-04-18T06:42:29.320

I've seen some confusion about formalizing the way that science, through the language of maths, is turned into "knowledge", so I would like to express the way I see it first. The only absolute and universal truth we can achieve (if any) is the one that is driven by logic, and that is what mathematics is about. The subtle point lies in the fact that mathematics never actually says anything about the real world, and most importantly never states that the assumptions (hypotheses) it uses are true; it just states that hypothesis => thesis is true as an implication. This is called a theorem. For example, we can think that if a natural number is divisible by 4, then it most certainly is divisible by 2 (if $n = 4k$, then $n = 2(2k)$, so the implication holds). In this case:

hypothesis: (all the axioms of natural numbers) + "n is divisible by 4"
thesis: n is divisible by 2

Every single human being on earth will agree that the implication from hypothesis to thesis is true, but that does not mean that the hypothesis itself is true. This is basically why, with mathematics only, we can understand little about the real world in terms of real knowledge. So this is where science comes into play.
Science actually has the really difficult job of finding the set of hypotheses that are most likely to correspond to how reality is, so that, using the language and truths of mathematics, one can then derive from them a theory that can be used to predict future outcomes of a particular phenomenon. This "finding the correct hypothesis" is actually a very difficult problem, and I like to think of it like this: Reality is like a game being played on a chess board by some unknown rules. Science tries to understand those rules by just looking at the moves that are made. Since the rules can be arbitrarily complicated, it is easy to see that science can never be 100% sure of anything; it can only be sure that the rules are not a certain set (hence hypotheses can only be denied). So the question "is there a better way than science to achieve knowledge" can be seen as "is there a better way to understand the rules of the game than looking at the moves?"

I can think of two possible alternative ways:

1. you are told the rules by some omniscient being (e.g., God)
2. you intuitively reach knowledge

The thing is, I can see a lot of problems in both approaches. Let's suppose we are actually able to talk to God (approach 1) and he explains to us all the rules of the game, in a way that perfectly matches all our observations so far. How do we know he is not lying? We can't. So that, too, is conditional knowledge; therefore that too is "science". Let's now suppose one reaches perfect knowledge by intuition (approach 2). In this case, that person would just "know" future moves without knowing the rules, not because someone told him/her (otherwise that would still be case 1), but just because he/she knows. This might be an alternative way to science, but the main problem is that it is not communicable. There cannot be any class, teacher, or book that teaches you intuitive knowledge, because it is based on nothing. So is this a "way" to achieve knowledge? I fear not; by definition it seems more like an "event" that you happen to experience, which cannot be related to any cause/effect phenomenon that could be studied by science.

In conclusion, I feel like "seeking knowledge" itself is science under a different name, as a matter of definition. In fact, the word "science" comes from the Latin "scientia", which means exactly "knowledge". The two words evolved along different paths, and nowadays "science" sounds more like a discipline, a subject, but they were, and ultimately are, the same thing.

Typically, I am curious about the process of proving the hypotheses wrong. If I am not mistaken, no hypothesis can be proven correct, we can only prove hypotheses wrong.

The hypothesis that you refer to is the statistical definition. As someone else pointed out, any scientific statement is a hypothesis. When you reject the null hypothesis you are accepting the alternative hypothesis. Now, what statistical tests cannot say is which of the infinite alternative hypotheses is actually true; in other words, our alternative hypothesis is always general, such as m ≠ µ. There are one-tailed tests that are more stringent than this and address the direction of the inequality (either m > µ or m < µ). This is so for statistical tests because each of them can only test whether a certain observation fits a given model. There are many popular models which are the basis of the common statistical tests. Note that if you intend to test whether a certain observation follows a Poisson process, then you are trying to validate an observation as a case of a certain model instead of using rejection-based tests.
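To make the rejection-versus-validation distinction concrete, here is a hedged sketch of checking invented count data against a Poisson model with a chi-square goodness-of-fit test; the counts, the binning, and the scipy calls are assumptions of this illustration, not part of the original answer:

```python
# Validate counts against a Poisson model (goodness of fit) rather than
# merely rejecting a null about a mean.
import numpy as np
from scipy import stats

counts = np.array([35, 38, 19, 6, 2])  # intervals seeing 0,1,2,3,4+ events
k = np.arange(len(counts))
n = counts.sum()
lam = (k * counts).sum() / n           # crude rate estimate from the data

expected = n * stats.poisson.pmf(k, lam)
expected[-1] = n - expected[:-1].sum() # fold the 4-or-more tail into last bin

# ddof=1 because one parameter (lambda) was estimated from the same data.
chi2, p = stats.chisquare(counts, expected, ddof=1)
print(f"lambda = {lam:.3f}, chi2 = {chi2:.3f}, p = {p:.3f}")

# A large p-value means the Poisson model is consistent with the data:
# positive support for a specific model, though still not a proof.
```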
However, you should note that it is difficult to prove that certain data follow a model, because you need to verify each and every parameter. Two distributions are considered the same only if all of their moments match. This is essential because there can be an infinite number of functions that produce a certain shape (a function can be expressed as a Maclaurin series, and if all moments are the same the functions ought to be the same). Now, what we regularly do is determine whether something we observed is not just the result of a random measurement error, and that is why we aim to reject the null hypothesis, which assumes that the data is an outcome of a Gaussian distribution model (you can see this answer in Biology.SE for details). I would say that you can validate a possibility, instead of rejecting infinite possibilities, if you know the underlying model and have enough data. This is how the prediction-validation-correction method works. In certain cases the model can be built using basic principles instead of being inferred from the data. Finally, all models have assumptions, and you need to be sure whether your experiments satisfy these assumptions or not; if not, you should revise the model.

Intuition or "gut feeling" is no alternative method. Intuition cannot be called a method because there is no set protocol for it, and there is no way to replicate it. Biologically speaking, an intuitive guess is basically a case of applying multiple statistical tests (sub-consciously, we can say). Intuition works only when you have a good deal of prior information (sub-consciously or consciously).

Empiricism, in its most bald and simplest version - in Hegel, this is his notion of Sense-Certainty - cannot say anything other than "here it is" or "there it was"; one cannot move from 'a bottle' to even 'one bottle' and then to 'one'. To do this requires what traditionally is called induction or abstraction; and it's these two notions that sustain the traditional concept of theory-making in the scientific method when philosophically thought in a positive manner (not the negative manner of Popper - falsification). But, as Deutsch notes in one of his popular books, this is necessary but not sufficient to describe the scientific method.

To get a more positive answer based on current facts, and less criticism of the premise, we can ask: what methodologies in the 'sciences' are not 'Popperian'? Two examples stand out for me -- anthropology and the dialectical systems like those of Marx and Freud. I would suggest that these two offer two alternative criteria for ranking hypotheses and, if nothing else, give us guidance as to which hypotheses to test, which is a major component of scientific inquiry that Popperians just ignore.

The component of anthropology that defies falsificationism is the tradition of handling data as a story, which comes from the original habit of handling stories as data. There is an internal human criterion for what is and what is not a good story, which I would attribute to an instinct against being deceived. Everyone in science uses this criterion, but only anthropologists, and those favorably disposed toward them from other social sciences, dare to consider this a basic part of the scientific process.

The component of dialectics that defies falsificationism is the continual iteration of refinement that makes the theory infinitely flexible.
At its worst, Freudianism is never wrong: the error is always in the application, and the failures to apply are always analyzed in terms of how to better apply the theory in the future. Only when there is a real and deep disagreement in the community of practitioners does anyone fall back on actually testing the theory, usually by borrowing ideas from the literature or from the other, more data-driven, schools of psychology. At its best, only modifications of the theory are allowed that recapture the current theory in its entirety and simply shift focus. I would propose that these two stances are captured in Kuhn's broader theory of science. They are things that he sees going on in scientific 'revolutions'. When the current paradigm loses traction, it is necessary to develop alternative groundwork, and to winnow these contenders down or merge them into a cohesive candidate for a new paradigm. The process that seems to go on, to my eye, for developing new groundwork is very much a dialectical one: weak candidates are refined and folded in an iterative process that resembles dialectical development, until they have a certain internal texture that allows them to gain adherents. The process for winnowing alternatives, again to my eye, strongly depends upon our sense of story, and runs very much like the process of historiography or anthropology. I would argue that there is space for a process that more fluidly combines these three approaches and takes them all equally seriously. If Kuhn is right, from a historical perspective, science already really does so. But it ignores its implicit dependency on the storytelling process and disparages dialectical refinement, admitting their relevance only when it is in crisis. To me, such a process would be Kuhnian evolution writ small, and would resemble, metaphorically, something like sculpture. One chooses a medium, one makes or cuts the bulk to be sculpted, and then one refines that bulk to produce a recognizable work. Traditional sculpture resembles what Kuhn describes: almost all of the work is refinement. But other varieties exist where the bulk is molded or welded, and most of the work is not in cutting away excess but in the middle phase of building up. And still others exist, from homey decoupage-in-relief to outré 'found' art, where the primary activity is gathering the medium.

It would be helpful to add links or explanations about Kuhn's ideas and the methods from anthropology and dialectics. As it is, it is very vague to me. – ivbc – 2017-06-11T02:13:07.980

Freud, Marx, Kuhn and Popper are already huge names that anyone can Google. I am really sure that adding the links would help absolutely no one. – None – 2017-06-12T16:25:41.653

I can think of two alternate processes that are used to attain truth where science is not suited to the application. The process that this thread follows is the Socratic method: arguments and counter-arguments that apply logic, some degree of evidence, and examples to prove a point. Law uses this method; forensic science can contribute to evidence, but ultimately the Socratic method is the process used to determine guilt or innocence. You can prove someone's blood is on the scene with science, but you cannot prove why it is there, or prove that the killer had intent; the Socratic method is the best process to attain truth in this scenario. In the design/planning field, research by design is a common approach to determining the best possible outcome.
Essentially, multiple scenarios are designed and each is assessed against an established framework. This could be used to determine the best approach for densification of a town or for cleaning a river. Science is limited in that it generally focuses on one variable, and complex systems like cities and environments are chaotic.

One method is solipsism, or radical skepticism. You can only believe what you personally experience, in this moment. What is it you want of a method? The scientific method is a poor method if one is attempting to solve the problem of other minds. The more you measure the world, at finer and finer scales, the less inclined you may be to regard anything, even another person, as possessing "a mind of its own."
2021-06-21 16:08:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4908832311630249, "perplexity": 874.5602623597681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00347.warc.gz"}
https://www.mybaseline.org/maths1/textbook?topicID=9
# 9 Matrices

15th-century printers pioneered the use of movable type printing. Mirror images of each letter were cast in metal. These were arranged, letter by letter, in frames which printed one page at a time. A frame was called a matrix; several frames were called matrices. In mathematics, a matrix is a frame used to hold numbers or variables. Matrices were used before computers were invented, but the advent of computers has made the use of matrices much more common. A 2x2 matrix looks like this $\begin{bmatrix} 1 & 2 \\[0.3em] 4 & -3 \end{bmatrix}$ Matrices are always rectangular and can be any size. The size of a matrix is called its order. Matrices are generally represented by capital letters and the individual numbers, called elements, are represented by lowercase letters. Annoyingly, the subscripts are written row then column. $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ Figure 9.0: Matrix $A$ with elements $a_{ij}$

## 9.1 Addition and Subtraction of Matrices

You can add and subtract matrices of the same order. To add two matrices together you add the corresponding terms. If $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ and $B=\begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix}$ then $A+B = \begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}+ \begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\[0.3em] a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}$

Example 9.1: Find $A+B$ given $A=\begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix}$ and $B=\begin{bmatrix} -2 & 4 \\[0.3em] 3 & 1 \end{bmatrix}$

$A+B = \begin{bmatrix} 1+(-2) & 3+4 \\[0.3em] -5+3 & 2+1 \end{bmatrix} = \begin{bmatrix} -1 & 7 \\[0.3em] -2 & 3 \end{bmatrix}$

To subtract one matrix from another you subtract the corresponding terms. If $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ and $B=\begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix}$ then $A-B = \begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}- \begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} \\[0.3em] a_{21}-b_{21} & a_{22}-b_{22} \end{bmatrix}$

Example 9.1a: Find $A-B$ given $A=\begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix}$ and $B=\begin{bmatrix} -2 & 4 \\[0.3em] 3 & 1 \end{bmatrix}$

$A-B = \begin{bmatrix} 1-(-2) & 3-4 \\[0.3em] -5-3 & 2-1 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\[0.3em] -8 & 1 \end{bmatrix}$

## 9.2 Multiplication of a Matrix by a Scalar

To multiply a matrix by a scalar you multiply each of the elements by the scalar. If $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ and $c$ is a scalar then $cA=\begin{bmatrix} ca_{11} & ca_{12} \\[0.3em] ca_{21} & ca_{22} \end{bmatrix}$

Example 9.2: Given $A=\begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix}$ and $c=3$ find $cA$.

$cA = \begin{bmatrix} 3\times1 & 3\times3 \\[0.3em] 3\times(-5) & 3\times2 \end{bmatrix} = \begin{bmatrix} 3 & 9 \\[0.3em] -15 & 6 \end{bmatrix}$

## 9.3 Multiplication of a Matrix by another Matrix

You can multiply two matrices iff (iff means if and only if) the number of columns in the first matrix is equal to the number of rows in the second.
If $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ and $B= \begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix}$ then the matrices can be multiplied together, because the number of columns in $A$ is the same as the number of rows in $B$. To multiply $A$ by $B$ we take the first column of $B$ and put it over $A$, multiply, then sum the corresponding terms. If $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ and $B=\begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix}$ then

$A \times B = \begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix} \times \begin{bmatrix} b_{11} & b_{12} \\[0.3em] b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} \times b_{11} + a_{12} \times b_{21} & a_{11} \times b_{12} + a_{12} \times b_{22} \\[0.3em] a_{21} \times b_{11} + a_{22} \times b_{21} & a_{21} \times b_{12} + a_{22} \times b_{22} \end{bmatrix}$

Example 9.3: Find $A \times B$ given $A=\begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix}$ and $B=\begin{bmatrix} -2 & 4 \\[0.3em] 3 & 1 \end{bmatrix}$

$A \times B = \begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix} \times \begin{bmatrix} -2 & 4 \\[0.3em] 3 & 1 \end{bmatrix} = \begin{bmatrix} 1 \times (-2) + 3 \times 3 & 1 \times 4 + 3 \times 1 \\[0.3em] -5 \times (-2) + 2 \times 3 & -5 \times 4 + 2 \times 1 \end{bmatrix} = \begin{bmatrix} 7 & 7 \\[0.3em] 16 & -18 \end{bmatrix}$

## 9.4 Division of a Matrix by another Matrix

Matrix division does not exist. If you want to find $X$ in the matrix equation $AX=C$, where $A$, $X$ and $C$ are matrices, you first need to find the inverse of matrix $A$. When you multiply $A$ by its inverse, $A^{-1}$, the result is the identity matrix, that is, a matrix with $1$s on the main diagonal and $0$s everywhere else. The inverse of $A$ is $A^{-1}$ where $A \times A^{-1} = I$. If $A$ were a $3\times3$ matrix, $I$ would be $\begin{bmatrix} 1&0&0\\[0.3em] 0&1&0\\[0.3em] 0&0&1 \end{bmatrix}$

Having found $A^{-1}$ we can pre-multiply both sides by it (the order matters, because matrix multiplication is not commutative): from $AX = C$ we get $A^{-1}AX = A^{-1}C$, and since $A^{-1}A = I$, this gives $X = A^{-1}C$.

## 9.5 Determinants

### 9.5.1 2x2 Determinants

The determinant of a matrix can be thought of as the magnitude of the matrix. You can only calculate determinants for square matrices. For a 2x2 matrix, $A=\begin{bmatrix} a_{11} & a_{12} \\[0.3em] a_{21} & a_{22} \end{bmatrix}$ the determinant is given by $a_{11} \times a_{22} - a_{12} \times a_{21}$.

Example 9.5.1: Find the determinant of $A=\begin{bmatrix} 1 & 3 \\[0.3em] -5 & 2 \end{bmatrix}$.

$|A| = 1 \times 2 - 3 \times (-5) = 17$

### 9.5.2 3x3 Determinants

To find the determinant of a 3x3 matrix take each term in the top row one at a time. Ignore the row and column that contain the term. What is left is a 2x2 matrix that we can evaluate as shown above. At the end of this first step we have $a_{11} \times (a_{22} \times a_{33} - a_{23} \times a_{32})$ For the next step we take $a_{12}$. We ignore the top row and middle column, which gives us $a_{12} \times (a_{21} \times a_{33} - a_{23} \times a_{31})$ There is an added complication. The signs on the top row alternate: $+$, $-$, $+$. This means we need to change the sign of the middle term. If it is negative we make it positive. If it is positive we make it negative.
We repeat the process for the last term, giving us the determinant of $A$ as:

$|A| = a_{11} \times (a_{22} \times a_{33} - a_{23} \times a_{32}) - a_{12} \times (a_{21} \times a_{33} - a_{23} \times a_{31}) + a_{13} \times (a_{21} \times a_{32} - a_{22} \times a_{31})$

Example 9.5.3: Find the determinant of $A=\begin{bmatrix} 2 & 5 & 3\\[0.3em] -1 & 6 & 2\\[0.3em] 3 & -1 & 2 \end{bmatrix}$.

$|A| = 2 \times (6 \times 2 - 2 \times (-1)) - 5 \times (-1 \times 2 - 2 \times 3) + 3 \times (-1 \times (-1) - 6 \times 3) = 2 \times 14 - 5 \times (-8) + 3 \times (-17) = 28 + 40 - 51 = 17$

## 9.6 Where Do We Use Matrices?

The question should really be 'Where do we not use matrices?' There was a time when matrices were only used in specialist applications like mathematics and quantum mechanics, but since computers became commonplace, matrices underpin pretty much every computer application. The name of the popular program MATLAB is a concatenation of Matrix Laboratory. Artificial intelligence, neural networks, and statistical, drafting, analysis and modelling software are all based on matrices. Computer graphics would be virtually impossible without the use of matrices. You can easily imagine a computer display as a 2D matrix, but computer graphics, using matrices, takes it a lot further than that. Movement in 3D, reflection of light and shadows are all calculated using matrices. In a well-known still from the film Gravity, the only real items in the image are the faces of the actors. Everything else looks real and moves as if it is real, but is computer generated. Imagine you have the 2D coordinates of an object in an array $A$. To scale the object you would multiply $A$ by the scaling matrix $S=\begin{bmatrix}S_x&0\\[0.3em]0&S_y\end{bmatrix}$. To rotate $A$ you would multiply by the rotation matrix $R=\begin{bmatrix}\cos(\theta)&\sin(\theta)\\[0.3em]-\sin(\theta)&\cos(\theta)\end{bmatrix}$. A linked video shows how characters are animated by the film company Pixar.
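To check the worked examples above, here is a minimal numpy sketch (numpy is an assumption of this example, not part of the textbook):

```python
import numpy as np

A = np.array([[1, 3], [-5, 2]])
B = np.array([[-2, 4], [3, 1]])

print(A + B)             # addition (Example 9.1):       [[-1  7] [-2  3]]
print(A - B)             # subtraction (Example 9.1a):   [[ 3 -1] [-8  1]]
print(3 * A)             # scalar multiplication (9.2):  [[  3  9] [-15  6]]
print(A @ B)             # matrix product (Example 9.3): [[ 7  7] [16 -18]]
print(np.linalg.det(A))  # 2x2 determinant (9.5.1): 17.0, up to floating point

# Section 9.4: solving AX = C by pre-multiplying both sides with the inverse of A.
C = np.eye(2)
X = np.linalg.inv(A) @ C
```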
2021-09-28 18:57:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7627665996551514, "perplexity": 251.11599602540235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00389.warc.gz"}
https://rpg.stackexchange.com/questions/53433/pathfinder-golem-senses-sight/53434
# Pathfinder golem senses / sight

I cannot find any particular documentation about golem senses. What happens when a golem is inside a Fog Cloud spell? Can it see and hear normally? Is its line of sight blocked? In our case we are dealing with an iron golem.

Golems are constructs. Per the general construct rules (unless the specific golem writeup overrides it) they therefore have darkvision 60' and low-light vision. I'm not sure how you are not finding this, as it's in the stat block for every creature under Senses, right after the Init in the top section of the monster stats. For example:

Iron Golem CR 13
XP 25,600
N Large construct
Init –1; Senses darkvision 60 ft., low-light vision; Perception +0

They also have immunity to mind-affecting effects, including illusions of the pattern and phantasm types, and immunity to spells in general, but fog cloud is an SR/no conjuration (creation) spell, and thus the fog it conjures blocks sight. (Because "An iron golem is immune to spells or spell-like abilities that allow spell resistance.") So no, an iron golem cannot see through a fog cloud (and it can slip on grease). Those are actually some of the basic standbys to fight them.

• Yes I saw that, but there is no description of their senses: can they hear? Do they see normally, with two eyes, like humanoid creatures (with darkvision, of course)? Or do they see at 360°? (I can hardly imagine they can smell or taste, but.. who knows) – thermz Dec 23 '14 at 12:25
2019-08-25 08:08:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5349928140640259, "perplexity": 8213.781615870877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323221.23/warc/CC-MAIN-20190825062944-20190825084944-00543.warc.gz"}
https://ifc43-docs.standards.buildingsmart.org/IFC/RELEASE/IFC4x3/HTML/lexical/IfcTimeStamp.htm
# 8.5.2.13 IfcTimeStamp

## 8.5.2.13.1 Semantic definition

IfcTimeStamp is an indication of date and time, expressed as the number of seconds which have elapsed since 1 January 1970, 00:00:00 UTC.

Type: INTEGER

## 8.5.2.13.2 Formal representation

TYPE IfcTimeStamp = INTEGER;
END_TYPE;
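For illustration, such a value can be interpreted with any Unix-epoch-aware library; a minimal Python sketch (the sample value is made up, not taken from the standard):

```python
from datetime import datetime, timezone

ifc_time_stamp = 1679327057  # hypothetical IfcTimeStamp INTEGER value
print(datetime.fromtimestamp(ifc_time_stamp, tz=timezone.utc))
# -> 2023-03-20 15:44:17+00:00
```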
2023-03-20 15:44:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27038052678108215, "perplexity": 14795.847884438223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00020.warc.gz"}
https://tex.stackexchange.com/questions/385471/edit-the-link-span-of-a-cite?noredirect=1
# Edit the link span of a \cite [duplicate]

I am using biblatex with backend=biber and the hyperref package. In order to control the cite link color, I use

``````
\definecolor{mycolor}{HTML}{065F8F}
\hypersetup{citecolor=mycolor, urlcolor=mycolor}
``````

For the biblatex package, I configure the style with

``````
\usepackage[backend=biber,
style = authoryear,
citestyle = authoryear-ibidem,
hyperref = true,
...
]{biblatex}
``````

When I now use the \cite command, it renders something like Albrecht and Lee, 2012, where you must imagine the 2012 in blue and the rest in black. How can I edit the \cite command so that the complete reference (including the author name) is a link and therefore blue?

## marked as duplicate by Mico, Mensch, CarLaTeX, Zarko, user36296 Aug 8 '17 at 22:10

• A solution will definitely depend on the style you use. Do you use `style=authoryear` or something else? Do you apply any modifications to that style? – moewe Aug 8 '17 at 21:00
• `authoryear-ibidem` is not a style I know, the standard style is called `authoryear-ibid`, do you mean that? Note that in that case `style = authoryear, citestyle = authoryear-ibid,` is equivalent to `style = authoryear-ibid,`. – moewe Aug 8 '17 at 21:11
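For reference, the commonly cited approach in the duplicate target's family of answers is to redefine the cite command so the whole citation is wrapped in biblatex's bibhyperref field format. A sketch, untested here, which would need to be repeated for \parencite and friends:

``````
% Sketch: make author and year one hyperlink (untested; adapt for \parencite etc.)
\DeclareCiteCommand{\cite}
  {\usebibmacro{prenote}}
  {\usebibmacro{citeindex}%
   \printtext[bibhyperref]{\usebibmacro{cite}}}
  {\multicitedelim}
  {\usebibmacro{postnote}}
``````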
2019-10-13 22:55:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8187237977981567, "perplexity": 2965.993546509723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986648343.8/warc/CC-MAIN-20191013221144-20191014004144-00063.warc.gz"}
https://wumbo.net/symbol/decimal-point/
# Decimal Point Symbol

The decimal point separates the whole part of a number, written with the digits to its left, from the fractional part, written with the digits to its right. For example, in 3.25 the whole part is 3, and the digits 2 and 5 represent 2/10 and 5/100.
2019-03-20 18:23:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079094290733337, "perplexity": 176.07295858863964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.64/warc/CC-MAIN-20190320170159-20190320192037-00078.warc.gz"}
https://calliope.readthedocs.io/en/v0.3.7/user/formulation.html
# Model formulation

This section details the mathematical formulation of the different components. For each component, a link to the actual implementing function in the Calliope code is given.

## Objective function (cost minimization)

The default objective function minimizes cost: $min: z = \sum_y (weight(y) \times \sum_x cost(y, x, k=k_{m}))$ where $$k_{m}$$ is the monetary cost class. Alternative objective functions can be used by setting the objective in the model configuration (see Model-wide settings). weight(y) is 1 by default, but can be adjusted to change the relative weighting of the costs of different technologies in the objective, by setting weight on any technology (see Technology).

## Basic constraints

### Node resource

Defines the following variables:

• rs: resource to/from storage (+ production, - consumption)
• r_area: resource collector area
• rbs: secondary resource to storage (+ production)

It also defines the constraint c_rs. This constraint defines the available resource for a node, $$r_{avail}$$:

$r_{avail}(y, x, t) = r_{scale}(y, x) \times r_{area}(y, x) \times r_{eff}(y)$

The c_rs constraint also decides how the resource and storage are linked. If the option constraints.force_r is set to true, then

$r_{s}(y, x, t) = r_{avail}(y, x, t)$

If that option is not set, and the technology inherits from the supply or unmet_demand base technologies,

$r_{s}(y, x, t) \leq r_{avail}(y, x, t)$

Finally, if it inherits from the demand technology,

$r_{s}(y, x, t) \geq r_{avail}(y, x, t)$

Note: For the case of storage technologies, $$r_{s}$$ is forced to $$0$$ for internal reasons, while for transmission technologies it is unconstrained. This is irrelevant when defining models: defining a resource for either storage or transmission technologies has no effect.

### Node energy balance

Defines the following variables:

• s: storage level
• es_prod: energy from storage to carrier
• es_con: energy from carrier to storage

It also defines three constraints, which are discussed in turn:

• c_s_balance_pc: energy balance for supply, demand, and storage technologies
• c_s_balance_transmission: energy balance for transmission technologies
• c_s_balance_conversion: energy balance for conversion technologies

#### Supply/demand/storage balance

A node that allows storage and either supply or demand is the most complex case, with the balancing equation

$s(y, x, t) = s_{minusone} + r_{s}(y, x, t) + r_{bs}(y, x, t) - e_{prod} - e_{con}$

$$e_{prod}$$ is defined as $$es_{prod}(c, y, x, t) \times e_{eff}(y, x, t)$$. $$e_{con}$$ is defined as $$\frac{es_{con}(c, y, x, t)}{e_{eff}(y, x, t)}$$, or as $$0$$ if $$e_{eff}(y, x, t)$$ is $$0$$. $$r_{bs}(y, x, t)$$ is the secondary resource and is always set to zero unless the technology explicitly defines a secondary resource. $$s(y, x, t)$$ is the storage level at time $$t$$. $$s_{minusone}$$ describes the state of storage at the previous timestep: $$s_{minusone} = s_{init}(y, x)$$ at time $$t=0$$; else,

$s_{minusone} = (1 - s_{loss}) \times timeres(t-1) \times s(y, x, t-1)$

Note: In operation mode, s_init is carried over from the previous optimization period. If no storage is allowed, the balancing equation simplifies to

$r_{s}(y, x, t) + r_{bs}(y, x, t) = e_{prod} + e_{con}$

#### Transmission balance

Transmission technologies are internally expanded into two technologies per transmission link, of the form technology_name:destination.
For example, if the technology hvdc is defined and connects region_1 to region_2, the framework will internally create a technology called hvdc:region_2 which exists in region_1 to connect it to region_2, and a technology called hvdc:region_1 which exists in region_2 to connect it to region_1. The balancing for transmission technologies is given by

$es_{prod}(c, y, x, t) = -1 \times es_{con}(c, y_{remote}, x_{remote}, t) \times e_{eff}(y, x, t) \times e_{eff,perdistance}(y, x)$

Here, $$x_{remote}, y_{remote}$$ are x and y at the remote end of the transmission technology. For example, for (y, x) = ('hvdc:region_2', 'region_1'), the remotes would be ('hvdc:region_1', 'region_2'). $$es_{prod}(c, y, x, t)$$ for c='power', y='hvdc:region_2', x='region_1' would be the import of power from region_2 to region_1, via an hvdc connection, at time t. This also shows that transmission technologies can have both a static or time-dependent efficiency (line loss), $$e_{eff}(y, x, t)$$, and a distance-dependent efficiency, $$e_{eff,perdistance}(y, x)$$. For more detail on distance-dependent configuration see Model configuration.

#### Conversion balance

The conversion balance is given by

$es_{prod}(c_{prod}, y, x, t) = -1 \times es_{con}(c_{source}, y, x, t) \times e_{eff}(y, x, t)$

The principle is similar to that of the transmission balance. The production of carrier $$c_{prod}$$ (the carrier option set for the conversion technology) is driven by the consumption of carrier $$c_{source}$$ (the source_carrier option set for the conversion technology).

### Node build constraints

Defines the following variables:

• s_cap: installed storage capacity
• r_cap: installed resource to/from storage conversion capacity
• e_cap: installed storage to/from grid conversion capacity (gross)
• e_cap_net: installed storage to/from grid conversion capacity (net)
• rb_cap: installed secondary resource conversion capacity

Built capacity is managed by six constraints. c_s_cap constrains the built storage capacity by $$s_{cap}(y, x) \leq s_{cap,max}(y, x)$$. If y.constraints.use_s_time is true at location x, then y.constraints.s_time.max and y.constraints.e_cap.max are used to compute s_cap.max at reference efficiency. If y.constraints.s_cap.equals is set for location x or the model is running in operational mode, the inequality in the equation above is turned into an equality constraint. c_r_cap constrains the built resource conversion capacity by $$r_{cap}(y, x) \leq r_{cap,max}(y, x)$$. If the model is running in operational mode, the inequality in the equation above is turned into an equality constraint. c_r_area constrains the resource conversion area by $$r_{area}(y, x) \leq r_{area,max}(y, x)$$. By default, y.constraints.r_area.max is set to false, and in that case $$r_{area}(y, x)$$ is forced to $$1.0$$. If the model is running in operational mode, the inequality in the equation above is turned into an equality constraint. Finally, if y.constraints.r_area_per_e_cap is given, then the equation $$r_{area}(y, x) = e_{cap}(y, x) \times r\_area\_per\_e\_cap$$ applies instead. c_e_cap constrains the carrier conversion capacity. If a technology y is not allowed at a location x, $$e_{cap}(y, x) = 0$$ is forced. Else, $$e_{cap}(y, x) \leq e_{cap,max}(y, x) \times e\_cap\_scale$$ applies. y.constraints.e_cap_scale defaults to 1.0 but can be set on a per-technology, per-location basis if necessary.
Finally, if y.constraints.e_cap.equals is set for location x or the model is running in operational mode, the inequality in the equation above is turned into an equality constraint. The c_e_cap_gross_net constraint is relevant only if y.constraints.c_eff is set to anything other than 1.0 (the default). In that case, $$e_{cap}(y, x) \times c_{eff} = e_{cap,net}(y, x)$$ computes the net installed carrier conversion capacity. The final constraint, c_rb_cap, manages the secondary resource conversion capacity by $$rb_{cap}(y, x) \leq rb_{cap,max}(y, x)$$. If y.constraints.rb_cap.equals is set for location x or the model is running in operational mode, the inequality in the equation above is turned into an equality constraint. There is an additional relevant option, y.constraints.rb_cap_follows, which can be overridden on a per-location basis. It can be set either to r_cap or e_cap, and if set, sets c_rb_cap to track one of these, i.e., $$rb_{cap,max} = r_{cap}(y, x)$$ (analogously for e_cap), and also turns the constraint into an equality constraint.

### Node operational constraints

Provided by: calliope.constraints.base.node_constraints_operational()

This component ensures that nodes remain within their operational limits, by constraining rs, es, s, and rbs. $$r_{s}(y, x, t)$$ is constrained to remain within $$r_{cap}(y, x)$$, with the two constraints c_rs_max_upper and c_rs_max_lower:

$r_{s}(y, x, t) \leq timeres(t) \times r_{cap}(y, x)$

$r_{s}(y, x, t) \geq -1 \times timeres(t) \times r_{cap}(y, x)$

$$e_{s}(c, y, x, t)$$ is constrained by three constraints, c_es_prod_max, c_es_prod_min, and c_es_con_max:

$e_{s,prod}(c, y, x, t) \leq timeres(t) \times e_{cap}(y, x)$

if c is the carrier of y, else $$e_{s,prod}(c, y, x, t) = 0$$. If e_cap_min_use is defined, the minimum output is constrained by

$e_{s,prod}(c, y, x, t) \geq timeres(t) \times e_{cap}(y, x) \times e_{cap,minuse}$

For technologies where y.constraints.e_con is true (it defaults to false), and for conversion technologies,

$e_{s,con}(c, y, x, t) \geq -1 \times timeres(t) \times e_{cap}(y, x)$

and $$e_{s,con}(c, y, x, t) = 0$$ otherwise. The constraint c_s_max ensures that storage cannot exceed its maximum size by

$s(y, x, t) \leq s_{cap}(y, x)$

And finally, c_rbs_max constrains the secondary resource by

$rb_{s}(y, x, t) \leq timeres(t) \times rb_{cap}(y, x)$

There is an additional check if y.constraints.rb_startup_only is true. In this case, $$rb_{s}(y, x, t) = 0$$ unless the current timestep is still within the startup time set in the startup_time_bounds model-wide setting. This can be useful to prevent undesired edge effects from occurring in the model.

### Transmission constraints

This component provides a single constraint, c_transmission_capacity, which forces $$e_{cap}$$ to be symmetric for transmission nodes. For example, for a given transmission line between $$x_1$$ and $$x_2$$, using the technology hvdc:

$e_{cap}(hvdc:x_2, x_1) = e_{cap}(hvdc:x_1, x_2)$

### Node parasitics

Defines the following variables:

• ec_prod: storage to carrier after parasitics (positive, production)
• ec_con: carrier to storage after parasitics (negative, consumption)

There are two constraints, c_ec_prod and c_ec_con, which constrain ec by

$ec_{prod}(c, y, x, t) = es_{prod}(c, y, x, t) \times c_{eff}(y, x)$

$ec_{con}(c, y, x, t) = \frac{es_{con}(c, y, x, t)}{c_{eff}(y, x)}$

For conversion and transmission technologies, the second equation reads $$ec_{con}(c, y, x, t) = es_{con}(c, y, x, t)$$ so that the internal losses are applied only once.
The two variables ec_prod and ec_con are only defined in the model for technologies where c_eff is not 1.0.

Note: When reading the model solution, Calliope automatically manages the es and ec variables. In the solution, every technology has an ec variable, which is simply set to es wherever it was not defined, to make the solution consistent.

### Node costs

Provided by: calliope.constraints.base.node_costs()

Defines the following variables:

• cost: total costs
• cost_con: construction costs
• cost_op_fixed: fixed operation costs
• cost_op_var: variable operation costs
• cost_op_fuel: primary resource fuel costs
• cost_op_rb: secondary resource fuel costs

These equations compute costs per node. The depreciation rate for each cost class k is calculated as

$d(y, k) = \frac{1}{plant\_life(y)}$

if the interest rate $$i$$ is $$0$$, else

$d(y, k) = \frac{i(y, k) \times (1 + i(y, k))^{plant\_life(y)}}{(1 + i(y, k))^{plant\_life(y)} - 1}$

Costs are split into construction and operational and maintenance (O&M) costs. The total costs are computed in c_cost by

$cost(y, x, k) = cost_{con}(y, x, k) + cost_{op,fixed}(y, x, k) + cost_{op,var}(y, x, k) + cost_{op,fuel}(y, x, k) + cost_{op,rb}(y, x, k)$

The construction costs are computed in c_cost_con by

$\begin{split}cost_{con}(y, x, k) &= d(y, k) \times \frac{\sum\limits_t timeres(t)}{8760} \\ & \times (cost_{s\_cap}(y, k) \times s_{cap}(y, x) \\ & + cost_{r\_cap}(y, k) \times r_{cap}(y, x) \\ & + cost_{r\_area}(y, k) \times r_{area}(y, x) \\ & + cost_{e\_cap}(y, k) \times e_{cap}(y, x) \\ & + cost_{rb\_cap}(y, k) \times rb_{cap}(y, x))\end{split}$

The costs are as defined in the model definition, e.g. $$cost_{r\_cap}(y, k)$$ corresponds to y.costs.k.r_cap. For transmission technologies, $$cost_{e\_cap}(y, k)$$ is computed differently, to include the per-distance costs:

$cost_{e\_cap,transmission}(y, k) = \frac{cost_{e\_cap}(y, k) + cost_{e\_cap,perdistance}(y, k)}{2}$

This implies that for transmission technologies, the cost of construction is split equally across the two locations connected by the technology. The O&M costs are computed in four separate constraints, cost_op_fixed, cost_op_var, cost_op_fuel, and cost_op_rb, by

$\begin{split}cost_{op,fixed}(y, x, k) &= cost_{om\_frac}(y, k) \times cost_{con}(y, x, k) \\ & + cost_{om\_fixed}(y, k) \times e_{cap}(y, x) \times \frac{\sum\limits_t timeres(t)}{8760}\end{split}$

$cost_{op,var}(y, x, k) = cost_{om\_var}(y, k) \times \sum_t e_{prod}(c, y, x, t)$

$cost_{op,fuel}(y, x, k) = \frac{cost_{om\_fuel}(y, k) \times \sum_t r_{s}(y, x, t)}{r_{eff}(y, x)}$

$cost_{op,rb}(y, x, k) = \frac{cost_{om\_rb}(y, k) \times \sum_t r_{bs}(y, x, t)}{rb_{eff}(y, x)}$

### Model balancing constraints

Provided by: calliope.constraints.base.model_constraints()

Model-wide balancing constraints are constructed for nodes that have children. They differentiate between:

• c = power
• All other c

In the first case, the following balancing equation applies:

$\sum_{y, x \in X_{i}} ec_{prod}(c=c_{p}, y, x, t) + \sum_{y, x \in X_{i}} ec_{con}(c=c_{p}, y, x, t) = 0 \qquad\forall i, t$

$$i$$ are the level 0 locations, and $$X_{i}$$ is the set of level 1 locations ($$x$$) within the given level 0 location, together with that location itself. $$c$$ is the carrier, and $$c_{p}$$ the carrier for power. For c other than power, the balancing equation is as above, but with a $$\geq$$ inequality, and the corresponding change to $$c$$.
Note: The actual balancing constraint is implemented such that es and ec are used in the sum as appropriate for each technology.

## Planning constraints

These constraints are loaded automatically, but only when running in planning mode.

### System margin

This is a simplified capacity margin constraint, requiring the capacity to supply a given carrier in the time step with the highest demand for that carrier to be above the demand in that timestep by at least the given fraction:

$\sum_y \sum_x es_{prod}(c, y, x, t_{max,c}) \times (1 + m_{c}) \leq timeres(t) \times \sum_{y_{c}} \sum_x (e_{cap}(y, x) / e_{eff,ref}(y, x))$

where $$y_{c}$$ is the subset of y that delivers the carrier c and $$m_{c}$$ is the system margin for that carrier. For each carrier (with the name carrier_name), Calliope attempts to read the model-wide option system_margin.carrier_name, only applying this constraint if a setting exists.

## Optional constraints

Optional constraints are included with Calliope but not loaded by default (see the configuration section for instructions on how to load them in a model). These optional constraints can be used both in planning and operational modes.

### Ramping

Constrains the rate at which plants can adjust their output, for technologies that define constraints.e_ramping:

$diff = \frac{es_{prod}(c, y, x, t) + es_{con}(c, y, x, t)}{timeres(t)} - \frac{es_{prod}(c, y, x, t-1) + es_{con}(c, y, x, t-1)}{timeres(t-1)}$

$max\_ramping\_rate = e_{ramping} \times e_{cap}(y, x)$

$diff \leq max\_ramping\_rate$

$diff \geq -1 \times max\_ramping\_rate$

### Group fractions

This component provides the ability to constrain groups of technologies to provide a certain fraction of total output, a certain fraction of total capacity, or a certain fraction of peak power demand. See Parents and groups in the configuration section for further details on how to set up groups of technologies. The settings for the group fraction constraints are read from the model-wide configuration, in a group_fraction setting, as follows:

    group_fraction:
        capacity:
            renewables: ['>=', 0.8]

This is a minimal example that forces at least 80% of the installed capacity to be renewables. To activate the output group constraint, the output setting underneath group_fraction can be set in the same way, or demand_power_peak to activate the fraction of peak power demand group constraint. For the above example, the c_group_fraction_capacity constraint sets up an equation of the form

$\sum_{y^*} \sum_x e_{cap}(y, x) \geq fraction \times \sum_y \sum_x e_{cap}(y, x)$

Here, $$y^*$$ is the subset of $$y$$ given by the specified group, in this example, renewables. $$fraction$$ is the fraction specified, in this example $$0.8$$. The relation between the right-hand side and the left-hand side, $$\geq$$, is determined by the setting given, >=, which can be ==, <=, or >=. If more than one group is listed under capacity, several analogous constraints are set up. Similarly, c_group_fraction_output sets up constraints in the form of

$\sum_{y^*} \sum_x \sum_t es_{prod}(c, y, x, t) \geq fraction \times \sum_y \sum_x \sum_t es_{prod}(c, y, x, t)$

Finally, c_group_fraction_demand_power_peak sets up constraints in the form of

$\sum_{y^*} \sum_x e_{cap}(y, x) \geq fraction \times (-1 - m_{c}) \times peak$

$peak = \frac{\sum_x r(y_d, x, t_{peak}) \times r_{scale}(y_d, x)}{timeres(t_{peak})}$

This assumes the existence of a technology, demand_power, which defines a demand (negative resource). $$y_d$$ is demand_power.
$$m_{c}$$ is the capacity margin defined for the carrier c in the model-wide settings (see System margin). $$t_{peak}$$ is the timestep where $$r(y_d, x, t)$$ is maximal. Whether any of these equations are equalities, greater-than-or-equal inequalities, or less-than-or-equal inequalities is determined by whether >=, <=, or == is given in their respective settings.
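As a small numeric illustration of the depreciation rate $$d(y, k)$$ from the Node costs section above, here is a Python sketch (the function name and values are illustrative, not Calliope API):

```python
def depreciation_rate(plant_life: float, interest: float) -> float:
    """Annuity factor: 1/plant_life for a zero interest rate, else the standard formula."""
    if interest == 0:
        return 1.0 / plant_life
    return interest * (1 + interest) ** plant_life / ((1 + interest) ** plant_life - 1)

print(depreciation_rate(25, 0.0))   # 0.04
print(depreciation_rate(25, 0.05))  # ~0.071, the annualised share of construction cost
```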
2020-01-19 05:56:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9760343432426453, "perplexity": 4154.781075274918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00102.warc.gz"}
http://mathoverflow.net/questions/135013/why-did-voiculescu-develop-free-probability/135025
# Why did Voiculescu develop free probability?

I was recently asked why Voiculescu developed free probability theory. I am not much of an expert in this, and the only answer I was able to provide is the classical one: he was attacking the isomorphism problem of whether the $II_1$-factors associated to two different free groups are isomorphic or not. First: is this true, or just legend? Were there any other motivations? In particular, I would be interested in more down-to-earth motivations, something that could theoretically be explained to someone with basic knowledge of probability theory and operator theory (without necessarily knowing what a $II_1$-factor is). Valerio

- As for the first part... I don't have it in front of me, but at the beginning of his book Free Random Variables I'm pretty sure that he says studying the free group factors (though I'm not sure if he says solving the isomorphism problem per se) was the initial motivation, largely because, after the hyperfinite $II_1$ factor, this is the next natural (and historical) example. Also, I have heard from other people who know more free probability than I do that, to do this, he spent years trying to calculate moments, and this led him to think of freeness as an analog of independence. – Owen Sizemore Jun 27 '13 at 15:46

Here's what Dan Voiculescu himself gave as motivation: "Around 1982, I realized that the right way to look at certain operator algebra problems was by imitating some basic probability theory. More precisely, in noncommutative probability theory a new kind of independence can be defined by replacing tensor products with free products and this can help understand the von Neumann algebras of free groups. The subject has evolved into a kind of parallel to basic probability theory, which should be called free probability theory."

Here is an even more classical motivation: if you consider von Neumann algebras as the non-commutative analogue of measure theory, then free probability theory formalizes the concept of independent random variables. Here is a reference: http://www.uni-math.gwdg.de/mitch/free.pdf I do not know anything about the history, sorry :(
2014-12-23 04:36:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7550393342971802, "perplexity": 401.80487562555913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802778085.5/warc/CC-MAIN-20141217075258-00004-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.globinch.com/how-to-block-website-on-mac-and-windows/?rel=author
How to block a website without using any programs?

There are many freeware and shareware programs available to block unwanted websites, and most anti-virus applications also provide ways to block websites. But there is an easy way to block websites by changing one of your system files. Blocking certain websites helps you control kids' activity on unwanted sites. You can use the hosts file, a computer file used by the operating system to map host names to IP addresses, to block any website.

Related Posts:
• How to Edit Hosts File to Block Unwanted Sites, Ads, Banners and More
• How to Block URL or Websites in Internet explorer?
• How to Block Website in Firefox? Blocking URL in Mozilla Firefox

As we discussed in the above articles, you can block any website by adding an entry in the hosts file. All you need to do is locate the hosts file for your operating system, add the entries, and save the file. You need administrative privileges to do this. But do remember that anyone who knows this trick can easily edit the file and remove the block, and there are other means to access blocked sites as well. Dedicated website-blocking programs, by contrast, may allow you to set a password to protect the settings; here, anyone with administrative privileges can edit and change the file. Even though it is not critical, it is always better to take a backup of the file before making any changes.

## How to block website on Mac using hosts file?

The location of the hosts file in Mac OS is "/etc/hosts". You can either open it from the terminal in a text editor (for example, "sudo nano /etc/hosts"), or, from the Go menu –> "Go to folder", type "/etc/" and, from the folder window, open the "hosts" file in a text editor. Type the following on a new line:

127.0.0.1 sitetoblock.com

Replace "sitetoblock.com" with the site you want to block.

## How to block website on Windows using hosts file?

The steps required to edit the hosts file in Windows are explained in the post below. How to Edit Hosts File to Block Unwanted Sites, Ads, Banners and More As explained there, you just need to add new entries to the hosts file. You can block as many websites as you like with the above procedure; you just need to add new entries to the file. If you want to remove the ban later, open the same hosts file and delete the relevant lines.
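As a small illustration (example.com is a placeholder): the hosts file matches exact hostnames, with no wildcards, so the www variant of a site needs its own entry.

```
127.0.0.1 example.com
127.0.0.1 www.example.com
```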
2021-06-22 05:07:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2952921688556671, "perplexity": 2290.5522678299435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488507640.82/warc/CC-MAIN-20210622033023-20210622063023-00180.warc.gz"}
https://socratic.org/questions/a-sine-wave-voltage-is-applied-across-a-capacitor-when-the-frequency-of-the-volt
# A sine wave voltage is applied across a capacitor. When the frequency of the voltage is decreased, what happens to the current? Jun 23, 2015

#### Answer:

If the frequency of the voltage is decreased, then the current is also decreased.

#### Explanation:

If the opposition to the applied voltage is increased, then the current is decreased. Here, the opposition is due to the reactance of the capacitor:

${X}_{c} = \frac{1}{2 \pi f C}$

where ${X}_{c}$ is the reactance of the capacitor and $f$ is the frequency of the applied voltage. Since $\frac{1}{2 \pi C}$ is constant,

${X}_{c} \propto \frac{1}{f}$ (1)

From Ohm's law, $V = {I}_{c} {X}_{c}$, so ${X}_{c} = \frac{V}{{I}_{c}}$ and, since $V$ is constant,

${X}_{c} \propto \frac{1}{{I}_{c}}$ (2)

From (1) and (2), $\frac{1}{f} \propto \frac{1}{{I}_{c}}$, that is, ${I}_{c} \propto f$.

From the above relation, if the frequency of the voltage is decreased, then the current is also decreased.
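A quick numeric check of the relation (the component values are made up): halving the frequency doubles the reactance and, for a fixed voltage, halves the current.

```python
import math

V = 10.0   # applied RMS voltage, held constant (volts)
C = 1e-6   # capacitance (farads)

for f in (1000.0, 500.0):              # hertz
    Xc = 1 / (2 * math.pi * f * C)     # capacitive reactance
    I = V / Xc                         # Ohm's law: I = V / Xc
    print(f"f = {f:6.0f} Hz -> Xc = {Xc:6.1f} ohm, I = {1000 * I:5.2f} mA")
```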
2019-07-15 21:04:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8617528676986694, "perplexity": 2749.8825260612007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524111.50/warc/CC-MAIN-20190715195204-20190715221204-00167.warc.gz"}
https://mathematica.stackexchange.com/questions/165084/is-it-possible-to-get-the-name-of-a-physical-quantity-related-to-a-quantity-e-g
Is it possible to get the name of a physical quantity related to a quantity, e.g. watts->power? Is there a way to get the name of the physical quantity given by a particular unit? For example, QuantityUnit[Quantity[1, "Henries"]] returns "Henries", but I want to know what "Henries" are a measure of. That is, "electrical inductance". Will Mathematica provide that kind of information? • You must be wrong. "Henries" has nothing to do with inductance. Try WolframAlpha["Quantity Henries"]. The correct answer is obviously "342037 people" :) – halirutan Feb 3 '18 at 8:02 • The problem could be also that several physical quantities can be measured in Henries (H). As noted, use W|A... – José Antonio Díaz Navas Feb 3 '18 at 12:54 WolframAlpha[#, {{"PhysicalQuantity", 1},
2020-08-08 05:21:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35730287432670593, "perplexity": 1849.2935603497922}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737289.75/warc/CC-MAIN-20200808051116-20200808081116-00079.warc.gz"}
http://exxamm.com/QuestionSolution19/Class+10/ABC+is+an+isosceles+triangle+with+AC+BC+If+AB+2+2+AC+2+prove+that+ABC+is+a+right+triangl/3061580425
ABC is an isosceles triangle with AC = BC. If AB^2 = 2 AC^2, prove that ABC is a right triangle.

### Question Asked by a Student from EXXAMM.com Team

Q 3061580425. ABC is an isosceles triangle with AC = BC. If AB^2 = 2 AC^2, prove that ABC is a right triangle.

#### HINT

Since AC = BC, we have AB^2 = 2 AC^2 = AC^2 + BC^2. By the converse of the Pythagorean theorem, the angle at C (the angle opposite AB) is a right angle, so ABC is a right triangle.
2019-03-23 00:40:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2869364321231842, "perplexity": 5829.3982792837605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00326.warc.gz"}
http://mathhelpforum.com/differential-equations/137237-laplace-transform-convolution.html
# Thread: Laplace Transform : Convolution

1. ## Laplace Transform : Convolution

Find $\mathcal{L} \left\{ \int_{0}^{t} (t-\tau)^{2} \cos(2\tau)\, d\tau \right\}$. I am unsure how to get to the answer, which is $\frac{2}{s^{2}(s^{2}+4)}$.

2. Given two functions $f(t)$ and $g(t)$, with $F(s)$ and $G(s)$ their Laplace transforms, the convolution is defined as

$f*g = \int_{0}^{t} f(t-\tau)\cdot g(\tau)\cdot d\tau$ (1)

Then

$\mathcal{L} \{f*g\} = F(s)\cdot G(s)= \mathcal{L}\{t^{2}\}\cdot \mathcal{L} \{\cos 2t\} = \frac{2}{s^{3}} \cdot \frac{s}{s^{2} + 4} = \frac{2}{s^{2}\cdot (s^{2} +4)}$ (2)

Kind regards $\chi$ $\sigma$
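A quick symbolic check of post 2 (this assumes sympy is available; the expected result is shown as a comment):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
s = sp.symbols('s', positive=True)

# Build the convolution integral explicitly, then take its Laplace transform.
f = sp.integrate((t - tau)**2 * sp.cos(2 * tau), (tau, 0, t))
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F))  # expected: 2/(s**2*(s**2 + 4))
```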
2016-08-27 11:11:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9805232882499695, "perplexity": 3049.4272307071997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298977.21/warc/CC-MAIN-20160823195818-00158-ip-10-153-172-175.ec2.internal.warc.gz"}
http://ceyron.io/nyf4y4i/d5b9ae-consider-the-following-data-for-a-closed-economy%3A
Consider the following data for a closed economy:

Y = $11 trillion, C = $8 trillion, I = $2 trillion, TR = $1 trillion, T = $3 trillion

Use these data to calculate the following:

a. Private saving
b. Public saving
c. Government purchases
d. The government budget deficit or budget surplus

Background. In macroeconomics, the purchase of stocks and bonds is called saving, and the purchase of new machinery and equipment is called investment. A closed economy is a country that does not import or export; it sees itself as self-sufficient and does not trade internationally. Because all expenditure in a closed economy must fall into consumption, investment, or government purchases, Y = C + I + G, and national saving equals investment (S = I), where S is the sum of private saving and public saving. In the loanable funds model, the supply of loanable funds comes from savers and the demand for loanable funds comes from investors.

Solution.

a) Private saving = Y + TR − C − T = $11 trillion + $1 trillion − $8 trillion − $3 trillion = $1 trillion

b) Public saving = I − S_private = $2 trillion − $1 trillion = $1 trillion

c) Since Y = C + I + G, government purchases G = Y − C − I = $11 trillion − $8 trillion − $2 trillion = $1 trillion

d) T − G − TR = $3 trillion − $1 trillion − $1 trillion = $1 trillion, so the government runs a budget surplus of $1 trillion: it collects more in taxes than it spends on purchases and transfer payments.

A related exercise uses Y = $12 trillion, C = $8 trillion, G = $2 trillion, S_public = −$0.50 trillion, T = $2 trillion, and asks for private saving, investment spending, transfer payments, and the budget deficit or surplus; it then supposes that government purchases increase from $2 trillion to $2.60 trillion while the values of Y and C are unchanged.

Review notes accompanying the exercise:

• Over time, the U.S. economy has experienced economic growth, but that growth has not been constant, because of the business cycle. Business cycles are irregular and unpredictable: the lengths of expansions and recessions cannot be predicted, and unemployment increases during a recession.
• Stock indexes, such as the Dow Jones Industrial Average, are a leading economic indicator: they typically turn down before the beginning of a recession and turn up before the beginning of an expansion.
• A stock is a financial security that represents partial ownership of a firm; a bond does not. A firm receives funds only when a share of its stock is sold for the first time, in a primary market.
• Banks pay lower interest rates to savers than they receive from borrowers and use the difference to cover expenses and generate profit. The primary advantage of a mutual fund is that it allows savers with small quantities of funds to diversify.
• In the market for loanable funds, a larger government budget deficit or an increase in the tax rate on interest income shifts the supply of loanable funds left, raising the equilibrium real interest rate, while the elimination of a tax credit for investment reduces the incentive to invest, shifting the demand for loanable funds left and lowering both the equilibrium real interest rate and the equilibrium quantity of saving and investment.
2022-05-28 01:06:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4478146433830261, "perplexity": 4151.819572259241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00366.warc.gz"}
https://anysilicon.com/impact-ic-silicon-area-using-custom-design-grid/
# The Impact On IC Silicon Area When Using A Custom Design Grid

December 08, 2017, anysilicon

Using a custom design grid in an IC layout offers many important advantages, but there is a potential price to pay in using them, particularly in terms of increased IC area. In this article we analyse the actual cost implications in detail.

Key advantages of an IC Layout Design Grid:

• Improved uniformity across a design (conductor spacing, width & density)
• Reduced lateral capacitive coupling
• Better control of density requirements
• Design rule clean by construction
• Enhanced design for manufacturability
• Reduced layout time (less zooming in/out, less use of rulers, no need to be concerned about DRC fixes)
• Easier to migrate layout from one technology to another (between foundries and/or between nodes)

Overall, grids make the layout process faster and more uniform, and guarantee a DFM-clean layout by construction. However, whilst these advantages are likely to reduce both development costs and the cost of final silicon, engineers should also be aware that there are some disadvantages to using custom grids.

Key disadvantages of an IC Layout Design Grid:

• Placing devices on a design grid can be time consuming (automation can help)
• Schematic designs which are not optimised for a custom design grid can potentially be slower to lay out
• Potential increase in silicon area!

By far the largest concern about implementing a gridded design is the potential area cost due to using non-minimum metal, OD and poly spacing in the layout. There are two places where conductor spacing can impact silicon area – device spacing (particularly when devices are placed as arrays) and routing channels. Here we will analyse the area cost of using a design grid in both cases.

Cost analysis – device arrays

The boundaries represent the device/cell area, plus half the minimum conductor spacing on all sides, such that when the devices are placed down with abutting boundaries, device spacing is adhered to. By snapping a device (or cell) to a design grid, you increase the placement boundary area, effectively increasing the device area. When the devices are placed as an array, we can compare the area of the minimum-rule boundary and the gridded boundary:

Minimum rule: $A = ( N_{x} \times x) \times (N_{y} \times y)$

Design grid: $A+\Delta A=(N_{x} \times(x+\Delta x))\times(N_{y} \times(y+\Delta y))$

From this we can determine the factor by which the area has increased:

$\text{Area increase} = \frac{A +\Delta A}{A} = \frac{(x + \Delta x) \times (y+ \Delta y)}{x \times y} = \frac{xy + x\Delta y + y\Delta x + \Delta x \Delta y}{xy} = 1 + \frac{\Delta x}{x} + \frac{\Delta y}{y} + \frac{\Delta x \Delta y}{xy}$

Given that the term $\frac{\Delta x \Delta y}{xy}$ will be much smaller than either $\frac{\Delta x}{x}$ or $\frac{\Delta y}{y}$, it can be ignored. So the overall increase in area is:

$\text{Area increase} \approx 1 + \frac{\Delta x}{x} + \frac{\Delta y}{y}$

Some very important points are worth observing from this analysis:

• The larger x is with respect to Δx and/or y is with respect to Δy, the smaller the area increase will be.
So the larger the device size, the smaller the impact on silicon area when using a design grid.

• If device sizes can be chosen such that x and y are already on (or close to) a design pitch, so that Δx and Δy are zero (or small), the impact on silicon area will also be smaller.

Practical Example:

Take a minimum-length MOS device on a generic 28nm technology.

• Poly to poly spacing (100nm in this case) is the dominant spacing.
• The boundary extends 50nm on either side of the dummy poly stripes, meaning there will be the minimum 100nm horizontal spacing between stripes of adjacent cells.
• The boundary width is 390nm.
• The height of the poly stripes is 580nm.
• Allowing for poly-to-poly vertical spacing of 100nm between cells, the boundary height is 680nm.
• Device boundary: x = 390nm, y = 680nm.
• Total device area = 265,200nm².

The device, when arrayed up, will adhere to minimum poly-to-poly spacing on all sides.

If we now apply a layout design grid of 80nm to this cell and its boundary, ensuring the centre of each device is on grid and that minimum poly-to-poly spacing is not violated, we can analyse the cost in area. (80nm is an arbitrarily chosen value.)

Currently, device centre-to-centre spacing is 390nm in the horizontal direction (minimum rule).

• To snap the devices to an 80nm grid, this spacing would need to increase by 10nm, so that centre-to-centre spacing would be 400nm (an integer multiple of the design grid).
• The boundary of each cell would increase in the x axis by 5nm (Δx = 5nm).

The device spacing (centre to centre) in the vertical direction is 680nm.

• To snap the devices to an 80nm grid, this spacing would need to increase by 40nm, so that centre-to-centre spacing would be 720nm.
• The boundary of each cell would increase in the y axis by 20nm (Δy = 20nm).

From this increase, we can calculate the total area increase:

$\text{Area increase} = 1 + \frac{\Delta y}{y} + \frac{\Delta x}{x} = 1 + \frac{20\text{nm}}{680\text{nm}} + \frac{5\text{nm}}{390\text{nm}} = 1 + 0.029 + 0.0128 \approx 1.04$

So there is a 4% increase in area when implementing an 80nm design grid.

(Figure: minimum-rule boundary vs. design-grid boundary.)

It is worth noting that with a different design grid (e.g. 65nm), Δx would be 0nm and Δy would be 17.5nm, leading to a 2.5% increase in area. This increase could be removed entirely, by choosing an "on grid" device width (e.g. changing the current width of 210nm to 245nm), leading to no increase in area at all. Optimising schematic designs for adherence to layout design grids reduces area cost.

Cost analysis – interconnect

Minimum rule: $A_{min} = (W \times N_w) + (S \times (N_w - 1)) = ((W + S) \times N_w) - S$

Design grid: $A_{grid} = (W \times N_w) + (S_{grid} \times (N_w - 1)) = ((W + S_{grid}) \times N_w) - S_{grid}$

For large routing channels, $((W + S) \times N_w) - S \approx (W + S) \times N_w$, as one wire spacing is small in the grander scheme of things.
So, approximately, $A_{min} = (W + S) \times N_w$ and $A_{grid} = (W + S_{grid}) \times N_w$, giving:

$\text{Area increase} = \frac{A_{grid}}{A_{min}} = \frac{(W + S_{grid}) \times N_w}{(W + S) \times N_w} = \frac{W + S_{grid}}{W + S}$

where W is the wire width, $N_w$ the number of wires, S the minimum spacing and $S_{grid}$ the gridded spacing.

Practical Example:

If we take a practical example of a generic 28nm technology where W = 100nm and S = 60nm, track spacing, centre to centre, would be 160nm. As this is already an integer multiple of the 80nm design grid, $S_{grid}$ would also be 60nm, so there would be no increase in area.

However, if we were to choose a different design grid (for example 65nm), track spacing $S_{grid}$ would have to be increased to 95nm (from 60nm) to ensure all tracks were centred on grid:

$\text{Area increase} = \frac{W + S_{grid}}{W + S} = \frac{100\text{nm} + 95\text{nm}}{100\text{nm} + 60\text{nm}} \approx 1.219$

In this case it would lead to a ~22% increase in routing area! However, it is very important to note that with increasing metal layer stacks and requirements for minimum and maximum local poly density, most routing now takes place either between the spaced devices and/or over devices, so the requirement for actual routing channels has reduced. Thus the increase in routing area does not necessarily translate directly into an increase in silicon area.

Final thoughts

With the advent of multi-patterning at 20nm and FinFET device pitches at 16nm, the requirement for pitch-based, uniform layout designs increases. As semiengineering.com's Mark Lapedus confirmed when discussing 10nm and 7nm design, grids are where the industry is going: "There is a general move towards track and grid based layout forms. Expect this trend to increase moving forwards."

This is a guest post by Oleg Oncea from IC Mask Design.
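As a quick numeric restatement of the two formulas derived above, here is a small Python sketch (the function names and rounding are mine, not from the article):

```python
def device_area_increase(x_nm, y_nm, dx_nm, dy_nm):
    """First-order device-array area factor: 1 + dx/x + dy/y
    (the dx*dy/(x*y) cross term is neglected, as in the derivation)."""
    return 1 + dx_nm / x_nm + dy_nm / y_nm


def routing_area_increase(w_nm, s_nm, s_grid_nm):
    """Routing-channel area factor: (W + S_grid) / (W + S)."""
    return (w_nm + s_grid_nm) / (w_nm + s_nm)


# The article's 28nm examples:
print(round(device_area_increase(390, 680, 5, 20), 3))    # 80nm grid -> ~1.042
print(round(device_area_increase(390, 680, 0, 17.5), 3))  # 65nm grid -> ~1.026
print(round(routing_area_increase(100, 60, 95), 3))       # 65nm grid -> ~1.219
```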
2019-03-20 15:38:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6961700916290283, "perplexity": 6218.10685696353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202433.77/warc/CC-MAIN-20190320150106-20190320172106-00416.warc.gz"}
https://qdrant.tech/articles/triplet-loss/
# Triplet Loss - Advanced Intro

## What is Triplet Loss?

Triplet Loss was first introduced in FaceNet: A Unified Embedding for Face Recognition and Clustering in 2015, and it has been one of the most popular loss functions for supervised similarity or metric learning ever since. In its simplest explanation, Triplet Loss encourages dissimilar pairs to be distant from any similar pairs by at least a certain margin value. Mathematically, the loss value can be calculated as $L=\max(d(a,p) - d(a,n) + m, 0)$, where:

• $p$, i.e., positive, is a sample that has the same label as $a$, i.e., anchor,
• $n$, i.e., negative, is another sample that has a label different from $a$,
• $d$ is a function to measure the distance between pairs of these samples,
• and $m$ is a margin value to keep negative samples far apart.

The paper uses Euclidean distance, but it is equally valid to use any other distance metric, e.g., cosine distance.

The function has a learning objective that can be visualized as in the following:

Notice that Triplet Loss does not have a side effect of urging the model to encode anchor and positive samples into the same point in the vector space, as Contrastive Loss does. This lets Triplet Loss tolerate some intra-class variance, unlike Contrastive Loss, as the latter forces the distance between an anchor and any positive to be essentially $0$. In other terms, Triplet Loss allows stretching clusters in such a way as to include outliers while still ensuring a margin between samples from different clusters, e.g., negative pairs.

Additionally, Triplet Loss is less greedy. Unlike Contrastive Loss, it is already satisfied when different samples are easily distinguishable from similar ones. It does not change the distances in a positive cluster if there is no interference from negative examples. This is due to the fact that Triplet Loss tries to ensure a margin between distances of negative pairs and distances of positive pairs, whereas Contrastive Loss takes the margin value into account only when comparing dissimilar pairs and does not care at all where similar pairs are at that moment. This means that Contrastive Loss may reach a local minimum earlier, while Triplet Loss may continue to organize the vector space into a better state.

Let's demonstrate how the two loss functions organize the vector space with animations. For simpler visualization, the vectors are represented by points in a 2-dimensional space, and they are selected randomly from a normal distribution.

From the mathematical interpretations of the two loss functions, it is clear that Triplet Loss is theoretically stronger, and Triplet Loss also has additional tricks that help it work better. Most importantly, Triplet Loss introduces online triplet mining strategies, e.g., automatically forming the most useful triplets.

## Why triplet mining matters?

The formulation of Triplet Loss demonstrates that it works on three objects at a time:

• anchor,
• positive - a sample that has the same label as the anchor,
• and negative - a sample with a different label from the anchor and the positive.

In a naive implementation, we could form such triplets of samples at the beginning of each epoch and then feed batches of such triplets to the model throughout that epoch. This is called the "offline strategy." However, this would not be so efficient for several reasons:

• It needs to pass $3n$ samples to get loss values for $n$ triplets.
• Not all of these triplets will be useful for the model to learn anything, i.e., yield a positive loss value.
• Even if we form "useful" triplets at the beginning of each epoch with one of the methods that I will be implementing in this series, they may become "useless" at some point in the epoch as the model weights will be constantly updated.

Instead, we can get a batch of $n$ samples and their associated labels, and form triplets on the fly. That is called the "online strategy." Normally, this gives $n^3$ possible triplets, but only a subset of such possible triplets will be actually valid. Even in this case, we will have a loss value calculated from many more triplets than with the offline strategy.

Given a triplet of (a, p, n), it is valid only if:

• a and p have the same label,
• a and p are distinct samples,
• and n has a different label from a and p.

These constraints may seem to require expensive computation with nested loops, but they can be efficiently implemented with tricks such as a distance matrix, masking, and broadcasting. The rest of this series will focus on the implementation of these tricks.

## Distance matrix

A distance matrix is a matrix of shape $(n, n)$ to hold distance values between all possible pairs made from items in two $n$-sized collections. This matrix can be used to vectorize calculations that would otherwise need inefficient loops. Its calculation can be optimized as well, and we will implement the Euclidean Distance Matrix Trick (PDF) explained by Samuel Albanie. You may want to read this three-page document for the full intuition of the trick, but a brief explanation is as follows:

• Calculate the dot product of two collections of vectors, e.g., embeddings in our case.
• Extract the diagonal from this matrix that holds the squared Euclidean norm of each embedding.
• Calculate the squared Euclidean distance matrix based on the following equation: $||a - b||^2 = ||a||^2 - 2 \langle a, b \rangle + ||b||^2$
• Get the square root of this matrix for non-squared distances.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

eps = 1e-8  # an arbitrary small value to be used for numerical stability tricks


def euclidean_distance_matrix(x):
    """Efficient computation of Euclidean distance matrix

    Args:
      x: Input tensor of shape (batch_size, embedding_dim)

    Returns:
      Distance matrix of shape (batch_size, batch_size)
    """
    # step 1 - compute the dot product
    # shape: (batch_size, batch_size)
    dot_product = torch.mm(x, x.t())

    # step 2 - extract the squared Euclidean norm from the diagonal
    # shape: (batch_size,)
    squared_norm = torch.diag(dot_product)

    # step 3 - compute squared Euclidean distances
    # shape: (batch_size, batch_size)
    distance_matrix = squared_norm.unsqueeze(0) - 2 * dot_product + squared_norm.unsqueeze(1)

    # get rid of negative distances due to numerical instabilities
    distance_matrix = F.relu(distance_matrix)

    # step 4 - compute the non-squared distances
    # handle numerical stability:
    # the derivative of the square root operation applied to 0 is infinite,
    # so we need to handle it by setting any 0 to eps
    mask = (distance_matrix == 0.0).float()
    # use this mask to set indices with a value of 0 to eps
    distance_matrix += mask * eps
    # now it is safe to get the square root
    distance_matrix = torch.sqrt(distance_matrix)
    # undo the trick for numerical stability
    distance_matrix *= (1.0 - mask)

    return distance_matrix
```

Now that we can compute a distance matrix for all possible pairs of embeddings in a batch, we can apply broadcasting to enumerate distance differences for all possible triplets and represent them in a tensor of shape (batch_size, batch_size, batch_size).
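Before moving on, a quick sanity check of this helper against torch.cdist (my own check, not part of the original post):

```python
x = torch.randn(4, 8)
assert torch.allclose(euclidean_distance_matrix(x), torch.cdist(x, x), atol=1e-4)
```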
However, only a subset of these $n^3$ triplets are actually valid as I mentioned earlier, and we need a corresponding mask to compute the loss value correctly. We will implement such a helper function in three steps:

• Compute a mask for distinct indices, e.g., (i != j and j != k).
• Compute a mask for valid anchor-positive-negative triplets, e.g., labels[i] == labels[j] and labels[j] != labels[k].
• Combine the two masks.

```python
def get_triplet_mask(labels):
    """compute a mask for valid triplets

    Args:
      labels: Batch of integer labels. shape: (batch_size,)

    Returns:
      Mask tensor to indicate which triplets are actually valid. Shape: (batch_size, batch_size, batch_size)
      A triplet is valid if:
        labels[i] == labels[j] and labels[i] != labels[k]
        and i, j, k are different.
    """
    # step 1 - get a mask for distinct indices
    # shape: (batch_size, batch_size)
    indices_equal = torch.eye(labels.size()[0], dtype=torch.bool, device=labels.device)
    indices_not_equal = torch.logical_not(indices_equal)
    # shape: (batch_size, batch_size, 1)
    i_not_equal_j = indices_not_equal.unsqueeze(2)
    # shape: (batch_size, 1, batch_size)
    i_not_equal_k = indices_not_equal.unsqueeze(1)
    # shape: (1, batch_size, batch_size)
    j_not_equal_k = indices_not_equal.unsqueeze(0)
    # shape: (batch_size, batch_size, batch_size)
    distinct_indices = torch.logical_and(torch.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k)

    # step 2 - get a mask for valid anchor-positive-negative triplets
    # shape: (batch_size, batch_size)
    labels_equal = labels.unsqueeze(0) == labels.unsqueeze(1)
    # shape: (batch_size, batch_size, 1)
    i_equal_j = labels_equal.unsqueeze(2)
    # shape: (batch_size, 1, batch_size)
    i_equal_k = labels_equal.unsqueeze(1)
    # shape: (batch_size, batch_size, batch_size)
    valid_indices = torch.logical_and(i_equal_j, torch.logical_not(i_equal_k))

    # step 3 - combine the two masks
    mask = torch.logical_and(distinct_indices, valid_indices)

    return mask
```

## Batch-all strategy for online triplet mining

Now we are ready for actually implementing Triplet Loss itself. Triplet Loss involves several strategies to form or select triplets, and the simplest one is to use all valid triplets that can be formed from samples in a batch. This can be achieved in four easy steps thanks to the utility functions we've already implemented:

• Get a distance matrix of all possible pairs that can be formed from embeddings in a batch.
• Apply broadcasting to this matrix to compute loss values for all possible triplets.
• Set loss values of invalid or easy triplets to $0$.
• Average the remaining positive values to return a scalar loss.

I will start by implementing this strategy, and more complex ones will follow as separate posts.

```python
class BatchAllTripletLoss(nn.Module):
    """Uses all valid triplets to compute Triplet loss

    Args:
      margin: Margin value in the Triplet Loss equation
    """

    def __init__(self, margin=1.):
        super().__init__()
        self.margin = margin

    def forward(self, embeddings, labels):
        """computes loss value.

        Args:
          embeddings: Batch of embeddings, e.g., output of the encoder. shape: (batch_size, embedding_dim)
          labels: Batch of integer labels associated with embeddings. shape: (batch_size,)

        Returns:
          Scalar loss value.
        """
        # step 1 - get distance matrix
        # shape: (batch_size, batch_size)
        distance_matrix = euclidean_distance_matrix(embeddings)

        # step 2 - compute loss values for all triplets by applying broadcasting to distance matrix
        # shape: (batch_size, batch_size, 1)
        anchor_positive_dists = distance_matrix.unsqueeze(2)
        # shape: (batch_size, 1, batch_size)
        anchor_negative_dists = distance_matrix.unsqueeze(1)
        # get loss values for all possible n^3 triplets
        # shape: (batch_size, batch_size, batch_size)
        triplet_loss = anchor_positive_dists - anchor_negative_dists + self.margin

        # step 3 - filter out invalid or easy triplets by setting their loss values to 0
        # shape: (batch_size, batch_size, batch_size)
        mask = get_triplet_mask(labels)
        triplet_loss *= mask
        # easy triplets have negative loss values
        triplet_loss = F.relu(triplet_loss)

        # step 4 - compute scalar loss value by averaging positive losses
        num_positive_losses = (triplet_loss > eps).float().sum()
        triplet_loss = triplet_loss.sum() / (num_positive_losses + eps)

        return triplet_loss
```

## Conclusion

I mentioned that Triplet Loss is different from Contrastive Loss not only mathematically but also in its sample selection strategies, and I implemented the batch-all strategy for online triplet mining in this post efficiently by using several tricks. There are other, more complicated strategies such as batch-hard and batch-semihard mining, but their implementations, and discussions of the tricks I used for efficiency in this post, are worth separate posts of their own. Future posts will cover such topics, along with additional discussions on tricks to avoid vector collapsing and to control intra-class and inter-class variance.
2022-05-24 03:17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6505866050720215, "perplexity": 2248.430157652053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00486.warc.gz"}
https://ai.stackexchange.com/tags/math/new
# Tag Info

0
Both your notation and terminology are quite confusing. For example, I'm not sure what an "optimal" Bellman operator is. Here's a good clarification on the definition of a Bellman operator. Likewise, your description of the DQN algorithm completely ignores the averaging over states/actions/rewards sampled from the replay memory. Trying to salvage ...

1
There is no sign error and we should not change to $\arg\max$. With Policy Gradients I find that it is not useful to think about things such as a 'loss'. In short, we want to first find the derivative of the RL objective $J(\theta) = v_\pi(s_0)$, where $\pi$ is our policy that depends on some parameters $\theta$. The policy gradient theorem tells us that ...

0
After working on it for a while, this is what I got. Concerning Proposition 1 in the paper, a rigorous statement could be the following version of the Gradient Theorem for line integrals: Proposition 1 (Gradient Theorem for Lipschitz Continuous Functions). Let $U$ be an open subset of $\mathbb{R}^n$. If $F : U \to \mathbb{R}$ is Lipschitz continuous, and ...

Top 50 recent answers are included
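To illustrate the sign convention the second answer defends — gradient ascent on $J(\theta)$ is implemented as gradient descent on $-J(\theta)$ — here is a toy PyTorch fragment (entirely my own, not from the answers):

```python
import torch

theta = torch.zeros(3, requires_grad=True)  # logits of a toy 3-action policy
optimizer = torch.optim.SGD([theta], lr=0.1)

probs = torch.softmax(theta, dim=0)
action, reward = 1, 2.0                     # pretend we sampled this rollout
J = torch.log(probs[action]) * reward       # REINFORCE surrogate objective

optimizer.zero_grad()
(-J).backward()                             # descent on -J == ascent on J
optimizer.step()
```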
2021-04-12 18:06:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8152546286582947, "perplexity": 240.7513523339107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069133.25/warc/CC-MAIN-20210412175257-20210412205257-00456.warc.gz"}
https://forum.polymake.org/viewtopic.php?f=10&t=221
## Perl versions Mac OS X 10.6 x86_64

Discussions on installation issues go here.

### Perl versions Mac OS X 10.6 x86_64

Hi,

What is the tested/approved version of perl to use? There seems to be no fink package for perl-5.10.0, and using the existing package (5.8.8) yields the expected unhappiness:

Code:

$ ./configure PERL=/sw/bin/perl /sw/lib/perl5
dyld: lazy symbol binding failed: Symbol not found: _Perl_Istack_sp_ptr
Referenced from: /sw/lib/perl5/5.10.0/darwin-thread-multi-2level/auto/XML/LibXML/LibXML.bundle
Expected in: flat namespace
dyld: Symbol not found: _Perl_Istack_sp_ptr
Referenced from: /sw/lib/perl5/5.10.0/darwin-thread-multi-2level/auto/XML/LibXML/LibXML.bundle
Expected in: flat namespace
Trace/BPT trap

However, using the system perl (5.10.0), which is a fat file, is also unhappy:

Code:

$ ./configure
WARNING: perl module XML::LibXML required for polymake seems to be unusable.
An attempt to load it has failed because of the following:
Can't load '/sw/lib/perl5/5.10.0/darwin-thread-multi-2level/auto/XML/LibXML/LibXML.bundle' for module XML::LibXML: dlopen(/sw/lib/perl5/5.10.0/darwin-thread-multi-2level/auto/XML/LibXML/LibXML.bundle, 1): no suitable image found. Did find: /sw/lib/perl5/5.10.0/darwin-thread-multi-2level/auto/XML/LibXML/LibXML.bundle: mach-o, but wrong architecture
Please be sure to rectify the problem prior to starting to use polymake.
WARNING: perl module XML::LibXSLT required for polymake seems to be unusable.
An attempt to load it has failed because of the following:
Attempt to reload XML/LibXML.pm aborted. Compilation failed in require
Please be sure to rectify the problem prior to starting to use polymake.
WARNING: perl module Term::ReadLine::Gnu required for polymake seems to be unusable.
An attempt to load it has failed because of the following:
Cannot do `initialize' in Term::ReadLine::Gnu
Please be sure to rectify the problem prior to starting to use polymake.

Configuration goes on, nevertheless.

Regards, Matt

gawrilow (Main Author)

### Re: Perl versions Mac OS X 10.6 x86_64

polymake can be used with any perl starting with 5.8.1, but the following should be observed:

• Using the system's own perl is the preferred way to work with polymake. perl packages provided by Fink may or may not work; they are too numerous to be conscientiously supported by our team.
• The architecture of the chosen perl installation must be identical to, or in the case of a fat /usr/bin/perl, include the architecture of Fink.
• The perl installation must be chosen at the moment of configuration of polymake and can't be arbitrarily changed later. If you decide to switch to another perl, please 'make distclean' and re-configure. If you configured polymake without specifying an absolute path to the perl interpreter, avoid silent changes to your PATH in between, so that an unqualified 'perl' command call always runs the same executable.

If you followed these rules, but the configuration is nevertheless messed up, this is obviously an issue for us to look into more closely. In this case please provide the complete log of your session.
### Re: Perl versions Mac OS X 10.6 x86_64

**Matt replied:**

For reference, the thing that I needed was to add the following line to my .profile:

```
export VERSIONER_PERL_PREFER_32_BIT=no
```

To recap: Mac OS X 10.6.8, Perl 5.10.0 (system, fat file containing x86_64, i386 and PowerPC), clean install of fink (64-bit only).

Regards, Matt

### Re: Perl versions Mac OS X 10.6 x86_64

**paffenholz (Developer) replied:**

Many thanks for finding the problem, and for telling us the solution! Enjoy polymake. The above option should have been the default, but there is more than one place to change it... I have added a note on the polymake installation page in case someone else runs into this problem.
https://www.physicsforums.com/threads/estimate-the-age-of-the-earth.847679/
# Estimate the age of the Earth

**says asked:**

## Homework Statement

The half-life of U-238 is approximately 4.4 billion years, while that of U-235 is approximately 700,000,000 years. A uranium ore has 0.75% U-235. Assuming there was an equal amount of both types of uranium when the Earth was formed, estimate the age of the Earth.

## Homework Equations

N = N0 - kt

where N = amount after time t, N0 = amount at time t = 0, k = decay constant, t = time

## The Attempt at a Solution

I don't do a lot of derivation at my school, but I want to get a lot better at it.

N = N0 - kt
dN/dt = -kt
dN = -kt dt
∫ -kt dt (definite integral from t0 to t) = -k(t - t0)

I'm not really sure where to go from here.

**PietKuip replied:**

Maybe they are asking for a crude estimate? So 1.4 Gy ago there was four times as much U-235, about 3%. And 4.2 Gy ago there was 64 times as much. But back then there was also twice as much U-238, so the U-235 content was about 25%.

**SteamKing (Staff Emeritus, Homework Helper) replied:**

What you are missing is that the rate of decay is proportional to the amount of substance on hand at any one time: https://en.wikipedia.org/wiki/Exponential_decay

Knowing the rate of decay allows you to calculate the half-life of the substance: https://en.wikipedia.org/wiki/Half-life
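For reference, a worked sketch of the crude estimate the replies point toward, assuming (per the problem) equal initial amounts of the two isotopes and reading 0.75% as the U-235 fraction of the ore's uranium. With $\lambda = \ln 2 / t_{1/2}$ for each isotope, the present-day ratio obeys

$$
\frac{N_{235}(t)}{N_{238}(t)} = e^{-(\lambda_{235} - \lambda_{238})\,t}
= \frac{0.0075}{0.9925} \approx 7.6 \times 10^{-3},
$$

so

$$
t = \frac{\ln(0.9925/0.0075)}{\lambda_{235} - \lambda_{238}}
\approx \frac{4.9}{0.99\,\mathrm{Gy^{-1}} - 0.16\,\mathrm{Gy^{-1}}}
\approx 5.9\ \mathrm{Gy}.
$$

This overshoots the accepted age of about 4.5 billion years mainly because the initial abundances were not actually equal, but it is the estimate the stated assumptions give.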
https://includestdio.com/6427.html
# Is there a difference between foo(void) and foo() in C++ or C?

## The Question : 259 people think this question is useful

Consider these two function definitions:

```c
void foo() { }
void foo(void) { }
```

Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons?

## 329 people think this answer is useful

In C:

• `void foo()` means "a function foo taking an unspecified number of arguments of unspecified type"
• `void foo(void)` means "a function foo taking no arguments"

In C++:

• `void foo()` means "a function foo taking no arguments"
• `void foo(void)` means "a function foo taking no arguments"

By writing `foo(void)`, therefore, we achieve the same interpretation across both languages and make our headers multilingual (though we usually need to do some more things to the headers to make them truly cross-language; namely, wrap them in an `extern "C"` if we're compiling C++).

## 40 people think this answer is useful

I realize your question pertains to C++, but when it comes to C the answer can be found in K&R, pages 72-73:

> Furthermore, if a function declaration does not include arguments, as in `double atof();`, that too is taken to mean that nothing is to be assumed about the arguments of atof; all parameter checking is turned off. This special meaning of the empty argument list is intended to permit older C programs to compile with new compilers. But it's a bad idea to use it with new programs. If the function takes arguments, declare them; if it takes no arguments, use void.

## 9 people think this answer is useful

**C++11 N3337 standard draft**

There is no difference. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf

Annex C "Compatibility", C.1.7 Clause 8: declarators, says:

> 8.3.5 Change: In C++, a function declared with an empty parameter list takes no arguments. In C, an empty parameter list means that the number and type of the function arguments are unknown. Example: `int f();` means `int f(void)` in C++, but `int f(unknown)` in C. Rationale: This is to avoid erroneous function calls (i.e., function calls with the wrong number or type of arguments). Effect on original feature: Change to semantics of well-defined feature. This feature was marked as "obsolescent" in C.

8.3.5 [dcl.fct] says:

> 4. The parameter-declaration-clause determines the arguments that can be specified, and their processing, when the function is called. [...] If the parameter-declaration-clause is empty, the function takes no arguments. The parameter list (void) is equivalent to the empty parameter list.

**C99**

As mentioned by C++11, `int f()` specifies nothing about the arguments and is obsolescent. It can either lead to working code or UB. I have interpreted the C99 standard in detail at: https://stackoverflow.com/a/36292431/895245
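To make the C-side behavior concrete, here is a minimal sketch (hypothetical function names, compiled as C). Because the empty parameter list turns off parameter checking, the mismatched call below is typically accepted by pre-C23 compilers without a diagnostic, although it is undefined behavior; note that C23 changes this and makes `()` mean `(void)` as in C++:

```c
#include <stdio.h>

void takes_anything();       /* C (pre-C23): unspecified arguments */
void takes_nothing(void);    /* C: no arguments, checked           */

int main(void)
{
    takes_anything(1, 2.0, "three"); /* compiles in C: no checking done */
    takes_nothing();                 /* takes_nothing(1) would be a     */
                                     /* compile-time error              */
    return 0;
}

void takes_anything() { puts("takes_anything called"); }
void takes_nothing(void) { puts("takes_nothing called"); }
```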
https://askdev.io/questions/995164/getaddrinfo-from-shell
When you call `ping name.domain`, it goes through both /etc/hosts and the DNS resolver to obtain an IP. It could be an IP hard-coded in /etc/hosts, or it could be one from the DNS server. It does so by calling getaddrinfo() or an equivalent, not directly, of course. How do I call getaddrinfo() from the shell? How do I reproduce the effect of "normal" networking utilities to obtain an IP from an address? This is not about using dig/host, which only go through DNS, or getent, which only goes through hosts. I want to reproduce common application behavior (e.g. ping) when it receives a name it needs to resolve. There are other questions about dig/host; this question is not a duplicate of those.

Update: here are my findings (based partly on answers to other questions):

• On Ubuntu (and Debian?) there is `gethostip -d name.domain` from syslinux.
• `perl -MSocket -le 'print inet_ntoa inet_aton shift' name.domain` works reliably and is terser than the accepted answer.
• Using getent may also work: `getent ahostsv4 name.domain | grep STREAM | head -1 | cut -f1 -d' '`

This seems to be the best one can do.
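For reference, a minimal C sketch of the call those one-liners approximate: getaddrinfo() consults /etc/hosts and DNS according to the system's NSS configuration, which is what ping does. The program and file names are hypothetical, and this is one plausible way to write it rather than what any of the shell tools do internally:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s name.domain\n", argv[0]);
        return 1;
    }

    /* Restrict to IPv4 stream sockets so we get one entry per address,
       mirroring the `getent ahostsv4 ... | grep STREAM` pipeline above. */
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(argv[1], NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    /* Print the first resolved address in dotted-quad form. */
    char buf[INET_ADDRSTRLEN];
    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    puts(inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof buf));

    freeaddrinfo(res);
    return 0;
}
```

Compiled with, say, `cc -o resolve resolve.c`, running `./resolve name.domain` prints the same IP the one-liners above produce.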
https://github.com/rkh/papers/blob/master/bithug/bithug.tex
# rkh/papers

\documentclass{llncs}
\usepackage{makeidx} % allows for indexgeneration
\usepackage[pdftex]{graphicx} % PNGs
\usepackage{amsmath, amssymb} % algebra
\usepackage[utf8x]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[procnames]{listings} % for sourcecode
\usepackage{graphviz} % graphs
\usepackage{array,multirow} % tables
\usepackage{afterpage} % figures
\usepackage{float} % figures
\lstset{%
  basicstyle=\small\ttfamily,
  language=Ruby,
  frame=lines,
  numbers=left,
  numberstyle=\rmfamily\tiny,
  numbersep=3pt,
  breaklines=true,
  breakatwhitespace=true
}
\restylefloat{figure}
\begin{document}
\pagestyle{headings} % switches on printing of running heads
\mainmatter % start of the contributions
\title{Bithug - Coding Social}
\subtitle{A Code Repository Management Platform and Social Network}
\titlerunning{Bithug} % abbreviated title (for running head)
\author{Tim Felgentreff~ Konstantin Haase~ Johannes Wollert}
\date{\today}
\authorrunning{Felgentreff, Haase, Wollert} % abbreviated author list (for running head)
\tocauthor{Tim Felgentreff (Hasso-Plattner-Institut)\\ Konstantin Haase (Hasso-Plattner-Institut)\\ Johannes Wollert (Hasso-Plattner-Institut)}
\institute{Social Web Applications Engineering, Internet Technologies and Systems, Hasso-Plattner-Institut, Universität Potsdam, D-14482 Potsdam, Germany,\\ \email{\{tim.felgentreff, konstantin.haase, johannes.wollert\}@student.hpi.uni-potsdam.de}}
\maketitle

\begin{abstract}
This paper describes some of the design decisions taken in creating the \emph{Bithug Coding Social} web service. Bithug is a free, highly configurable and social code hosting service created during the ``Social Web Applications Engineering'' seminar at HPI in 2009/10. Its purpose is to provide a free and more versatile alternative to the git hosting site \emph{Github} for universities' and companies' internal code hosting needs.
\end{abstract}

\section{Introduction}
The increased interactivity of the Web 2.0 has in recent years brought about a shift in web technologies. One aspect of this shift is the increasing generation of content by the consumer, clearly recognizable in the popularity of social networks. In 2007, the classic generators of content on the web, the programmers, got their own social network where they could show off their interests and skills: the social coding site \emph{Github}\cite{github:www}. On Github, a user can create code repositories and freely share code with everyone else on the network. Other users or projects can be ``followed'', which means that every action on and by the followed user is reported in a feed (on page and via RSS). Commits can be commented on, people can collaborate on projects, and code is online and browsable. All repositories on Github are initially open; however, for a monthly fee you can buy private space on Github where only people who have been explicitly granted access can view code.

\subsubsection*{Why Social?}
In the past, the sufficiency of self-organizing social networks even on large-scale projects has been demonstrated by open-source projects like the Linux Kernel\cite{kernel:www}, X.org\cite{xorg:www} or the GNU Project\cite{gnu:www}.
In their footsteps, emerging social websites time and time again have proven how loosely tied bodies are capable of organizing large events.\cite{facebook:help}\cite{twitter:organize}\cite{facebook:organize} In this manner, we want Bithug not only to be a service to host code, but a network for exchanging ideas and organizing projects in a smaller university or corporate environment, to form a ``community'' much like the open-source community which has formed on Github.

\subsubsection*{The Lazy Web}
Another change in how the web is used has come about with services like FriendFeed, Twitter and most recently Facebook Lite. They provide short messaging services to post notes. \emph{Friends} or \emph{Followers} can receive and read those messages \ldots or not.\\
This is the concept we call ``the Lazy Web'', where content is not necessarily generated to be read. In social networks, important information will find its way around. We want Bithug to integrate with services like Twitter to let users generate content about their activity. We hope that this will integrate with the information flow some users might have come to like.

\section{Project Management as a Service}
Bithug, as an alternative code hosting service, intends to focus on the people behind the code. That means that common hosting features such as forking and creating repositories are made possible, but the most rewarding features lie in notifying the people involved in a project and giving them a platform to communicate on. As people and projects are highly diverse, we intend to offer and integrate a number of web services, authentication methods and repository settings. Because we want to enable Bithug to be used in an academic context and in smaller companies, configuration needs to be easily personalized and highly versatile.

\section{Design and Technical Execution}
\subsection{The Stack}
Our project is split into three major subprojects, which rest on many Ruby libraries, some of which we had to modify to integrate with the project.
\vspace{-1em}
\begin{table}[H]
\setlength{\abovecaptionskip}{6pt} % 0.5cm as an example
\setlength{\belowcaptionskip}{-10pt} % integrate more tightly into text, this is explanatory, not result presentation
\renewcommand{\arraystretch}{1.7}
\centering
\begin{tabular}{>{\hspace{5pt}}r<{\hspace{5pt}}|>{\hspace{5pt}}l<{\hspace{5pt}}}
Bithug & The actual project code\\
BigBand & Parts of Bithug you can use in any Sinatra project\\
MonkeyLib & Parts you can use in any Ruby project\\
\hline
Forked Libraries & krb5-auth, ohm, simple-krb5\\
\hline
Free Libraries & chronic, compass, haml, rack, rspec, twitter-oauth, yard\\
\end{tabular}
\end{table}

\subsection{Persistence}
\input{persistence}

\subsection{Sinatra}
Sinatra is a Ruby web framework\cite{sinatra:www}, lately becoming a popular alternative to Ruby on Rails and Merb. In contrast to those, it has a small code base, does not ship with a persistence layer, and does not focus on code generation. In many cases this results in better performance than a Rails application, especially for single-purpose applications, since the sheer code size of Rails alone makes its dispatch slow enough that Sinatra performs much better. Having a small and clean code base can also be useful, as it is easy for a developer to understand what is going on under the hood.
Also, for some, not offering an out-of-the-box ORM solution is a feature rather than a shortcoming, as it is easier to choose another solution if the system does not assume it is working with its own ORM (ActiveRecord, in that case). However, it should be mentioned that those disadvantages have been reduced or removed in the upcoming Rails milestone 3.0.

\subsection{Using dynamic inheritance as a means of configuration}
In class-based object-oriented programming, inheritance is often used for specialization. For instance, in an application managing customers, the class Customer might have the same superclass as the class Administrator, as they might share some common logic and attributes. This behavior can be used for application configuration, where one configuration option can be seen as a special class inheriting from a more general application class. In our application we use this approach for our two core classes: User and Repository. For instance: you want to use Kerberos authentication. With the previous explanation it could be possible to have a Kerberos::User class, inheriting from Bithug::User, overriding the authentication method. This is actually very close to what we do internally.

As you might suspect, this approach fails when offering combinable options. What if you want to offer Kerberos and LDAP authentication, both as stand-alone solutions and as a fallback for each other (which is a typical network setup, in our experience)? In a language that offers multiple inheritance, you could create the classes Kerberos::User and Ldap::User, which both inherit from Bithug::User, and then create the classes KerberosWithLdapFallback::User and LdapWithKerberosFallback::User, both inheriting from Kerberos::User and Ldap::User. Were this language not able to define classes at runtime, it would be even more complicated, as you would have to generate all possible combinations at compile time. Ruby, however, does offer runtime creation of classes. But it does lack multiple inheritance.

A third approach would be to change a class's inheritance chain by altering its superclass at runtime (or at compile time, for that matter, which would be less dynamic). Upfront: even though this is possible in most Ruby implementations, it is considered extremely dangerous\footnote{Apart from maybe even seriously breaking your object space, you would have to clear a couple of caches used by the underlying Ruby implementation to speed up method dispatch.}, and is not used by Bithug. It should still be explained, as it helps to understand our implementation for those not familiar with Ruby method dispatch. Let us take the above example: to configure a system that would first try to authenticate against Kerberos and, if that fails, try LDAP authentication, you could change the superclass of Kerberos::User to Ldap::User, which still is a subclass of Bithug::User. Suppose you implement a method Bithug::User.authenticate(login, password) that should return true if authentication succeeds and false otherwise. Now, if Bithug::User.authenticate always returns false, and both Kerberos::User.authenticate and Ldap::User.authenticate return true if the authentication against Kerberos/LDAP succeeds and the result of their superclass's authenticate otherwise, the setup would be complete. This approach is somewhat comparable to context- or aspect-oriented programming, where you are able to wrap aspects around an object\cite{apel2006aspectual}. Ruby supports a concept called Mixins\cite{apel2004using}.
Mixins are one use case for Ruby modules\footnote{Others are namespacing and classes without instances.}. A Ruby module is defined like a Ruby class. You can define both instance and singleton methods\footnote{Methods defined on the class side, also known as class methods.}. However, as you cannot instantiate a module, its instance methods are not directly usable. You can include such a module in a class. It is a common misconception, even among long-time Rubyists, that by doing so the unbound methods\footnote{Ruby term for methods not yet belonging to an object, thus not being callable.} are copied to the class, thereby overwriting existing methods. In reality, when including a module in a class, a new class is created, containing all the module's instance methods. That class is inserted in the inheritance chain in between the original class and its superclass (or previously included modules). This allows a similar usage as changing the superclass, without its complications.

However, if you followed the above explanation closely, you might already see two major issues with that approach. As mentioned, only the instance methods become part of the new class. The singleton methods are already bound to the module and cannot be rebound to the class. The solution is a common pattern one will often find in Ruby programs: use another mixin for the class methods and include that mixin in the singleton class\footnote{A class every object in Ruby has. It keeps all the singleton methods of an object as instance methods and has that object as its sole instance – hence singleton class.}. The other problem is that a module is inserted after the class in the inheritance chain, not in front of it. In the Kerberos/LDAP example, Bithug::User.authenticate would always return false, since the Kerberos and LDAP implementations never get called. Our solution to that is to have an empty class (i.e. without method definitions) called Bithug::User subclassing Bithug::AbstractUser. All our common logic is placed inside AbstractUser. Now, if we include a module in Bithug::User, it is inserted in front of AbstractUser in the inheritance chain, thus getting called.
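The following listing sketches this configuration pattern. It is an illustrative reconstruction rather than the actual Bithug source; the helpers \texttt{kerberos\_ok?} and \texttt{ldap\_ok?} are hypothetical stubs.

\begin{lstlisting}
# Illustrative sketch, not actual Bithug code; the *_ok? helpers
# are stubs standing in for real Kerberos/LDAP checks.
module Bithug; end

class Bithug::AbstractUser
  def self.authenticate(login, password)
    false # common default: reject
  end
end

module Kerberos
  module User
    def authenticate(login, password)
      kerberos_ok?(login, password) || super
    end

    def kerberos_ok?(login, password)
      false # stub: replace with a real Kerberos check
    end
  end
end

module Ldap
  module User
    def authenticate(login, password)
      ldap_ok?(login, password) || super
    end

    def ldap_ok?(login, password)
      false # stub: replace with a real LDAP check
    end
  end
end

class Bithug::User < Bithug::AbstractUser
  extend Ldap::User     # fallback, consulted second
  extend Kerberos::User # consulted first
end

Bithug::User.authenticate("alice", "secret") # => false (both stubs fail)
\end{lstlisting}

Here \texttt{extend} includes each mixin in the singleton class of \texttt{Bithug::User}, so \texttt{Bithug::User.authenticate} tries Kerberos first, falls through to LDAP via \texttt{super}, and finally reaches the always-failing default in \texttt{AbstractUser}.

\bibliographystyle{splncs}
\bibliography{bithug}
\clearpage
\end{document}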
https://handwiki.org/wiki/Astronomy:Space_exploration
# Astronomy:Space exploration

Short description: Discovery and exploration of outer space

Humans explore the lunar surface

This Kuiper belt object, known as Arrokoth, is the farthest closely visited Solar System body, seen in 2019

Opel RAK.1 - World's first public manned flight of a rocket plane on September 30, 1929.

Space exploration is the use of astronomy and space technology to explore outer space.[1] While the exploration of space is carried out mainly by astronomers with telescopes, its physical exploration is conducted both by unmanned robotic space probes and by human spaceflight. Space exploration, like its classical form astronomy, is one of the main sources for space science.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. The world's first large-scale experimental rocket program was Opel RAK, under the leadership of Fritz von Opel and Max Valier during the late 1920s, leading to the first manned rocket cars and rocket planes,[2][3] which paved the way for the Nazi-era V2 program and US and Soviet activities from 1950 onwards. The Opel RAK program and its spectacular public demonstrations of ground and air vehicles drew large crowds, caused global public excitement as the so-called "Rocket Rumble",[4] and had a large, long-lasting impact on later spaceflight pioneers such as Wernher von Braun.

Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[5]

The early era of space exploration was driven by a "Space Race" between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union's Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Alexei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to renewable hardware, such as the Space Shuttle program, and from competition to cooperation, as with the International Space Station (ISS). With the substantial completion of the ISS[6] following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux.
Constellation, a Bush Administration program for a return to the Moon by 2020,[7] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[8] The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[9]

In the 2000s, China initiated a successful manned spaceflight program and India launched Chandrayaan-1, while the European Union and Japan have also planned future crewed space missions. China, Russia, and Japan have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize). Students interested in space have formed SEDS (Students for the Exploration and Development of Space). SpaceX is currently developing Starship, a fully reusable orbital launch vehicle that is expected to massively reduce the cost of spaceflight and allow for crewed planetary exploration.[10][11]

## History of exploration

Most orbital flight actually takes place in upper layers of the atmosphere, especially in the thermosphere (not to scale)

Timeline of Solar System exploration.

In July 1950 the first Bumper rocket was launched from Cape Canaveral, Florida. The Bumper was a two-stage rocket consisting of a post-war V-2 topped by a WAC Corporal rocket. It could reach then-record altitudes of almost 400 km. Launched by the General Electric Company, this Bumper was used primarily for testing rocket systems and for research on the upper atmosphere. The rockets carried small payloads that allowed them to measure attributes including air temperature and cosmic ray impacts.

### Telescope

The first telescope is said to have been invented in 1608 in the Netherlands by an eyeglass maker named Hans Lippershey. The Orbiting Astronomical Observatory 2, launched on 7 December 1968, was the first space telescope.[12]

As of 2 February 2019, there were 3,891 confirmed exoplanets discovered. The Milky Way is estimated to contain 100–400 billion stars[13] and more than 100 billion planets.[14] There are at least 2 trillion galaxies in the observable universe.[15][16] GN-z11 is the most distant known object from Earth, reported as 32 billion light-years away (comoving distance).[17][18]

### First outer space flights

Sputnik 1, the first artificial satellite, orbited Earth at 215 to 939 km (134 to 583 mi) in 1957, and was soon followed by Sputnik 2. See First satellite by country

Apollo CSM in lunar orbit

Apollo 17 astronaut Harrison Schmitt standing next to a boulder at Taurus-Littrow.

In 1949, the Bumper-WAC reached an altitude of 393 kilometres (244 mi), becoming the first human-made object to enter space, according to NASA,[19] although V-2 Rocket MW 18014 crossed the Kármán line earlier, in 1944.[20]

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 ("Satellite 1") mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi).
It had two radio transmitters (20 and 40 MHz), which emitted "beeps" that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data were encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

### First human outer space flight

The first successful human spaceflight was Vostok 1 ("East 1"), carrying the 27-year-old Russian cosmonaut Yuri Gagarin, on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin's flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

### First astronomical body space explorations

The first artificial object to reach another celestial body was Luna 2, reaching the Moon in 1959.[21] The first soft landing on another celestial body was performed by Luna 9, landing on the Moon on 3 February 1966.[22] Luna 10 became the first artificial satellite of the Moon, entering lunar orbit on 3 April 1966.[23]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969, landing on the Moon. There have been a total of six spacecraft with humans landing on the Moon, from 1969 to the last human landing in 1972.

The first interplanetary flyby was the 1961 Venera 1 flyby of Venus, though the 1962 Mariner 2 was the first flyby of Venus to return data (closest approach 34,773 kilometers). Pioneer 6 was the first satellite to orbit the Sun, launched on 16 December 1965. The other planets were first flown by as follows: Mars in 1965 by Mariner 4, Jupiter in 1973 by Pioneer 10, Mercury in 1974 by Mariner 10, Saturn in 1979 by Pioneer 11, Uranus in 1986 by Voyager 2, and Neptune in 1989 by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively. This accounts for flybys of each of the eight planets in the Solar System, the Sun, the Moon, and Ceres and Pluto (2 of the 5 recognized dwarf planets).

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7, which returned data to Earth for 23 minutes from Venus. In 1975, Venera 9 was the first to return images from the surface of another planet, Venus. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer-duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission. Venus and Mars are the two planets outside of Earth on which humans have conducted surface missions with unmanned robotic spacecraft.

### First space station

Salyut 1 was the first space station of any kind, launched into low Earth orbit by the Soviet Union on 19 April 1971. The International Space Station is currently the only fully functional space station, inhabited continuously since the year 2000.

### First interstellar space flight

Voyager 1 became the first human-made object to leave the Solar System into interstellar space on 25 August 2012.
The probe passed the heliopause at 121 AU to enter interstellar space.[24]

### Farthest from Earth

The Apollo 13 flight passed the far side of the Moon at an altitude of 254 kilometers (158 miles; 137 nautical miles) above the lunar surface, and 400,171 km (248,655 mi) from Earth, marking the record for the farthest humans have ever traveled from Earth, in 1970.

Voyager 1 was at a distance of 145.11 astronomical units (21.708 billion kilometers; 13.489 billion miles) from Earth as of 1 January 2019.[25] It is the most distant human-made object from Earth.[26] GN-z11 is the most distant known object from Earth, reported as 13.4 billion light-years away in light-travel distance (about 32 billion light-years in comoving distance).[17][18]

### Key people in early space exploration

The dream of stepping into the outer reaches of Earth's atmosphere was driven by the fiction of Jules Verne[27][28][29] and H. G. Wells,[30] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a "Space Race" with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany's World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on rocket development ("Operation Paperclip"). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA's Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolev, whose legacy includes both the R7 and the Soyuz, which remain in service to this day. Korolev was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolev. After Korolev's death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[31][32]

Other key people:

• Valentin Glushko was Chief Engine Designer for the Soviet Union. Glushko designed many of the engines used on the early Soviet rockets, but was constantly at odds with Korolev.
• Vasily Mishin was Chief Designer working under Sergey Korolev and one of the first Soviets to inspect the captured German V-2 design. Following the death of Sergei Korolev, Mishin was held responsible for the Soviet failure to be the first country to place a man on the Moon.
• Robert Gilruth was the NASA head of the Space Task Force and director of 25 crewed space flights. Gilruth was the person who suggested to John F. Kennedy that the Americans take the bold step of reaching the Moon in an attempt to reclaim space superiority from the Soviets.
• Christopher C. Kraft, Jr. was NASA's first flight director, who oversaw development of Mission Control and associated technologies and procedures.
• Maxime Faget was the designer of the Mercury capsule; he played a key role in designing the Gemini and Apollo spacecraft, and contributed to the design of the Space Shuttle.

## Targets of exploration

The Moon as seen in a digitally processed image from data collected during the 1992 Galileo spacecraft flyby

Starting in the mid-20th century, probes and then human missions were sent into Earth orbit, and then on to the Moon. Probes were also sent throughout the known Solar System, and into solar orbit. Unmanned spacecraft had been sent into orbit around Saturn, Jupiter, Mars, Venus, and Mercury by the 21st century, and the most distant active spacecraft, Voyager 1 and Voyager 2, have traveled beyond 100 times the Earth–Sun distance. Their instruments have lasted long enough that it is thought both have left the Sun's heliosphere, a sort of bubble of particles blown into the Galaxy by the Sun's solar wind.

### The Sun

The Sun is a major focus of space exploration. Being above the atmosphere and beyond Earth's magnetic field gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth's surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, launched in 2018, will approach the Sun to within 1/8th the orbit of Mercury.

### Mercury

Main page: Astronomy:Exploration of Mercury

MESSENGER image of Mercury (2013)

A MESSENGER image from 18,000 km showing a region about 500 km across (2008)

Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions had been the only missions to make close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b). A third mission to Mercury, BepiColombo, a joint mission between Japan and the European Space Agency scheduled to arrive in 2025, is to include two probes. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10's flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
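For context, a standard result (not specific to this article) relates delta-v to propellant: the Tsiolkovsky rocket equation,

$$\Delta v = v_e \ln \frac{m_0}{m_f},$$

where $v_e$ is the effective exhaust velocity, $m_0$ the initial (fueled) mass, and $m_f$ the final mass of the spacecraft. Because the required mass ratio $m_0/m_f$ grows exponentially with $\Delta v$, high-delta-v targets such as Mercury are disproportionately expensive to reach.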
### Venus

Mariner 10 image of Venus (1974)

Main page: Astronomy:Observations and explorations of Venus

Venus was the first target of interplanetary flyby and lander missions and, despite having one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet. The first flyby was the 1961 Venera 1, though the 1962 Mariner 2 was the first flyby to successfully return data. Mariner 2 has been followed by several other flybys by multiple space agencies, often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus, and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

### Earth

First television image of Earth from space, taken by TIROS-1 (1960)

The Blue Marble Earth picture taken during Apollo 17 (1972)

Main page: Astronomy:Earth observation satellite

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference. For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States' first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth's magnetic fields, which currently renders construction of habitable space stations above 1000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth's atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

### The Moon

The Moon (2010)

Main page: Exploration of the Moon

Apollo 16 LEM Orion, the Lunar Roving Vehicle and astronaut John Young (1972)

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans. In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon's surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the early 1970s with the Lunokhod program, which included the first uncrewed rovers, and with the Luna sample-return flights, which successfully brought lunar soil samples to Earth for study.
This marked the first automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the sixth landing and the most recent human visit. Artemis 2 will fly by the Moon in 2022. Robotic missions are still pursued vigorously.

### Mars

Mars, as seen by the Hubble Space Telescope (2003)

Surface of Mars by the Spirit rover (2004)

Main page: Astronomy:Exploration of Mars

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected not only to give a better appreciation of the red planet but also to yield further insight into the past, and possible future, of Earth. Mars is the prime candidate for humans to live beyond Earth, and the technology needed to reach Mars exists.[33]

## Rationales

Astronaut Buzz Aldrin had a personal Communion service when he first arrived on the surface of the Moon.

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program.[67] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars' worth of minerals and metals. Such expeditions could generate considerable revenue.[68] In addition, it has been argued that space exploration programs help inspire youth to study science and engineering.[69] Space exploration also gives scientists the ability to perform experiments in other settings and expand humanity's knowledge.[70]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Among the cited risks are a lack of natural resources, comets, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said that "I don't think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I'm an optimist. We will reach out to the stars."[71]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[72] He argued that humanity's choice is essentially between expansion off Earth into space and cultural (and eventually biological) stagnation and death. These motivations could be attributed to one of the first rocket scientists at NASA, Wernher von Braun, and his vision of humans moving beyond Earth.
The basis of this plan was to: "Develop multi-stage rockets capable of placing satellites, animals, and humans in space. Development of large, winged reusable spacecraft capable of carrying humans and equipment into Earth orbit in a way that made space access routine and cost-effective. Construction of a large, permanently occupied space station to be used as a platform both to observe Earth and from which to launch deep space expeditions. Launching the first human flights around the Moon, leading to the first landings of humans on the Moon, with the intent of exploring that body and establishing permanent lunar bases. Assembly and fueling of spaceships in Earth orbit for the purpose of sending humans to Mars with the intent of eventually colonizing that planet".[73]

Known as the Von Braun Paradigm, the plan was formulated to lead humans in the exploration of space. Von Braun's vision of human space exploration served as the model for efforts in space exploration well into the twenty-first century, with NASA incorporating this approach into the majority of their projects.[73] The steps were followed out of order, as seen by the Apollo program reaching the Moon before the Space Shuttle program was started, which in turn was used to complete the International Space Station. Von Braun's Paradigm formed NASA's drive for human exploration, in the hopes that humans discover the far reaches of the universe.

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[74]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is "a good investment", compared to 21% who did not.[75]

### Human nature

Space advocacy and space policy[76] regularly invoke exploration as part of human nature.[77] This advocacy has been criticized by scholars as essentializing and as a continuation of colonialism,[78][79][80][81] particularly manifest destiny, making space exploration misaligned with science and a less inclusive field.[76]

## Topics

Main pages: Astronomy:Space science and Physics:Human presence in space

### Spaceflight

Main pages: Engineering:Spaceflight and Earth:Astronautics

Delta-v's in km/s for various orbital maneuvers

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space. Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

### Satellites

Main page: Engineering:Satellite

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites.
Space stations and human spacecraft in orbit are also satellites.

### Commercialization of space

The commercialization of space first started with the launching of private satellites by NASA or other space agencies. Current examples of the commercial satellite use of space include satellite navigation systems, satellite television and satellite radio.

The next step in the commercialization of space was seen as human spaceflight. Flying humans safely to and from space had become routine to NASA.[82] Reusable spacecraft, however, were an entirely new engineering challenge, something previously seen only in novels and films like Star Trek and War of the Worlds. Prominent figures like Buzz Aldrin supported the development of reusable vehicles like the Space Shuttle. Aldrin held that reusable spacecraft were the key to making space travel affordable, stating that "passenger space travel is a huge potential market big enough to justify the creation of reusable launch vehicles".[83] Coming from one of America's best-known heroes of space exploration, such advocacy cast space exploration as the next great expedition, following the example of Lewis and Clark. Space tourism is the next step for reusable vehicles in the commercialization of space; this form of space travel is undertaken by individuals for personal pleasure. Private spaceflight companies such as SpaceX and Blue Origin, and commercial space stations such as the planned Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

### Alien life

Main pages: Astronomy:Astrobiology and Astronomy:Extraterrestrial life

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[84] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: έξω, exo, "outside").[85][86][87] The term "xenobiology" has been used as well, but this is technically incorrect because it means "biology of the foreigners".[88] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[89] In the Solar System, some of the prime locations for current or past astrobiology are Enceladus, Europa, Mars, and Titan.[90]

### Human spaceflight and habitation

Crew quarters on Zvezda, the base ISS crew module

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 21 years, 16 days. Valeri Polyakov's record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed.

The health effects of space have been well documented through years of research conducted in the field of aerospace medicine. Analog environments similar to those one may experience in space travel (like deep-sea submarines) have been used in this research to further explore the relationship between isolation and extreme environments.[91] It is imperative that the health of the crew be maintained, as any deviation from baseline may compromise the integrity of the mission as well as the safety of the crew; hence astronauts must endure rigorous medical screenings and tests prior to embarking on any missions.
However, it does not take long for the environmental dynamics of spaceflight to begin taking their toll on the human body; for example, space motion sickness (SMS), a condition which affects the neurovestibular system and culminates in mild to severe signs and symptoms such as vertigo, dizziness, fatigue, nausea, and disorientation, plagues almost all space travelers within their first few days in orbit.[91] Space travel can also have a profound impact on the psyche of the crew members, as delineated in anecdotal writings composed after their retirement. Space travel can adversely affect the body's natural biological clock (circadian rhythm), sleep patterns (causing sleep deprivation and fatigue), and social interaction; consequently, residing in a low Earth orbit (LEO) environment for a prolonged amount of time can result in both mental and physical exhaustion.[91] Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure. The lack of gravity causes fluid to rise upward, which can cause pressure to build up in the eye, resulting in vision problems; other effects include the loss of bone minerals and density, cardiovascular deconditioning, and decreased endurance and muscle mass.[92]

Radiation is perhaps the most insidious health hazard to space travelers, as it is invisible to the naked eye and can cause cancer. Spacecraft are no longer protected from the Sun's radiation once they are positioned above the Earth's magnetic field; the danger of radiation is even more potent when one enters deep space. The hazards of radiation can be ameliorated through protective shielding on the spacecraft, alerts, and dosimetry.[93]

Fortunately, with new and rapidly evolving technological advancements, those in Mission Control are able to monitor the health of their astronauts more closely utilizing telemedicine. One may not be able to completely evade the physiological effects of spaceflight, but they can be mitigated. For example, medical systems aboard space vessels such as the International Space Station (ISS) are well equipped and designed to counteract the effects of lack of gravity and weightlessness; on-board treadmills can help prevent muscle loss and reduce the risk of developing premature osteoporosis.[91][93] Additionally, a crew medical officer is appointed for each ISS mission, and a flight surgeon is available 24/7 via the ISS Mission Control Center located in Houston, Texas.[94] Although the interactions are intended to take place in real time, communications between the space and terrestrial crews may become delayed, sometimes by as much as 20 minutes,[93] as their distance from each other increases when the spacecraft moves further out of LEO; because of this, the crew are trained and need to be prepared to respond to any medical emergencies that may arise on the vessel, as the ground crew are hundreds of miles away.

Travelling and possibly living in space poses many challenges. Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a "stepping stone" to the other planets, especially Mars.
At the end of 2006, NASA announced it was planning to build a permanent Moon base with a continual presence by 2024.[95] Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, and the inability or difficulty of establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which, as of 2012, had been ratified by all spacefaring nations.[96] Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization. #### Human representation and participation Participation in and representation of humanity in space has been an issue since the first phase of space exploration.[81] Some rights of non-spacefaring countries have been secured through international space law, which declares space the "province of all mankind" and treats spaceflight as a resource of all humanity, though the sharing of space with all humanity is still criticized as imperialist and lacking.[81] Beyond international inclusion, the inclusion of women and people of colour has also been lacking. To make spaceflight more inclusive, organizations such as the JustSpace Alliance[81] and the IAU-featured Inclusive Astronomy project[97] have been formed in recent years. ##### Women Main page: Astronomy:Women in space The first woman ever to enter space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots, a career closed to women; this is one reason for the delay in allowing women to join space crews. After the rule changed, Svetlana Savitskaya, also from the Soviet Union, became the second woman to enter space. Sally Ride became the next woman to enter space, and the first woman to do so through the United States program. Since then, eleven other countries have flown women astronauts. The first all-female spacewalk was conducted in 2019 by Christina Koch and Jessica Meir; both women have also participated in separate spacewalks with NASA. The first woman to go to the Moon is planned for 2024. Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs and limit the space missions they are able to go on include, for example: • agencies limiting women to half as much time in space as men, citing potential cancer risks that remain under-researched;[98] • a lack of space suits sized appropriately for female astronauts.[99] Additionally, women have faced discriminatory treatment; Sally Ride, for example, was scrutinized more than her male counterparts and asked sexist questions by the press.
### Art Art in and from space ranges from capturing and arranging material, such as Yuri Gagarin's selfie in space or the image The Blue Marble, through drawings, such as the first one made in space by cosmonaut and artist Alexei Leonov, and music videos, such as Chris Hadfield's cover of Space Oddity on board the ISS, to permanent installations on celestial bodies, such as those on the Moon. Main page: Astronomy:Outline of space exploration ### Living in space #### Animals in space • Animals in space • Monkeys in space • Russian space dogs ## References 1. https://www.airforcemag.com/article/0904rocket/ article by Walter J. Boyne in Air Force Magazine, September 1, 2004 2. https://www.opelpost.com/05/2018/opel-sounds-in-the-era-of-rockets/ Opel Post article on 90th anniversary of Opel RAK2 public rocket demonstration at AVUS Berlin 3. https://www.airspacemag.com/daily-planet/century-elon-musk-there-was-fritz-von-opel-180977634/ article by Frank H. Winter in Air&Space, April 30, 2021 4. Roston, Michael (28 August 2015). "NASA's Next Horizon in Space". The New York Times. 5. Chow, Denise (9 March 2011). "After 13 Years, International Space Station Has All Its NASA Rooms". Space.com. 6. Connolly, John F. (October 2006). "Constellation Program Overview". Constellation Program Office. 7. Lawler, Andrew (22 October 2009). "No to NASA: Augustine Commission Wants to More Boldly Go". Science. 8. "President Outlines Exploration Goals, Promise". Address at KSC. 15 April 2010. 9. "SpaceX". 10. Angelo, Joseph A. (2014). Spacecraft for Astronomy. Infobase Publishing. p. 20. ISBN 978-1-4381-0896-4. 11. Staff (2 January 2013). "100 Billion Alien Planets Fill Our Milky Way Galaxy: Study". Space.com. 12. Conselice, Christopher J. (2016). "The Evolution of Galaxy Number Density at z < 8 and Its Implications". The Astrophysical Journal 830 (2): 83. doi:10.3847/0004-637X/830/2/83. Bibcode: 2016ApJ...830...83C. 13. Fountain, Henry (17 October 2016). "Two Trillion Galaxies, at the Very Least". The New York Times. 14. Borenstein, Seth (3 March 2016). "Astronomers Spot Record Distant Galaxy From Early Cosmos". Associated Press. 15. "GN-z11: Astronomers push Hubble Space Telescope to limits to observe most remote galaxy ever seen". Australian Broadcasting Corporation. 3 March 2016. 16. "First Human-Made Object to Enter Space". NASA. 3 January 2008. 17. Williams, Matt (2016-09-16). "How high is space?". 18. Harwood, William (12 September 2013). "Voyager 1 finally crosses into interstellar space". CBS News. 20. "Tsiolkovsky biography". Russianspaceweb.com. 21. "Herman Oberth". centennialofflight.net. 29 December 1989. 22. "Von Braun". History.msfc.nasa.gov. 23. Bond, Peter (7 April 2003). "Obituary: Lt-Gen Kerim Kerimov". The Independent (London). 24. Blair, Betty (1995). "Behind Soviet Aeronauts". Azerbaijan International 3: 3. 26. Dinerman, Taylor (27 September 2004). "Is the Great Galactic Ghoul losing his appetite?". The Space Review. 27. Knight, Matthew. "Beating the curse of Mars". Science & Space. 28. "India becomes first Asian nation to reach Mars orbit, joins elite global space club". The Washington Post. 24 September 2014. "India became the first Asian nation to reach the Red Planet when its indigenously made unmanned spacecraft entered the orbit of Mars on Wednesday" 29. "India's spacecraft reaches Mars orbit ... and history". CNN. 24 September 2014.
"India's Mars Orbiter Mission successfully entered Mars' orbit Wednesday morning, becoming the first nation to arrive on its first attempt and the first Asian country to reach the Red Planet." 30. "India Successfully Launches First Mission to Mars; PM Congratulates ISRO Team". International Business Times. 5 November 2013. 31. Bhatt, Abhinav (5 November 2013). "India's 450-crore mission to Mars to begin today: 10 facts". NDTV. 32. "Hope Mars Probe". Mohammed Bin Rashid Space Centre. 33. Molczan, Ted (9 November 2011). "Phobos-Grunt – serious problem reported". SeeSat-L. 35. Wong, Al (28 May 1998). "Galileo FAQ: Navigation". NASA. 36. Hirata, Chris. "Delta-V in the Solar System". California Institute of Technology. 37. Suomi, V.E.; Limaye, S.S.; Johnson, D.R. (1991). "High winds of Neptune: A possible mechanism". Science 251 (4996): 929–932. doi:10.1126/science.251.4996.929. PMID 17847386. Bibcode1991Sci...251..929S. 38. Agnor, C.B.; Hamilton, D.P. (2006). "Neptune's capture of its moon Triton in a binary-planet gravitational encounter". Nature 441 (7090): 192–194. doi:10.1038/nature04792. PMID 16688170. Bibcode2006Natur.441..192A. 39. "Voyager Frequently Asked Questions". Jet Propulsion Laboratory. 14 January 2003. 40. Roy Britt, Robert (26 February 2003). "Pluto mission gets green light at last". space.com. Space4Peace.org. 41. Forward, Robert L (January 1996). "Ad Astra!". Journal of the British Interplanetary Society 49: 23–32. Bibcode1996JBIS...49...23F. 42. Gilster, Paul (12 April 2016). "Breakthrough Starshot: Mission to Alpha Centauri". Centauri Dreams. 43. EDT, Seung Lee on 4/13/16 at 2:01 PM (13 April 2016). "Mark Zuckerberg Launches \$100 Million Initiative To Send Tiny Space Probes To Explore Stars" (in en). 44. "How does the Webb Contrast with Hubble?". JWST Home – NASA. 2016. 45. "JWST vital facts: mission goals". NASA James Webb Space Telescope. 2017. 46. "James Webb Space Telescope. JWST History: 1989-1994". Space Telescope Science Institute, Baltimore, MD. 2017. 47. NASA administrator on new Moon plan: 'We're doing this in a way that's never been done before'. Loren Grush, The Verge. 17 May 2019. 48. Harwood, William (17 July 2019). "NASA boss pleads for steady moon mission funding". CBS News. 49. Senate appropriators advance bill funding NASA despite uncertainties about Artemis costs. Jeff Foust, Space News. 27 September 2019. 50. Hertzfeld, H. R. (2002). "Measuring the Economic Returns from Successful NASA Life Sciences Technology Transfers". The Journal of Technology Transfer 27 (4): 311–320. doi:10.1023/A:1020207506064. PMID 14983842. 51. Elvis, Martin (2012). "Let's mine asteroids – for science and profit". Nature 485 (7400): 549. doi:10.1038/485549a. PMID 22660280. Bibcode2012Natur.485..549E. 52. "Is Space Exploration Worth the Cost? A Freakonomics Quorum". Freakonomics. freakonomics.com. 2008-01-11. 53. Zelenyi, L. M.; Korablev, O. I.; Rodionov, D. S.; Novikov, B. S.; Marchenkov, K. I.; Andreev, O. N.; Larionov, E. V. (December 2015). "Scientific objectives of the scientific equipment of the landing platform of the ExoMars-2018 mission" (in en). Solar System Research 49 (7): 509–517. doi:10.1134/S0038094615070229. ISSN 0038-0946. Bibcode2015SoSyR..49..509Z. 54. Highfield, Roger (15 October 2001). "Colonies in space may be only hope, says Hawking". The Daily Telegraph (London). 55. Clarke, Arthur C. (1950). "10". Interplanetary Flight – An Introduction to Astronautics. New York: Harper & Brothers. 56. Launius, R. D.; Mccurdy, H. E. (2007). 
"Robots and humans in space flight: Technology, evolution, and interplanetary travel". Technology in Society 29 (3): 271–282. doi:10.1016/j.techsoc.2007.04.00. 57. "Origin of Human Life – USA Today/Gallup Poll". Pollingreport.com. 3 July 2007. 58. Marina Koren (17 September 2020). "No One Should 'Colonize' Space". 59. Deana L. Weibel (12 July 2019). "Destiny in Space". American Anthropological Association. 60. Mike Wall (25 October 2019). "Bill Nye: It's Space Settlement, Not Colonization". 61. Drake, Nadia (2018-11-09). "We need to change the way we talk about space exploration". National Geographic. 62. Haris Durrani (19 July 2019). Is Spaceflight Colonialism?. Retrieved 2 October 2020. 63. year = 2002| last1 = Gregory | first1 = Frederick | last2 = Garber | first2 = S.J. | book = Looking Backward, Looking Forward: Forty Years of U.S. Human Spaceflight| pages = 73-80 |title=Making Human Spaceflight as Safe as Possible 64. year = 2002| last1 = Aldrin | first1 = Buzz | last2 = Garber | first2 = S.J. | book = Looking Backward, Looking Forward: Forty Years of U.S. Human Spaceflight| pages = 91-100 |title=Apollo and Beyond 65. "NASA Astrobiology". Astrobiology.arc.nasa.gov. 66. "X". Aleph.se. 11 March 2000. 67. "Fears and dreads". World Wide Words. 31 May 1997. 68. "iTWire – Scientists will look for alien life, but Where and How?". Itwire.com.au. 27 April 2007. 69. "Astrobiology". Biocab.org. 70. Ward, Peter (8 December 2006). "Launching the Alien Debates". Astrobiology Magazine. 71. "Astrobiology: the quest for extraterrestrial life". Spacechronology.com. 29 September 2010. 72. Doarn, CharlesR; Polk, Jd; Shepanek, Marc (2019). "Health challenges including behavioral problems in long-duration spaceflight" (in en). Neurology India 67 (8): S190–S195. doi:10.4103/0028-3886.259116. ISSN 0028-3886. PMID 31134909. 73. Perez, Jason (2016-03-30). "The Human Body in Space". 74. Mars, Kelli (2018-03-27). "5 Hazards of Human Spaceflight". 75. Mars, Kelli (2018-03-27). "5 Hazards of Human Spaceflight". 76. "Global Exploration Strategy and Lunar Architecture" (PDF) (Press release). NASA. 4 December 2006. Archived from the original (PDF) on 14 June 2007. Retrieved 5 August 2007. 77. Simberg, Rand (Fall 2012). "Property Rights in Space". The New Atlantis (37): 20–31. Retrieved 14 December 2012. 78. Website of the IAU100 Inclusive Astronomy project 79. Kramer, Miriam (27 August 2013). "Female Astronauts Face Discrimination from Space Radiation Concerns, Astronauts Say". Space.com. Purch. 80. Sokolowski, Susan L. (5 April 2019). "Female astronauts: How performance products like space suits and bras are designed to pave the way for women's accomplishments". The Conversation. Retrieved 10 May 2020.
https://www.physicsforums.com/threads/fourier-sine-transform-of-1.368796/
# Homework Help: Fourier sine transform of 1 1. Jan 11, 2010 ### Hoplite 1. The problem statement, all variables and given/known data I'm looking to determine the Fourier sine transform of 1. 2. Relevant equations On this site http://mechse.illinois.edu/research/dstn/teaching_files2/fouriertransforms.pdf [Broken] (page 2) it gives the sine transform as $$\frac{2}{\pi \omega}$$ 3. The attempt at a solution However, since the Fourier sine transform of 1 is defined via $$\frac{2}{\pi} \int_0^\infty \sin (\omega x)\, dx ,$$ I figure that its value should be $$\frac{2}{\pi \omega} -\lim_{L\rightarrow \infty } \frac{2}{\pi \omega} \cos (\omega L) .$$ It seems like they've just thrown the cosine term away, but is this legal? If so, why? Last edited by a moderator: May 4, 2017 2. Jan 11, 2010 ### LCKurtz The usual condition for any Fourier transform is $$\int_{-\infty}^\infty |f(x)|\ dx < \infty$$ which f(x) = 1 doesn't satisfy. The sine transform doesn't exist, and the integral for it diverges, as you have observed. 3. Jan 11, 2010 ### Hoplite Excellent. Thanks, LCKurtz. 4. Jan 11, 2010 ### vela Staff Emeritus The straightforward integral diverges, so what they probably did was throw in an integrating factor $e^{-\lambda x}$ to make the integral converge, and then take the limit as $\lambda\rightarrow0^+$. Try that and see what you get. 5. Jan 11, 2010 ### Hoplite That's a good trick. I'll have to remember that one. Cheers.
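For completeness, here is a sketch of the regularization vela suggests (my own working, using the same 2/π normalization as the thread): insert the damping factor, integrate, then let λ tend to zero from above:

$$\frac{2}{\pi}\int_0^\infty e^{-\lambda x}\sin(\omega x)\,dx \;=\; \frac{2}{\pi}\,\frac{\omega}{\lambda^2+\omega^2}\;\longrightarrow\;\frac{2}{\pi\omega}\quad (\lambda\to 0^+).$$

The damping suppresses the oscillating cosine boundary term, which is why the table shows only $2/(\pi\omega)$; rigorously, this is a distributional (Abel-regularized) value rather than a convergent integral.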
https://www.r-bloggers.com/2019/04/random-sampling-of-files/
random_files <- function(path, percent_number, pattern = "wav$|WAV$"){
  ####################################################################
  # path = path to folder with files to select
  #
  # percent_number = percentage or number of recordings to select. If value is
  # between 0 and 1, a percentage of files is assumed; if value is greater
  # than 1, a number of files is assumed
  #
  # pattern = file extension to select. By default it selects wav files. For
  # other types of files replace wav and WAV with the desired extension
  ####################################################################

  # Get file list with full path and file names
  files <- list.files(path, full.names = TRUE, pattern = pattern)
  file_names <- list.files(path, pattern = pattern)

  # Select the desired % or number of files by simple random sampling
  randomize <- sample(seq(files))
  files2analyse <- files[randomize]
  names2analyse <- file_names[randomize]
  if(percent_number <= 1){
    size <- floor(percent_number * length(files))
  }else{
    size <- percent_number
  }
  files2analyse <- files2analyse[(1:size)]
  names2analyse <- names2analyse[(1:size)]

  # Create folder for output
  results_folder <- paste0(path, '/selected')
  dir.create(results_folder, recursive = TRUE)

  # Write csv with file names
  write.table(names2analyse, file = paste0(results_folder, "/selected_files.csv"),
              col.names = "Files", row.names = FALSE)

  # Move files
  for(i in seq(files2analyse)){
    file.rename(from = files2analyse[i],
                to = paste0(results_folder, "/", names2analyse[i]))
  }
}

I normally use this function inside a little script for some extra functionality: 1. first I set up the environment by sourcing the required functions and loading the packages, 2. as I always do when using functions that involve randomness, I set a seed to be able to reproduce my results at a later time, 3. as the function takes a folder path, I've included a little search window with tcltk to select the folder instead of having to write the full path by hand.

# Load packages and functions
require(tcltk)
source("random_files.R")

# Set seed to reproduce results
set.seed(1001)

# Select folder with recordings
path <- tcltk::tk_choose.dir()

# Percentage or number of recordings
percent_number <- 0.2 # using percentage

# Random sampling of files
random_files(path, percent_number, pattern = "wav$|WAV$")

This function was written for a specific purpose, but with some tweaks you can probably adapt it for purposes other than the one I use it for. You can find this and other R scripts at: https://github.com/bmsasilva/Rscripts
https://www.physicsforums.com/threads/bb-radiation.211936/
1. Jan 30, 2008 ### Euclid I am looking in KK Thermal Physics ch. 4 at what I assume to be the standard derivation of the SB law of radiation and I notice something peculiar. On the one hand, they model the photon as a 1D SHO with energy given by $$\epsilon = n \hbar \omega$$ On the other hand, the distribution of the modes (omega) is given by the condition for a standing EM wave in a 3D box ($$\omega =\pi c \sqrt{ n^2 + m^2 + l^2} /L$$). My question is, why does one not assume a 3D SHO model for the photon with $$\epsilon = (n+m+l) \hbar \omega$$? It seems odd to model the photon as a SHO, but only partially. What's the full story? 2. Jan 30, 2008 ### vanesch Staff Emeritus You are confusing two different "quantisations" here. EACH individual FIELD MODE is an independent 1-D SHO. So the whole system is not a 3-D SHO, but a multi-billion-fold dimensional SHO (infinite, in fact). In free space, all plane waves are field modes. But in a box, with boundary conditions, the field modes are quantized (in classical EM). Each of these modes can be described classically by a "harmonic oscillator" with a certain frequency (fixed by the mode) and a certain amplitude/phase (which is free in classical physics). It is THIS harmonic oscillator which will be quantized. So for EACH field mode, we have an oscillator, which, after quantization, will take on the famous E = n(mode) × ħω(mode). We say that n(mode) is the NUMBER OF PHOTONS in this mode. So a photon (of a certain type = associated with a certain classical mode of oscillation of the EM field) is nothing else but a quantization step of the associated SHO. So one quantization is classical, and gives you the modes (and hence the ω(mode)), and the other quantization is quantum-mechanical, and gives you the ladder of the oscillator associated with the mode. You have a quantum-mechanical oscillator PER MODE. Edit: such an infinite set of harmonic oscillators, associated to classical field modes, is called a QUANTUM FIELD. 3. Jan 30, 2008 ### Euclid Very cool. Thanks for the reply. Since there are an infinite number of modes, won't the ground state of that system be infinite? It's interesting that KK ignores the zero-point energy... 4. Jan 30, 2008 ### vanesch Staff Emeritus YES. So what people do is: they subtract this ground level. It's a first taste of renormalization... in quantum field theory, we don't stop subtracting infinities from infinities... 5. Jan 30, 2008 ### Euclid This is very interesting. It seems totally ad hoc. But the renormalization process works? 6. Jan 31, 2008 ### vanesch Staff Emeritus Yes... it is not *totally* ad hoc, but it is not very clean either. Quantum field theory is mathematically not sound, but as you say, it works. That is, the fundamental mathematical constructions can be shown not to exist (!), but the derived calculational procedures work quite amazingly well. That's why people then said that the actual theory was the "set of calculational procedures" and that the (non-existing) objects one was trying to calculate were just an inspiration. And then it turns out that even these calculational procedures are mathematically ill-defined, except for the first approximations. However, these first approximations give amazingly accurate numerical results.
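To see how these per-mode oscillators lead to the Stefan–Boltzmann law (a standard textbook sketch, not taken from the thread itself): thermally averaging each mode's ladder $E_n = n\hbar\omega$ gives the Planck occupation, and summing over the box modes (photon density of states $\omega^2/\pi^2c^3$ per unit volume, both polarizations included) yields the $T^4$ law:

$$\langle \epsilon_\omega \rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_B T}-1}, \qquad u = \int_0^\infty \frac{\omega^2}{\pi^2 c^3}\,\frac{\hbar\omega}{e^{\hbar\omega/k_B T}-1}\, d\omega = \frac{\pi^2 k_B^4}{15\,\hbar^3 c^3}\, T^4.$$

The subtracted zero-point contribution $\tfrac{1}{2}\hbar\omega$ per mode is exactly the divergent ground-state energy discussed above.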
https://proxies123.com/algorithms-maximum-flow-in-a-network/
# Algorithms – Maximum flow in a network Let N = (V, E) be a network in which the capacity of each edge is 12 or 18. Prove or disprove: the value of a maximum flow for N cannot be 56. I'm trying to figure out how to prove this definitively. I think a max flow of 56 is not possible, because a combination 12X + 18Y (where X and Y are nonnegative integers) can never equal 56: gcd(12, 18) = 6 divides every such combination, but 6 does not divide 56. Is there a better way to say this? And am I right in saying that the Ford-Fulkerson algorithm implies the max flow value must be such a combination 12X + 18Y?
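One way to make the divisibility argument rigorous (my own sketch, via the max-flow min-cut theorem rather than Ford-Fulkerson directly): the value of a maximum flow equals the capacity of a minimum cut, and a cut's capacity is a sum of individual edge capacities, so for some nonnegative integers $a$ and $b$

$$\mathrm{val}(f_{\max}) \;=\; \mathrm{cap}(S,T) \;=\; 12a + 18b \;=\; 6\,(2a+3b),$$

which is always a multiple of 6. Since $56 = 6\cdot 9 + 2$ is not a multiple of 6, a maximum flow of value 56 is impossible. (Note the argument constrains the max-flow value itself, not every intermediate flow.)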
https://www.physicsforums.com/threads/hydrogen-electrode-incoherence.780853/
# Hydrogen electrode incoherence? 1. Nov 8, 2014 ### sebastiank007 Today I was studying Atkins physical chemistry basics and I saw a bit of incoherence.

ΔrG° = ΔrH° − TΔrS°
ΔrG° = ΔfG°(products) − ΔfG°(substrates)
ΔrG° = −nFE°

Data:
Species ..... ΔfH° ..... ΔfG° ..... S° (J·K⁻¹·mol⁻¹)
H2(g) ....... 0 ......... 0 ......... 130.684
H+(aq) ...... 0 ......... 0 ......... 0

2H+(aq) + 2e− => H2(g)

Using the second and third equations I get ΔrG° = 0. But using the first equation I get ΔrG° = ΔrH° − TΔrS° = 0 − 298 × 130.684 J/mol ≈ −39 kJ/mol. I thought I can't calculate ΔG for a half reaction, but I must have used it while calculating the Cu2+ + e− => Cu+ potential from the Cu2+ + 2e− => Cu and Cu+ + e− => Cu potentials. Can someone explain this to me? 2. Nov 9, 2014 ### Staff: Mentor Of all the things I don't understand about your post, this is the most striking one. By definition, enthalpy of formation is zero for an element in a standard state. H+ is not an element and not in a standard state, so I don't see why its enthalpy of formation is zero. 3. Nov 9, 2014 ### sebastiank007 (Attached file: Clipboard01.jpg) 4. Nov 11, 2014 ### Staff: Mentor Took me a while to figure this one out, but I think I got the answer. First, it is clear that ΔrG° = 0, since this is what you get from the difference in ΔfG° of the products and reactants and from the standard potential (since E° = 0). So the question is then why does the other equation give ΔrG° ≠ 0? It turns out that there is an entropy of hydration for an electron, and it cancels out the entropy of formation of H2(g), such that ΔrS° = 0. Reference: H. A. Schwarz, Enthalpy and entropy of formation of the hydrated electron, J. Phys. Chem. 95, 6697 (1991). 5. Nov 12, 2014 ### DrDu The explanation of DrClaude sounds convincing. Nevertheless, I would try to avoid working with Delta G's for half reactions at all costs. I don't see why you need it. It is rather trivial to calculate the half potential you want from the half potentials you are given. Namely, taking the three equations 1) Cu2+ + e− => Cu+ 2) Cu2+ + 2e− => Cu 3) Cu+ + e− => Cu you can write symbolically 1 = 2 − 3 and literally for the free energies $\Delta G(1)=\Delta G(2) - \Delta G(3)$. Now use $\Delta G=-nFE^0$ to get $E^0(1)=2E^0(2)-E^0(3)$. PS: Atkins is probably the lousiest book on physical chemistry on the market. Get a better one. 6. Dec 28, 2014 ### sebastiank007 Thanks for your answers. It makes much more sense to me now.
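As a concrete instance of DrDu's combination rule (my own worked example; the potentials are common textbook values, roughly E°(Cu2+/Cu) ≈ +0.34 V and E°(Cu+/Cu) ≈ +0.52 V, and tables vary slightly):

$$E^0(\mathrm{Cu^{2+}/Cu^{+}}) = 2E^0(\mathrm{Cu^{2+}/Cu}) - E^0(\mathrm{Cu^{+}/Cu}) \approx 2(0.34\ \mathrm{V}) - 0.52\ \mathrm{V} = 0.16\ \mathrm{V},$$

close to the tabulated value of about +0.15 V for Cu2+ + e− => Cu+. The factor of 2 comes from the electron numbers: n = 2 for reaction 2 but n = 1 for reactions 1 and 3.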
https://mersenneforum.org/showthread.php?s=b5339e3bc3ed0166df4387202cb0cd39&t=20795&page=16
mersenneforum.org – George's dream build 2017-03-25, 00:58 #166 bgbeuning Dec 2014 11111100₂ Posts I had one of the ebay power connectors fail (see 1st pic) and redid my power (see 2nd pic). Powering 9 boards with one VGA tap from the PSU is pushing it a bit, and I need to reduce that. Attached Thumbnails 2017-07-20, 19:39 #167 Mark Rose "/X\(‘-‘)/X\" Jan 2013 29·101 Posts Quote: Originally Posted by Mark Rose Whelp, I just ordered me a cluster: four i5-6600's with 32 GB DDR4-2133 each. I found Gigabyte GA-H100M-A motherboards for $30, so it wasn't worth getting the fancy ASRock boards at $92.50. I decided to up the CPU to make the cluster more useful for other applications and to improve resale value: nobody wants a low end chip (the i5-6600 is the same speed as a stock clock i5-6600K). Going with 32 GB was only 3 times the cost of 8 GB, plus I won't have to worry about getting rid of 4 GB sticks in the future. They're all going to be powered by a single EVGA 210-GQ-0650-V1 650W 80Plus Gold using ATX splitters. The splitters won't arrive until I'm back from holiday, so pictures will have to wait until after then. I already have all the needed networking stuff. The total cost is about $2250 or about $1725 US. Had I gone with the i5-6400 and 8 GB, it would have been $1625 or $1250 US. So the 32 GB RAM kits I bought for $120 are now retailing for $320. Yikes. 2017-07-20, 20:25 #168 henryzz Just call me Henry "David" Sep 2007 Cambridge (GMT/BST) 5×19×61 Posts Quote: Originally Posted by Mark Rose So the 32 GB RAM kits I bought for $120 are now retailing for $320. Yikes. The question is how much you could sell them for. If prices are going to continue rising, it may be worth investing. 2017-07-20, 20:41 #169 Mysticial Sep 2016 331₁₀ Posts Quote: Originally Posted by henryzz The question is how much you could sell them for. If prices are going to continue rising, it may be worth investing. I did something similar to that back in March when Ryzen launched. Because of the initial Ryzen memory issues, I ended up with an extra set of 8 x 16GB @ 3300 MHz G.Skill TridentZ's which I got on sale for only $800. I decided to keep that 128GB kit in anticipation of a Skylake X build. Now that same set is listing on Newegg for $1440. I wish I had gotten more back in March. Not to sell for a profit, but because there's a real possibility that I might be needing a second 128GB system later this year. But with ram prices through the roof, that's going to be tough. Last fiddled with by Mysticial on 2017-07-20 at 20:42 2017-07-20, 22:04 #170 henryzz Just call me Henry "David" Sep 2007 Cambridge (GMT/BST) 5·19·61 Posts Quote: Originally Posted by Mysticial I did something similar to that back in March when Ryzen launched. [...] Is this temporary due to shortages or going to continue long term? 2017-07-20, 22:14 #171 Mysticial Sep 2016 331 Posts Quote: Originally Posted by henryzz Is this temporary due to shortages or going to continue long term?
I'd say give it at least another year before things start to look better. But don't hold your breath though. What we're seeing now with ram is very similar to what happened to hard drives following the 2011 Thailand floods. It starts with a shortage: • In 2011, the floods knocked out a large portion of the hard drive supply chain. • In late 2016 for ram, increased demand from smart phones and a shift of manufacturing capacity to SSDs caused supply to drop. The shortage leads to extreme price hikes: • Hard drives went up by 2-3x. • Right now, we're nearing that 3x point for low-end memory and 2x for high-end memory. The suppliers realize that the demand is price-insensitive. So they make no effort to ramp up supply and let the revenue flow in. This is usually when price-fixing happens. And suppliers "make up" reasons for not being able to increase supply in order to keep the feds off of them. After a few years, market dynamics take over again and prices drop back to normal. I don't see this happening any time soon for DRAM. So I wouldn't be surprised if prices stay high like this for another 2 - 3 years. It took hard drives a good 5 years to recover. And even then, Moore's Law for GB/$ has stopped for some 3 years now. (Though when I worked at Google a few years back, they were "bragging" about how Google eats up the majority of the world's hard drive supply because of YouTube. That might have had something to do with it. And I sure as hell didn't find it as funny as everyone else there.) EDIT: That hard drive shortage back in 2011 hit me particularly hard as I'm a big customer of them. I had about 30 2TB drives made before the floods which I ended up using for almost 8 years because I couldn't replace them until last year. By then, enough of them had failed and/or degraded that I couldn't wait much longer. And prices were low again. I have a bad feeling about this ram situation since I need a lot of it. Most of my builds for the past 10 years have had maxed out ram configurations. Last fiddled with by Mysticial on 2017-07-20 at 22:41 2017-09-29, 18:36 #172 Mark Rose "/X\(‘-‘)/X\" Jan 2013 29×101 Posts For anyone building a cluster, I encourage you to look at the upcoming i3-8100 processor paired with dual channel DDR4-2400 memory on the cheapest motherboard as the sweet spot. The processor should retail for US$117. 2017-09-29, 19:25 #173 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 10671₈ Posts Quote: Originally Posted by Mark Rose For anyone building a cluster, I encourage you to look at the upcoming i3-8100 processor paired with dual channel DDR4-2400 memory on the cheapest motherboard as the sweet spot. The processor should retail for US$117. What about the i5-8400? 2 more cores and supports DDR4-2666 memory for only $70 more. Interesting, though, that the CPU is so much slower... to the point that GHz × cores is not a lot more than the i3 you mentioned. 2017-09-29, 22:29 #174 Mark Rose "/X\(‘-‘)/X\" Jan 2013 29×101 Posts Quote: Originally Posted by petrw1 What about the i5-8400? 2 more cores and supports DDR4-2666 memory for only $70 more. Interesting, though, that the CPU is so much slower... to the point that GHz × cores is not a lot more than the i3 you mentioned. The sweet spot seems to be roughly: fma3/clock × GHz × cores × 333 MHz = MHz of DDR4 channels required. So Intel with fma3/clock × 3.6 × 4 × 333 = 4800 MHz, or 2400 MHz dual channel. But the 6 core chips would need 7200 MHz, or 3600 MHz dual channel.
The 8 core chips at 3.6 GHz with quad channel 2400 MHz DDR4 are also balanced, but the price is higher than two of the proposed i3 systems. With Ryzen, the chips have half speed FMA3, so the memory requirements are much less: Ryzen 1600X: 0.5 × 4.0 × 6 × 333 = 3996 MHz, or two channel DDR4-2000/2133. Ryzen 1700: 0.5 × 3.7 × 8 × 333 = 4928 MHz, or two channel DDR4-2400/2666. Ryzen 1800X: 0.5 × 4.0 × 8 × 333 = 5328 MHz, or two channel DDR4-2666/2933. But the Ryzen systems will produce much less throughput per dollar, and turn more electricity into heat per unit of work done. Last fiddled with by Mark Rose on 2017-09-29 at 22:31 2017-09-30, 08:19 #175 preda "Mihai Preda" Apr 2015 2·23·29 Posts Quote: Originally Posted by Mark Rose The sweet spot seems to be roughly: fma3/clock × GHz × cores × 333 MHz = MHz of DDR4 channels required. How is the "sweet spot" affected by running a single test on all cores vs. one test per core? 2017-09-30, 21:54 #176 Mark Rose "/X\(‘-‘)/X\" Jan 2013 29·101 Posts Quote: Originally Posted by preda How is the "sweet spot" affected by running a single test on all cores vs. one test per core? For Skylake and newer, a single worker is a few percent better at current exponents with 4 cores. I'm not sure up to how many cores a single worker stays better. For Haswell and earlier, 1 worker per core is better. But in either case, the memory bandwidth requirements are basically the same.
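To make the rule of thumb above easy to play with, here is a small script (my own sketch of Mark Rose's formula; the function name and choice of examples are mine, while the 333 MHz per GHz-core constant is taken straight from his post):

    # Sketch of the memory-bandwidth "sweet spot" rule of thumb from this thread:
    # required per-channel DDR4 speed = fma3_per_clock * GHz * cores * 333 / channels

    def sweet_spot_ddr4(fma3_per_clock, ghz, cores, channels=2):
        """Per-channel DDR4 speed (MHz) needed to keep the FMA units fed."""
        return fma3_per_clock * ghz * cores * 333 / channels

    # i3-8100-like chip: full-rate FMA3, 3.6 GHz, 4 cores -> ~2400 MHz dual channel
    print(sweet_spot_ddr4(1.0, 3.6, 4))   # 2397.6

    # Ryzen 1700: half-rate FMA3, 3.7 GHz, 8 cores -> ~2464 MHz dual channel
    print(sweet_spot_ddr4(0.5, 3.7, 8))   # 2464.2

The numbers reproduce the figures quoted in the posts above (4800 MHz total, i.e. 2400 MHz over two channels, for the quad-core Intel chip, and 4928 MHz total for the Ryzen 1700).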
https://openreview.net/forum?id=Y4mgmw9OgV
A Rigorous Study Of The Deep Taylor Decomposition Leon Sixt, Tim Landgraf 16 Aug 2022, 06:25 (modified: 09 Nov 2022, 15:49) · Accepted by TMLR · Everyone Abstract: Saliency methods attempt to explain deep neural networks by highlighting the most salient features of a sample. Some widely used methods are based on a theoretical framework called Deep Taylor Decomposition (DTD), which formalizes the recursive application of the Taylor Theorem to the network's layers. However, recent work has found these methods to be independent of the network's deeper layers and to respond only to lower-level image structure. Here, we investigate DTD theory to better understand this perplexing behavior and find that the Deep Taylor Decomposition is equivalent to the basic gradient $\times$ input method when the Taylor root points (an important parameter of the algorithm chosen by the user) are locally constant. If the root points are locally input-dependent, then one can justify any explanation. In this case, the theory is under-constrained. In an empirical evaluation, we find that DTD roots do not lie in the same linear regions as the input -- contrary to a fundamental assumption of the Taylor Theorem. The theoretical foundations of DTD were cited as a source of reliability for the explanations. However, our findings urge caution in making such claims. License: Creative Commons Attribution 4.0 International (CC BY 4.0) Submission Length: Regular submission (no more than 12 pages of main content) Changes Since Last Submission: * Add link to code repository Code: https://github.com/berleon/A-Rigorous-Study-Of-The-Deep-Taylor-Decomposition Assigned Action Editor: ~Shiyu_Chang2 Submission Number: 365
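For readers unfamiliar with the terms, the connection claimed in the abstract can be sketched as follows (my paraphrase of the standard first-order Taylor relevance rule, not text from the paper): DTD attributes relevance by expanding the network output $f$ around a root point $\tilde{x}$,

$$R_i(x) = \frac{\partial f}{\partial x_i}\Big|_{\tilde{x}}\,(x_i - \tilde{x}_i) + \text{higher-order terms},$$

so if the root is the locally constant choice $\tilde{x}=0$ lying inside the input's linear region (where the higher-order terms of a ReLU network vanish), the rule reduces to $R_i(x) = x_i\,\partial f/\partial x_i$, i.e. gradient $\times$ input.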
https://www.msri.org/workshops/950/schedules/29601
# Mathematical Sciences Research Institute # The quartic integrability and long time existence of steep water waves in 2d ## [Moved Online] Recent Developments in Fluid Dynamics April 12, 2021 - April 30, 2021 April 19, 2021 (08:00 AM PDT - 08:50 AM PDT) Speaker(s): Sijue Wu (University of Michigan) Location: MSRI: Online/Virtual Tags/Keywords • water waves Video #### The Quartic Integrability and Long Time Existence of Steep Water Waves in 2D Abstract It is known since the work of Dyachenko & Zakharov in 1994 that for the weakly nonlinear 2d infinite depth water waves, there are no 3-wave interactions and all of the 4-wave interaction coefficients vanish on the non-trivial resonant manifold. In this talk I will present a recent result that proves this partial integrability from a different angle. We construct a sequence of energy functionals $\mathfrak E_j(t)$, directly in the physical space, which are explicit in the Riemann mapping variable and involve material derivatives of order $j$ of the solutions for the 2d water wave equation, so that $\frac d{dt} \mathfrak E_j(t)$ is quintic or higher order. We show that if some scaling invariant norm and a norm involving one spatial derivative above the scaling of the initial data are of size no more than $\varepsilon$, then the lifespan of the solution for the 2d water wave equation is at least of order $O(\varepsilon^{-3})$, and the solution remains as regular as the initial data during this time. If only the scaling invariant norm of the data is of size $\varepsilon$, then the lifespan of the solution is at least of order $O(\varepsilon^{-5/2})$. Our long time existence results do not impose size restrictions on the slope of the initial interface or the magnitude of the initial velocity; they allow the interface to have arbitrarily large steepness and the initial velocity to have arbitrarily large magnitude.
https://www.askiitians.com/forums/Physical-Chemistry/19/26480/atomic-structure.htm
How to calculate the magnetic quantum number? Explain clearly with an example. 9 years ago ## Answers : (3) 419 Points Dear Student The magnetic quantum number has values between -l and +l. When l = 1, for example, m can have three values: -1, 0, and +1. Because you know that the subshell designation for l = 1 is "p", you now know that the p orbital has three components: px, py, and pz. Notice how the subscripts are related to a three-dimensional coordinate system, x, y, and z. The chart below shows a summary of the quantum numbers:

Principal Quantum Number (n) | Azimuthal Quantum Number (l) | Subshell Designation | Magnetic Quantum Number (m) | Number of orbitals in subshell
1 | 0 | 1s | 0 | 1
2 | 0, 1 | 2s, 2p | 0; -1, 0, +1 | 1, 3
3 | 0, 1, 2 | 3s, 3p, 3d | 0; -1, 0, +1; -2, -1, 0, +1, +2 | 1, 3, 5
4 | 0, 1, 2, 3 | 4s, 4p, 4d, 4f | 0; -1, 0, +1; -2, -1, 0, +1, +2; -3, -2, -1, 0, +1, +2, +3 | 1, 3, 5, 7

All the best. AKASH GOYAL 9 years ago SAGAR SINGH - IIT DELHI 879 Points Dear student, To describe the magnetic quantum number m you begin with an atomic electron's angular momentum, L, which is related to its quantum number $l$ by the following equation: $\mathbf{L} = \hbar\sqrt{l(l+1)}$ where $\hbar = h/2\pi$ is the reduced Planck constant. All the best. Sagar Singh B.Tech, IIT Delhi 9 years ago Aakash Dutta 29 Points this is very simple 9 years ago
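What both answers leave implicit is what m physically quantizes (my addition, standard quantum mechanics rather than part of either answer): m gives the projection of the orbital angular momentum onto the chosen z axis,

$$L_z = m\hbar, \qquad m \in \{-l,\, -l+1,\, \ldots,\, +l\},$$

so for a p electron (l = 1) the allowed projections are $-\hbar, 0, +\hbar$, matching the three orbitals px, py, pz in the table above.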
https://www.questionsolutions.com/fb-2kn-resultant-force-acts-along/
If FB = 2kN and the resultant force acts along the positive u axis If $F_B = 2kN$ and the resultant force acts along the positive u axis, determine the magnitude of the resultant force and the angle $\theta$. 2 thoughts on "If FB = 2kN and the resultant force acts along" • questionsolutions Post author I am sure there are other ways to solve these types of questions, though I haven't utilized them. Most textbooks usually show two ways of solving these questions: either the tip-to-tip method or the parallelogram way. Please take a look: https://goo.gl/ssjJMP Many thanks and best of luck with your studies. 🙂
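The original figure is not reproduced here, so the specific angles are unknown, but the general component setup for this kind of problem looks like the following (my own sketch; $\alpha$ and $\beta$ are hypothetical angles that $F_A$ and $F_B$ make with the u axis on either side of it). For the resultant to lie along u, the components perpendicular to u must cancel, and the resultant is the sum of the components along u:

$$F_A \sin\alpha = F_B \sin\beta, \qquad F_R = F_A \cos\alpha + F_B \cos\beta.$$

Substituting the figure's angles gives two equations for the two unknowns; equivalently, the law of sines applied to the force triangle of the parallelogram method yields the same relations.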
http://gmatclub.com/forum/what-is-the-area-of-rectangular-region-r-105414.html?sort_by_oldest=true
What is the area of rectangular region R ? Senior Manager Joined: 28 Aug 2010 Posts: 265 What is the area of rectangular region R ? [#permalink] 26 Nov 2010, 12:55 Difficulty: 35% (medium). Question Stats: 65% (01:50) correct, 35% (00:48) wrong, based on 360 sessions. What is the area of rectangular region R ? (1) Each diagonal of R has length 5. (2) The perimeter of R is 14. [Reveal] Spoiler: OA Math Expert Joined: 02 Sep 2009 Posts: 28352 Re: NEED SOME Help on this DS question [#permalink] 26 Nov 2010, 13:07 ajit257 wrote: What is the area of rectangular region R ? (1) Each diagonal of R has length 5 (2) The perimeter of R is 14 Please could someone explain this question ...thanks. Let the sides of the rectangle be $$x$$ and $$y$$. Question: $$area=xy=?$$ (1) Each diagonal of R has length 5 --> as each diagonal of a rectangle is the hypotenuse of the right triangle formed by the sides, $$x^2+y^2=5^2$$, but we cannot get the value of $$xy$$ from this info. Not sufficient. (2) The perimeter of R is 14 --> $$P=2(x+y)=14$$ --> $$x+y=7$$. Again we cannot get the value of $$xy$$ from this info. Not sufficient. (1)+(2) We have $$x^2+y^2=25$$ and $$x+y=7$$. Square the second expression: $$x^2+2xy+y^2=49$$; as $$x^2+y^2=25$$, then $$25+2xy=49$$ --> $$xy=12$$. Sufficient. Intern Joined: 18 Mar 2010 Posts: 39 Re: NEED SOME Help on this DS question [#permalink] 27 Nov 2010, 09:03 I thought "Statement 1" alone is sufficient to solve this problem. 3,4,5 is the only Pythagorean triplet in which 5 is the hypotenuse of a right-angled triangle. Why can't the answer be A? Math Expert Joined: 02 Sep 2009 Posts: 28352 Re: NEED SOME Help on this DS question [#permalink] 27 Nov 2010, 09:22 rockroars wrote: I thought "Statement 1" alone is sufficient to solve this problem. 3,4,5 is the only Pythagorean triplet in which 5 is the hypotenuse of a right-angled triangle. Why can't the answer be A? We are not told that the lengths of the sides are integers. So knowing that the hypotenuse equals 5 DOES NOT mean that the sides of the right triangle must be in the ratio of the Pythagorean triple 3:4:5. Or in other words: $$x^2+y^2=5^2$$ DOES NOT mean that $$x=3$$ and $$y=4$$. Certainly this is one of the possibilities but definitely not the only one.
In fact $$x^2+y^2=5^2$$ has infinitely many solutions for $$x$$ and $$y$$ and only one of them is $$x=3$$ and $$y=4$$. For example: $$x=1$$ and $$y=\sqrt{24}$$ or $$x=2$$ and $$y=\sqrt{21}$$... So knowing that the diagonal of a rectangle (hypotenuse) equals one of the Pythagorean triple hypotenuse values is not sufficient to calculate the sides of this rectangle. Hope it's clear. Intern Joined: 18 Mar 2010 Posts: 39 Re: NEED SOME Help on this DS question [#permalink] 27 Nov 2010, 13:13 I feel so dumb now, I never thought about it. Thanks for the clarification! Director Status: No dream is too large, no dreamer is too small Joined: 14 Jul 2010 Posts: 660 QR: DS 42 Geometry [#permalink] 20 Feb 2011, 12:33 QR: DS 42 What is the area of rectangular region R? (1) Each diagonal of R has length 5 (2) The perimeter of R is 14 I solved as follows: L = x, W = y, D = z, so x^2 + y^2 = z^2. 1. z^2 = 25. N.S. 2. 2(x+y) = 14 => x+y = 7 => (x+y)^2 = 49 (squaring both sides). N.S. For the area, 2xy has to be calculated; so, x^2+y^2 = z^2 => (x+y)^2 - 2xy = z^2 => 49 - 2xy = 25 => 2xy = 24. The OG solution is very long. Last edited by Baten80 on 20 Feb 2011, 12:47, edited 1 time in total. Current Student Joined: 11 Dec 2011 Posts: 55 Location: Malaysia GMAT 1: 730 Q49 V40 Re: What is the area of rectangular region R? [#permalink] 29 Feb 2012, 01:30 From what I understand of rectangular or quadrilateral diagonals, if they are the same length, then all sides should be of equal length. Also, area of a rhombus = 1/2 * diagonal * diagonal? Correct me if I'm wrong here, just need clarification. Math Expert Joined: 02 Sep 2009 Posts: 28352 Re: What is the area of rectangular region R? [#permalink] 29 Feb 2012, 01:40 calvin1984 wrote: From what I understand of rectangular or quadrilateral diagonals, if they are the same length, then all sides should be of equal length. Also, area of a rhombus = 1/2 * diagonal * diagonal? Correct me if I'm wrong here, just need clarification. All rectangles have diagonals of equal length, so (1) doesn't necessarily mean that the given rectangle is a rhombus. For more on this subject check the Polygons chapter of the Math Book: math-polygons-87336.html Hope it helps. Current Student Joined: 11 Dec 2011 Posts: 55 Location: Malaysia Re: What is the area of rectangular region R ? (1) Each diagonal [#permalink] 29 Feb 2012, 03:32 Actually I just realized it, sounded so stupid. Thanks!
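As a quick sanity check on the combined-statements solution above (my own verification, not part of any post): with $$x+y=7$$ and $$xy=12$$, the sides are the roots of

$$t^2-7t+12=0 \ \Rightarrow\ t=3 \ \text{or}\ t=4,$$

and indeed $$3^2+4^2=25$$ agrees with statement (1), so the area is $$3\times4=12$$.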
(Current Student), 20 Apr 2012:
Almost fell for the 3-4-5 trap. Good explanation.

fameatop (Director), 21 Sep 2012:
Area of a rectangular region = product of the two diagonals / 2. We are given that both diagonals are equal to 5, so the area would be 25/2 = 12.5. Thus A is sufficient. Let me know why I am wrong.

Bunuel (Math Expert), 21 Sep 2012:
That formula is not correct for a rectangle. It's true for squares: $$area_{square}=\frac{diagonal^2}{2}$$. Hope it helps.

RonBagel (Intern), 08 Dec 2012:
I got this one wrong like an earlier poster. So hypothetically, if the question stem stated that the sides were integers, would A be sufficient alone? I'm nervous about DS; I've already got six wrong in the Official Guide and I'm only on question 50.
Bunuel (Math Expert), 09 Dec 2012:
Yes. If we were told that the lengths of the sides of the rectangle are integers, then the first statement would be sufficient: $$x^2+y^2=25$$ gives $$x=3$$ and $$y=4$$ (or vice versa), so $$xy=12$$.

aquax (Intern), 11 Apr 2014:
Hi Bunuel, can't we apply the 1 : $$\sqrt{3}$$ : 2 ratio to statement one? Since it's a rectangle, the corner angle cut by the diagonal is 90 degrees, leaving the other two angles as 30 and 60. So the sides must be $$5/2$$ and $$(5/2)\sqrt{3}$$. What's wrong with this explanation? Thanks.

Bunuel (Math Expert), 11 Apr 2014:
Let me ask you a question: why must the remaining angles be 30 and 60 degrees? Why can't they be 25 and 65? Or 20 and 70? Basically, any pair totaling 90 degrees is possible.
(Intern), 11 Apr 2014:
You are probably getting confused because of examples like "a rectangle is inscribed in a circle of radius r ...". The diagonal divides the rectangle into two right triangles, so the two acute angles need to sum to 90 degrees, but the split can be 30:60, 45:45, and so on.

jbdoyl3 (Intern), 22 Jul 2014:
Hi Bunuel, to keep it straight: just because it says it's a rectangle does not mean we have to have two 30-60-90 triangles, but if we put together two 30-60-90 triangles we do get a rectangle. Correct? I picked A because I thought that a rectangle had to consist of two such triangles. From the discussion above, it looks like this is not a mandatory condition of a rectangle.

Bunuel (Math Expert), 22 Jul 2014:
Correct. But you can get a rectangle by putting together any two congruent right triangles, not necessarily 30-60-90 triangles.
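The algebra in this thread is easy to sanity-check numerically. The short Python script below is an editorial illustration rather than part of the original discussion: it confirms that statements (1) and (2) together force $$xy=12$$, while statement (1) alone allows rectangles with many different areas.

```python
import math

# (1) + (2): x^2 + y^2 = 25 and x + y = 7.
# Squaring the sum: x^2 + 2xy + y^2 = 49, so 2xy = 49 - 25 = 24.
print((7**2 - 25) / 2)  # 12.0 -> the area is pinned down: sufficient

# (1) alone: every x in (0, 5) gives a rectangle with diagonal 5,
# and the area x*y varies, so statement (1) is not sufficient.
for x in (1.0, 2.0, 3.0, 4.0):
    y = math.sqrt(25 - x**2)
    print(x, round(y, 3), round(x * y, 3))
```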
https://greenemath.com/College_Algebra/80/Distance-Midpoint-Formula-Complex-PlanePracticeTest.html
About Distance and Midpoint Formulas on the Complex Plane:

We can find the distance between any two complex numbers by modifying the familiar distance formula: the difference in real parts takes the place of the difference in x-values, and the difference in imaginary parts takes the place of the difference in y-values. Additionally, we need to know how to find the midpoint of a line segment whose endpoints are two complex numbers. To find the midpoint, we take the average of the real parts and the average of the imaginary parts of the two given endpoints.

Test Objectives
• Demonstrate an understanding of the complex plane
• Demonstrate an understanding of the distance formula
• Demonstrate the ability to find the distance between two complex numbers
• Demonstrate the ability to find the midpoint of a line segment on the complex plane

Distance and Midpoint Formulas Complex Plane Practice Test:

#1: Instructions: find the distance between Z and W. a)
#2: Instructions: find the distance between Z and W. a)
#3: Instructions: find the distance between Z and W. a)
#4: Instructions: find the midpoint of ZW. a)
#5: Instructions: find the midpoint of ZW. a)

Written Solutions:
#1: $$a)\hspace{.2em}5$$
#2: $$a)\hspace{.2em}2 \sqrt{17}$$
#3: $$a)\hspace{.2em}13$$
#4: $$a)\hspace{.2em}\frac{5}{2}+ \frac{3}{2}i$$
#5: $$a)\hspace{.2em}-1 + i$$
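Both formulas are easy to exercise with Python's built-in complex type. The endpoints below are hypothetical, chosen only for illustration (the original problem data did not survive extraction):

```python
# Distance between complex numbers z and w is |z - w|;
# the midpoint of segment ZW is (z + w) / 2.
z = 3 + 2j   # hypothetical endpoint Z
w = 0 - 2j   # hypothetical endpoint W

print(abs(z - w))    # |3 + 4i| = 5.0
print((z + w) / 2)   # (1.5+0j), i.e. 3/2 + 0i
```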
https://socratic.org/questions/how-do-you-solve-7s-29-15
# How do you solve 7s-29=-15?

Jan 20, 2017

$s = 2$

#### Explanation:

Step 1: Collect the terms containing s on the left of the = sign and move everything else to the other side.
Step 2: Leave only s itself on the left of the = sign.

Step 1: Add $29$ to both sides:

$7s - 29 + 29 = -15 + 29$

$7s + 0 = 14$

Step 2: Multiply both sides by $\frac{1}{7}$:

$\frac{7}{7} s = \frac{14}{7}$

But $\frac{7}{7} = 1$ and $\frac{14}{7} = 2$, giving:

$s = 2$
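As a quick cross-check, the same equation can be handed to sympy; this is an illustrative sketch, not part of the original answer:

```python
from sympy import Eq, solve, symbols

s = symbols('s')
# Add 29 to both sides, then divide by 7 -- solve() does both steps.
print(solve(Eq(7*s - 29, -15), s))  # [2]
```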
http://mathoverflow.net/questions/11939/remove-a-vertex-map-for-right-angled-artin-groups
# "Remove a vertex" map for right-angled Artin groups

Given a finite graph $\Gamma$, one has the right-angled Artin group $A(\Gamma)$. Its generators $s_1, \dots, s_n$ correspond bijectively to the vertices of $\Gamma$ and the relators are $s_is_j=s_js_i$ whenever the corresponding vertices are joined by an edge. Let $A_i(\Gamma)$ be the group obtained from $A(\Gamma)$ by setting $s_i=1$; this corresponds to removing the vertex $i$ from $\Gamma$. I know very little of these matters, but it seems plausible that any nontrivial element of $A(\Gamma)$ projects to a nontrivial element of $A_i(\Gamma)$ for some $i$; is this correct?

Answer (HJRW): No. Let $\Gamma$ be the graph with two vertices and no edges, so that $A(\Gamma)$ is the non-abelian free group of rank two, and let $g$ be the commutator of the two generators $s_1$ and $s_2$. Then $g$ is certainly non-trivial, but $g$ dies whenever you kill $s_1$ or $s_2$.

UPDATE: For an example with a connected graph, let's take $\Gamma$ to be the straight-line graph with four vertices $a,b,c,d$ (so $[a,b]=[b,c]=[c,d]=1$). Now consider $g=[[c,a],[b,d]]$. Clearly this dies when you kill any generator. On the other hand, $g=cac^{-1}a^{-1}bdb^{-1}d^{-1}aca^{-1}c^{-1}dbd^{-1}b^{-1}$ and a well-known solution to the word problem in right-angled Artin groups tells you that $g$ is non-trivial.

Comments:
Thanks! I missed this obvious example as I was looking at right-angled Artin groups corresponding to connected graphs. Are there such examples for connected graphs (other than the graph that has only one vertex)? – Igor Belegradek Jan 16 '10 at 0:44
I've added one. – HJRW Jan 16 '10 at 2:02
Thanks a lot! This neatly kills the application that I had in mind. Too bad. – Igor Belegradek Jan 16 '10 at 2:18
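The free-group counterexample can be checked mechanically. The sketch below is an editorial illustration: it represents words as lists of (generator, exponent) letters with stack-based free reduction, and shows that the commutator is non-trivial in the free group but dies under either kill-a-generator quotient. It covers only the free-group case; the connected-graph example requires the normal-form solution to the word problem mentioned in the answer.

```python
def reduce_word(word):
    """Freely reduce a word given as a list of (generator, exponent) letters."""
    out = []
    for gen, exp in word:
        if out and out[-1][0] == gen:
            merged = out[-1][1] + exp
            out.pop()
            if merged:
                out.append((gen, merged))
        elif exp:
            out.append((gen, exp))
    return out

def kill(word, gen):
    """Quotient map sending the generator `gen` to the identity."""
    return reduce_word([(g, e) for g, e in word if g != gen])

# g = [s1, s2] = s1 s2 s1^-1 s2^-1 in the free group on s1, s2
g = [("s1", 1), ("s2", 1), ("s1", -1), ("s2", -1)]
print(reduce_word(g))  # non-empty list: g is non-trivial
print(kill(g, "s1"))   # []: g becomes trivial when s1 is killed
print(kill(g, "s2"))   # []: g becomes trivial when s2 is killed
```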
https://tex.stackexchange.com/questions/234535/using-cancelto-in-the-cancel-package-but-the-arrow-is-wavy-not-straight
# Using cancelto in the cancel package, but the arrow is wavy, not straight

I used the cancel package today for the first time, but the arrow isn't straight. Is this normal?

\documentclass{article}
\usepackage[makeroom]{cancel}
\usepackage{mathtools}
\begin{document}
\begin{align*}
L\{2\cos(3t)\}(s) & = 2\int_0^{\infty}\cos(3t)e^{-st}dt\\
& = \cancelto{0}{\frac{2e^{-st}\sin(3t)}{3}\biggl|_0^{\infty}} + \frac{s}{3}\int_0^{\infty}\sin(3t)e^{-st}dt
\end{align*}
\end{document}

I would like to use this package, but it looks sloppy unless there is a fix for this problem.

• How do you mean the arrow is not straight? – percusse Mar 22 '15 at 20:50
• @percusse Look at the arrow. It definitely isn't a nice straight line no matter the zoom, so it isn't an artifact of the PDF viewer, which would disappear when you adjust the zoom. – dustin Mar 22 '15 at 20:51
• Ah, you mean rasterized or pixellated. Got it. – percusse Mar 22 '15 at 20:55
• @DavidCarlisle it should be fine now – dustin Mar 22 '15 at 22:32

Answer: cancel uses picture environment commands, so sloping lines are made by positioning many font glyphs with small line segments; this inevitably gives a notch sometimes, as rounding error moves the line from one pixel to the next. Normally in such cases, you can use \usepackage{pict2e} to re-implement the picture mode commands using PDF drawing primitives, to get a smoother appearance and fewer restrictions on available slopes. That didn't work here, but it turns out that's a documented feature; the end of cancel.sty says:

% pict2e removes bounding box from line and vector, so use original
% versions by declaring \OriginalPictureCmds; make it a no-op if undefined
\@ifundefined{OriginalPictureCmds}{\let\OriginalPictureCmds\relax}{}
% Sometime maybe find a better solution that uses all slopes with pict2e

So currently that's just how it is.
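For reference, the usual smoothing attempt looks like the sketch below; as explained above, it leaves cancel's arrows unchanged because cancel.sty restores the original picture commands:

```latex
\documentclass{article}
\usepackage{pict2e}  % normally replaces picture-mode lines with PDF strokes
\usepackage[makeroom]{cancel}
\usepackage{mathtools}
\begin{document}
$\cancelto{0}{x^2}$ % still drawn from small line segments
\end{document}
```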
https://joshmermelstein.com/vim-macro-and-g-normal/
# Vim macro and g/pattern/normal

If you find yourself doing the same thing over and over in Vim, there is probably a better way. Here is an example of how to use macros and regex matching to avoid repetitive but mindless work.

Let's say you have a config file of the form:

foo = bar
baz = "nort"
qux = qat

where all of the strings on the right-hand side of each equals sign are supposed to have quotes around them, but many do not. We can fix this pretty easily without needing to leave Vim. First we go to an offending line and record a macro:

qqf=wi"<esc>A"<esc>q

Let's break that down:
• qq - begin recording a macro in register q
• f= - go to the first equals sign on the line
• w - go forward one word
• i"<esc> - insert a quotation mark here
• A"<esc> - insert a quotation mark at the end of the line

We could manually execute this macro with @q on each line that needs it. But there is an easier way:

:g/= [^"]/normal @q

Let's break this down too:
• :g - begin executing an Ex command (by default on every line of the file)
• = [^"] - a regex meaning "an equals sign followed by a space, followed by something other than a quotation mark"
• normal @q - execute the normal mode Vim command stored in macro q

This runs the macro we recorded above on every line that needs it. Voila!

Written on January 22, 2016
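If you would rather not record a macro at all, the same fix can be written as a substitution driven by :g. This alternative is a sketch that assumes the unquoted value runs to the end of the line:

:g/= [^"]/s/= \(.*\)$/= "\1"/

Here the :g pattern selects only the lines missing quotes, and the substitution captures everything after "= " and wraps it in quotation marks.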
https://email.esm.psu.edu/pipermail/macosx-tex/2005-April/014613.html
# [OS X TeX] hyperref conflict with apacite

Adam R. Maxwell amaxwell at mac.com
Sat Apr 9 16:13:53 EDT 2005

On Apr 9, 2005, at 12:15, Alan Litchfield wrote:

> Hi all,
>
> I have just started using hyperref (discovered it in the macros in
> TeXShop ;) and found a conflict with apacite. What happens is that
> when running through the second pass, when LaTeX is trying to make
> the links and format citations, hyperref gets in the way and says
> it won't do it.
>
> If I change the citation to \nocite, then hyperref runs through its
> processes fine. The problem only occurs when I use \cite,
> \citeyear, etc.
>
> I realise that natbib will work with hyperref, but I have been
> using apacite for some time on the same set of documents,...
> Anyway, my university says we need to use APA.
>
> Anyone else come up with this? Is there a suitable work around/fix?

Can you use natbib and apalike? I'm not sure what APA requirements entail, but I use that combination with hyperref successfully.

\usepackage{natbib}
\bibliographystyle{apalike}
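For reference, a minimal preamble along these lines might look like the sketch below; refs.bib and the citation key are placeholders, and hyperref is loaded last, as is generally recommended:

```latex
\documentclass{article}
\usepackage{natbib}
\usepackage{hyperref} % load last so it can patch the citation commands
\begin{document}
As shown by \citet{smith2004}, ... % hypothetical key in refs.bib
\bibliographystyle{apalike}
\bibliography{refs}
\end{document}
```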
http://rspa.royalsocietypublishing.org/content/464/2097/2411
# Acoustic cloaking theory

Andrew N Norris

## Abstract

An acoustic cloak is a compact region enclosing an object, such that sound incident from all directions passes through and around the cloak as though the object were not present. A theory of acoustic cloaking is developed using the transformation or change-of-variables method for mapping the cloaked region to a point with vanishing scattering strength. We show that the acoustical parameters in the cloak must be anisotropic: either the mass density or the mechanical stiffness or both. If the stiffness is isotropic, corresponding to a fluid with a single bulk modulus, then the inertial density must be infinite at the inner surface of the cloak. This requires an infinitely massive cloak. We show that perfect cloaking can be achieved with finite mass through the use of anisotropic stiffness. The generic class of anisotropic material required is known as a pentamode material (PM). If the transformation deformation gradient is symmetric then the PM parameters are explicit, otherwise its properties depend on a stress-like tensor that satisfies a static equilibrium equation. For a given transformation mapping, the material composition of the cloak is not uniquely defined, but the phase speed and wave velocity of the pseudo-acoustic waves in the cloak are unique. Examples are given from two and three dimensions.

## 1. Introduction

The observation that the electromagnetic equations remain invariant under spatial transformations is not new. Ward & Pendry (1996) used it for numerical purposes, but the result was known to Post (1962), who discussed it in his book, and it was probably known far earlier. The recent interest in passive cloaking and invisibility is due to the fundamental result of Greenleaf et al. (2003a,b) that singular transformations could lead to cloaking for conductivity. Not long after this important discovery, Leonhardt (2006) and Pendry et al. (2006) made the key observation that singular transformations could be used to achieve cloaking of electromagnetic waves. These results and others have generated significant interest in the possibility of passive acoustic cloaking.

Acoustic cloaking is considered here in the context of the so-called transformation or change-of-variables method. The transformation deforms a region in such a way that the mapping is one-to-one everywhere except at a single point, which is mapped into the cloak inner boundary (figure 1). The acoustic problem is for the infinitesimal pressure p(x, t), which satisfies the scalar wave equation (1.1) in the surrounding fluid. The basic idea is to alter the cloak's acoustic properties (density and modulus) so that the modified wave equation in ω mimics the exterior equation (1.1) in the entire region Ω. This is achieved if the spatial mapping of the simply connected region Ω to the multiply connected cloak ω has the property that the modified equation in ω, when expressed in Ω coordinates, has exactly the form of (1.1) at every point in Ω.

Figure 1. The undeformed simply connected region Ω is transformed by the mapping Χ into the multiply connected cloak ω. Essentially, a single point O is transformed into a hole (the invisible region) surrounded by the cloak ω. The outer boundary ∂ω+ is coincident with ∂Ω+(=∂Ω) and the inner boundary ∂ω is the image of the point O. Apart from O and ∂ω the mapping is everywhere one-to-one and differentiable.
The objective here is to answer the question: what type of material is required to realize these unusual properties that make an acoustic cloak? While cloaking cannot occur if the bulk modulus and density are simultaneously scalar quantities (see below), it is possible to obtain acoustical cloaks by assuming that the mass density is anisotropic (Chen & Chan 2007; Cummer & Schurig 2007; Cummer et al. 2008). A tensorial density is not ruled out on fundamental grounds (Milton et al. 2006) and in fact there is a strong physical basis for anisotropic inertia. For instance, Schoenberg & Sen (1983) showed that the inertia tensor in a medium comprising alternating fluid constituents is transversely isotropic (TI) with element $\langle\rho\rangle$ in the direction normal to the layering, and $\langle\rho^{-1}\rangle^{-1}$ in the transverse direction, where $\langle\cdot\rangle$ is the spatial average. Anisotropic effective density can arise from other microstructures, as discussed by Mei et al. (2007) and Torrent & Sánchez-Dehesa (2008). The general context for anisotropic inertia is the Willis equations of elastodynamics (Milton & Willis 2007), which Milton et al. (2006) showed are the natural counterparts of the electromagnetic (EM) equations that remain invariant under spatial transformation.

The acoustic cloaking has been demonstrated, theoretically at least, in both two and three dimensions: a spherically symmetric cloak was discussed by Chen & Chan (2007) and Cummer et al. (2008), while Cummer & Schurig (2007) described a two-dimensional cylindrically symmetric acoustic cloak. These papers use a linear transformation based on prior EM results in two dimensions (Schurig et al. 2006). The cloaks based on anisotropic density in combination with the inviscid acoustic pressure constitutive relation (bulk modulus) will be called inertial cloaks (ICs). The fundamental mathematical identity behind the ICs is the observation of Greenleaf et al. (2007) that the scalar wave equation is mapped into the following form in the deformed cloak region:

$$|g|^{-1/2}\,\partial_i\big(|g|^{1/2}\,g^{ij}\,\partial_j p\big)=\ddot{p}.\qquad(1.2)$$

Here $g=(g_{ij})$ is the Riemannian metric induced by the transformation, with inverse $(g^{ij})=(g_{ij})^{-1}$ and determinant $|g|=\det(g_{ij})$. The reader familiar with differential geometry will recognize the first term in equation (1.2) as the Laplacian in curvilinear coordinates. Comparison of the transformed wave equation (1.2) with the IC wave equation provides explicit expressions for the IC density tensor and the bulk modulus (Greenleaf et al. 2008). We will derive an identity equivalent to (1.2) in §2 using an alternative formulation adapted from the theory of finite elasticity.

A close examination of the anisotropic density of the ICs shows that its volumetric integral, the total mass, must be infinite for perfect cloaking. This raises grave questions about the usefulness of the ICs. The rest of this paper provides a solution to this quandary. The main result is that the IC is a special case of a more general class of acoustic cloaks, defined by anisotropic inertia combined with anisotropic stiffness. The latter is obtained through the use of pentamode materials (PMs; Milton & Cherkaev 1995). In the same way that an ideal acoustic fluid can be defined as the limit of an isotropic elastic solid as the shear modulus tends to zero, there is a class of limiting anisotropic solids with five (hence penta) easy modes of deformation analogous to shear, and one non-trivial mode of stress and strain. The general cloak comprising PM and IC is called the PM-IC model. The additional degrees of freedom provided by the PM-IC allow us to avoid the infinite mass dilemma of the IC.
We begin in §2 with a new derivation of the IC model, and a discussion of the infinite mass dilemma. The PMs are introduced in §3, where it is shown that they display simple wave properties, such as an ellipsoidal slowness surface. The intimate connection between the PM and acoustic cloaking follows from theorem 4.2 in §4. The properties of the generalized PM-IC model for cloaking are developed in §4 through the use of an example cloak that can be either pure IC or pure PM as a parameter is varied. Further examples are given in §5, with a concluding summary of the generalized acoustic cloaking theory in §6.

## 2. The IC

The transformation from Ω to ω is described by the point-wise deformation from X∈Ω to x=Χ(X)∈ω. In the language of finite elasticity, X describes a particle position in the Lagrangian or undeformed configuration and x is the particle location in the Eulerian or deformed physical state. The transformation or mapping defined by Χ is one-to-one and invertible except at the single point X=O (figure 1). We use ∇, $\nabla_X$ and div, Div to indicate the gradient and divergence operators in x and X, respectively. The component form of div A is $\partial A_i/\partial x_i$ or $\partial A_{ij}/\partial x_i$ when A is a vector or a second-order tensor-like quantity, respectively. The deformation gradient is defined as $F=\nabla_X x$ with inverse $F^{-1}=\nabla X$, or in component form $F_{iI}=\partial x_i/\partial X_I$ and $(F^{-1})_{Ii}=\partial X_I/\partial x_i$. The Jacobian of the deformation is $J=\det F$ or, in terms of volume elements in the two configurations, $J=\mathrm{d}v/\mathrm{d}V$. The polar decomposition implies $F=VR$, where R is proper orthogonal ($RR^{\mathrm t}=R^{\mathrm t}R=I$, $\det R=1$) and the left-stretch tensor $V\in\mathrm{Sym}^+$ is the positive definite solution of $V^2=FF^{\mathrm t}$. The analysis is as far as possible independent of the spatial dimension d, although applications are restricted to d=2 or 3. The principal result for the IC is given in lemma 2.1:

$$\nabla_X^2\,p=J\,\mathrm{div}\big(J^{-1}V^2\,\nabla p\big).\qquad(2.1)$$

The r.h.s. can be expressed as

$$J\,\mathrm{div}\big(J^{-1}F\,F^{\mathrm t}\nabla p\big).\qquad(2.2)$$

Using the chain rule in the form $\nabla_X=F^{\mathrm t}\nabla$, or equivalently $\nabla=F^{-\mathrm t}\nabla_X$, implies that $F^{\mathrm t}\nabla p=\nabla_X p$, so that (2.2) is $J\,\mathrm{div}(J^{-1}F\,\nabla_X p)$. The proof follows from the Piola identity (see problems 2.2.1 and 2.2.3 in Ogden 1997):

$$\mathrm{div}\big(J^{-1}F\big)=0.\qquad(2.3)$$

### (a) Cloak acoustic parameters

The connection with acoustics is made by identifying the field variable p in lemma 2.1 as the acoustic pressure. The cloak comprises an inviscid fluid with bulk modulus K(x), such that the pressure satisfies the standard relation

$$\dot p=-K\,\mathrm{div}\,v,\qquad(2.4)$$

where v(x, t) is the particle velocity. The IC is defined by the assumption that the momentum balance involves a symmetric second-order inertia tensor ρ according to

$$\rho\,\dot v=-\nabla p.\qquad(2.5)$$

Although this is a significant departure from classical acoustical theory in assuming an anisotropic mass density, it is by no means unprecedented. Based on the analysis of Schoenberg & Sen (1983), a spatially varying tensor ρ could possibly be achieved by small pockets of layered fluid separated by massless impermeable membranes. Eliminating the velocity between equations (2.4) and (2.5) gives a single equation for the pressure,

$$\mathrm{div}\big(\rho^{-1}\nabla p\big)=K^{-1}\ddot p.\qquad(2.6)$$

Consider the uniform wave equation in Ω,

$$\nabla_X^2\,p=\ddot p.\qquad(2.7)$$

Using lemma 2.1, we can express this in the deformed physical description as equation (2.6), where the bulk modulus and inertia tensor are

$$K=J,\qquad \rho=J\,V^{-2}.\qquad(2.8)$$

For a given deformation F, the identities (2.8) define the unique cloak with spatially varying material parameters K and ρ, each defined by the deformation gradient. We note the following identity that is independent of F:

$$\det\rho=K^{\,d-2}.\qquad(2.9)$$

Could the cloak possibly have isotropic density? That is, could the cloak be described by a standard acoustic fluid with two scalar parameters, density and bulk modulus?
The identity $\rho=J\,V^{-2}$ means that ρ=ρI can occur only if V is a multiple of the identity, V=wI for some scalar w=w(x). The deformation of Ω into the smaller region ω could certainly be accomplished at some but not all points by this deformation, which corresponds to a uniform contraction or expansion, with rotation. However, the deformation near the inner surface of the cloak cannot be of this form. In fact, the deformation in the neighbourhood of X=O must be extremely non-uniform and anisotropic. We will discuss this below when we examine a fundamental and severe deficiency of the IC model.

### (b) Continuity between the cloak and the acoustic fluid

Let ds, n and dS, N denote the area element and unit normal to the outer boundary ∂ω+ and ∂Ω+(=∂ω+), respectively. These are related by the deformation through Nanson's formula (Ogden 1997), $n\,\mathrm{d}s=J\,F^{-\mathrm t}N\,\mathrm{d}S$. The nature of the cloak requires that the outer surface is identical in either description, since both must match with the exterior fluid. We therefore require that ds=dS at every point on the outer surface, equation (2.10), and equation (2.8) then implies the condition (2.11). Equation (2.11) is a purely kinematic condition.

The interior of the cloak mimics the wave equation in the exterior fluid. The final requirement that the cloak be acoustically 'invisible' is that the pressure and normal velocity match across the outer surface separating the fluid and cloak. These two continuity conditions arise from the balance of force (normal traction) per unit area and the constraint of particle continuity. The condition for pressure is simply that p is continuous across the outer surface, whether one uses the wave equation in physical space, (2.6), or its counterpart in the undeformed simply connected region, (2.7). As for the kinematic condition, consider its equivalent, the continuity of normal acceleration. This is $n\cdot\dot v$ in physical space, and using equation (2.5) it becomes $-n\cdot\rho^{-1}\nabla p$, which must match with $-n\cdot\nabla p$ in the fluid. Alternatively, equation (2.11) and the relation $F^{\mathrm t}\nabla=\nabla_X$ imply, as expected, that

$$n\cdot\rho^{-1}\nabla p=N\cdot\nabla_X p.\qquad(2.12)$$

The final term is simply the normal acceleration in the undeformed description. In summary, the continuity conditions at the outer surface in the physical description are the continuity of $p$ and of $n\cdot\rho^{-1}\nabla p$ (equations (2.13)).

### (c) Example: a rotationally symmetric IC

Consider the inverse deformation

$$X=\frac{f(r)}{r}\,x,\qquad(2.14)$$

where f is a monotonically increasing function and r=|x|. Using $F^{-1}=\nabla X$ implies that

$$F^{-1}=f'\,I_r+\frac{f}{r}\,I_\perp,\qquad(2.15)$$

where $\hat x=x/r$ and the second-order tensors are $I_r=\hat x\otimes\hat x$ and $I_\perp=I-\hat x\otimes\hat x$. The bulk modulus and mass density in the cloak follow from equation (2.8) as:

$$K=\frac{1}{f'}\Big(\frac{r}{f}\Big)^{d-1},\qquad \rho=f'\Big(\frac{r}{f}\Big)^{d-1}I_r+\frac{1}{f'}\Big(\frac{r}{f}\Big)^{d-3}I_\perp.\qquad(2.16)$$

The anisotropic inertia has the form

$$\rho=\rho_r\,I_r+\rho_\perp\,I_\perp,\qquad(2.17)$$

where the radial and azimuthal principal values $\rho_r$ and $\rho_\perp$ can be read off from equation (2.16) as functions of f. Introducing the radial and azimuthal phase speeds, $c_r=(K/\rho_r)^{1/2}$ and $c_\perp=(K/\rho_\perp)^{1/2}$, the mass density tensor can then be expressed as $\rho=K\big(c_r^{-2}I_r+c_\perp^{-2}I_\perp\big)$. The quantity $K\rho_r$ is the square of the radial acoustic impedance, $z_r=\rho_r c_r$. Equation (2.9) implies that the identity $\rho_r\rho_\perp^{\,d-1}=K^{\,d-2}$ is required for cloaking. The three equations (2.16) for K, $\rho_r$ and $\rho_\perp$ in terms of f can be replaced by the universal relation (2.9), i.e.

$$\rho_r\,\rho_\perp^{\,d-1}=K^{\,d-2},\qquad(2.18)$$

along with simple expressions for the wave speeds in terms of f,

$$c_r=\frac{1}{f'},\qquad c_\perp=\frac{r}{f}.\qquad(2.19)$$

We will see later that the phase and the wave (group velocity) speeds in the principal directions are identical. Note that f′ is required to be positive. The original quantities can be expressed in terms of the phase speeds as

$$K=c_r\,c_\perp^{\,d-1},\qquad \rho_r=\frac{c_\perp^{\,d-1}}{c_r},\qquad \rho_\perp=c_r\,c_\perp^{\,d-3}.\qquad(2.20)$$

One could, for instance, eliminate f as the fundamental variable defining the cloak in favour of $c_\perp(r)$, from which all other quantities can be determined using the differential equation relating the speeds, $c_r^{-1}=(r/c_\perp)'$.
We assume that the cloak occupies the shell $a\le r\le b$, with uniform acoustical properties K=1 and ρ=I in the exterior. The areal matching condition (2.11) is satisfied by F and ρ of equations (2.15) and (2.16) if f is continuous across the boundary, which is accomplished by requiring f(b)=b. The pressure and velocity continuity conditions (2.13) become, at r=b, the continuity of $p$ and of $\rho_r^{-1}\,\partial p/\partial r$ (2.21).

Note that the cloak density is isotropic if $c_r=c_\perp$, which requires that f′=f/r. Thus f=γr with γ constant, but the outer boundary condition f(b)=b implies that γ=1, which is the trivial undeformed configuration.

Perfect cloaking requires that f vanish at r=a. It is clear that $c_\perp$ blows up as r→a, as does the product $K\rho_r$. In order to examine the individual behaviour of K and $\rho_r$, consider $f\propto(r-a)^\alpha$ near a for α constant and non-negative. No value of α>0 will keep the radial density $\rho_r$ bounded, although the unique choice α=1/d ensures that the bulk modulus K(a) remains finite and non-zero. Note that the azimuthal density $\rho_\perp$ has a finite limit in two dimensions for power law decay $f\propto(r-a)^\alpha$, while $\rho_\perp$ remains finite in three dimensions if α≤1, otherwise it blows up. Similarly, the radial phase speed scales as $c_r\propto(r-a)^{1-\alpha}$, which remains finite for α≤1, blowing up otherwise. These results are summarized in table 1.

Table 1. Behaviour of quantities near the inner surface r=a for the scaling $f\propto\xi^\alpha$ as $\xi=r-a\downarrow 0$. (The total radial mass $m_r$ is defined in equation (2.22).)

We use a non-dimensional measure of the total mass in the cloak, $m_r$. The total mass is isotropic for the symmetric deformation and configuration considered here. Assuming for the moment that f(a) is non-zero, i.e. a near-cloak (Kohn et al. 2008), the forms of equation (2.22) follow. These forms indicate not only that $m_r\to\infty$ as f(a)→0 but also the form of the blow-up: to leading order, $m_r\propto\ln\big(1/f(a)\big)$ in two dimensions and $m_r\propto 1/f(a)$ in three dimensions. The blow-up of $m_r$ occurs no matter how f tends to zero. The infinite mass is an unavoidable singularity.

### (d) A massive problem with inertial cloaking

Table 1 and the example above illustrate a potentially grievous issue: infinite mass is required for perfect cloaking in the IC model. We now show that the problem is not specific to the rotationally symmetric cloak but is common to all ICs. Consider a ball of radius ϵ around X=O. Its volume $\mathrm{d}V=O(\epsilon^d)$ is mapped to a volume with inner surface defined by the finite cloak inner boundary ∂ω and outer surface a distance $O(\epsilon^\beta)$ further out, where β>0 is a local scaling parameter, assumed constant (in terms of the example above and table 1, β=1/α). The mapped current volume is then $\mathrm{d}v=O(\epsilon^\beta)$, so that $J=\mathrm{d}v/\mathrm{d}V=O(\epsilon^{\beta-d})$. The eigenvalues of V are $\lambda_1=O(\epsilon^{\beta-1})$ and $\lambda_2=\dots=\lambda_d=O(\epsilon^{-1})$. The bulk modulus and the principal values of the density matrix are therefore

$$K=O(\epsilon^{\beta-d}),\qquad \rho_1=O(\epsilon^{2-\beta-d}),\qquad \rho_2=\dots=\rho_d=O(\epsilon^{2+\beta-d}).\qquad(2.23)$$

The principal value $\rho_1$ blows up whether d=2 or d=3. Furthermore, the total mass associated with $\rho_1$ in the mapped volume is $m_1=O(\epsilon^{2-d})$. This blows up in three dimensions, and a more careful analysis for two dimensions, similar to that for the rotationally symmetric case, shows $m_1=O\big(\ln(1/\epsilon)\big)$.

In summary, the IC theory, while consistent and formally sound, reveals an underlying and 'massive' problem. We will show how this can be circumvented by using a more general cloaking theory that allows for anisotropic stiffness (elasticity) in addition to, or instead of, the anisotropic inertia. The anisotropic elastic material required is of a special type, called a PM (Milton & Cherkaev 1995), which is introduced next.
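The scalings above are easy to reproduce numerically. The following sketch is an editorial illustration, not part of the paper: it adopts the linear radial map f(r) = b(r-a)/(b-a), the choice used by Cummer & Schurig (2007), together with the principal speeds of equation (2.19) and the radial density read off from equation (2.16), and shows the azimuthal speed and radial density blowing up at the inner surface.

```python
import numpy as np

a, b, d = 1.0, 2.0, 2                    # inner/outer radius, dimension
f  = lambda r: b * (r - a) / (b - a)     # linear map: f(a) = 0, f(b) = b
fp = b / (b - a)                         # f'(r), constant for this map

r = np.linspace(a * 1.001, b, 6)
c_r    = 1.0 / fp                        # radial phase speed, eq. (2.19)
c_perp = r / f(r)                        # azimuthal phase speed, eq. (2.19)
rho_r  = fp * (r / f(r))**(d - 1)        # radial density, eq. (2.16)

print("c_r =", c_r)
for ri, cp, rr in zip(r, c_perp, rho_r):
    print(f"r = {ri:.3f}   c_perp = {cp:8.2f}   rho_r = {rr:8.2f}")
# c_perp and rho_r diverge as r -> a: the infinite-mass problem of Sec. 2d
```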
## 3. Pentamode materials

We consider Hooke's law in three dimensions in the form $\hat\sigma=\hat C\hat\epsilon$, where $\hat\sigma$ and $\hat\epsilon$ are six-vectors of stress and strain and $\hat C$ is the associated 6×6 matrix of moduli. The factors of $\sqrt 2$ in the standard six-vector (Kelvin) notation ensure that products and norms are preserved, e.g. $\hat\sigma\cdot\hat\epsilon=\sigma:\epsilon$. The PM is rank one or, in other words, five of the six eigenvalues of $\hat C$ vanish (Milton & Cherkaev 1995). The one remaining positive eigenvalue is therefore

$$\hat K=\operatorname{tr}\hat C.\qquad(3.1)$$

Accordingly, the moduli can be defined by the stiffness $\hat K$ and a normalized six-vector $\hat S$,

$$\hat C=\hat K\,\hat S\otimes\hat S.\qquad(3.2)$$

The stress is described by a single scalar, with $\hat\sigma$ parallel to $\hat S$. Thus,

$$\hat\sigma=\hat K\,(\hat S\cdot\hat\epsilon)\,\hat S.\qquad(3.3)$$

The PM (Milton et al. 2006) is so named because there are five easy ways to deform it, associated with the eigenvectors of the five zero eigenvalues of the elasticity stiffness. Pentamodes obviously include isotropic acoustic fluids, for which the only stress–strain eigenmode is a hydrostatic stress, or pure pressure, and the five easy modes are all pure shear. Milton & Cherkaev (1995) describe how PMs can be realized from specific microstructures.

### (a) Example: an orthotropic PM

An elastic material with orthotropic symmetry has nine non-zero elements in general: the six $C_{ij}=C_{ji}$, i, j=1, 2, 3, plus $C_{44}$, $C_{55}$ and $C_{66}$. We set these last three (shear) moduli to zero. The stress σ must then be diagonal in the Cartesian coordinate system, and the rank-one (PM) property requires the remaining moduli to factorize, with the following relations holding: $C_{12}^2=C_{11}C_{22}$, $C_{13}^2=C_{11}C_{33}$, $C_{23}^2=C_{22}C_{33}$.

### (b) Compatibility condition for PMs

The hat notation above signifies that the tensors are normalized, $\hat S\cdot\hat S=1$, and therefore $\hat K$ is given by equation (3.1). We will not follow this normalization in general, but write:

$$C=K\,S\otimes S.\qquad(3.4)$$

In other words, the products in (3.4) are the important physical quantities, not K and S individually. The stress in the PM is always proportional to the tensor S and only one strain element is significant, S : ϵ. The rank deficiency of the moduli, which is apparent from (3.2) or (3.4), means that there is no inverse strain–stress relation for the elements of ϵ in terms of the elements of σ.

Static equilibrium of a PM under an applied load leads to a constraint on the spatial variability of the PM stiffness. Consider an inhomogeneous PM with smoothly varying $C(x)=K_0(x)\,S_0(x)\otimes S_0(x)$. Under an applied static load the strain will also be spatially inhomogeneous, but the only part of the strain that is important is the component along the PM eigenvector. With no loss in generality, we may put $S_0:\epsilon=w$ for some scalar function w. The stress is then $\sigma=qS_0$, where $q=K_0w$. Let $S=qS_0$; then the static equilibrium condition $\mathrm{div}\,\sigma=0$ becomes $\mathrm{div}\,S=0$. Finally, the PM stiffness is $C=KS\otimes S$, where $K=K_0/q^2$.

Lemma 3.1. The fourth-order stiffness of a smoothly varying PM can always be expressed as C=KS⊗S, where K(x)>0 and S(x)∈Sym satisfies the static equilibrium condition,

$$\mathrm{div}\,S=0.\qquad(3.5)$$

This identity also arises in a completely different manner later when we consider transformed wave equations. We say that the PM is of canonical form when equation (3.5) applies. The decomposition of lemma 3.1 is unique up to a multiplicative constant. Thus, if a static load is applied to a PM expressed in canonical form, then the stress and strain satisfy $\sigma(x)=c_0S$ and $S:\epsilon=c_0/K$, respectively, for constant $c_0$. In summary, stability under static loading places a constraint on the PM moduli, which will turn out to be useful when we return to the cloaking problem. The constraint means that the moduli can in general be expressed in canonical form.
### (c) Dynamic equations of motion in a PM

The equations for small amplitude disturbances in a PM with anisotropic mass density are

$$\dot\sigma=K\,(S:\nabla v)\,S\qquad(3.6)$$

and

$$\mathrm{div}\,\sigma=\rho\,\dot v.\qquad(3.7)$$

These are, respectively, the specific form of Hooke's law for a PM and the momentum balance incorporating the inertia tensor. In order to make the equations look similar to those for an acoustic fluid, we identify the 'pseudo-pressure' p with the negative single stress, $p=-K\,S:\epsilon$. The stress tensor then becomes

$$\sigma=-p\,S,\qquad(3.8)$$

and the linear constitutive relation can be written as

$$\dot p=-K\,S:\nabla v.\qquad(3.9)$$

Equations (3.7) and (3.9) imply that the pseudo-pressure satisfies the generalized acoustic wave equation,

$$\ddot p=K\,S:\nabla\big[\rho^{-1}\big(S\,\nabla p+p\,\mathrm{div}\,S\big)\big].\qquad(3.10)$$

This reduces to the acoustic equation (2.6) with anisotropic inertia and isotropic stiffness when S=I. Finally, assuming that the PM is in canonical form, so that S satisfies the equilibrium condition (3.5), we have

$$K^{-1}\ddot p=\mathrm{div}\big(S\,\rho^{-1}S\,\nabla p\big).\qquad(3.11)$$

### (d) Wave motion in a PM

The wave properties of PMs are of interest since we will show that they can be used to make the acoustic cloak. Consider plane wave solutions for displacement of the form $u=q\,\mathrm e^{\mathrm ik(n\cdot x-vt)}$, for |n|=1 and constant q, k and v, and uniform PM properties. Non-trivial solutions of the equations of motion (3.6) and (3.7) must satisfy

$$K\,\big[(Sn)\otimes(Sn)\big]\,q=\rho\,v^2\,q.\qquad(3.12)$$

The acoustical or Christoffel (Musgrave 2003) tensor $K(Sn)\otimes(Sn)$ is rank one, and it follows that of the three possible solutions for $v^2$, only one is not zero, the quasi-longitudinal solution,

$$v^2=K\,(Sn)\cdot\rho^{-1}(Sn).\qquad(3.13)$$

The slowness surface is therefore an ellipsoid. Standard arguments for waves in anisotropic solids (Musgrave 2003) show that the energy flux velocity (or wave velocity or ray direction) is

$$c=\frac{K}{v}\,S\,\rho^{-1}S\,n.\qquad(3.14)$$

Note that this is in the direction Sq, and satisfies $c\cdot n=v$, a well-known relation for generally anisotropic solids with isotropic density. As an example, consider the orthotropic PM with a density tensor of the same symmetry and coincident principal axes. Then

$$v^2=c_1^2n_1^2+c_2^2n_2^2+c_3^2n_3^2,\qquad(3.15a)$$
$$c=v^{-1}\big(c_1^2n_1,\,c_2^2n_2,\,c_3^2n_3\big),\qquad(3.15b)$$
$$c_j=S_j\,(K/\rho_j)^{1/2},\quad j=1,2,3,\qquad(3.15c)$$

where $S_1$, $S_2$ and $S_3$ are the principal values of S, and $\rho_1$, $\rho_2$ and $\rho_3$ are the principal inertias.

## 4. The general acoustic cloaking theory

We now show that the IC is but a special case of a much more general type of acoustic cloak. While the IC depends upon the anisotropic inertia, the general cloaking model can have both anisotropic inertia and stiffness. The additional degree of freedom is obtained by replacing the pressure field with the scalar stress of a PM. The general cloaking model is called PM-IC.

### (a) The fundamental identity

Lemma 4.1. Let P∈Sym be non-singular and let F be the deformation gradient for the mapping X→x, with $J=\det F$ and $V^2=FF^{\mathrm t}$. Then the generalization (4.1) of lemma 2.1 holds if P satisfies

$$\mathrm{div}\,P=0.\qquad(4.2)$$

The proof is given in appendix A. This clearly generalizes lemma 2.1, and in the context of PMs it implies theorem 4.2.

Theorem 4.2. Let the pressure p satisfy a uniform wave equation in Ω. Under the transformation Ω→ω with $J=\det F$ and $V^2=FF^{\mathrm t}$, p satisfies the equation for the pseudo-pressure of a PM with stiffness C and anisotropic inertia ρ,

$$K^{-1}\ddot p=\mathrm{div}\big(S\,\rho^{-1}S\,\nabla p\big),\qquad(4.3a)$$

where

$$C=K\,S\otimes S,\qquad K=J,\qquad \rho=J\,S\,V^{-2}S,\qquad(4.3b)$$

and S satisfies

$$\mathrm{div}\,S=0.\qquad(4.3c)$$

Note that the stress tensor S is not uniquely defined, although it must satisfy the equilibrium condition (4.3c). The associated density depends only on the left stretch tensor of F, viz. V. The IC corresponds to the special case of S=I, which is a trivial solution of equation (4.3c). The importance of theorem 4.2 is that the cloaks may simultaneously comprise PM stiffness and anisotropic inertia, which provides a vastly richer potential set of material parameters, not limited to the model of equation (2.6).
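Equation (3.13) implies that the slowness vector s = n/v lies on the ellipsoid $s\cdot(K\,S\rho^{-1}S)\,s=1$. The short numerical check below is an editorial illustration with arbitrarily chosen material values:

```python
import numpy as np

K   = 2.0                                  # illustrative PM stiffness
S   = np.array([[2.0, 0.3, 0.0],           # illustrative symmetric S
                [0.3, 1.0, 0.1],
                [0.0, 0.1, 1.5]])
rho_inv = np.linalg.inv(np.diag([1.0, 2.0, 4.0]))  # anisotropic inertia
M = K * S @ rho_inv @ S                    # ellipsoid matrix K S rho^-1 S

rng = np.random.default_rng(0)
for _ in range(3):
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                 # unit phase direction
    v = np.sqrt(K * (S @ n) @ rho_inv @ (S @ n))   # eq. (3.13)
    s = n / v                              # slowness vector
    print(s @ M @ s)                       # prints 1.0 for every direction
```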
Theorem 4.2 implies that the phase speed, wave velocity vector and polarization (not normalized) for plane waves with phase direction n are, from equations (3.13) and (3.14),

$$v=\big[K\,(Sn)\cdot\rho^{-1}(Sn)\big]^{1/2},\qquad c=\frac{K}{v}\,S\,\rho^{-1}S\,n,\qquad q=\rho^{-1}S\,n.\qquad(4.4)$$

The phase speed and wave velocity are independent of whether the cloak is an IC or the generalized PM-IC. These important wave properties are functions of the deformation only. They can be expressed in revealing forms using the deformation gradient as $v=|F^{\mathrm t}n|$ and $c=FN$, where $N=F^{\mathrm t}n/|F^{\mathrm t}n|$. Note that the polarization q does in general depend upon the PM properties through the stress S.

#### (i) Continuity between the cloak and the acoustic fluid

Continuity conditions at the cloak outer surface in the physical description follow in the same manner as (2.13). The main difference is that the stress in the cloak is not isotropic, and therefore the condition that the shear tractions on the boundary vanish must be explicitly stated. The conditions for the pseudo-pressure, which satisfies equation (3.11), are given in equations (4.5). These follow from equations (3.8) and (3.7).

#### (ii) Rays in the cloak are straight lines in the undeformed space

Although theorem 4.2 implies that the simple wave equation (2.7) in Ω is exactly mapped to equation (3.11) in ω, and hence all wave motion properties transform accordingly, including rays, it is instructive to deduce the ray transformation separately. We now demonstrate explicitly that rays in the cloak ω, which are curves that minimize travel time, are just straight lines in Ω. Consider the straight line $X(\tau)=X_0+\tau N$, where N is a unit vector in Ω. The associated curve in ω is $x(\tau)=x(X_0+\tau N)$. Differentiation yields $\mathrm dx/\mathrm d\tau=FN$, which is the same as $V^2s$, where the vector $s\equiv F^{-\mathrm t}N$. Differentiating s(τ), keeping in mind that N is fixed, gives

$$\frac{\mathrm ds}{\mathrm d\tau}=-\frac{1}{2}\,\nabla\big(s\cdot V^2s\big),\qquad(4.6)$$

where the compatibility identity $\partial F^{-1}_{Ii}/\partial x_j=\partial F^{-1}_{Ij}/\partial x_i$ has been used. We therefore deduce that straight lines in Ω are mapped to solutions of the coupled ordinary differential equations,

$$\frac{\mathrm dx}{\mathrm d\tau}=V^2s,\qquad \frac{\mathrm ds}{\mathrm d\tau}=-\frac{1}{2}\,\nabla\big(s\cdot V^2s\big).\qquad(4.7)$$

But these are identically the ray equations in the cloak (see appendix B). They are also the geodesic equations for the metric $V^{-2}$. The ray equations conserve the quantity $s\cdot V^2s$, which is equal to unity, reflecting the fact that s is the slowness vector, s=n/v (see equations (4.4) and (B 4)). An illustration of rays inside the physical cloak is presented in §5.

#### (iii) Relation to the Milton, Briane and Willis transformations

Milton et al. (2006) examined how the elastodynamic equations transform under general curvilinear transformations. They showed, in particular, that if the deformation is harmonic, then the constitutive relation (2.4) and momentum balance (2.5) for a compressible inviscid fluid with isotropic density transform into the equations for a PM with anisotropic inertia, equations (3.6) and (3.7), respectively. The deformation is harmonic if $\nabla_X^2\,x=0$, which realistically limits the transformation to the identity (Milton et al. 2006). This would appear to indicate that acoustic cloaking using the transformation method is impossible, in contradiction to the present result. In fact, as we show next, the Milton, Briane and Willis (MBW) result is a special case of the more general theory embodied in theorem 4.2, one that corresponds to the choice $S=J^{-1}V^2$.

The PM stiffness and inertia tensor found by Milton et al. (2006) are $C=J^{-1}V^2\otimes V^2$ and $\rho=J^{-1}V^2$ (their eqns (2.12) and (2.13)). These are of the general form required by equation (4.3b) if we identify S as $S=J^{-1}V^2$. Does this satisfy the equilibrium condition (4.3c)? Using equation (2.3), $\mathrm{div}(J^{-1}V^2)=\mathrm{div}(J^{-1}FF^{\mathrm t})=J^{-1}\nabla_X^2\,x$, and this vanishes if the deformation is harmonic.
The MBW transformation therefore falls under the requirements of theorem 4.2 for the specific choice of $S=J^{-1}V^2$, which satisfies the equilibrium equation (4.3c) only if the deformation is harmonic. Having shown that the MBW transformation result is a special case of the present theory, it is clear that the transformation as considered here is different from theirs. Milton et al. (2006) demand that all of the equations transform isomorphically, whereas the present theory requires only that the scalar acoustic wave equation is mapped to the scalar wave equation for the PM (see equations (4.3a)). The mapping contains an arbitrary but divergence-free tensor S that defines the particular but non-unique constitutive relation (2.4) and momentum balance (2.5). Consider, for instance, the displacement fields u(X) in Ω and u(x) in ω; under the transformation of Milton et al. (2006), these are constrained to be related through the deformation gradient (eqn (2.2) of Milton et al. 2006). There is no analogous constraint in the present theory. In other words, we do not require an isomorphism between the equations for all of the field variables. Instead, the scalar wave equation for the acoustic pressure is isomorphic to the scalar equation for the pseudo-pressure of the PM.

### (b) Cloaks with isotropic inertia

Theorem 4.2 opens up a vast range of potential material properties. It means that there is no unique cloak associated with a given transformation Ω→ω and its deformation gradient F. We now take advantage of this non-uniqueness to consider the possibility of isotropic inertia. Equation (4.3b) indicates that the density is isotropic if S is proportional to V. Hence, we deduce lemma 4.3.

Lemma 4.3. A necessary and sufficient condition that the density is isotropic, ρ=ρI, is that there is a scalar function h(x), such that

$$S=h\,V,\qquad \mathrm{div}(hV)=0,\qquad(4.8)$$

in which case,

$$\rho=J\,h^2,\qquad C=J\,h^2\,V\otimes V.\qquad(4.9)$$

There is a general circumstance for which a solution can be found for h. It takes advantage of the second-order differential identity $\mathrm{div}(J^{-1}F)=0$ (4.10). Although F is generally unsymmetric, F=F^t in the special case that the deformation gradient is a pure stretch with no rotation (R=I). We therefore surmise lemma 4.4.

Lemma 4.4. If the deformation gradient is a pure stretch (R=I and hence F coincides with V), then the density is isotropic: taking $h=J^{-1}$,

$$S=J^{-1}V,\qquad \rho=J^{-1},\qquad C=J^{-1}\,V\otimes V.\qquad(4.11)$$

The infinite mass problem of the IC can be avoided if the material near the inner boundary ∂ω has integrable mass. This could be achieved, for instance, by requiring that the deformation near ∂ω is symmetric (pure stretch). Lemma 4.4 and the scaling arguments of §2d imply that the isotropic density scales as $\rho=O(\epsilon^{d-\beta})$, which is integrable as long as β<d+1 (α>1/(d+1)).

### (c) Example: the rotationally symmetric cloak

We again consider the deformation of equation (2.14) for the cloak, and assume that the symmetric tensor S has a rotationally symmetric form built from $I_r$ and $I_\perp$ with coefficient functions γ(r) and w(r). Differentiating, the 'equilibrium' condition (4.3c) reduces to a first-order relation between w(r) and γ(r). It is convenient to introduce a new function g(r), such that γ=rg′/g and $w=(g/r)^{d-1}$, which automatically satisfies this relation. The cloak parameters therefore have the general rotationally symmetric forms of equation (4.12).

The functions f and g are independent of one another, and together define a two-degree-of-freedom class of PM-IC model. The general solution has both anisotropic stiffness and anisotropic inertia. The previous example of the pure IC corresponds to the special case of g=r, for which equation (4.12) gives S=I, and K and ρ agree with equation (2.16). The form of the stress S indicates that the PM-IC has TI symmetry.
### (c) Example: the rotationally symmetric cloak

We again consider the deformation of equation (2.14) for the cloak, and assume that the symmetric tensor S has a rotationally symmetric form defined by two radial functions w(r) and γ(r). Differentiation shows that the 'equilibrium' condition (4.3c) is satisfied if w(r) and γ(r) are related by a single first-order differential relation. It is convenient to introduce a new function g(r), such that γ = rg′/g and w = (g/r)^{d−1}, which satisfies this relation automatically. The cloak parameters therefore have the general rotationally symmetric forms of equation (4.12). The functions f and g are independent of one another, and together define a two-degree-of-freedom class of PM-IC models. The general solution has both anisotropic stiffness and anisotropic inertia. The previous example of the pure IC corresponds to the special case g = r, for which equation (4.12) gives S = I, and K and ρ agree with equation (2.16). The form of the stress S indicates that the PM-IC has transversely isotropic (TI) symmetry.

This is a special case of the orthotropic PM considered earlier. A normal TI solid with axis of symmetry in the x_3-direction has five independent elastic moduli: C11, C33, C12, C13 and C44. The last is a shear modulus; the other shear modulus is C66 = (C11 − C12)/2. We set all shear moduli to zero, implying C44 = 0 and C12 = C11, and the remaining independent moduli C11, C33 and C13 satisfy C13^2 = C11 C33. The PM therefore has two independent elastic moduli. Let C33 ≡ K_r(r) and C11 ≡ K_⊥(r), so that C13 = (K_r K_⊥)^{1/2}; then the fourth-order elasticity tensor defined by (4.12) takes the form of equation (4.13), where the stiffnesses K_r and K_⊥, and the principal values of the inertia tensor given by equation (2.17), are listed in equation (4.14). The phase speeds c_r and c_⊥ in the principal directions are again given by equation (2.19). This might seem surprising at first sight, but recall that it is predicted by the general theory: the phase speed and wave velocity are independent of how we interpret the cloak material, whether as an IC or as the more general PM-IC. In this example, it means that the phase speed and wave velocity are independent of g.

#### (i) Pure PM cloak with isotropic density

The inertia is isotropic when ρ_r = ρ_⊥, which occurs if g(r) = f(r). In that case ρ = ρI, and equation (4.14) reduces to equation (4.15). We observe that the parameters of equation (4.15) are obtained from the IC parameters in equations (2.16) and (2.17) under a simple substitution. Thus, the universal relation analogous to equation (2.18) is now equation (4.16), and by analogy with equation (2.20) the three original material parameters can be expressed using the phase speeds only, as in equation (4.17). In summary, there is a one-to-one correspondence between the two sets of three material parameters for the limiting cases of the pure IC on the one hand, and the pure PM cloak on the other. Of course, as discussed before, the density and stiffness cannot be simultaneously isotropic. The PM-IC model with material properties (4.12) includes both limiting cases, when g = r and g = f, respectively. Table 2 summarizes the scaling of the physical quantities for isotropic inertia, similar to the scalings in table 1 for the pure IC. Note that the wave speeds c_r and c_⊥ and the intermediate (C13) modulus have limiting behaviour that is independent of the dimensionality, while the density ρ and the moduli K_r and K_⊥ depend upon whether the cloak is in two or three dimensions.

Table 2. Behaviour of quantities near the vanishing point r = a for the scaling f ∼ ξ^α as ξ = r − a ↓ 0, with isotropic inertia.
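The pentamode property invoked here can be made concrete. Assuming the canonical one-modulus form C = K S⊗S for the PM stiffness (consistent with the text's description of a PM characterized by the symmetric tensor S and a single modulus K), C has rank one as a map on strains, so five of its six Voigt eigenvalues vanish, and for diagonal S the TI moduli obey C13^2 = C11 C33, as above. A minimal numerical check (mine, with hypothetical parameter values):

```python
import numpy as np

# Pentamode stiffness C = K * (S ⊗ S) with S diagonal in its principal axes.
# In 6x6 Voigt notation S = diag(s1, s2, s3) becomes sv = (s1, s2, s3, 0, 0, 0)
# and C becomes K * outer(sv, sv).
K = 2.0                      # the single modulus (hypothetical value)
s1, s2, s3 = 1.3, 1.3, 0.7   # principal values of S (TI symmetry: s1 = s2)

sv = np.array([s1, s2, s3, 0.0, 0.0, 0.0])
C = K * np.outer(sv, sv)     # 6x6 Voigt stiffness matrix

print("Voigt eigenvalues:", np.round(np.linalg.eigvalsh(C), 6))
# one nonzero eigenvalue and five zero ones: five 'easy' modes, a pentamode

C11, C33, C13 = C[0, 0], C[2, 2], C[0, 2]
print("C13^2 - C11*C33 =", C13**2 - C11 * C33)   # = 0, as in the text
```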
## 5. Further examples

### (a) A non-radially symmetric cloak with finite mass

The examples considered above are rotationally symmetric and rather special, in that they can be made using uniformly pure IC, or pure PM, or hybrid PM-IC. The pure IC model is always achievable, as lemma 2.1 showed, but it suffers from the infinite mass catastrophe. The pure PM model requires that lemma 4.3 hold at all points, which is not realistic. However, we can always obtain a cloak comprising partly pure PM by requiring the deformation to be locally a pure stretch (lemma 4.4). In particular, by constraining the deformation near the inner surface ∂ω in this manner, the density can be made both isotropic and integrable. We now demonstrate this for a non-rotationally symmetric cloak.

For A ∈ Sym^+ and h(ζ) with h(ζ), h′(ζ) > 0 for ζ ∈ [0,1], consider the deformation of equation (5.1). This generalizes the deformation of equation (2.14) (A = I) and has the important property that the deformation gradient is symmetric, equation (5.2). The inner surface, equation (5.3), is an ellipse (two-dimensional) or ellipsoid (three-dimensional).

The mapping must be the identity on the outer surface of the cloak ∂ω^+ = ∂Ω. This eliminates the transformation (5.1) as a possible deformation in the vicinity of ∂ω^+, but it does not rule it out elsewhere. In particular, it can be used on the inner surface ∂ω and for a finite surrounding volume. It could then be patched to a different mapping closer to the outer boundary of the cloak, one which reduces to the identity on ∂ω^+. For instance, equation (5.4), where ν(x) = 1 for all x between ∂ω and some intermediate level surface, beyond which ν decreases smoothly to zero as x approaches ∂ω^+, which is assumed to be a level surface of ζ, i.e. an ellipsoid or an ellipse. We assume that ζ = 1 on the outer surface, so that equation (5.5) holds. Consider the level surface ζ = ζ0 for constant ζ0 ∈ (0,1). The surface separating the pure PM inner region from the PM-IC outer part of the cloak is then given by equation (5.6). Based on lemma 4.4, the inner part of the cloak between ∂ω and this surface can be constructed from pure PM with isotropic density. The remaining part of the cloak is PM-IC, and the mass of the entire cloak will be finite.

For instance, in figure 2, h(ζ) = (1/2)(1+ζ) for ζ ∈ [0,1], ν = 1 for ζ ≤ 3/4 and ν = 4(1−ζ) for ζ ≥ 3/4 (so that ν is continuous and vanishes on the outer surface), and the principal values of A are 0.6 and 1.0. The figure also shows each ray following a continuous path through the cloak, with collinear incident and emergent ray paths. There is a unique ray separating the rays traversing the cloak in opposite senses, which defines a 'stagnation point' at the cloak inner surface. The separation ray is the one that would intersect the singular point in the undeformed space, O in figure 1. This is the origin in figure 2, and since the rays are incident horizontally, the separation ray is defined by x_2 = 0 outside the cloak; it intersects ∂ω at the image of O. The wavefront in effect splits, or tears apart, at the incident intersection and reforms at the emergent intersection. The time delay between these two events is infinitesimal, since the tearing/rejoining is associated with the instant at which the wavefront would traverse O in the undeformed space. A time-lapse movie illustrating this more vividly may be seen in the electronic supplementary material (20 s long). Another movie showing the ray paths for different directions of incidence can also be found in the electronic supplementary material.

Figure 2. Ray paths through a non-radially symmetric cloak. The solid curves are the inner and outer surfaces of the cloak. The dashed curve delineates the inner region in which the deformation gradient is symmetric everywhere and the cloak is pure PM with finite isotropic mass. Two movies of the rays and the wavefronts in this cloak may be viewed in the electronic supplementary material.

### (b) Scattering from near-cloaks

A near-cloak, or almost perfect cloak, is defined here as one whose inner surface ∂ω is not the image of the single point X = O. We illustrate the issue using the radially symmetric deformation (2.14) with f(a) small but non-zero, and assuming time-harmonic motion, with the factor e^{−ikt} understood but omitted. Since the inner surface is not the image of a point, it is necessary to prescribe a boundary condition on the interior surface, which we take as zero pressure on r = a.
The specific nature of the boundary condition should be irrelevant in the limit as f(a) shrinks to zero. As before, the cloak occupies a ≤ r ≤ b, but now f(a) > 0. The total response for plane-wave incidence is given by the standard partial-wave expansions in cylindrical and spherical wave functions in two and three dimensions, respectively, where p_0 is the constant incident amplitude. A near-cloak can be defined in many ways: for instance, a power law f(r) = b((r−δ)/(b−δ))^α with 0 < δ < a is considered in Norris (2008). Here we assume a linear near-cloak mapping similar to the one examined by Kohn et al. (2008),

f^{(δ)}(r) = δ + (r − a)(b − δ)/(b − a),  (5.7)

where 0 < δ < a. Hence f^{(δ)}(a) = δ, and the radius at which the mapping vanishes, r = a − δ(b−a)/(b−δ), defines the size of a smaller but perfect cloak. Some representative results are shown in figure 3, which illustrates clearly a disparity between the cylindrical and spherical cloakings, even when the physical-optics cross-sections are identical. Thus, for f(a) = 0.01a, the three-dimensional cross-section is negligible (figure 3d) but the two-dimensional cross-section is two orders of magnitude larger (figure 3c). Ruan et al. (2007) found that the perfect cylindrical EM cloak is sensitive to perturbation. This sensitivity is evident from the present analysis through the dependence on the length δ that measures the departure from perfect cloaking, δ = 0.

Figure 3. A plane wave is incident from the left with frequency k = 10 on the cloak defined by equation (5.7) with a = 1 and b = 2^{1/(d−1)}. The outer cloak radius b is chosen so that the geometrical cross-section of the cloak is twice that of the cloaked region, in both two dimensions ((a) δ = 0.3, Σ = 1.48 and (c) δ = 0.01, Σ = 0.12) and three dimensions ((b) δ = 0.3, Σ = 0.79 and (d) δ = 0.01, Σ = 1×10^{−3}). The circular core in the plots is the cloaked region of radius a. The virtual inner radius f(a) = δ is 0.3 or 0.01, and Σ is the total scattering cross-section.

The ineffectiveness of the same cloak in two dimensions when compared with three dimensions can be understood in terms of the scattering cross-section. The leading-order far field has the standard form of an outgoing cylindrical or spherical wave with angle-dependent amplitude. The optical theorem implies that the total scattering cross-section, and hence the total energy scattered, is determined by the forward scattering amplitude; this yields the modal sum of equation (5.8). The cross-section is dominated in the small kf(a) limit by the n = 0 term, with the leading-order approximations of equation (5.9). This explains the greater efficacy in three dimensions, and suggests that, all things being equal, cylindrical cloaking is more difficult to achieve than its spherical counterpart.
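The modal sums behind equations (5.8) and (5.9) are straightforward to evaluate. The sketch below (my own companion calculation, not the paper's code) computes the total scattering cross-section of a sound-soft cylinder and sphere whose radius equals the virtual inner radius f(a) = δ, the effective scatterer seen by the exterior field; for k = 10 and δ = 0.01 it returns values of the same order as the Σ quoted for figure 3c,d.

```python
import numpy as np
from scipy.special import jv, hankel1, spherical_jn, spherical_yn

def sigma_2d(k, a, nmax=40):
    """Scattering width of a sound-soft (p = 0) cylinder of radius a."""
    n = np.arange(-nmax, nmax + 1)
    An = jv(n, k * a) / hankel1(n, k * a)    # modal coefficients
    return (4.0 / k) * np.sum(np.abs(An) ** 2)

def sigma_3d(k, a, nmax=40):
    """Scattering cross-section of a sound-soft sphere of radius a."""
    n = np.arange(0, nmax + 1)
    jn = spherical_jn(n, k * a)
    hn = jn + 1j * spherical_yn(n, k * a)    # spherical Hankel h_n^(1)
    return (4.0 * np.pi / k**2) * np.sum((2 * n + 1) * np.abs(jn / hn) ** 2)

k = 10.0
for delta in (0.3, 0.01):    # virtual inner radius f(a) = delta
    print(f"delta = {delta}: 2D sigma = {sigma_2d(k, delta):.3g}, "
          f"3D sigma = {sigma_3d(k, delta):.3g}")
```

The 2D sum is dominated by the n = 0 (monopole) term, which decays only logarithmically as kδ → 0, whereas the 3D monopole term decays as (kδ)^2; this is the disparity quantified by equation (5.9).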
## 6. Discussion and conclusion

Starting from the idea of an acoustic cloak defined by a finite deformation, we have shown that the acoustic wave equation in the undeformed region is mapped into a variety of possible equations in the physical cloak. Theorem 4.2 implies that the general form of the wave equation in the cloak is equation (6.1), in which the stress-like symmetric tensor S is divergence-free and the inertia tensor is ρ = J S V^{-2} S. The non-unique nature of S for a given fixed deformation opens many possibilities for interpreting the cloak in terms of material properties. If S is constant (S = I with no loss in generality), then the cloak material corresponds to an acoustic fluid with pressure p defined by a single bulk modulus, but with a mass density ρ that is anisotropic; this is what we call the IC. The IC model is mathematically consistent but physically impossible, because it requires a cloak of infinite total mass. There appears to be no way to avoid this if one restricts the cloak material properties to the IC model.

If one is willing to use an imperfect cloak with finite mass, and is concerned with fixed-frequency waves, then the scattering examples show that significant cloaking can be obtained by shrinking the effective visible radius to be sub-wavelength. The two- and three-dimensional responses for imperfect cloaking are quite distinct, with far better results found in three dimensions.

A cloak of finite mass is achievable by allowing S to be spatially varying and divergence-free. The general material associated with equation (6.1), called PM-IC, has both anisotropic inertia and anisotropic elastic properties. The elastic stiffness tensor has the form of a PM, characterized by the symmetric tensor S and a single modulus K. Under certain circumstances, characterized in lemmas 4.3 and 4.4, the density becomes isotropic and the material is pure pentamode. More importantly, the total mass can be made finite. The finite mass problem arises from how we interpret the cloak material in the neighbourhood of its inner surface. It is therefore not necessary to abandon the pure IC model entirely, but the alternative PM-IC is required at the inner surface. From the examples considered here, it appears that one can always use a pure PM model near the inner cloak surface, and thereby achieve finite mass. One method is to force the deformation near the inner surface to be a pure stretch; lemma 4.4 then implies that the density is locally isotropic. The total mass remains finite as long as ρ is locally integrable, which is easily achieved.

The theory and simulations of PM-ICs and PMs presented here illustrate the wealth of possible material properties that are opened up through the general PM-IC model of acoustic cloaking. The physical implementation is in principle feasible: for instance, anisotropic inertia can be achieved by microlayers of inviscid acoustic fluid (Schoenberg & Sen 1983), while the microstructure required for PMs has been described (Milton & Cherkaev 1995). Fabrication of practical PM-IC materials remains a challenging but worthwhile goal.

## Acknowledgements

Constructive suggestions from the anonymous reviewers are appreciated.
http://atozmath.com/Conversion.aspx?SM=Volume&ST=Cylinder
Volume >> Cylinder

Formulas for a cylinder of radius r and height h:

Curved Surface Area (CSA) = 2 pi r h
Total Surface Area (TSA) = 2 pi r (r + h)
Volume (V) = pi r^2 h

Example: for a cylinder with radius r = 3 and height h = 10:

Volume = pi r^2 h = pi * 3^2 * 10 = 282.7433
Total Surface Area = 2 pi r (r + h) = 2 * pi * 3 * (3 + 10) = 245.0442
Curved Surface Area = 2 pi r h = 2 * pi * 3 * 10 = 188.4956
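A direct transcription of these formulas into code (a minimal sketch; the function name is mine):

```python
import math

def cylinder(radius, height):
    """Curved surface area, total surface area and volume of a cylinder."""
    csa = 2 * math.pi * radius * height             # 2*pi*r*h
    tsa = 2 * math.pi * radius * (radius + height)  # 2*pi*r*(r+h)
    vol = math.pi * radius ** 2 * height            # pi*r^2*h
    return csa, tsa, vol

csa, tsa, vol = cylinder(3, 10)
print(f"CSA = {csa:.4f}, TSA = {tsa:.4f}, Volume = {vol:.4f}")
# CSA = 188.4956, TSA = 245.0442, Volume = 282.7433
```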
https://www.albert.io/learn/abstract-algebra/question/right-cosets-of-dihedral-group
Let $G=\langle r,s: r^5=s^2=1, srs=r^{-1}\rangle$ be the dihedral group of order 10. Determine the right cosets of the subgroup $H=\{1, r^3s\}$ in $G$.

A. $H$, $\{r,r^4s\}$, $\{r^2, s\}$, $\{r^3,rs\}$, $\{r^4,r^2s\}$

B. $H$, $\{r,r^2s\}$, $\{r^2, rs\}$, $\{r^3,r^4s\}$, $\{r^4,s\}$

C. $H$, $\{r,r^2s\}$, $\{r^2, rs\}$, $\{r^3,s\}$, $\{r^4,r^4s\}$

D. $\{1, s\}$, $\{r,r^4\}$, $\{r^2,r^3s\}$, $\{r^3, rs\}$, $\{r^2s,r^4s\}$
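One way to settle such a question is to enumerate the right cosets mechanically. A short sketch (mine, not part of the original question) encodes the element $r^i s^j$ as the pair (i, j) and uses the relation $s r^k = r^{-k} s$ implied by $srs = r^{-1}$:

```python
# Element r^i s^j of D_5 encoded as (i, j), with i in 0..4 and j in {0, 1}.
def mult(x, y):
    """(r^i s^j)(r^k s^l), using s r^k = r^{-k} s."""
    i, j = x
    k, l = y
    if j == 0:
        return ((i + k) % 5, l)
    return ((i - k) % 5, (j + l) % 2)   # r^i s r^k s^l = r^(i-k) s^(j+l)

def name(x):
    i, j = x
    return ("r^%d" % i if i else "1") + ("s" if j else "")

G = [(i, j) for j in (0, 1) for i in range(5)]
H = [(0, 0), (3, 1)]                     # H = {1, r^3 s}

seen = set()
for g in G:
    coset = frozenset(mult(h, g) for h in H)   # right coset Hg
    if coset not in seen:
        seen.add(coset)
        print("{ " + ", ".join(name(x) for x in sorted(coset)) + " }")
```

Comparing the printed cosets against the four options identifies the correct answer.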
https://www.gamedev.net/forums/topic/360006-rotating-objects-that-are-not-centered-around-the-origin/
rotating objects that are not centered around the origin

I'm trying to rotate my objects by mouse, but they are not centered at the origin, and no luck so far: it just seems as if my objects are still rotated around the origin (0,0,0). This is my viewing code:

```java
gl.glLoadIdentity();
// z is up vector
glu.gluLookAt(0, 0, 0,  6538.139, -8353.096, 780.3191,  0, 0, 1);
// translate back to origin
gl.glTranslated(-6538.139, 8353.096, -780.3191);
// rotate object by mouse
gl.glRotatef(xRot, 1, 0, 0);
gl.glRotatef(yRot, 0, 0, 1);
// translate back to original location
gl.glTranslated(6538.139, -8353.096, 780.3191);
// draw points
render.renderPoints();
```

Anyone care to shed some light? ;) Cheers

---

Don't know exactly what you want to see, but it seems to me that the problem is the last "translation" before the final "lookat". I assume you want to see your object rotating around the x and z axes, so it describes some kind of "circle" around the world origin (0,0,0)... don't you? As the GL uses a post-multiplication matrix scheme, you must issue the operations in the reverse order, so what I would do is:

1) use the "look at" to make the camera point from the origin to the desired point
2) perform the rotation; this will always make it rotate around the origin (0,0,0), but as you have performed a previous translation, the (0,0,0) won't be the local object space but the (0,0,0) in world space
3) perform the translation (this will move the object to some point)
4) render

As you may see, the order of the operations is "not logical", as it's somehow reversed. What you would normally think, to make an object rotate around the world origin, would be "move to the point from which the rotation is going to be done", then "rotate it", then make the camera look somewhere, and finally render. But the actual ordering in the code should be just the opposite (except for the render operation, of course :)) due to the way the matrix operations are performed.

Hope this helps

---

Just forget my other post; now I think I understand your problem: "your object always rotates around the origin" :)

OK, the problem is basically the same. You need to think in "reverse order", so if you want to "move the object, rotate it, then move it again to the new place you want the object to be rotating around, and finally set the camera to look at it from the origin", just do the opposite:

1) Perform the "look at"
2) Perform a translation to the point in space you want the object to be rotating around
3) Perform the rotation
4) Perform the translation that will give you the rotation "radius"

So, for example, if you want your object rotating around (100,100,100), describing circles of 50 units radius, just do:

1) Perform the "look at" to position the camera wherever you want
2) Perform a translation to (100,100,100)
3) Perform a rotation (rot,0,1,0) (this will make it rotate around the Y axis)
4) Perform a translation to (50,0,0)
5) Then render

It should work :)

---

Not really; what I basically want is for a model to rotate around its own origin. Sorry, it was pretty early when I first posted this. When I move my mouse in the horizontal direction, the model should rotate around its Z axis (Z is up here), and when moving the mouse up or down, the model should rotate around its X axis. Something like this (the pocketwatch demo): http://www.sulaco.co.za/opengl2.htm

---

And what are you getting instead? If you just perform:

1) "look at"
2) rotation
3) render

it should work fine.
You should be able to see the object centered in the world origin (0,0,0) and rotating around its own center. Maybe the problem is you are not setting the camera right. Just try to set the camera so it points to (0,0,0) from a quite distant point (0,0,500). The camera will be positioned just in front of your object, at 500 units of distance, aligned with the world Y axis (that is, looking straight forward).

If you want your object rotating around its center, but positioned anywhere in the world, just do:

1) "look at"
2) translation to the point you want the object to be
3) rotation
4) render

No matter where you put the camera, your object will be at the position stated in step 2, rotating around its center.

---

derodo has basically spelled it out for you, but to give you an additional example, here's my testing code to rotate the earth around its origin:

```java
camera.view();

// the earth
GL11.glPushMatrix();
bindTexture(texture3);
GL11.glTranslatef(0f, 6280000.0f, 0f); // some point in space to position the
                                       // earth, to illustrate rotation around
                                       // a point that is not (0,0,0)
GL11.glRotatef(23.5f, 1.0f, 0.0f, 0.0f); // tilt the earth
GL11.glRotatef(rot*5, 0.0f, -1.0f, 0.0f);
createSphere(12800000, 50);
GL11.glPopMatrix();
```

So after you position the camera or whatever, do a translation to where in space the origin of the rotating object should be, then do the rotation, then draw the object.

---

Just one thing more: I'm not sure, but I think GL uses a default left-handed coordinate system. So if you do nothing to change it (that is, setting your custom transformation), the standard "up" basis vector should be (0,1,0). The basis (1,0,0) would point "right", and (0,0,1) would point forward, away. That's why I wrote those numbers in the post before, assuming (0,0,0) to be the world origin and (0,0,500) a position just 500 units away (not 500 units up).

If you don't know what your reference system is, then you may think you're doing things wrong while they're just right. I mean, if you call gluLookAt with an UP vector of (0,0,1) while using a left-handed coordinate system, the camera will be rotated -90 degrees around the Z axis, so if you just render an object against a black background, apparently the rotations about its X axis would look as if they were done about its Y axis... do you know what I mean? If you do nothing, the default reference system is something like:

```
Y (0,1,0)
^    Z (0,0,1)
|   /
|  /
| /
+-------> X (1,0,0)
```

At least, that's how it works with all the programs I have made. And it really helps to know what your base reference is. Hope this helps a little bit.

---

--removed as I'm no longer sure myself-- But I think OpenGL was right-handed..

Edit: OpenGL is right-handed by default after all, and the "forward" Z axis is the negative direction, so the above diagram would be incorrect.

---

You're right, gorgorath. I was just looking at the wrong hand when I wrote the post :P The only thing I'm not sure about now is the Z forward direction... I think it's along the positive axis... shouldn't a right-handed coordinate system look like this?

```
          Y(0,1,0)
          ^   Z(0,0,1)
          |  \
          |   \
          |    \
X(1,0,0) <------+
```

At least that's what I get if I render three lines from (0,0,0) to (100,0,0), (0,100,0) and (0,0,100)...

---

If you meant me, I'm definitely not gorgorath :P Math is sometimes confusing to me (I shouldn't admit that when I try to do 3d graphics, I guess).
I used this site for reference to figure this out: http://mathworld.wolfram.com/Right-HandRule.html

Also I looked at the OpenGL FAQ at http://www.opengl.org/resources/faq/technical/transformations.htm

Quote:
9.150 Can I make OpenGL use a left-handed coordinate space?
OpenGL doesn't have a mode switch to change from right- to left-handed coordinates. However, you can easily obtain a left-handed coordinate system by multiplying a negative Z scale onto the ModelView matrix. For example:

```c
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
glScalef (1., 1., -1.);
/* multiply view transforms as usual... */
/* multiply model transforms as usual... */
```

Also on the same page:

Quote:
Eye Coordinates result from transforming Object Coordinates by the ModelView matrix. The ModelView matrix contains both modelling and viewing transformations that place the viewer at the origin with the view direction aligned with the negative Z axis.

So what does all this mean in our case? I don't know. :-) The matrix stack in OpenGL still confuses me. But a coordinate system that has "forward" Z negative seems to be right-handed.

Edit: I'll steal your diagram to illustrate what I think the whole system should look like by default in OpenGL:

```
Y(0,1,0)
^
|
|
|
+-------> X(1,0,0)
 /
/
Z(0,0,1)
```

Which is the same as that last diagram of yours, albeit rotated to show that the abscissa really does point to "the right".

---

Mmm... I'm kinda lost now :) But I guess everything is subjective... What I do is consider the base reference system as the one formed by (1,0,0), (0,1,0), (0,0,1), so it really doesn't matter whether the actual "z away" axis is positive or negative :P, as everything depends on where you make the camera look :) And as I set my default camera to stay at (0,0,0) and look to (0,0,1), then what I "see" is that increasing z values make objects go "far away" :)

But yes. You ARE right. If you don't apply any camera transformations, the objects going "away" are those going in the negative z direction. Quite a nice thought, lightbringer (that is you, isn't it :)... Now I know that all those years I've been thinking the wrong way. It's never too late to re-learn things... Thanks for the reminder :)

[Edited by - derodo on November 24, 2005 1:00:19 PM]
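The ordering rule hashed out in this thread is just matrix composition. With column vectors, issuing glTranslated(+p), glRotatef, glTranslated(-p) builds M = T(p)·R·T(-p), which, applied right-to-left to each vertex, rotates geometry about the point p; the snippet in the opening post issues the two translations the other way around, so the rotation is not about the object's centre. A small numpy sketch of the correct composition (my own illustration, not from the thread):

```python
import numpy as np

def translate(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = (tx, ty, tz)
    return M

def rotate_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

p = np.array([6538.139, -8353.096, 780.3191])  # object centre from the post
v = np.append(p + [10.0, 0.0, 0.0], 1.0)       # a vertex 10 units from centre

# Same order as the GL calls should be: translate(+p), rotate, translate(-p).
# Right-to-left: move the centre to the origin, rotate, move back.
M = translate(*p) @ rotate_z(90) @ translate(*(-p))
print((M @ v)[:3])   # stays 10 units from p: rotation about the object centre
```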
https://zbmath.org/authors/?q=ai%3Apetrusel.adrian
Documents Indexed: 225 publications since 1987, including 6 books; 5 contributions as editor. Reviewing activity: 184 reviews. Co-authors: 86 co-authors with 185 joint publications; 2,546 co-co-authors.

### Co-Authors

45 single-authored; 46 Yao, Jen-Chih; 37 Petruşel, Gabriela; 37 Rus, Ioan A.; 31 Ceng, Lu-Chuan; 16 Şerban, Marcel Adrian; 10 Wong, Mu Ming; 8 Moţ, Ghiocel; 5 O’Regan, Donal; 5 Qin, Xiaolong; 5 Samet, Bessem; 4 Bota, Monica-Felicia; 4 Guran, Liliana; 4 Karapınar, Erdal; 4 Yao, Yonghong; 3 Amini-Harandi, Alireza; 3 Ansari, Qamrul Hasan; 3 Benchohra, Mouffak; 3 Chifu, Cristian; 3 Espínola García, Rafael; 3 Filip, Alexandru-Darius; 3 Li, Jinlu; 3 Mleşniţe, Oana Maria; 3 Sahu, Daya Ram; 3 Shahzad, Naseer; 3 Sîntămărian, Alina; 3 Som, Sumit; 2 Abbas, Said; 2 Alghamdi, Maryam A.; 2 Berinde, Vasile; 2 Dey, Lakshmi Kanta; 2 Dhage, Bapurao C.; 2 Huang, Shuechin; 2 Kumam, Poom; 2 Lazăr, Tania Angelica; 2 Moroşanu, Gheorghe; 2 Petru, Tünde Petra; 2 Singh, Deepak Kumar; 2 Soós, Anna; 2 Su, Yongfu; 2 Urs, Cristina; 1 Abbas, Mujahid; 1 Albarakati, Wafaa A.; 1 Alecsa, Cristian Daniel; 1 Alizadeh, C. G.; 1 Balooee, Javad; 1 Bota, Marius; 1 Brzdęk, Janusz; 1 Bucur, Amelia; 1 Dey, Lakshim Kanta; 1 Dolhare, U. P.; 1 Du, Wei-Shih; 1 Duca, Dorel I.; 1 Fakhar, Majid; 1 Garai, Hiranmoy; 1 Hajisharifi, Hamid Reza; 1 Iqbal, Hira; 1 Joshi, Vishal; 1 Kirr, Eduard; 1 Köbis, Elisabeth; 1 Laha, Supriti; 1 Lee, Chinsan; 1 Lin, Lai-Jiu; 1 Lin, Yen-Cherng; 1 Llorens-Fuster, Enrique; 1 López Acedo, Genaro; 1 Luo, Yinglin; 1 Martin, Calin-Iulian; 1 Nguyen Van Dung; 1 Nicolae, Adriana; 1 Pant, Rajendra Prasad; 1 Petre, Ioan-Radu; 1 Postolache, Mihai; 1 Precup, Radu; 1 Prus, Stan; 1 Romaguera Bonilla, Salvador; 1 Sagar, Vidya; 1 Satco, Bianca-Renata; 1 Shukla, Rahul; 1 Sintunavarat, Wutiphol; 1 Szentesi, Silviu; 1 Wu, Soon-Yi; 1 Xiao, Yibin; 1 Xie, Linsen; 1 Xu, Hong-Kun; 1 Younis, Mudasir; 1 Yu, Su-Jane

### Serials

21 Fixed Point Theory; 20 Journal of Nonlinear and Convex Analysis; 14 Studia Universitatis Babeș-Bolyai. Mathematica; 13 Taiwanese Journal of Mathematics; 8 Fixed Point Theory and Applications; 8 Journal of Fixed Point Theory and Applications; 7 Carpathian Journal of Mathematics; 6 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 5 Optimization; 5 Miskolc Mathematical Notes; 4 Journal of Optimization Theory and Applications; 4 PU.M.A. Pure Mathematics and Applications; 4 Annals of the Academy of Romanian Scientists. Mathematics and its Applications; 3 Journal of Mathematical Analysis and Applications; 3 Chaos, Solitons and Fractals; 3 Revue d’Analyse Numérique et de Théorie de l’Approximation; 3 Abstract and Applied Analysis; 3 “Babeș-Bolyai” University. Faculty of Mathematics and Computer Science. Research Seminars. Preprint; 3 Seminar on Fixed Point Theory Cluj-Napoca; 3 Preprint. “Babeș-Bolyai” University. Faculty of Mathematics and Physics. Research Seminars; 3 Journal of Nonlinear and Variational Analysis; 2 Mathematica; 2 Filomat; 2 Analele Științifice ale Universității “Ovidius” Constanța. Seria: Matematică; 2 Electronic Journal of Qualitative Theory of Differential Equations; 2 Mathematica Moravica; 2 Discrete Dynamics in Nature and Society; 2 Scientiae Mathematicae Japonicae; 2 Central European Journal of Mathematics; 2 Applicable Analysis and Discrete Mathematics; 2 Journal of Nonlinear Science and Applications; 2 Annals of the Tiberiu Popoviciu Seminar of Functional Equations, Approximation and Convexity; 2 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas.
RACSAM; 2 Journal of Function Spaces; 1 Applicable Analysis; 1 Applied Mathematics and Computation; 1 Commentationes Mathematicae Universitatis Carolinae; 1 Demonstratio Mathematica; 1 Numerical Functional Analysis and Optimization; 1 Proceedings of the American Mathematical Society; 1 Publicationes Mathematicae Debrecen; 1 Rendiconti del Circolo Matemàtico di Palermo. Serie II; 1 Annales Societatis Mathematicae Polonae. Seria I. Commentationes Mathematicae; 1 Zeitschrift für Analysis und ihre Anwendungen; 1 Bulletin of the Korean Mathematical Society; 1 Analele Științifice ale Universității Al. I. Cuza din Iași. Serie Nouă. Matematică; 1 Topological Methods in Nonlinear Analysis; 1 Georgian Mathematical Journal; 1 Discussiones Mathematicae. Differential Inclusions; 1 Journal of Inequalities and Applications; 1 Mathematical Inequalities & Applications; 1 Fractional Calculus & Applied Analysis; 1 Acta Mathematica Sinica. English Series; 1 Nonlinear Analysis Forum; 1 Discussiones Mathematicae. Differential Inclusions, Control and Optimization; 1 Nonlinear Analysis. Modelling and Control; 1 Nonlinear Functional Analysis and Applications; 1 Discrete and Continuous Dynamical Systems. Series B; 1 Buletinul Științific al Universității din Baia Mare. Seria B. Fascicola Matematică - Informatică; 1 International Journal of Pure and Applied Mathematics; 1 Cubo; 1 Preprint. “Babeș-Bolyai” University. Faculty of Mathematics. Research Seminars; 1 Journal of Mathematical Inequalities; 1 Analele Universității de Vest din Timișoara. Seria Matematică-Informatică; 1 Acta Universitatis Sapientiae. Mathematica; 1 Set-Valued and Variational Analysis; 1 Linear and Nonlinear Analysis

### Fields

156 Operator theory (47-XX); 137 General topology (54-XX); 26 Calculus of variations and optimal control; optimization (49-XX); 19 Ordinary differential equations (34-XX); 15 Integral equations (45-XX); 14 Measure and integration (28-XX); 13 Numerical analysis (65-XX); 8 Dynamical systems and ergodic theory (37-XX); 7 Operations research, mathematical programming (90-XX); 5 General and overarching topics; collections (00-XX); 5 Real functions (26-XX); 5 Game theory, economics, finance, and other social and behavioral sciences (91-XX); 4 Functional analysis (46-XX); 3 History and biography (01-XX); 3 Partial differential equations (35-XX); 3 Convex and discrete geometry (52-XX); 2 Order, lattices, ordered algebraic structures (06-XX); 2 Difference and functional equations (39-XX); 1 Combinatorics (05-XX); 1 Linear and multilinear algebra; matrix theory (15-XX); 1 Approximations and expansions (41-XX); 1 Global analysis, analysis on manifolds (58-XX); 1 Probability theory and stochastic processes (60-XX); 1 Mechanics of particles and systems (70-XX); 1 Mechanics of deformable solids (74-XX)

### Citations contained in zbMATH Open

153 publications have been cited 1,618 times in 1,089 documents. Cited by year: Fixed point theorems for generalized contractions in ordered metric spaces. Zbl 1142.47033 2008 Multivalued fractals in $b$-metric spaces. Zbl 1235.54011 Boriceanu, Monica; Bota, Marius; Petruşel, Adrian 2010 Fixed point theorems in ordered $L$-spaces. Zbl 1086.47026 2006 Fixed point theory. Zbl 1171.54034 Rus, Ioan A.; Petruşel, Adrian; Petruşel, Gabriela 2008 Data dependence of the fixed point set of some multivalued weakly Picard operators. Zbl 1055.47047 Rus, Ioan A.; Petruşel, Adrian; Sîntămărian, Alina 2003 Fixed point theory for a new type of contractive multivalued operators.
Zbl 1213.54068 2009 Fixed point theorems for singlevalued and multivalued generalized contractions in metric spaces endowed with a graph. Zbl 1227.54053 2011 Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Zbl 1406.49010 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih; Yao, Yonghong 2018 Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Zbl 1188.90256 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2009 Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Zbl 1430.49004 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih; Yao, Yonghong 2019 Ulam-Hyers stability for operatorial equations and inclusions via nonself operators. Zbl 1246.54049 Petru, T. P.; Petruşel, A.; Yao, J.-C. 2011 Strong convergence of iterative methods by strictly pseudocontractive mappings in Banach spaces. Zbl 1281.47054 2011 Viscosity approximation to common fixed points of families of nonexpansive mappings with generalized contractions mappings. Zbl 1142.47329 Petruşel, A.; Yao, J.-C. 2008 Well-posedness in the generalized sense of the fixed point problems for multivalued operators. Zbl 1149.54022 Petruşel, Adrian; Rus, Ioan A.; Yao, Jen-Chih 2007 Ulam-Hyers stability for operatorial equations. Zbl 1265.54158 Bota-Boriceanu, M. F.; Petruşel, A. 2011 Strong convergence of modified implicit iterative algorithms with perturbed mappings for continuous pseudocontractive mappings. Zbl 1168.65350 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2009 CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. Zbl 06876349 Qin, Xiaolong; Petruşel, Adrian; Yao, Jen-Chih 2018 Iterated function systems and well-posedness. Zbl 1198.52014 Llorens-Fuster, Enrique; Petruşel, Adrian; Yao, Jen-Chih 2009 Multivalued fractals and generalized multivalued contractions. Zbl 1131.28005 2008 Fixed point theory 1950–2000. Romanian contributions. Zbl 1005.54037 Rus, Ioan A.; Petruşel, Adrian; Petruşel, Gabriela 2002 Common coupled fixed point theorems for $$w^*$$-compatible mappings without mixed monotone property. Zbl 1260.54066 Sintunavarat, Wutiphol; Petruşel, Adrian; Kumam, Poom 2012 Fixed points for non-self operators and domain invariance theorems. Zbl 1183.47052 2009 Coupled fixed point theorems for symmetric contractions in $$b$$-metric spaces with applications to operator equation systems. Zbl 1489.54199 Petruşel, A.; Petruşel, G.; Samet, B.; Yao, J.-C. 2016 A fixed point theorem and the Ulam stability in generalized dq-metric spaces. Zbl 1402.54037 Brzdęk, Janusz; Karapınar, Erdal; Petruşel, Adrian 2018 Iterative approximation of fixed points for asymptotically strict pseudocontractive type mappings in the intermediate sense. Zbl 1437.47046 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2011 Multivalued weakly Picard operators and applications. Zbl 1066.47058 2004 A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Zbl 1477.47060 Ceng, L. C.; Petrusel, A.; Qin, X.; Yao, J. C. 2020 On Frigon-Granas-type multifunctions. Zbl 1043.47036 2002 Krasnoselskii’s theorem in generalized Banach spaces and application. Zbl 1340.47110 Petre, I. R.; Petrusel, A. 2012 Weakly Picard operators: equivalent definitions, applications and open problems. 
Zbl 1111.47048 2006 Fixed point theorems on spaces endowed with vector-valued metrics. Zbl 1197.54061 2010 Well-posedness of the fixed point problem for multivalued operators. Zbl 1169.47037 2007 Multivalued operators and continuous selections. The fixed points set. Zbl 0937.47052 1998 Operatorial inclusions. Zbl 1057.47004 2002 Ulam stability for Hilfer type fractional differential inclusions via the weakly Picard operators theory. Zbl 1364.34008 Abbas, Saïd; Benchohra, Mouffak; Petruşel, Adrian 2017 Applications of graph Kannan mappings to the damped spring-mass system and deformation of an elastic beam. Zbl 1453.05127 Younis, Mudasir; Singh, Deepak; Petruşel, Adrian 2019 Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. Zbl 1272.49061 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2013 Relaxed extragradient-like method for general system of generalized mixed equilibria and fixed point problem. Zbl 1243.49037 2012 Ulam stability for partial fractional differential inclusions via Picard operators theory. Zbl 1324.34029 Abbas, S.; Benchohra, M.; Petrusel, A. 2014 Krasnoselski-Mann iterations for hierarchical fixed point problems for a finite family of nonself mappings in Banach spaces. Zbl 1210.47094 Ceng, L. C.; Petruşel, A. 2010 Multivariate fixed point theorems for contractions and nonexpansive mappings with applications. Zbl 1347.54110 Su, Yongfu; Petruşel, Adrian; Yao, Jen-Chih 2016 A study of a general system of operator equations in $$b$$-metric spaces via the vector approach in fixed point theory. Zbl 1489.54195 2017 Fixed points, fixed sets and iterated multifunction systems for nonself multivalued operators. Zbl 1328.54047 2015 Fixed points for multivalued operators on a set endowed with vector-valued metrics and applications. Zbl 1194.54056 Bucur, Amelia; Guran, Liliana; Petruşel, Adrian 2009 A fixed point theorem for cyclic generalized contractions in metric spaces. Zbl 1274.54108 2012 Fixed point theorems for generalized contractions with applications to coupled fixed point theory. Zbl 1489.54197 Petruşel, Adrian; Petruşel, Gabriela; Xiao, Yi-Bin; Yao, Jen-Chih 2018 Composite viscosity approximation methods for equilibrium problem, variational inequality and common fixed points. Zbl 1287.49009 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2014 Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Zbl 1486.47105 Ceng, L. C.; Petruşel, A.; Qin, X.; Yao, J. C. 2021 Existence and data dependence of fixed points for multivalued operators on gauge spaces. Zbl 1070.47046 2005 A study of the coupled fixed point problem for operators satisfying a max-symmetric condition in $$b$$-metric spaces with applications to a boundary value problem. Zbl 1389.54104 Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem 2016 The retraction-displacement condition in the theory of fixed point equation with a convergent iterative algorithm. Zbl 1458.54030 Berinde, V.; Petruşel, A.; Rus, I. A.; Şerban, M. A. 2016 The theory of a metric fixed point theorem for multivalued operators. Zbl 1225.54026 2010 The role of equivalent metrics in fixed point theory. Zbl 1278.54044 2013 Implicit iteration scheme with perturbed mapping for common fixed points of a finite family of Lipschitz pseudocontractive mappings. Zbl 1161.47048 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2007 Fixed point theory for multivalued operators on a set with two metrics. 
Zbl 1133.47036 2007 Single-valued and multi-valued Meir–Keeler type operators. Zbl 1074.47511 2001 Data dependence of the fixed points set of multivalued weakly Picard operators. Zbl 1027.47053 Rus, Ioan A.; Petruşel, Adrian; Sîntămărian, Alina 2001 Fixed points and homotopy results for Ćirić-type multivalued operators on a set with two metrics. Zbl 1153.47047 Lazăr, Tania; O’Regan, Donal; Petruşel, Adrian 2008 Single-valued and multi-valued Caristi type operators. Zbl 1003.47041 2002 Generalized multivalued contractions. Zbl 1042.47520 2001 An extragradient iterative scheme by viscosity approximation methods for fixed point problems and variational inequality problems. Zbl 1195.49017 2009 A study of a system of operator inclusions via a fixed point approach and applications to functional-differential inclusions. Zbl 1399.47159 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2016 Fixed points for multivalued Suzuki type $$(\theta, \mathscr{R})$$-contraction mapping with applications. Zbl 07067883 Abbas, Mujahid; Iqbal, Hira; Petrusel, Adrian 2019 On iterated function systems consisting of Kannan maps, Reich maps, Chatterjea type maps, and related results. Zbl 1377.28010 2017 Two extragradient approximation methods for variational inequalities and fixed point problems of strict pseudo-contractions. Zbl 1170.49006 Ceng, L. C.; Petruşel, A.; Lee, C.; Wong, M. M. 2009 Multivalued Picard and weakly Picard operators. Zbl 1091.47047 2004 A class of abstract Volterra equations, via weakly Picard operators technique. Zbl 1197.47080 Şerban, M. A.; Rus, I. A.; Petruşel, A. 2010 Dynamics on $$(P_{cp}(X), H_d)$$ generated by a finite family of multi-valued operators on $$(X,d)$$. Zbl 1011.47043 2001 Fixed points and selections for multi-valued operators. Zbl 1048.47039 2001 Fixed points for operators on generalized metric spaces. Zbl 1160.54030 2008 Coupled fixed point theorems for symmetric multi-valued contractions in b-metric space with applications to systems of integral inclusions. Zbl 1470.54101 Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem; Yao, Jen-Chih 2016 Coupled fixed point theorems for single-valued operators in $$b$$-metric spaces. Zbl 1347.54069 Bota, Monica-Felicia; Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem 2015 Integral inclusions. Fixed point approaches. Zbl 0991.47041 2000 Fixed point theory in terms of a metric and of an order relation. Zbl 07262291 2019 Multi-valued graph contraction principle with applications. Zbl 07271707 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2020 Fixed points vs. coupled fixed points. Zbl 1398.54083 2018 Self-similar sets and fractals generated by Ćirić type operators. Zbl 1437.54064 2015 A fixed point theorem by altering distance technique in complete metric spaces. Zbl 1299.54079 Amini-Harandi, A.; Petruşel, A. 2013 An abstract point of view on iterative approximation schemes of fixed points for multivalued operators. Zbl 1432.54075 2013 Ćirić type fixed point theorems. Zbl 1389.47140 2014 Pseudomonotone variational inequalities and fixed points. Zbl 1489.47088 Ceng, L. C.; Petrușel, A.; Qin, X.; Yao, J. C. 2021 Weak convergence theorem by a modified extragradient method for nonexpansive mappings and mononote mappings. Zbl 1223.47072 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2008 An improved algorithm based on Korpelevich’s method for variational inequalities in Banach spaces. Zbl 1445.47045 Yao, Yonghong; Petruşel, Adrian; Qin, Xiaolong 2018 On hybrid proximal-type algorithms in Banach spaces. Zbl 1215.47060 Ceng, L. 
C.; Petruşel, A.; Wu, S. Y. 2008 Weak convergence theorem by a modified extragradient method for nonexpansive mappings and monotone mappings. Zbl 1169.49005 Ceng, L. C.; Huang, S.; Petruşel, A. 2009 Existence and Ulam-Hyers stability results for multivalued coincidence problems. Zbl 1289.54137 2012 Hybrid viscosity iterative approximation of zeros of $$m$$-accretive operators in Banach spaces. Zbl 06074771 Ceng, L. C.; Petruşel, A.; Wong, M. M. 2011 Multivalued analysis and mathematical economics. Zbl 1075.26001 2004 Approximation of fixed common points and variational solutions for one-parameter family of Lipschitz pseudocontractions. Zbl 1203.49013 Ceng, Lu-Chuan; Petruşel, Adrian; Szentesi, Silviu; Yao, Jen-Chih 2010 Viscosity approximations by generalized contractions for resolvents of accretive operators in Banach spaces. Zbl 1223.47090 2009 Graphic contraction principle and applications. Zbl 07216131 Petruşel, A.; Rus, I. A. 2019 New fixed point theorems on $$b$$-metric spaces with applications to coupled fixed point theory. Zbl 07240948 Bota, Monica-Felicia; Guran, Liliana; Petruşel, Adrian 2020 On some fixed point theorems for multi-valued operators by altering distance technique. Zbl 1437.54037 2017 Nonlinear dynamics, fixed points and coupled fixed points in generalized gauge spaces with applications to a system of integral equations. Zbl 1418.54026 2015 Order-clustered fixed point theorems and their applications to Pareto equilibrium problems. Zbl 1484.54056 Xie, Linsen; Li, Jinlu; Petruşel, Adrian; Yao, Jen-Chih 2017 Some remarks on regularized multivalued nonconvex equilibrium problems. Zbl 1399.47155 2017 Vector-valued metrics in fixed point theory. Zbl 1330.54066 Petruşel, Adrian; Urs, Cristina; Mleşniţe, Oana 2015 Basic problems of the metric fixed point theory and the relevance of a metric fixed point theorem for a multivalued operator. Zbl 1293.47053 Petruşel, A.; Rus, I. A.; Şerban, M. A. 2014 Vector-valued metrics, fixed points and coupled fixed points for nonlinear operators. Zbl 1314.54040 Petruşel, Adrian; Petruşel, Gabriela; Urs, Cristina 2013 Selection theorems for multivalued generalized contractions. Zbl 1089.54011 2005 Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Zbl 1486.47105 Ceng, L. C.; Petruşel, A.; Qin, X.; Yao, J. C. 2021 Pseudomonotone variational inequalities and fixed points. Zbl 1489.47088 Ceng, L. C.; Petrușel, A.; Qin, X.; Yao, J. C. 2021 Graph contractions in vector-valued metric spaces and applications. Zbl 07339864 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2021 Some variants of fibre contraction principle and applications: from existence to the convergence of successive approximations. Zbl 07396559 2021 A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Zbl 1477.47060 Ceng, L. C.; Petrusel, A.; Qin, X.; Yao, J. C. 2020 Multi-valued graph contraction principle with applications. Zbl 07271707 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2020 New fixed point theorems on $$b$$-metric spaces with applications to coupled fixed point theory. Zbl 07240948 Bota, Monica-Felicia; Guran, Liliana; Petruşel, Adrian 2020 On admissible hybrid Geraghty contractions. Zbl 1478.54077 Karapinar, Erdal; Petruşel, Adrian; Petruşel, Gabriela 2020 Perov type theorems for orbital contractions. 
Zbl 1460.54055 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2020 Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Zbl 1430.49004 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih; Yao, Yonghong 2019 Applications of graph Kannan mappings to the damped spring-mass system and deformation of an elastic beam. Zbl 1453.05127 Younis, Mudasir; Singh, Deepak; Petruşel, Adrian 2019 Fixed points for multivalued Suzuki type $$(\theta, \mathscr{R})$$-contraction mapping with applications. Zbl 07067883 Abbas, Mujahid; Iqbal, Hira; Petrusel, Adrian 2019 Fixed point theory in terms of a metric and of an order relation. Zbl 07262291 2019 Graphic contraction principle and applications. Zbl 07216131 Petruşel, A.; Rus, I. A. 2019 Existence and stability results for a system of operator equations via fixed point theory for nonself orbital contractions. Zbl 1489.54198 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2019 Local fixed point results for graphic contractions. Zbl 1475.54032 2019 Coupled fixed point theorems in quasimetric spaces without mixed monotonicity. Zbl 1474.54217 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2019 A proximal point algorithm revisited and extended. Zbl 07101215 2019 Pseudo-contractivity and metric regularity in fixed point theory. Zbl 1476.54095 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2019 Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Zbl 1406.49010 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih; Yao, Yonghong 2018 CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. Zbl 06876349 Qin, Xiaolong; Petruşel, Adrian; Yao, Jen-Chih 2018 A fixed point theorem and the Ulam stability in generalized dq-metric spaces. Zbl 1402.54037 Brzdęk, Janusz; Karapınar, Erdal; Petruşel, Adrian 2018 Fixed point theorems for generalized contractions with applications to coupled fixed point theory. Zbl 1489.54197 Petruşel, Adrian; Petruşel, Gabriela; Xiao, Yi-Bin; Yao, Jen-Chih 2018 Fixed points vs. coupled fixed points. Zbl 1398.54083 2018 An improved algorithm based on Korpelevich’s method for variational inequalities in Banach spaces. Zbl 1445.47045 Yao, Yonghong; Petruşel, Adrian; Qin, Xiaolong 2018 On Reich’s strict fixed point theorem for multi-valued operators in complete metric spaces. Zbl 1489.54196 2018 Coupled fractals in complete metric spaces. Zbl 1420.54085 2018 Variational analysis concepts in the theory of multi-valued coincidence problems. Zbl 1451.54019 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2018 Ulam stability for Hilfer type fractional differential inclusions via the weakly Picard operators theory. Zbl 1364.34008 Abbas, Saïd; Benchohra, Mouffak; Petruşel, Adrian 2017 A study of a general system of operator equations in $$b$$-metric spaces via the vector approach in fixed point theory. Zbl 1489.54195 2017 On iterated function systems consisting of Kannan maps, Reich maps, Chatterjea type maps, and related results. Zbl 1377.28010 2017 On some fixed point theorems for multi-valued operators by altering distance technique. Zbl 1437.54037 2017 Order-clustered fixed point theorems and their applications to Pareto equilibrium problems. Zbl 1484.54056 Xie, Linsen; Li, Jinlu; Petruşel, Adrian; Yao, Jen-Chih 2017 Some remarks on regularized multivalued nonconvex equilibrium problems. 
Zbl 1399.47155 2017 Existence results for integral equations and boundary value problems via fixed point theorems for generalized $$F$$-contractions in $$b$$-metric-like spaces. Zbl 1467.54011 Joshi, Vishal; Singh, Deepak; Petruşel, Adrian 2017 Coupled fixed point theorems for symmetric contractions in $$b$$-metric spaces with applications to operator equation systems. Zbl 1489.54199 Petruşel, A.; Petruşel, G.; Samet, B.; Yao, J.-C. 2016 Multivariate fixed point theorems for contractions and nonexpansive mappings with applications. Zbl 1347.54110 Su, Yongfu; Petruşel, Adrian; Yao, Jen-Chih 2016 A study of the coupled fixed point problem for operators satisfying a max-symmetric condition in $$b$$-metric spaces with applications to a boundary value problem. Zbl 1389.54104 Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem 2016 The retraction-displacement condition in the theory of fixed point equation with a convergent iterative algorithm. Zbl 1458.54030 Berinde, V.; Petruşel, A.; Rus, I. A.; Şerban, M. A. 2016 A study of a system of operator inclusions via a fixed point approach and applications to functional-differential inclusions. Zbl 1399.47159 Petruşel, Adrian; Petruşel, Gabriela; Yao, Jen-Chih 2016 Coupled fixed point theorems for symmetric multi-valued contractions in b-metric space with applications to systems of integral inclusions. Zbl 1470.54101 Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem; Yao, Jen-Chih 2016 Scalar and vectorial approaches for multi-valued fixed point and multi-valued coupled fixed point problems in $$b$$-metric spaces. Zbl 1470.54102 Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem; Yao, Jen-Chih 2016 Nonexpansive operators as graphic contractions. Zbl 1362.47036 2016 Contributions to the fixed point theory of diagonal operators. Zbl 1367.47060 2016 Existence and Ulam stability results for Hadamard partial fractional integral inclusions via Picard operators. Zbl 1399.26009 Abbas, Saïd; Albarakati, Wafaa; Benchohra, Mouffak; Petruşel, Adrian 2016 Fixed points, fixed sets and iterated multifunction systems for nonself multivalued operators. Zbl 1328.54047 2015 Coupled fixed point theorems for single-valued operators in $$b$$-metric spaces. Zbl 1347.54069 Bota, Monica-Felicia; Petruşel, Adrian; Petruşel, Gabriela; Samet, Bessem 2015 Self-similar sets and fractals generated by Ćirić type operators. Zbl 1437.54064 2015 Nonlinear dynamics, fixed points and coupled fixed points in generalized gauge spaces with applications to a system of integral equations. Zbl 1418.54026 2015 Vector-valued metrics in fixed point theory. Zbl 1330.54066 Petruşel, Adrian; Urs, Cristina; Mleşniţe, Oana 2015 An endpoint theorem in generalized $$L$$-spaces with applications. Zbl 1311.54034 2015 Approximation methods for triple hierarchical variational inequalities. II. Zbl 1327.49019 Ceng, L.-C.; Ansari, Q. H.; Petruşel, A.; Yao, J.-C. 2015 Existence and uniqueness of fixed point in various abstract spaces and related applications. Zbl 1354.00075 2015 Approximation methods for triple hierarchical variational inequalities. I. Zbl 1311.49018 Ceng, L.-C.; Ansari, Q. H.; Petruşel, A.; Yao, J.-C. 2015 Fixed point theorems for operators in generalized Kasahara spaces. Zbl 1333.47042 2015 Semilinear evolution equations with distributed measures. Zbl 1338.37115 2015 Ulam stability for partial fractional differential inclusions via Picard operators theory. Zbl 1324.34029 Abbas, S.; Benchohra, M.; Petrusel, A. 
2014 Composite viscosity approximation methods for equilibrium problem, variational inequality and common fixed points. Zbl 1287.49009 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2014 Ćirić type fixed point theorems. Zbl 1389.47140 2014 Basic problems of the metric fixed point theory and the relevance of a metric fixed point theorem for a multivalued operator. Zbl 1293.47053 Petruşel, A.; Rus, I. A.; Şerban, M. A. 2014 Hybrid algorithms for solving variational inequalities, variational inclusions, mixed equilibria, and fixed point problems. Zbl 1472.47066 Ceng, Lu-Chuan; Petrusel, Adrian; Wong, Mu-Ming; Yao, Jen-Chih 2014 A class of functional-integral equations with applications to a bilocal problem. Zbl 1327.45005 2014 Ciric-type $$\delta$$-contractions in metric spaces endowed with a graph. Zbl 1310.54043 2014 Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. Zbl 1272.49061 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2013 The role of equivalent metrics in fixed point theory. Zbl 1278.54044 2013 A fixed point theorem by altering distance technique in complete metric spaces. Zbl 1299.54079 Amini-Harandi, A.; Petruşel, A. 2013 An abstract point of view on iterative approximation schemes of fixed points for multivalued operators. Zbl 1432.54075 2013 Vector-valued metrics, fixed points and coupled fixed points for nonlinear operators. Zbl 1314.54040 Petruşel, Adrian; Petruşel, Gabriela; Urs, Cristina 2013 Relaxed implicit extragradient-like methods for finding minimum-norm solutions of the split feasibility problem. Zbl 1292.47043 Ceng, Lu-Chuan; Wong, Mu-Ming; Petruşel, Adrian; Yao, Jen-Chih 2013 Correction to: “A fixed point theorem for cyclic generalized contractions in metric spaces”. Zbl 1283.54028 2013 Common coupled fixed point theorems for $$w^*$$-compatible mappings without mixed monotone property. Zbl 1260.54066 Sintunavarat, Wutiphol; Petruşel, Adrian; Kumam, Poom 2012 Krasnoselskii’s theorem in generalized Banach spaces and application. Zbl 1340.47110 Petre, I. R.; Petrusel, A. 2012 Relaxed extragradient-like method for general system of generalized mixed equilibria and fixed point problem. Zbl 1243.49037 2012 A fixed point theorem for cyclic generalized contractions in metric spaces. Zbl 1274.54108 2012 Existence and Ulam-Hyers stability results for multivalued coincidence problems. Zbl 1289.54137 2012 Multivalued Picard operators. Zbl 1243.54039 2012 Hybrid method for designing explicit hierarchical fixed point approach to monotone variational inequalities. Zbl 1262.49011 Ceng, Lu-Chuan; Lin, Yen-Cherng; Petruşel, Adrian 2012 Multi-step hybrid iterative method for triple hierarchical variational inequality problem with equilibrium problem constraint. Zbl 1257.49008 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2012 Fixed point theorems for singlevalued and multivalued generalized contractions in metric spaces endowed with a graph. Zbl 1227.54053 2011 Ulam-Hyers stability for operatorial equations and inclusions via nonself operators. Zbl 1246.54049 Petru, T. P.; Petruşel, A.; Yao, J.-C. 2011 Strong convergence of iterative methods by strictly pseudocontractive mappings in Banach spaces. Zbl 1281.47054 2011 Ulam-Hyers stability for operatorial equations. Zbl 1265.54158 Bota-Boriceanu, M. F.; Petruşel, A. 2011 Iterative approximation of fixed points for asymptotically strict pseudocontractive type mappings in the intermediate sense. 
Zbl 1437.47046 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2011 Hybrid viscosity iterative approximation of zeros of $$m$$-accretive operators in Banach spaces. Zbl 06074771 Ceng, L. C.; Petruşel, A.; Wong, M. M. 2011 Strong convergence of implicit viscosity approximation methods for pseudocontractive mappings in Banach spaces. Zbl 1223.49009 Ceng, Lu-Chuan; Petruşel, Adrian; Wong, Mu-Ming; Yu, Su-Jane 2011 Multivalued fractals in $$b$$-metric spaces. Zbl 1235.54011 Boriceanu, Monica; Bota, Marius; Petruşel, Adrian 2010 Fixed point theorems on spaces endowed with vector-valued metrics. Zbl 1197.54061 2010 Krasnoselski-Mann iterations for hierarchical fixed point problems for a finite family of nonself mappings in Banach spaces. Zbl 1210.47094 Ceng, L. C.; Petruşel, A. 2010 The theory of a metric fixed point theorem for multivalued operators. Zbl 1225.54026 2010 A class of abstract Volterra equations, via weakly Picard operators technique. Zbl 1197.47080 Şerban, M. A.; Rus, I. A.; Petruşel, A. 2010 Approximation of fixed common points and variational solutions for one-parameter family of Lipschitz pseudocontractions. Zbl 1203.49013 Ceng, Lu-Chuan; Petruşel, Adrian; Szentesi, Silviu; Yao, Jen-Chih 2010 Strong convergence theorem for a generalized equilibrium problem and a pseudocontractive mapping in a Hilbert space. Zbl 1220.49004 Ceng, Lu-Chuan; Petruşel, Adrian; Wong, Mu-Ming 2010 Fixed point theory for a new type of contractive multivalued operators. Zbl 1213.54068 2009 Iterative approaches to solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings. Zbl 1188.90256 Ceng, L. C.; Petruşel, A.; Yao, J. C. 2009 Strong convergence of modified implicit iterative algorithms with perturbed mappings for continuous pseudocontractive mappings. Zbl 1168.65350 Ceng, Lu-Chuan; Petruşel, Adrian; Yao, Jen-Chih 2009 Iterated function systems and well-posedness. Zbl 1198.52014 Llorens-Fuster, Enrique; Petruşel, Adrian; Yao, Jen-Chih 2009 Fixed points for non-self operators and domain invariance theorems. Zbl 1183.47052 2009 Fixed points for multivalued operators on a set endowed with vector-valued metrics and applications. Zbl 1194.54056 Bucur, Amelia; Guran, Liliana; Petruşel, Adrian 2009 An extragradient iterative scheme by viscosity approximation methods for fixed point problems and variational inequality problems. Zbl 1195.49017 2009 Two extragradient approximation methods for variational inequalities and fixed point problems of strict pseudo-contractions. Zbl 1170.49006 Ceng, L. C.; Petruşel, A.; Lee, C.; Wong, M. M. 2009 ...and 53 more Documents
https://mathematica.stackexchange.com/questions/213448/clearing-usage-message
# Clearing ::usage message

After having assigned the usage message to f using

ClearAll@f; f::usage=ToString@RandomInteger@{10^10,10^11};

the mouseover message is stuck at the first assignment while ?f shows the freshest usage message. How to clear and reassign the usage message so even mouseover is updated? Env: Mathematica 12.0 on Win10

• You have to use Remove to get rid of usage messages. – m_goldberg Jan 24 '20 at 23:43
• @m_goldberg works! I'll accept if you answer... why doesn't ClearAll work though? – lineage Jan 25 '20 at 3:17

## 1 Answer

It seems one can use the following to force the front-end to update the usage templates (starting in version 12.0):

FECacheTemplateAndUsage["f"]

Please note that the front-end will not update the templates even if you set a new usage message. Simply call the above again to force the update once again.

• I think the packet this calls has existed since 11 – b3m2a1 Jan 29 '20 at 23:16
• @b3m2a1 For me, it doesn't seem to work: I get "Could not process unknown packet "CacheTemplateAndUsagePacket"" if I try it in 11.3. Am I missing something? – Lukas Lang Jan 30 '20 at 8:40
• not really sure but I think it did work – b3m2a1 Jan 30 '20 at 8:42
http://j-mi.org/articles/view/196
## JMI2011A-9 Some relations between Semaev's summation polynomials and Stange's elliptic nets (pp.89-92) Author(s): Tsunekazu Saito, Shun'ichi Yokoyama, Tetsutaro Kobayashi and Go Yamamoto J. Math-for-Ind. 3A (2011) 89-92. Abstract There are two decision methods for the decomposition of multiple points on an elliptic curve, one based on Semaev's summation polynomials and the other based on Stange's elliptic nets. This paper presents some relations between these two methods. Using these relations, we show that an index calculus attack for the elliptic curve discrete logarithm problem (ECDLP) over extension fields via an elliptic net is equivalent to such an attack via Semaev's summation polynomials. Keyword(s).  Index calculus attack, Semaev's summation polynomials, elliptic nets
https://hsm.stackexchange.com/tags/physics/new
# Tag Info

### When was the geometric structure of a water molecule discovered?

Császár et al., J. Chem. Phys. 122, 214305 (2005) has a nice table of determinations of the bond angle of water per year (missing probably Linus Pauling predicting 90° from quantum mechanics in 1931). ...

### Why is thermodynamics called thermodynamics?

Thermodynamics is indeed derived from the Greek words Therme (heat) and Dynamis (power). However, Dynamis is not the same as the Physics definition of Power but is synonymous with "might" or ...

### Help in Understanding Emission theory of Empedocles

Much of Empedocles exists in fragments of what he originally wrote, so finding his literal writing on the subject may be difficult. However, we can infer what he said based on the commentary provided ...
http://1arcs.ir/1024850/html
# Article: Zagreb, multiplicative Zagreb Indices and Coindices of graphs

## Excerpt from the article "Zagreb, multiplicative Zagreb Indices and Coindices of graphs":

Publication year: 2017. Number of pages: 18.

Let G=(V,E) be a simple connected graph with vertex set V and edge set E. The first, second and third Zagreb indices of G are respectively defined by: $M_1(G)=\sum_{u\in V} d(u)^2$, $M_2(G)=\sum_{uv\in E} d(u)\cdot d(v)$ and $M_3(G)=\sum_{uv\in E} |d(u)-d(v)|$, where d(u) is the degree of vertex u in G and uv is an edge of G connecting the vertices u and v. Recently, the first and second multiplicative Zagreb indices of G have been defined by: $PM_1(G)=\prod_{u\in V} d(u)^2$ and $PM_2(G)=\prod_{u\in V} d(u)^{d(u)}$. The first and second Zagreb coindices of G are defined by: $\overline{M_1}(G)=\sum_{uv\notin E} (d(u)+d(v))$ and $\overline{M_2}(G)=\sum_{uv\notin E} d(u)\cdot d(v)$. The indices $\overline{PM_1}(G)=\prod_{uv\notin E} (d(u)+d(v))$ and $\overline{PM_2}(G)=\prod_{uv\notin E} d(u)\cdot d(v)$ are called the first and second multiplicative Zagreb coindices of G, respectively. In this article, we compute the first, second and third Zagreb indices and the first and second multiplicative Zagreb indices of some classes of dendrimers. The first and second Zagreb coindices and the first and second multiplicative Zagreb coindices of these graphs are also computed. Also, the multiplicative Zagreb indices are computed using links of graphs.
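To make the definitions above concrete, here is a small illustrative computation of $M_1$, $M_2$ and $M_3$ in Python with networkx (the choice of graph is arbitrary and not taken from the article):

```python
# Zagreb indices M1, M2, M3 of a small example graph.
import networkx as nx

G = nx.cycle_graph(5)  # C5: every vertex has degree 2

M1 = sum(d ** 2 for _, d in G.degree())                        # sum of squared degrees
M2 = sum(G.degree(u) * G.degree(v) for u, v in G.edges())      # degree products over edges
M3 = sum(abs(G.degree(u) - G.degree(v)) for u, v in G.edges()) # degree differences over edges

print(M1, M2, M3)  # 20 20 0 for C5
```

The coindices are computed the same way, but summing (or multiplying) over non-adjacent vertex pairs instead of edges.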
http://math.stackexchange.com/questions/465549/proving-the-completeness-theorem-of-metric-spaces/465602
# Proving the completeness theorem of metric spaces.

I have to prove that every metric space is isometric to a dense subset of a complete metric space.

My proof: Let $X$ be the metric space, and $\{p\}$ the set of limits of all the Cauchy sequences in $X$. Then $X\cup\{p\}$ is a complete metric space, and $X$ is isometric to $X$, which is a dense subset of $X\cup\{p\}$.

Is this proof correct? Because my book has a huge 2-page proof, which surely employs different arguments. A potential flaw in my argument might be the unsubstantiated statement that $X\cup\{p\}$ is a metric space if $X$ is a metric space.

- What does "the set of limits of all the Cauchy sequences in X" mean? E.g. what is an element of this set? – Pete L. Clark Aug 12 '13 at 6:15
- Let $\{x_i\}$ be a Cauchy sequence in $X$, converging to limit $l_i$, which may or may not be within $X$. Then $l_i\in\{p\}$. I'm assuming the existence of such a limit, regardless of whether it is inside $X$ or not. Is this an invalid assumption? For example, $3,3.14,3.141,\dots$ does not have a limit in $\Bbb{Q}$, but it does have a limit (in $\Bbb{R}$). – Ayush Khaitan Aug 12 '13 at 6:18
- @Ayush: In the example of $\mathbb{Q}$ you give, the place where your limit lives is a complete metric space into which your space is isometrically embedded. Are you assuming that an arbitrary metric space can be isometrically embedded in a complete space? If so, then your reasoning is fine but the result is almost trivial (take the closure!). If not, then aren't you assuming the very thing you are trying to prove? – Pete L. Clark Aug 12 '13 at 6:22
- Is the statement (albeit trivial) that any metric space can be embedded in a complete metric space incorrect? – Ayush Khaitan Aug 12 '13 at 6:25
- Note that Prahlad's answer contains the standard way to proceed. Roughly speaking you take the limit of the Cauchy sequence to be the Cauchy sequence itself and then correct for the fact that different Cauchy sequences may have the same limit (in any completion). Then you need to check a bunch of details. You haven't proved anything yet: this one important hole aside, none of your statements are justified! – Pete L. Clark Aug 12 '13 at 6:26
How do we know that there is not a Cauchy sequence of new points for which we haven't yet added a limit point? Because if there are such sequences, then we ought to iterate the procedure. For how long? Does it ever end? Now, if there are no such sequences, that is certainly part of what the proof needs to show. All that being said, you are on the right track. First we need to deal with what things we are to add. Of course, new points, and the new points can be anything we want, but let's try to choose something specific so we can keep track of them. A natural thing to do is to exploit the fact that if two Cauchy sequences are to have the same limit, then we better add the same point as limit of both of them. One way to take advantage of this is to introduce an equivalence relation on Cauchy sequences, saying that two such sequences are equivalent "if they are to have the same limit". Then as the new points we can just add the equivalence classes of this equivalence relation. How do we check that two distinct sequences have the same limit? Luckily, this is easy: Say the sequences are $x_1,x_2,\dots$ and $y_1,y_2,\dots$ Define a new sequence by $$z_1=x_1,z_2=y_1,z_3=x_2,z_4=y_2,z_5=x_3,\dots$$ Then the two sequences are equivalent iff the new sequence so described is Cauchy. It does not matter here whether there are repetitions in this sequence of $z_i$. (Naturally, there are things to verify here, mainly that this is indeed an equivalence relation.) A small problem at this point is that some sequences may be Cauchy and already have a limit in $X$. The easiest solution is to ignore them. It may not be the prettiest solution, because now your space consists of two rather different creatures: Elements of $X$, and equivalence classes of Cauchy sequences of elements of $X$, that do not converge to an element of $X$. But it is a fine solution (meaning: It works). The standard approach is to avoid this separation of creatures, and simply take as the new space the collection of all equivalence classes of Cauchy sequences. (If a sequence converges to $x\in X$, we identify its equivalence class with $x$, so rather than the isometry being inclusion, at the end we have something a tad more elaborate.) Now comes the second problem: How do we make this thing into a metric space? Luckily, there is an easy solution as well: If $x_n\to p$, then for any $k$, we have that $d(x_k,x_n)\to_{n\to\infty}d(x_k,p)$. So we can use this as the way to define the new distances: Given an equivalence class $p$, let $x\in X$. We define $d(x,p)$ as $\lim_{n\to\infty}d(x,x_n)$, where $x_1,x_2,\dots$ is some Cauchy sequence in the equivalence class $p$. OK. Maybe not so easy: We need to check that this definition gives us a positive real number (as opposed to $0$ [excluded since the $x_n$ do not converge in $X$, so $x$ better not be their limit], or $+\infty$, or to the case when the limit does not exist). We also need to check that this number is independent of the sequence $x_1,x_2,\dots$ we picked. A similar idea gives us how to define $d(p,q)$ when both $p,q$ are equivalence classes. Of course, one still needs to verify this indeed gives us a metric space. Finally, we need to check that this is complete, and $X$ is dense in it. But I'll stop here, as I'm pretty sure the construction in your book is following the same lines. 
(There are rather different presentations of the construction, that look superficially different and start from very different ideas, but the space obtained at the end is essentially unique, in the sense that any two constructions will be isometric via an isomorphism that identifies their copies of $X$. Naturally, this also takes an argument.) - Another possibility is to embed $X$ into a complete metric space. For example, Lemma: Any metric space is isometric to a subspace of a complete normed space. Proof. Let $(X,d)$ be a metric space and $F=C_b(X)$ be the set of continuous bounded functions $X \to \mathbb{R}$ endowed with the sup norm; then $F$ is a Banach space. Fix $x_0 \in X$. Then $\phi : x \mapsto d(x, \cdot)-d(x_0,\cdot)$ is an isometry from $X$ into $F$. $\square$ Notice that the image of $X$ by the isometry $X \hookrightarrow F$ is dense in the closure $\mathrm{cl}_F(X)$ of $X$ in $F$; moreover, $\mathrm{cl}_F(X)$ is complete. - I am not sure what you mean by the set $\{p\}$. If you look at the limits of all cauchy sequences, then some of these limits would lie outside $X$, right? The usual way to do this is to let $Y$ by the space whose elements are cauchy sequences in $X$. Define an equivalence relation on $Y$ by saying that $(x_n) \sim (y_n)$ iff $$\lim_{n\to\infty} d(x_n,y_n) = 0$$ Define a metric on the quotient $Y/\sim$ by $$\delta([x_n],[y_n]) = \lim_{n\to\infty} d(x_n,y_n)$$ Show that this function $\delta$ is well-defined, and that it is a metric on $Y/\sim$. Now $Y/\sim$ is a complete metric space, and $X$ sits isometrically inside $Y/\sim$, by $$x \mapsto [(x,x,x,\ldots)]$$ I assume that the book does something like this? - Yes the limit points can be outside $X$, no problem. Nowhere have I assumed that $\{p\}\subset X$. –  Ayush Khaitan Aug 12 '13 at 6:20 @Ayush: But what does it mean for a sequence in a metric space to converge to a point outside of the space? –  Pete L. Clark Aug 12 '13 at 6:33 Well, you can't just choose points outside of $X$. $X$ is your universe, and there is nothing outside it. –  Prahlad Vaidyanathan Aug 12 '13 at 6:33
https://www.physicsforums.com/threads/showing-that-half-sum-of-positive-roots-is-the-sum-of-fundamental-weights.563884/
# Showing that half-sum of positive roots is the sum of fundamental weights

1. Dec 30, 2011

### naele

1. The problem statement, all variables and given/known data

Let L be a simple compact Lie group, and $\Delta_+$ is the set of positive roots. I have previously shown that if $\alpha\in\Delta_+$ and $\alpha_i$ is a simple root, then $s_i\alpha\in \Delta_+$ where s_i is the Weyl reflection associated with $\alpha_i$. Now, let $\delta = \frac{1}{2}\sum_{\alpha\in\Delta_+}\alpha$. I want to show that
$$s_i\delta=\delta-\alpha_i$$

2. Relevant equations

3. The attempt at a solution

It's clear that
$$s_i\delta=\delta - \sum_{\alpha\neq \alpha_i} \frac{\alpha\cdot\alpha_i}{\alpha_i^2}\alpha_i - \alpha_i$$
But I have no idea how to show that $\alpha\cdot\alpha_i=0\quad \forall\alpha\neq\alpha_i$. I cannot make appeal to the fact that delta might be a sum of fundamental weights because that's what I need to show later on.

2. Dec 31, 2011

### fzero

To be precise, $s_i\alpha\in \Delta_+$ for $\alpha\neq \alpha_i$. That is, $s_i$ reflects $\alpha_i \rightarrow -\alpha_i$ but permutes the $\alpha\neq \alpha_i$ into one another. In light of the comments above, it's more straightforward to note that
$$s_i \delta = \frac{1}{2}\sum_{\alpha\in\Delta_+}s_i\alpha =\frac{1}{2} \left( \sum_{\alpha\neq \alpha_i} \alpha - \alpha_i \right).$$
Restoring $\delta$ in an obvious way gives the required result.
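For completeness, the step fzero leaves to the reader: since $\sum_{\alpha\neq\alpha_i}\alpha = \sum_{\alpha\in\Delta_+}\alpha - \alpha_i = 2\delta - \alpha_i$, substituting into his last expression gives
$$s_i\delta = \frac{1}{2}\left(2\delta - \alpha_i - \alpha_i\right) = \delta - \alpha_i.$$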
https://juliadiff.org/ChainRulesCore.jl/stable/debug_mode.html
# Debug Mode

ChainRulesCore supports a debug mode which you can use while writing new rules. It provides better error messages. If you are developing some new rules and you get a weird error message, it is worth enabling debug mode. There is some overhead to having it enabled, so it is disabled by default. To enable, redefine the ChainRulesCore.debug_mode function to return true.

ChainRulesCore.debug_mode() = true

## Features of Debug Mode:

• If you add a Composite to a primal value and it is unable to construct a new primal value, then a better error message will be displayed, detailing which overloads need to be written to fix this.
• During add!!, if an InplaceThunk is used and it runs the code that is supposed to run in place, but the returned result is not the input (with updated values), then an error is thrown, rather than silently using whatever values were returned.
https://www.enotes.com/homework-help/1-suppose-bncm-where-has-dimensions-lt-b-has-421142
# 1. Suppose A = B^n C^m, where A has dimensions LT, B has dimensions L^2 T^−1, and C has dimensions LT^2. Then the exponents n and m have the values?

`A = B^n C^m` where `A = LT`; `B = L^2 T^-1` and `C = LT^2`.

Substitute the given dimensions into the equation:

`[L][T] = ([L]^2 [T]^-1)^n ([L][T]^2)^m`
`[L][T] = [L]^(2n+m) [T]^(-n+2m)`

Equating the exponents of [L] and [T] gives two equations:

eq 1 -> `1 = 2n + m`
eq 2 -> `1 = -n + 2m`

Multiply eq 2 by 2, giving `2 = -2n + 4m`, and add it to eq 1:

`3 = 5m`
`m = 3/5`

Substitute `m = 3/5` into either of the two equations, e.g. eq 1:

`1 = 2n + (3/5)`
`1 - 3/5 = 2n`
`2/5 = 2n`
`n = 1/5`

Therefore the values of n and m are 1/5 and 3/5 respectively.
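As a cross-check, the same two equations can be solved symbolically; a minimal sketch, assuming sympy is available:

```python
# Solve 2n + m = 1 and -n + 2m = 1 for the exponents n and m.
from sympy import symbols, Eq, solve

n, m = symbols("n m")
print(solve([Eq(2*n + m, 1), Eq(-n + 2*m, 1)], [n, m]))  # n = 1/5, m = 3/5
```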
https://eccc.weizmann.ac.il/report/2004/052/
### Paper: TR04-052 | 14th June 2004 00:00

#### Non-Abelian Homomorphism Testing, and Distributions Close to their Self-Convolutions

Authors: Michael Ben Or, Don Coppersmith, Michael Luby, Ronitt Rubinfeld
Publication: 22nd June 2004 09:06

Abstract: In this paper, we study two questions related to the problem of testing whether a function is close to a homomorphism. For two finite groups $G,H$ (not necessarily Abelian), an arbitrary map $f:G \rightarrow H$, and a parameter $0 < \epsilon < 1$, say that $f$ is $\epsilon$-close to a homomorphism if there is some homomorphism $g$ such that $g$ and $f$ differ on at most $\epsilon |G|$ elements of $G$, and say that $f$ is $\epsilon$-far otherwise. For a given $f$ and $\epsilon$, a homomorphism tester should distinguish whether $f$ is a homomorphism, or if $f$ is $\epsilon$-far from a homomorphism. When $G$ is Abelian, it was known that the test which picks $O(1/\epsilon)$ random pairs $x,y$ and tests that $f(x)+f(y)=f(x+y)$ gives a homomorphism tester. Our first result shows that such a test works for all groups $G$. Next, we consider functions that are close to their self-convolutions. Let $A = \{ a_g \mid g \in G\}$ be a distribution on $G$. The self-convolution of $A$, $\tilde{A} = \{ \tilde{a}_g \mid g \in G\}$, is defined by $\tilde{a}_x = \sum_{y,z \in G;\, yz=x} a_y a_z$. It is known that $A=\tilde{A}$ exactly when $A$ is the uniform distribution over a subgroup of $G$. We show that there is a sense in which this characterization is robust -- that is, if $A$ is close in statistical distance to $\tilde{A}$, then $A$ must be close to uniform over some subgroup of $G$.
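As an illustration of the pair test the abstract describes (this is not code from the paper), here is a minimal Python sketch for the abelian case $G = H = \mathbb{Z}_n$; the function name and the number-of-trials constant are assumptions:

```python
# BLR-style homomorphism test: sample O(1/eps) random pairs (x, y)
# and check f(x) + f(y) == f(x + y) in Z_n.
import random

def homomorphism_test(f, n, eps, trials_per_unit=4):
    for _ in range(int(trials_per_unit / eps)):
        x, y = random.randrange(n), random.randrange(n)
        if (f(x) + f(y)) % n != f((x + y) % n):
            return False  # found a witness that f is not a homomorphism
    return True  # accept: f is probably close to a homomorphism

n = 101
print(homomorphism_test(lambda x: (7 * x) % n, n, eps=0.1))  # True: x -> 7x is linear
print(homomorphism_test(lambda x: (x * x) % n, n, eps=0.1))  # almost surely False
```

The paper's first result is that the same pair test remains sound when $G$ is non-abelian, with $f(x)f(y) = f(xy)$ in place of the additive check.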
https://web2.0calc.com/questions/help_37414
# help

Find the number of 10-digit numbers where the sum of the digits is divisible by 5. Jan 16, 2020

#2

%%time
# Efficient Python 3 program to sum up the multiples of a number
# in a range of 2- to 10-digit integers,
# e.g. "What is the total sum of all 3-digit integers that are
# multiples of 7?". Answer = 70,336

# find the sum of the numbers having "digit" digits
# and divisible by "number"
def totalSumDivisibleByNum(digit, number):
    # compute the first and last term
    firstnum = pow(10, digit - 1)
    lastnum = pow(10, digit)
    # first number which is divisible by the given number
    firstnum = (firstnum - firstnum % number) + number
    # last number which is divisible by the given number
    lastnum = lastnum - lastnum % number
    # total count of divisible numbers
    count = int((lastnum - firstnum) / number + 1)
    print("Total count =", f"{count:,d}")
    # return the total sum (arithmetic series)
    return int(((lastnum + firstnum) * count) / 2)

# Driver code
digit = 10
num = 5
print("Total Sum =", f"{totalSumDivisibleByNum(digit, num):,d}")

Total count = 1,800,000,000
Total Sum = 9,900,000,002,700,000,000
Wall time: 0 ns

Jan 16, 2020

#3 +24097 +2

Find the number of 10-digit numbers where the sum of the digits is divisible by 5.

The first digit cannot be zero, so 9 ways; the next 8 digits can be anything, so there are 10 ways for each of them. We can then only choose two digits as last digit to satisfy the condition.

count = $$9\times 10^8 \times 2 = 1~ 800~ 000~ 000$$

$$\begin{array}{|c|c|} \hline \text{sum of the first nine digits ends in} & \text{last digit to satisfy the condition}\\ \hline \ldots 0 & 0~ \text{ or } ~5 \\ \ldots 1 & 4~ \text{ or } ~9 \\ \ldots 2 & 3~ \text{ or } ~8 \\ \ldots 3 & 2~ \text{ or } ~7 \\ \ldots 4 & 1~ \text{ or } ~6 \\ \ldots 5 & 0~ \text{ or } ~5 \\ \ldots 6 & 4~ \text{ or } ~9 \\ \ldots 7 & 3~ \text{ or } ~8 \\ \ldots 8 & 2~ \text{ or } ~7 \\ \ldots 9 & 1~ \text{ or } ~6 \\ \hline \end{array}$$

Example: 1 000 000 08(1) or 1 000 000 08(6); 4 567 123 98(0) or 4 567 123 98(5)

Jan 17, 2020
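As an independent cross-check of answer #3's count (not posted in the thread), here is a small digit-DP sketch over residues mod 5:

```python
# Count 10-digit numbers whose digit sum is divisible by 5,
# via dynamic programming over (position, digit-sum mod 5).
from functools import lru_cache

def count_numbers(num_digits=10, mod=5):
    @lru_cache(maxsize=None)
    def go(pos, rem):
        if pos == num_digits:
            return 1 if rem == 0 else 0
        start = 1 if pos == 0 else 0  # leading digit must be nonzero
        return sum(go(pos + 1, (rem + d) % mod) for d in range(start, 10))
    return go(0, 0)

print(count_numbers())  # 1800000000
```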
http://mathhelpforum.com/math-topics/98804-relative-velocities.html
# Math Help - Relative Velocities

1. ## Relative Velocities

"A glider is moving with a velocity $v = (40, 30, 10)$ relative to the air and is blown by the wind which has velocity relative to the earth of $w = (5, -10, 0)$. Find the velocity of the glider relative to the earth."

My argument goes that as the velocity of the wind relative to the earth and the velocity of the glider relative to the air increase, so does the velocity of the glider relative to the earth. So if we let $v_E$ represent the velocity of the glider relative to the earth, $v_E = v + w$. Therefore, in this case the velocity of the glider relative to the earth is $v_E = (40, 30, 10) + (5, -10, 0) = (45, 20, 10)$.

However, the answer booklet has the expression for $v_E$ as follows: $v_E = v - w$, giving $v_E = (35, 40, 10)$, which I suppose must be the right answer. Could somebody explain this result to me? Thank you.

2. Originally Posted by Harry1W (quoted above)

I agree with you ... (air vector) + (wind vector) = ground vector

the answer booklet is in error, imho.

3. Hello Harry1W

Originally Posted by Harry1W (quoted above)

You need to check on the definition of 'wind velocity'. Sometimes (perversely!) it's given as the direction from which the wind blows. For example, a north-easterly is a wind that blows from the N-E, i.e. towards the South-West. This would indeed make the velocity of the glider relative to the earth $v - w$.
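A quick numeric cross-check of the two conventions discussed above (illustrative Python, componentwise vector arithmetic):

```python
# Glider velocity relative to the earth under both sign conventions.
v = (40, 30, 10)   # glider relative to the air
w = (5, -10, 0)    # wind relative to the earth

print(tuple(a + b for a, b in zip(v, w)))  # (45, 20, 10): air vector + wind vector
print(tuple(a - b for a, b in zip(v, w)))  # (35, 40, 10): the booklet's convention
```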
https://forum.remnote.io/t/enable-html-in-katex-formulas/2465
# Enable HTML in KaTeX formulas

With https://katex.org/docs/supported.html#html enabled we could put classes on parts of formulas. This would enable us to hide parts of formulas and reveal them only on hover, for example (similar to the active recall scroll). This could be done even in the queue. Combined with some dummy clozes somewhere on the rem, this would be a quick & dirty way to have spaced repetition inside LaTeX formulas:

$$\htmlClass{hide}{F} = \htmlClass{hide}{m} \cdot \htmlClass{hide}{a}$$

(Clozes: {{1}}, {{2}}, {{3}}). It could also be somewhat used to turn parts of LaTeX into references?

Regarding security you could:

• Sanitize the input, removing all those special commands, when content comes from outside and was not typed by the user themselves.
• Define the trust option per formula and set trust: false for everything the user has not typed.
https://5cb849dabd72e400080e392a--pru-portal.netlify.com/methodology/additional-asma-time-pi/
# Introduction

## General

This document describes the conceptual, informational, and implementation-independent model of the additional ASMA time performance indicator. The indicator is used as part of the performance monitoring and reporting under:

• SES: IR691/2010 (European Commission 2010) and IR390/2013 (European Commission 2013), and
• EUROCONTROL: performance review reporting (Performance Review Unit 2009).

This web page is generated from (Cappelleras 2015).

## Purpose of the document

The purpose of this document is twofold:

1. to present the concept and the underlying logical and mathematical modelling of the indicator; and
2. to document the processing and use of data for the calculation of the indicator.

## Scope

This document covers the data processing and calculation of the additional ASMA time performance indicator. The calculation of this performance indicator is performed according to the Airport Operator Data Flow (Eurocontrol 2010) standard for data collection and processing, under the responsibility of the airports division in the QoS department of the PRU, which is compliant with IR691/2010 and IR390/2013. For the calculation of the indicator, the Airport Operator Data Flow is combined with trajectory data provided by the Network Manager. The associated processes and procedures are documented as part of the PRU Quality Management System.

This performance indicator is also defined in the Implementing Regulation (390/2013), Annex I, Section 2 Environment, 2.2 (b).

## Summary of the performance indicator information

The following list summarises the status of the indicator:

• Current version status: Monitoring.
• Version status and evolution:
  • Conceptual Phase: 2008, phase completed.
  • Technical Development: 2008, phase completed.
  • Prototyping / Validation: 2008, phase completed.
  • Monitoring: RP1 and RP2, active.
  • Target Setting: n/a.
  • Phase Out: n/a.
• Context
  • KPA
    • SES II Performance Scheme: Environment (RP2), Capacity (RP1).
    • PRC/PRU: Efficiency.
  • Focus Area: Airport impact on flight duration.
  • Trade Offs: throughput and ATFM arrival delays. A trade-off can be observed between additional ASMA time, ATFM delay and runway throughput.
• Supports the SES II Performance Scheme (IR 691/2010 Annex I - Section 1 3.1 and Section 2 - 3.1).
• Description: This indicator provides an approximate measure of the average inbound queuing time on the inbound traffic flow, during times when the airport is congested.
• Formula and Metrics: This indicator is calculated on the basis of data availability for the actual ASMA entry time (flight entering the area within a 40 NM radius around the airport) and the Actual Landing Time (ALDT). The additional ASMA time is the difference between the actual ASMA transit time and the median unimpeded ASMA transit time for the group of similar flights. The additional ASMA time for the airport is the weighted average of the average additional ASMA times of all groups of similar flights.
• Units: Minutes per IFR arrival.
• Used in
  • SES (IR691/2010 & IR390/2013): Annual Performance Report.
  • SES eDashboard [ RP1 (Performance Review Body 2015) and RP2 (Performance Review Body 2016) ]
  • EUROCONTROL: Performance Review Report

## Acronyms and terminology

Table 1: Acronyms and terminology

| Term | Definition |
|------|------------|
| AC | Aircraft class |
| ALDT | Actual Landing Time |
| ASMA | Arrival Sequencing and Metering Area |
| ATFM | Air Traffic Flow Management |
| ATM | Air Traffic Management |
| ATMAP | ATM Airport Performance project |
| ET | Entry Time |
| ICAO | International Civil Aviation Organization |
| IR691 | COMMISSION REGULATION (EU) No 691/2010 |
| IR390 | COMMISSION REGULATION (EU) No 390/2013 |
| KPA | Key Performance Area |
| KPI | Key Performance Indicator |
| PI | Performance Indicator |
| PRU | Performance Review Unit |
| QoS | Quality of Service |
| RWY | Runway |
| SEC | ASMA sector |
| SES | Single European Sky |
| TMA | Terminal Manoeuvring Area |

# Conceptual model

## What we ideally would like to measure

On the conceptual level, the indicator aims to address the operational penalty associated with techniques used to maximize runway utilisation for inbound traffic flows at an airport, i.e. the accumulation of additional approach time resulting from speed control, path stretching and circling in the vicinity of the airport (use of holding patterns/stacks).

## Concept of runway optimisation

When aircraft cannot fly unimpeded 4D trajectories, there are generally three places at which queuing takes place, as illustrated in Figure 1:

1. At the departure stand (pre-departure queuing to optimise network performance)
2. At the departure runway (take-off queuing, e.g. runway holding)
3. In the arrival terminal airspace (arrival queuing in the Arrival Sequencing and Metering Area or ASMA, using speed control, stacks, holding, extension of approach path, etc.)

Uncertainty of approach conditions (e.g. pilot performance, landing clearance time, approach speed, wind conditions) makes traffic supply to runways a stochastic phenomenon. In order to ensure continuous traffic demand at runways and maximise runway usage, a minimum level of queuing is required. A certain extent of arrival queuing in airspace is necessary to allow arrival management (sequencing and metering) to optimise runway utilisation when demand is at or near the operational capacity. However, additional time in holding is detrimental to operational efficiency, fuel consumption and the environment. Therefore, a trade-off exists between approach efficiency and runway throughput.

Optimisation of the runway utilisation may require:

1. re-sequencing the take-off/landing order at the runway (first come is not first served), and
2. buffering a sufficient number of aircraft in the queue to be able to fine-tune the metering (to optimise the separation of aircraft released from the queue to the runway).

In both cases some aircraft will suffer a certain penalty in terms of queuing time. Higher runway utilisation targets may require higher levels of departure (take-off) queuing in the manoeuvring area and arrival queuing in airspace. This effect can be reduced if aircraft are already delivered to the queue in the right sequence and at the required time intervals. To reduce cost and environmental impact, the departure and arrival queuing time should be kept to the minimum needed to achieve the selected runway utilisation objectives. If possible, any departure and arrival delay that is needed for other reasons than sequencing and metering should therefore be absorbed at the departure stand through ATFM delays and local ATC pre-departure delay.
If this is done properly, then measuring outbound and inbound queuing time allows assessing the "operational cost" associated with sequencing and metering as a function of the selected runway utilisation objectives.

## Conceptual approach

The additional ASMA time is a proxy for the management of the arrival flow, understood as the average arrival runway queuing time on the inbound traffic flow, during periods of congestion at airports.

Performance in terms of additional ASMA time is monitored on the basis of regular reporting in comparison to a nominal reference. Based on regular reporting, metrics are derived for the respective reporting month. The current measurements are compared to a nominal reference to address the level of efficiency. The reference is derived from the statistical analysis of a reference period sample. This approach is depicted in Figure 2.

The indicator is defined, see Equation (1), as the difference between the ASMA transit time (actual ASMA time) and the unimpeded ASMA time, based on ASMA transit times in periods of low traffic.

$$\textrm{Actual ASMA Time} = \textrm{Unimpeded ASMA Time} + \textrm{Additional ASMA Time} \tag{1}$$

The unimpeded reference time is determined based on a statistical analysis of historic data observed at the airport, averaged for groupings of similar flights. The additional ASMA time is a measure of the extent to which the actual ASMA time exceeds the unimpeded reference.

# Logical model

This section describes the underlying logical modelling and drives the implementation of the additional ASMA time algorithm.

## Assumptions

The purpose of the additional ASMA time indicator is to provide an approximate measure of the average inbound queuing time on the inbound traffic flow, during times when the airport is congested. The calculation of this indicator is based on a generalised ASMA area defined around an airport. Aircraft are subject to the management of the arrival flow upon entering the ASMA area. Accordingly, the indicator measures the time spent within the ASMA area, i.e. the time elapsing between entering the ASMA area (actual ASMA entry time, ET) and the actual landing time of an arriving flight (ALDT).

The generalised ASMA area is defined by a cylinder of radius 40NM around the airport with unlimited vertical extent. Actual ASMA time refers to the period between the point in time when the aircraft enters the ASMA cylinder for the last time ('entry time') and the landing time. This ensures a consistent measurement for the inbound flow in the 40 NM around an airport. The additional time is measured as the average additional time beyond the unimpeded ASMA time, which is a statistically determined reference time based on actual ASMA times in periods of low traffic demand.

Note: The indicator is currently defined for a radius of 40 NM to allow for comparability across Europe. For monitoring purposes, a supporting metric considering a radius of 100NM is calculated by PRU (the 40/100NM positions and timestamps are calculated from data feeds the Network Manager receives from member states.)

This indicator excludes the influence of the following factors:
1. Impact of noise management and terrain clearance aspects: the same effects are included in both the impeded and unimpeded transit times; therefore this does not show up in the additional time, which is the difference between impeded and unimpeded.
2. Effect of runway friction deteriorations: periods with such conditions are excluded.

The calculation algorithm does not explicitly take any weather conditions into account.

## Grouping of flights

To reduce the number of combinations of unique entry points and arrival runways, arriving flights entering the ASMA area within certain limits are grouped together. The clustering is based on observed arrival flows (i.e. crossing of the flown trajectory with the ASMA cylinder). The result of this clustering yields the ASMA sectors and must not be confused with the actual TMA or approach sectors around the airport. Each ASMA sector covers a major arrival flow, and the extent of the sector is based on visualization of arrival radar tracks (see Figure 4) and the aforementioned entry points.

The indicator is first calculated at a disaggregated level, i.e. per comparable grouping of flights with the same combination of ASMA sector, landing runway and aircraft class. Each grouping of flights has an associated unimpeded reference. Taking the weighted average of the values for all groups produces the additional ASMA time for the airport.

## Overview of the logical model of the Additional and Unimpeded ASMA Times

This section focuses on the algorithm for the calculation of the additional ASMA time indicator from a logical point of view. The additional ASMA time calculation is depicted below. The unimpeded ASMA time is calculated as depicted below.

## Logical approach to Additional ASMA Time calculation

The computation of the indicator is based on four consecutive steps:

1. Filter out the flights with erroneous data and helicopters.
2. The unimpeded times are calculated from a reference dataset in a separate process that is explained in the next section, and their values are constant for groups of similar flights (same ASMA entry sector, same arrival runway, same aircraft class).
3. Calculation of the average additional time for each group of similar flights by calculating the additional time for each flight through subtraction of the group's unimpeded time from the actual time each flight spent in ASMA space.
4. Calculation of the average additional ASMA time for the airport, which is the weighted average of the average additional ASMA times of all groups of similar inbound flights [min/IFR flight].

## Logical approach to Unimpeded ASMA Time calculation

The unimpeded ASMA time for each flight is taken from the unimpeded reference tables. These are calculated by averaging the actual ASMA time for the unimpeded flights from a reference sample (e.g. one year's worth of data). The basis of the algorithm is to determine which flights are unimpeded. The unimpeded ASMA time corresponds to the ASMA time that an aircraft of a given triplet (aircraft type – entry sector – runway combination) would spend if no additional sequencing time were added, i.e. if the operation were unimpeded. The unimpeded times are calculated from a reference data set, and their values are constant for each triplet combination.

The process steps are described below:

1. The flights with no data or with wrong data are filtered;
2. Actual ASMA time and congestion level are calculated per flight;
3. For each flight and each grouping of flights with the same aircraft type – entry sector – runway combination, calculate the saturation level per grouping;
4. From these flights, determine which flights are unimpeded by comparing the congestion level and the saturation level, for each grouping of flights with a different AC – SEC – RWY combination;
5. The unimpeded time is calculated as the median of all flights in the grouping of combinations AC-SEC-RWY, for groupings that have at least 20 unimpeded flights. No unimpeded reference time is calculated for the groupings that have fewer than 20 unimpeded flights. Night flights are excluded.

# Mathematical model

The aim of this section is to describe how the logical model is modelled mathematically.

## Mathematical model of the Additional ASMA Time performance indicator

ASMA (Arrival Sequencing and Metering Area) is defined as the airspace within a radius of 40NM around an airport. The additional ASMA time is a proxy for the average arrival runway queuing time of the inbound traffic flow, during times when the airport is congested. Mathematically, the actual ASMA transit time per flight is calculated as the difference between the entry time into the ASMA cylinder and the ALDT. The additional ASMA time performance indicator is calculated as the difference between the actual transit time and a previously computed transit time reference considered as the unimpeded ASMA time. Throughout this chapter, units for each variable are shown in [ ] brackets.

### Step A: Filtering

Calculation of the additional ASMA time performance indicator is done with the flight data reported by airports as monthly reporting, in combination with data received from the Network Manager. The following filter criteria apply:

• flights with no actual ASMA time or an actual ASMA time of more than 2 hours are excluded, i.e. only flights with $$\textrm{AcASMA} < 120\,\textrm{min}$$ are taken into account;
• helicopters are also excluded from the calculation.

### Step B: Determination of unimpeded time per AC-RWY-SEC combination

In this step, flights are assigned a reference unimpeded ASMA time, according to the grouping of flights to which they belong. Flights are grouped by $$c_i$$, a grouping of flights with the same combination of aircraft class, ASMA sector and arrival runway (direction of runway, i.e. 12 or 30R), at each airport $$j$$ ($$j = 1, \dots, n$$, $$n$$ being the total number of airports affected by regulation IR390/2013). For example, if there are four aircraft classes landing at the airport $$j$$, two ASMA sectors and two arrival runways, then there will be 16 groupings $$c_i$$ of flights.

$$\textrm{UASMA}(c_i)$$, the unimpeded ASMA time, is a calculated constant for each grouping $$c_i$$ (for calculation see section 4.2) [min]. The unimpeded ASMA time is the ASMA transit time in non-congested conditions at the arrival airport.

### Step C: Calculation of Additional ASMA Time per flight

Let:

• $$f$$ be an arrival flight,
• $$f_{c_i}$$ an arrival flight belonging to the grouping $$c_i$$. Each grouping $$c_i$$ contains at least one flight.
• $$\textrm{ET}(f_{c_i})$$ the ASMA entry time, i.e. the time of the last entry of the flight $$f_{c_i}$$ in its ASMA sector [time],
• $$\textrm{ALDT}(f_{c_i})$$ the Actual Landing Time of flight $$f_{c_i}$$ [time].

$$\textrm{AcASMA}(f_{c_i})$$, the actual ASMA transit time for a flight $$f_{c_i}$$, is defined as the elapsed time between the time of the last entry of the flight $$f_{c_i}$$ in its ASMA sector, $$\textrm{ET}(f_{c_i})$$ [time], and its Actual Landing Time $$\textrm{ALDT}(f_{c_i})$$ [time].
$\textrm{AcASMA}(f_{c_i}) = \textrm{ALDT}(f_{c_i}) - \textrm{ET}(f_{c_i}) \quad [\textrm{min}]$

Then the Additional ASMA Time per flight, $$\textrm{AdASMA}(f_{c_i})$$, is calculated for each flight $$f_{c_i}$$ as the difference between the actual ASMA transit time $$\textrm{AcASMA}(f_{c_i})$$ of the flight and the unimpeded ASMA time $$\textrm{UASMA}(c_i)$$ of the grouping $$c_i$$ to which the flight belongs.

$\textrm{AdASMA}(f_{c_i}) = \textrm{AcASMA}(f_{c_i}) - \textrm{UASMA}(c_i) \quad [\textrm{min}]$

### Step D: Calculation of the Additional ASMA Time per airport

• $$N_j$$ is the total number of IFR arrivals in the data set used for the calculation of the additional ASMA time performance indicator [count].

The additional ASMA time for a given airport, $$\textrm{AdASMA}_j$$, is the average of the additional ASMA times $$\textrm{AdASMA}(f_j)$$ over all flights $$f_j$$ at that airport that have an unimpeded reference, in the sample $$N_j$$:

$\textrm{AdASMA}_j = \frac{\sum_{f_j} \textrm{AdASMA}(f_j)}{N_j} \quad [\textrm{min}/\textrm{IFR arrival flight}]$
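Steps A to D map directly onto a small data-processing routine. The following is a minimal sketch in Python with pandas; the column names (`aldt`, `entry_time`, `grouping`) and the `unimpeded` lookup series are illustrative assumptions, not part of the specification.

```python
import pandas as pd

def additional_asma_time(flights: pd.DataFrame, unimpeded: pd.Series) -> float:
    """Average additional ASMA time [min/IFR arrival] for one airport.

    flights   : one row per arrival, with 'aldt' and 'entry_time' (timestamps)
                and 'grouping' (the AC-SEC-RWY combination c_i).
    unimpeded : UASMA(c_i) in minutes, indexed by grouping; groupings with
                fewer than 20 unimpeded flights are absent (null reference).
    """
    df = flights.copy()
    # Step C: actual transit time AcASMA = ALDT - ET, in minutes
    df["ac_asma"] = (df["aldt"] - df["entry_time"]).dt.total_seconds() / 60.0
    # Step A filter: keep 0 <= AcASMA < 120 min
    df = df[(df["ac_asma"] >= 0) & (df["ac_asma"] < 120)]
    # Step B: look up UASMA(c_i); flights without a reference are dropped
    df["u_asma"] = df["grouping"].map(unimpeded)
    df = df.dropna(subset=["u_asma"])
    # Additional time per flight, then the average over the retained sample
    df["ad_asma"] = df["ac_asma"] - df["u_asma"]
    return df["ad_asma"].mean()
```

Note that averaging per flight over the retained sample is equivalent to the weighted average over groupings described in the logical model, since each grouping's weight is its number of flights.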
## Mathematical model of the Unimpeded ASMA Time

The unimpeded ASMA time is the ASMA transit time in non-congested conditions at the arrival airport. It is used (as a constant for each combination of aircraft class, ASMA sector and arrival runway) in the calculation of the additional ASMA time. The unimpeded ASMA times are calculated for IFR arriving flights at each airport. The following section details the calculation done in each step. The units for each variable are shown in [ ] brackets.

### Step 1: Filtering

The reference sample for the calculation of unimpeded times is one year of data for all airports, normally from 1 January to 31 December inclusive; the year depends on the availability of the data. A filter is applied so that only flights with $$\textrm{AcASMA}(f_{c_i}) < 120\,\text{min}$$ are taken into account. Incomplete records, i.e. records with no landing time or no entry time data, are not taken into account for the calculation.

### Step 2: Computations at flight level: ASMA time, congestion level

At flight level, two parallel computations lead to two new variables: the actual ASMA time (Step 2a) and the congestion level (Step 2b).

#### Step 2a: Calculation of the Actual ASMA Time per flight

Let:

• $$f$$ arrival flight,
• $$f_{c_i}$$ arrival flight belonging to the grouping $$c_i$$; each grouping $$c_i$$ contains at least one flight,
• $$\textrm{ET}(f_{c_i})$$ ASMA entry time: the time of last entry of the flight $$f_{c_i}$$ into its ASMA sector [time],
• $$\textrm{ALDT}(f_{c_i})$$ actual landing time of flight $$f_{c_i}$$ [time].

Then $$\textrm{AcASMA}(f_{c_i})$$, the actual ASMA transit time for a flight $$f_{c_i}$$, is defined as the elapsed time between the time of the last entry of the flight $$f_{c_i}$$ into its ASMA sector, $$\textrm{ET}(f_{c_i})$$ [time], and its actual landing time $$\textrm{ALDT}(f_{c_i})$$ [time].

#### Step 2b: Determination of Congestion Level per flight

For each flight $$f_{c_i}$$, a congestion level $$\textrm{seq}(f_{c_i})$$ is determined [count]. The congestion level is the number of other landings during the time interval [ASMA entry time $$\textrm{ET}(f_{c_i})$$, actual landing time $$\textrm{ALDT}(f_{c_i})$$], i.e. the count of all landings in that interval minus the flight itself. Thus, counting over all flights $$f$$ landing in that time interval:

$\text{seq}(f_{c_i}) = \Big(\sum_{f} 1\Big) - 1 \quad \text{for all } f \text{ such that } \text{ET}(f_{c_i}) \leq \text{ALDT}(f) \leq \text{ALDT}(f_{c_i})$

### Step 3: Computation of the Saturation level

Computation of the saturation level requires the prior determination of the airport congestion index (Step 3a) and the airport arrival throughput $$R_j$$ (Step 3b). The grouping of flights according to their AC-RWY-SEC combination (Step 3c) allows the calculation of the first unimpeded time estimate $$U_1$$ (Step 3d) and the later determination of the saturation level from the results of these computations (Step 3e).

#### Step 3a: Determination of the airport congestion index

At airport level, a constant known as the congestion limit $$\textrm{cl}$$ is chosen as:

• $$0.5$$ for all airports.

#### Step 3b: Calculation of Airport Peak Hourly Arrival Throughput

The next step defines the airport peak hourly arrival throughput, a theoretical hourly rate based on the truncated 20-minute window prior to each arrival. The (theoretical) maximum airport arrival throughput is seen as a determinant of the level of traffic saturation and, thus, of the threshold at which effects of congestion can be observed. For each aircraft landing at the airport, the number of aircraft that landed in the previous 20 minutes is counted. Let $$t_i$$ be the arrival time of aircraft $$i$$, $$n$$ the number of aircraft that landed in the window $$[t_i - 20, t_i]$$, and $$f_i$$ the landing time of the first aircraft in the window. Then the arrival rate during this period is given by

$\text{hourly rate} = \frac{n - 1}{t_i - f_i}$

The numerator is $$n - 1$$ because aircraft $$i$$ itself is excluded from the calculation of the rate. The arrival throughput for each flight $$f_{c_i}$$ is therefore calculated as:

$\text{hourly rate}(f_{c_i}) = \frac{\text{count} - 1}{\text{base}}$

where

$\text{count} = \sum_{f} 1 \quad \text{for } f \text{ such that } \text{ALDT}(f_{c_i}) - 20\,\text{min} \leq \text{ALDT}(f) \leq \text{ALDT}(f_{c_i})$

$\text{base} = \text{ALDT}(f_{c_i}) - \min(\text{ALDT}(f)) \quad \text{where } \text{ALDT}(f) \in [\text{ALDT}(f_{c_i}) - 20\,\text{min},\, \text{ALDT}(f_{c_i})]$

For example, for a flight landing at 07:04:00, the first preceding arrival in the window is observed at 06:45:04, so the rate base is 18:56 minutes. Assuming 9 arrivals are observed in this window (without counting the 07:04:00 flight itself), this yields a (theoretical) hourly arrival throughput of:

$\textrm{hourly arrival throughput}(\text{flight}) = \frac{9\,\textrm{flights}}{18{:}56\,\textrm{min}} = 0.4754\,\frac{\textrm{flights}}{\textrm{min}} = 28.52\,\frac{\textrm{flights}}{\textrm{hour}}$

The peak hourly arrival throughput of the airport, $$R_j$$, is calculated as the 90th percentile of all hourly rate values in the reference sample:

$R_j = 90^{th}\,\textrm{percentile}(\textrm{hourly rate}) \quad [\textrm{flights}/\textrm{hour}]$

#### Step 3c: Grouping of similar flights by AC-RWY-SEC combination

Flights are grouped by $$c_i$$, the grouping of flights with the same combination of aircraft class, ASMA sector and arrival runway (direction of runway, i.e. 12 or 30R), at each airport $$j$$ ($$j = 1 \dots n$$, $$n$$ being the total number of airports affected by regulation IR390/2013). For example, if there are four aircraft classes landing at airport $$j$$, two ASMA sectors and two arrival runways, there will be $$16$$ groupings $$c_i$$.
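As an illustration of Step 3b, the sketch below computes the per-flight hourly rate over the truncated 20-minute window and the resulting peak throughput $$R_j$$. It is a minimal sketch assuming `aldt` is a sorted pandas Series of landing timestamps; all names are illustrative only.

```python
import pandas as pd

def peak_hourly_arrival_throughput(aldt: pd.Series) -> float:
    """R_j: 90th percentile of per-flight hourly arrival rates.

    aldt: landing timestamps of all arrivals in the reference sample,
          sorted in ascending order.
    """
    rates = []
    for t in aldt:
        # Landings in the truncated window [t - 20 min, t], flight included
        window = aldt[(aldt >= t - pd.Timedelta(minutes=20)) & (aldt <= t)]
        if len(window) < 2:
            continue  # no preceding arrival in the window -> no rate
        base_min = (t - window.min()).total_seconds() / 60.0  # rate base [min]
        rates.append((len(window) - 1) / base_min * 60.0)     # [flights/hour]
    return float(pd.Series(rates).quantile(0.9))
```

For the worked example above, the window contains 10 landings, so `len(window) - 1` is 9 and `base_min` is about 18.93, reproducing the 28.52 flights/hour figure.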
#### Step 3d: Calculation of the first Unimpeded ASMA Time estimate

$$U_1(c_i)$$ is defined as the first unimpeded ASMA time estimate for each grouping of flights $$c_i$$: the 20th percentile of all the ASMA transit times of the flights belonging to that grouping [min].

$U_1(c_i) = 20^{th}\,\textrm{percentile}(\textrm{AcASMA}(f_{c_i}))$

#### Step 3e: Determination of the Saturation Level per grouping

While the congestion level is a measure of the traffic encountered by an individual flight, the saturation threshold describes the maximum traffic level served under non-congested traffic conditions. As an upper bound for the saturation threshold, the saturation level can be estimated as the maximum number of aircraft landing under non-congested conditions (expressed by the first estimate of the unimpeded ASMA time). Based on Steps 3b and 3d, the saturation level is calculated by multiplying the unimpeded time estimate $$U_1$$ by the peak hourly throughput $$R_j$$, giving an estimate of the (theoretical) maximum number of arrivals served by the airport, without congestion, during one unimpeded transit time. The result is rounded to the nearest unit. $$L(c_i)$$, the saturation level of the grouping $$c_i$$, is calculated as:

$L(c_i) = \textrm{round}\left(\frac{R_j\, U_1(c_i)}{60}\right)$

### Step 4: Identification of unimpeded flights

For a flight to be unimpeded, its congestion level needs to be sufficiently smaller than the saturation level. Arriving flights are considered non-congested if their congestion level is equal to or below the saturation threshold. However, it needs to be ensured that the data sample is big enough to produce a statistically relevant sample for the chosen grouping and a robust estimate of the unimpeded time. To limit the impact of any congestion effect (and to address the sample size), the saturation threshold is estimated as a fraction of the saturation level. This limitation is achieved with the congestion limit $$\textrm{cl}$$, the constant defined in Step 3a. Based on the previous outputs, the identification of the unimpeded flights is done by comparing the congestion level with the saturation level corrected by the congestion limit. To designate the unimpeded flights, $$\text{fu}_{c_i}$$ is defined as a binary variable denoting that a flight $$f_{c_i}$$ belonging to the grouping $$c_i$$ is an unimpeded flight. A flight is considered unimpeded if its congestion level $$\text{seq}(f_{c_i})$$ is less than or equal to the product of the congestion limit $$\text{cl}$$ and its grouping's saturation level $$L(c_i)$$:

$\text{fu}_{c_i} = \begin{cases} 1, & \text{if } \text{seq}(f_{c_i}) \leq \text{cl} \cdot L(c_i) \\ 0, & \text{otherwise} \end{cases}$

### Step 5: Computation of unimpeded time per grouping

The unimpeded ASMA time per grouping is computed as the median of the ASMA times of the unimpeded flights only. The unimpeded ASMA time $$\textrm{UASMA}(c_i)$$ for a grouping $$c_i$$ is a calculated constant at airport $$j$$. In order to derive statistically meaningful and representative unimpeded times per group, only groupings with 20 or more unimpeded flights are retained in the calculation. For groupings $$c_i$$ that have fewer than 20 unimpeded flights $$\text{fu}_{c_i}$$, the associated unimpeded time $$\textrm{UASMA}(c_i)$$ is not calculated; these groupings have no unimpeded times (consequently, it is not possible to calculate the additional ASMA time for those groupings).

Night flights are excluded from the calculation at this point, so only flights landing during day time are considered for the calculation of the reference unimpeded times. In the standard calculation, day time is defined as 06:30 to 22:00 airport local time; calibration of the day time definition may lead to a different definition at some of the busiest airports (see section 5).

For the groupings $$c_i$$ that have 20 or more unimpeded flights, the unimpeded time is defined as the median of the actual ASMA transit times $$\text{AcASMA}(f_{c_i})$$ of all unimpeded flights $$\text{fu}_{c_i}$$ belonging to the grouping of flights:

$\text{UASMA}(c_i) = \begin{cases} \textrm{median}(\text{AcASMA}(f_{c_i})), & \text{if } \textrm{count}(\text{fu}_{c_i}) \geq 20 \\ \textrm{null}, & \text{otherwise} \end{cases}$
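Putting Steps 2 to 5 together, a compact sketch of the unimpeded reference computation might look as follows. The column names (`aldt`, `entry_time`, `grouping`) and the helper `peak_hourly_arrival_throughput` are assumptions carried over from the earlier sketches; day-time and night-flight filtering are omitted for brevity.

```python
import pandas as pd

CL = 0.5  # congestion limit (Step 3a)

def unimpeded_reference_table(flights: pd.DataFrame, r_j: float) -> pd.Series:
    """UASMA(c_i) per grouping, from one year of reference data.

    flights: one row per arrival with 'aldt', 'entry_time', 'grouping'.
    r_j    : peak hourly arrival throughput [flights/hour], e.g. the output
             of peak_hourly_arrival_throughput above.
    """
    df = flights.copy()
    df["ac_asma"] = (df["aldt"] - df["entry_time"]).dt.total_seconds() / 60.0
    df = df[df["ac_asma"] < 120]  # Step 1 filter

    # Step 2b: congestion level = landings in [ET, ALDT] minus the flight itself
    aldt_sorted = df["aldt"].sort_values()
    df["seq"] = [
        aldt_sorted.between(et, ldt).sum() - 1
        for et, ldt in zip(df["entry_time"], df["aldt"])
    ]

    # Steps 3d-3e per grouping: U1 (20th percentile) and saturation level L
    u1 = df.groupby("grouping")["ac_asma"].quantile(0.2)
    sat = (r_j * u1 / 60.0).round()

    # Step 4: flag unimpeded flights (seq <= cl * L of the flight's grouping)
    df["unimpeded"] = df["seq"] <= CL * df["grouping"].map(sat)

    # Step 5: median AcASMA of unimpeded flights, kept only if >= 20 of them
    unimp = df[df["unimpeded"]]
    counts = unimp.groupby("grouping")["ac_asma"].size()
    medians = unimp.groupby("grouping")["ac_asma"].median()
    return medians[counts >= 20]
```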
These times are aggregated in the unimpeded reference tables used for the calculation of the additional ASMA time as described in section 4.1. Although the unimpeded ASMA time constants for each $$c_i$$ are relatively static in time, regular checks are made to ensure that they remain representative of the operations at the airport under consideration (see section 5.2). If a change in unimpeded times is detected, the causes of that change (e.g. implementation of a new procedure, change of TMA design) are investigated. When required, new unimpeded time constants are calculated. However, because the calculation method for the unimpeded time constants is based on statistical analysis of the actual ASMA transit times, a period of several months after the change is required before new robust unimpeded times can be established.

# Calibration of model parameters

This section describes the model parameters that need to be customised for every airport, and the approach used for obtaining and updating them. For the additional ASMA time indicator, the parameters that need manual calibration are the ASMA sectors and the unimpeded times.

## ASMA sectors

For a given runway configuration, the ASMA transit time depends on the direction from which the ASMA cylinder is entered. For this reason, the ASMA cylinder is divided into so-called ASMA sectors. The ASMA sectors are defined according to a statistical cluster analysis of the distribution of the inbound traffic. Figure 19 shows the sectorisation for Heathrow. The ASMA entry bearings are checked every month to investigate whether there are any changes in the directions of the arrival flows. If substantial changes in the arrival flow directions are identified, the differences are investigated with the particular airport, and the ASMA sectors are updated if applicable.

## Unimpeded times

The base period for the calculation of unimpeded times (which are constants for each of the groupings) is the year 2011, but this can vary across airports. Unimpeded times are calculated once, over one year, and kept in a static reference table. As part of the manual quality analysis of the results, the unimpeded times are recalculated every month to investigate whether there are any changes. Changes to the airport and to the airspace design can have an impact on the unimpeded times. Only in cases where substantial changes are reported for an airport are the differences investigated with that airport. If the change is due to a change in the airport characteristics that requires a new performance reference, the reference unimpeded times are modified accordingly.
Because the calculation method for the unimpeded time constants is based on statistical analysis of the actual ASMA time, a period of three months after the change is required before new robust unimpeded time references can be established.

### Example of unimpeded times recalculation

The following example considers an airport XXXX with its reference unimpeded times based on the full year 2011. As part of the monthly values calculation, the unimpeded times are recalculated each month (e.g. for June 2014) and compared with the "reference" ones (based on 2011).

• If the values are considered similar enough, the static values are kept: the airport reference period is still 2011, and the validity of the reference unimpeded times based on 2011 is extended to June 2014.
• If a change is detected in the June 2014 results (the unimpeded times recalculated from June 2014 data are significantly different from the 2011 ones), the issue is investigated and discussed with the airport in order to decide which new reference to take. If, for example, the change was due to a new runway opened in June 2014, the new reference will be the following year: June 2014-June 2015. However, if the difference corresponds to a runway closure of 3 months, the reference is modified only for the affected 3 months.

Two parameters can trigger the renewal of the unimpeded time reference for an airport each month:

• The number of flights that have no unimpeded reference (i.e. are not in a grouping with at least 20 unimpeded flights) exceeds 10% of the total traffic of the month concerned.
• The standard deviation of the unimpeded times of all the flights of the month is higher than 2 minutes.

## Day Time

Due to extended opening hours at certain airports, the day time window may be extended for the filtering of flights in the unimpeded ASMA time calculation.

# Source data

## Main and secondary data sources

The additional ASMA time indicator is calculated using data provided by the airport operators and the Network Manager:

• The airport operators provide the actual landing times, the aircraft type and the runways used for the arrivals.
• The Network Manager (NM) provides the entry points to the ASMA cylinder (position and time of entry), coming from the Correlated Position Reports (CPR). CPR are built from ANSP radar track data for both 40NM and 100NM. When CPR data for the ASMA entry point is not available, data from NM's Current Tactical Flight Model (CTFM) is used as a substitute.

| Name | Source SES (IR691/390) | Alternative Source |
| --- | --- | --- |
| Arrival airport | Airports | ANSPs |
| Actual landing time | Airports | ANSPs or NM |
| Arrival runway | Airports | |
| Aircraft type | Airports | |
| Actual ASMA entry time and point | NM (based on ANSP) | NM (CTFM) |

Table 3: Data Sources

Note: the NM data flow also provides calculated landing times (from both CPR and CTFM). Given the quality assurance measures defined for the airport operator data flow, NM-based timestamps are only used to complement the airport operator data flow.

# Quality management

## The Airport Operator Data Flow process

The airport operator data flow (APDF) comprises all data collection, processing and performance indicator calculation sub-processes. Reporting entities (i.e. airport operators) submit their data to EUROCONTROL on a monthly basis and in compliance with the APDF data specification. Several activities are performed in the data flow process, involving different actors, until performance reports are published.
In summary, a high-level overview of the activities is given below. The airport operator data flow can be conceptualised as six sub-processes:

• APDF_1 – Data Collection
• APDF_2 – Data Validation
• APDF_3 – Data Extract, Transform and Load
• APDF_4 – Data Merging
• APDF_5 – Pre-Computation of performance parameters
• APDF_6 – Calculation of Performance Indicators

Each of the sub-processes is governed by quality assurance measures. The PRU assumes responsibility for the whole flow in terms of quality assurance. Data collection and initial validation are performed by CODA. Once the data is loaded, the data processes within the EUROCONTROL data warehouse are managed by PRISME. In the final stage, the PRU extracts the relevant data and computes the performance metrics and indicators. APDF-related documentation is published under the PRU Quality Management System.

## Quality Assurance Framework

Quality assessment of the airport data flow process is focused on implementing a Quality Assurance Framework based on the ISO 9001 standard. The airport operator data flow process includes several data processing activities, starting from the moment flight information is provided by the reporting entity until performance reports are released. Standard Operating Procedures for all these sub-processes have been established and are quality controlled. The documentation is published under the PRU Quality Management System.

## Data Quality Checks

For the APDF, a set of quality areas has been identified. Quality controls in support of these quality areas are implemented and regularly monitored as part of the aforementioned APDF sub-processes. More detail on these quality checks can be found in the Airport Data Flow Data Specifications (see [3]).

## Performance Indicator Quality Checks

The average additional ASMA time per flight, grouped by airport and month, is the main result of the performance indicator calculation process. In addition, this process provides parameters that are used for data validation and statistical analysis:

• average unimpeded time
• standard deviation
• values for the 25th, 50th and 75th percentiles
• number of flights, number of unimpeded flights and number of flights with valid data
• total additional time, total unimpeded time and the 25th, 50th and 75th percentiles
• completeness and coverage of the traffic sample

These metrics are used for PRU internal validation activities and may trigger case-by-case analyses if significant variations are observed.

# References

Cappelleras, Laura. 2015. "Additional ASMA Time Performance Indicator Document." 00-06. EUROCONTROL/PRU. http://ansperformance.eu/methodology/unimpeded-asma-time/.

EUROCONTROL. 2010. "Data Specification for Airport Operators (EC Reg 691/2010, Annex IV)." EUROCONTROL. http://www.eurocontrol.int/sites/default/files/content/documents/official-documents/regulatory-documents/ir691-airport-data-specification-v3-15feb2011.pdf.

European Commission. 2010. "Commission Regulation (EU) No 691/2010 of 29 July 2010 Laying Down a Performance Scheme for Air Navigation Services and Network Functions and Amending Regulation (EC) No 2096/2005 Laying Down Common Requirements for the Provision of Air Navigation Services." https://goo.gl/Vcs28c.

———. 2013. "Commission Implementing Regulation (EU) No 390/2013 of 3 May 2013 Laying Down a Performance Scheme for Air Navigation Services and Network Functions (Text with EEA Relevance)." https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32013R0390.

Performance Review Body. 2015.
"Dashboard RP1." http://www.eurocontrol.int/prudata/dashboard/eur_view_2014.html.

———. 2016. "Dashboard RP2." http://www.eurocontrol.int/prudata/dashboard/rp2_2015.html.

Performance Review Unit. 2009. "ATM Airport Performance (ATMAP) Framework." 1. Performance Review Commission.
https://plumbingandhvac.ca/u-s-changes-process-for-creating-energy-standards/
The United States Department of Energy (DOE) has updated its process for evaluating energy efficiency standards. On Feb. 13, 2019, the DOE proposed to modernize the "process rule", which had not been updated since 1996. On Jan. 15, the DOE finalized its proposal.

"Clearer energy efficiency standards will provide certainty to manufacturers, allowing them to produce products that will save consumers money on a variety of appliances," said Dan Brouillette, U.S. energy secretary. "These modernized procedures will increase transparency, accountability, and regulatory certainty for the American people."

The rule expands opportunities for the public to become engaged early in the rulemaking process. Major elements of the rule include:

• Establishing a threshold for "significant" energy savings at 0.3 quads (a short-scale quadrillion BTU) of site energy over 30 years or, if less than that amount, a 10 per cent improvement over existing standards;
• Requiring that the DOE establish final test procedures 180 days before proposing a new energy conservation standard rulemaking; and
• Clarifying that the DOE will codify private sector consensus standards for test procedures. This change will allow manufacturers to test their products at lower cost than when the DOE creates a separate testing metric.

Some industry groups have responded to the announcement, including the American Council for an Energy-Efficient Economy (ACEEE). "In yet another attack on energy-saving policies, the Trump administration today approved a rule that will make it much more difficult to set new energy efficiency standards for common appliances and equipment — from refrigerators, dishwashers and home furnaces to commercial air conditioners and industrial motors," reports the ACEEE.

The Appliance Standards Awareness Project (ASAP) has the further concern that these new changes will create more issues than they solve. The ACEEE argues that several elements of the new process make it harder to set new standards, including:

• A minimum savings threshold that will make new standards for many products illegal;
• Increased deference to industry-developed test procedures;
• Increased deference to standards established by ASHRAE for commercial products;
• A pre-rulemaking process that can lead to a decision not to conduct a rulemaking;
• A requirement that the DOE "cover" products before setting standards;
• A requirement that the DOE re-start the standards rulemaking process whenever the test procedure is amended;
• A requirement that the DOE re-start the standards rulemaking process whenever more products are included within the scope of a regulation; and
• A mandate that makes the process rule legally binding in all instances.

The efficiency standards cover more than 60 categories of appliances.
http://astro7a.wikia.com/wiki/Bohr_Atom
The Bohr Atom depicts the nucleus of the atom at the center, with electrons in orbits around it.

## Quantization of Energy Levels

For hydrogen, the allowed energies are governed by the equation: $E_n = -13.6\,\text{eV}\,\frac{1}{n^2}$
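As a quick illustration of the formula (a short added sketch, not part of the original page), the first few levels can be evaluated directly:

```python
# Hydrogen energy levels from the Bohr model: E_n = -13.6 eV / n^2
def bohr_energy(n: int) -> float:
    return -13.6 / n**2  # [eV]

for n in range(1, 5):
    print(f"E_{n} = {bohr_energy(n):+.3f} eV")
# E_1 = -13.600 eV, E_2 = -3.400 eV, E_3 = -1.511 eV, E_4 = -0.850 eV
```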
http://www.quantumstudy.com/tag/magnetism/
## The horizontal component of earth's magnetic field at a place is 0.36 × 10⁻⁴ weber/m². If the angle of dip at that place is…

Q. The horizontal component of earth's magnetic field at a place is 0.36 × 10⁻⁴ weber/m². If the angle of dip at that place is 60°, then the value of the vertical component of earth's magnetic field will be (in Wb/m²)

(a) 6 × 10⁻⁵ T (b) 6√2 × 10⁻⁵ T (c) 3.6√3 × 10⁻⁵ T (d) √2 × 10⁻⁵ T

Ans: (c)

## A magnetic material of volume 30 cm³ is placed in a magnetic field of intensity 5 oersted. The magnetic moment produced…

Q. A magnetic material of volume 30 cm³ is placed in a magnetic field of intensity 5 oersted. The magnetic moment produced due to it is 6 A·m². The value of magnetic induction will be

(a) 0.2517 tesla (b) 0.025 tesla (c) 0.0025 tesla (d) 25 tesla

Ans: (a)

## A rod of ferromagnetic material with dimensions 10 cm × 5 cm × 2 cm is placed in a magnetising field of intensity 2 × 10⁻⁵ A/m…

Q. A rod of ferromagnetic material with dimensions 10 cm × 5 cm × 2 cm is placed in a magnetising field of intensity 2 × 10⁻⁵ A/m. The magnetic moment produced due to it is 6 A·m². The value of magnetic induction will be ……. × 10⁻² T.

(a) 100.48 (b) 200.28 (c) 50.24 (d) 300.48

Ans: (a)

## Find the percentage increase in the magnetic field B when the space within the current-carrying toroid is filled with aluminium…

Q. Find the percentage increase in the magnetic field B when the space within the current-carrying toroid is filled with aluminium. The susceptibility of aluminium is 2.1 × 10⁻⁵.

(a) 3.1 × 10⁻³ (b) 1.1 × 10⁻³ (c) 2.1 × 10⁻³ (d) 2.1 × 10⁻⁵

Ans: (c)

## 300 turns of a thin wire are uniformly wound on a permanent magnet shaped as a cylinder of length 15 cm…

Q. 300 turns of a thin wire are uniformly wound on a permanent magnet shaped as a cylinder of length 15 cm. When a current of 3 A is passed through the wire, the field outside the magnet disappears. Then the coercive force of the material is

(a) 2 kA·m⁻¹ (b) 4 kA·m⁻¹ (c) 5 kA·m⁻¹ (d) 6 kA·m⁻¹

Ans: (a)
http://math.wikia.com/wiki/Even_and_odd_functions
A function f on R is an even function if for all x in the domain of f, f(x) = f(-x). Such a function is symmetric with respect to the y-axis when graphed.

A function f on R is an odd function if for all x in the domain of f, -f(x) = f(-x). Such a function has rotational symmetry with respect to the origin.

#### Examples

$x^2$ is an even function but $x^3$ is odd. $x^5 + 56x + 909$ is neither.
https://jrogel.com/category/data-science/artificial-intelligence/
## Data Skeptic Podcast

I had an opportunity to be one of the panellists in the Data Skeptic podcast recently. It was great to have been invited, and as a listener to the podcast it was a real treat to be able to take part. Also, recording it was fun… You can listen to the episode here. More information about the Data Skeptic Journal Club can be found on their site. I would like to thank Kyle Polich, Lan Guo and George Kemp for having me as a guest. I hope it is not the last time! In the episode Kyle talks about the relationship between Covid-19 and carbon emissions. George tells us about the new Hateful Memes Challenge from Facebook. Lan joins us to talk about Google's AI Explorables. I talk about a paper that uses neural networks to detect infections in the ear. Let me know what you guys think!

## Getting Answers for Core ML deployment from my own Book

I was working today on the deployment of a small neural network model prototype converted to Core ML to be used in an iPhone app. I was trying to find the best way to get things to work and then it occurred to me I had solved a similar issue before… where‽ when‽ aha! The answer was actually in my Advanced Data Science and Analytics with Python.

## Top Free Books for Deep Learning

This collection includes books on all aspects of deep learning. It begins with titles that cover the subject as a whole, before moving on to work that should help beginners expand their knowledge from machine learning to deep learning. The list concludes with books that discuss neural networks, both titles that introduce the topic and ones that go in-depth, covering the architecture of such networks.

1. Deep Learning By Ian Goodfellow, Yoshua Bengio and Aaron Courville. The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.

2. Deep Learning Tutorial By LISA Lab, University of Montreal. Developed by the LISA lab at the University of Montreal, this free and concise tutorial, presented in the form of a book, explores the basics of machine learning. The book emphasizes using the Theano library (originally developed at the university itself) for creating deep learning models in Python.

3. Deep Learning: Methods and Applications By Li Deng and Dong Yu. This book provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks.

4. This book is oriented to engineers with only some basic understanding of machine learning who want to expand their wisdom in the exciting world of deep learning, with a hands-on approach that uses TensorFlow.

5. Neural Networks and Deep Learning By Michael Nielsen. This book teaches you about neural networks, a beautiful biologically-inspired programming paradigm which enables a computer to learn from observational data. It also covers deep learning, a powerful set of techniques for learning in neural networks.

6. A Brief Introduction to Neural Networks By David Kriesel. This title covers neural networks in depth. Neural networks are a bio-inspired mechanism of data processing that enables computers to learn in a way technically similar to a brain, and even to generalize once solutions to enough problem instances have been taught. Available in English and German.

7. Neural Network Design (2nd edition) By Martin T. Hagan, Howard B. Demuth, Mark H. Beale and Orlando De Jesús.
Neural Network Design (2nd edition) provides a clear and detailed survey of fundamental neural network architectures and learning rules. In it, the authors emphasize a fundamental understanding of the principal neural networks and the methods for training them. The authors also discuss applications of networks to practical engineering problems in pattern recognition, clustering, signal processing, and control systems. Readability and a natural flow of material are emphasized throughout the text.

8. Neural Networks and Learning Machines (3rd edition) By Simon Haykin. This third edition of Simon Haykin's book provides an up-to-date treatment of neural networks in a comprehensive, thorough and readable manner, split into three sections. The book begins by looking at the classical approach to supervised learning, before continuing on to kernel methods based on radial-basis function (RBF) networks. The final part of the book is devoted to regularization theory, which is at the core of machine learning.

## CoreML – iOS App Implementation for the Boston Price Model (Part 1)

Hey! How are things? I hope the beginning of the year is looking great for you all. As promised, I am back to continue the open notebook for the implementation of a Core ML model in a simple iOS app. In one of the previous posts we created a linear regression model to predict prices for Boston properties (1970 prices, that is!) based on two inputs: the crime rate per capita in the area and the average number of rooms in the property. Also, we saw (in a different post) the way in which Core ML implements the properties of the model to be used in an iOS app to carry out the prediction on device!

In this post we will start building the iOS app that will use the model to let our users generate a prediction based on input values for the parameters used in the model. Our aim is to build a simple interface where the user enters the values and the predicted price is shown, something like the following screenshot:

You will need access to a Mac with the latest version of Xcode. At the time of writing I am using Xcode 9.2. We will cover the development of the app, but not so much the deployment (we may do so if people make it known to me that there is interest). In Xcode we will select "Create New Project", and in the next dialogue box, from the menu at the top, make sure that you select "iOS" and, from the options shown, the "Single View App" option; then click the "Next" button. This will create an iOS app with a single page. If you need more pages/views, this is still a good place to start, as you can add further "View Controllers" while you develop the app.

Right, so in the next dialogue box Xcode will be asking for options to create the new project. Give your project a name, something that makes it easy to see what your project is about. In this case I am calling the project "BostonPricer". You can also provide the name of a team (the team of developers contributing to your app, for instance) as well as an organisation name and identifier. In our case these are not that important and you can enter any suitable values you desire. Please note that this becomes more important in case you are planning to send your app for approval to Apple. Anyway, make sure that you select "Swift" as the programming language, and we are leaving the option boxes for "Use Core Data", "Include Unit Tests" and "Include UI Tests" unticked.
I am redacting some values below:

On the left-hand side menu, click on "Main.storyboard". This is the main view that our users will see and interact with. It is here that we will create the design, look-and-feel and interactions of our app. We will start by placing a few objects in our app; some of them will be used simply to display text (labels and information), whereas others will be used to create interactions, in particular to select input values and to generate the prediction. To do that we will use the "Object Library". In the current window of Xcode, in the bottom-right corner, you will see an icon that looks like a little square inside a circle; this is the "Show the Object Library" icon. When you select it, at the bottom of the area you will see a search bar. There you will look for the following objects:

• Label
• Picker View
• Button

You will need three labels, one picker and one button. You can drag each of the elements from the "Object Library" results shown onto the storyboard. You can edit the text of the labels and the button by double-clicking on them. Do not worry about the text shown for the picker; we will deal with these values in future posts. Arrange the elements as shown in the screenshot below:

OK, so far so good. In the next few posts we will start creating the functionality for each of these elements and implement the prediction generated by the model we have developed. Keep in touch. You can look at the code (in development) in my github site here.

## CoreML – Model properties

If you have been following the posts in this open notebook, you may know that by now we have managed to create a linear regression model for the Boston price dataset based on two predictors, namely the crime rate and the average number of rooms. It is by no means the best model out there, and our aim is to explore the creation of a model (in this case with Python) and convert it to a Core ML model that can be deployed in an iOS app. Before we move on to the development of the app, I thought it would be good to take a look at the properties of the converted model. If we open the PriceBoston.mlmodel we saved in the previous post (in Xcode of course) we will see the following information:

We can see the name of the model (PriceBoston) and the fact that it is a "Pipeline Regressor". The model can be given various attributes such as Author, Description, License, etc. We can also see the listing of the Model Evaluation Parameters in the form of Inputs (crime rate and number of rooms) and Outputs (price). There is also an entry to describe the Model Class (PriceBoston); without attaching this model to a target, the class is actually not present. Once we make this model part of a target inside an app, Xcode will generate the appropriate code.

Just to give you a flavour of the code that will be generated when we attach this model to a target, please take a look at the screenshot below:

You can see that the code was generated automatically (see the comment at the beginning of the Swift file). The code defines the input variables and feature names, defines a way to extract values out of the input strings, sets up the model output, and handles other bits and pieces such as defining the class for model loading and prediction (not shown). All this is taken care of by Xcode, making it very easy for us to use the model in our app. We will start building that app in the following posts (bear with me, I promise we will get there). Enjoy!
## CoreML – Building the model for Boston Prices

In the last post we took a look at the Boston prices dataset loaded directly from Scikit-learn. In this post we are going to build a linear regression model and convert it to a .mlmodel to be used in an iOS app. We are going to need some modules:

```python
import coremltools
import pandas as pd
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split
from sklearn import metrics
import numpy as np
```

The coremltools module is what will enable the conversion so that we can use our model in iOS. Let us start by defining a main function to load the dataset:

```python
def main():
    boston = datasets.load_boston()  # load the dataset shipped with Scikit-learn
    boston_df = pd.DataFrame(boston.data)
    boston_df.columns = boston.feature_names
    print(boston_df.columns)
```

In the code above we have loaded the dataset and created a pandas dataframe to hold the data and the names of the columns. As we mentioned in the previous post, we are going to use only the crime rate and the number of rooms to create our model:

```python
    print("We now choose the features to be included in our model.")
    X = boston_df[['CRIM', 'RM']]
    y = boston.target
```

Please note that we are separating the target variable from the predictor variables. Although this dataset is not too large, we are going to follow best practice and split the data into training and testing sets:

```python
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=7)
```

We will only use the training set in the creation of the model and will test with the remaining data points.

```python
    my_model = glm_boston(X_train, y_train)
```

The line of code above assumes that we have defined the function glm_boston as follows:

```python
def glm_boston(X, y):
    print("Implementing a simple linear regression.")
    lm = linear_model.LinearRegression()
    gml = lm.fit(X, y)
    return gml
```

Notice that we are using the LinearRegression implementation in Scikit-learn. Let us go back to the main function we are building and extract the coefficients for our linear model. Refer to the CoreML – Linear Regression post to remember that the type of model we are building is of the form $y=\alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon$:

```python
    coefs = [my_model.intercept_, my_model.coef_]
    print("The intercept is {0}.".format(coefs[0]))
    print("The coefficients are {0}.".format(coefs[1]))
```

We can also take a look at some metrics that let us evaluate our model against the test data:

```python
    # generate predictions for the held-out test set
    y_pred = my_model.predict(X_test)
    # calculate MAE, MSE, RMSE
    print("The mean absolute error is {0}.".format(
        metrics.mean_absolute_error(y_test, y_pred)))
    print("The mean squared error is {0}.".format(
        metrics.mean_squared_error(y_test, y_pred)))
    print("The root mean squared error is {0}.".format(
        np.sqrt(metrics.mean_squared_error(y_test, y_pred))))
```

## CoreML conversion

And now for the big moment: we are going to convert our model into an .mlmodel object!! Ready?

```python
    print("Let us now convert this model into a Core ML object:")
    # Convert model to Core ML
    coreml_model = coremltools.converters.sklearn.convert(
        my_model,
        input_features=["crime", "rooms"],
        output_feature_names="price")
    # Save Core ML Model
    coreml_model.save("PriceBoston.mlmodel")
    print("Done!")
```

We are using the sklearn.convert method of coremltools.converters to convert my_model, with the necessary inputs (i.e. crime and rooms) and output (price). Finally we save the model in a file with the name PriceBoston.mlmodel. Et voilà! In the next post we will start creating an iOS app to use the model we have just built. You can look at the code (in development) in my github site here.
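Once saved, the converted model can be sanity-checked from Python before it ever touches Xcode. This is a minimal added sketch (it assumes you are on a Mac, where coremltools supports on-device-style prediction); the input values are arbitrary illustrations:

```python
import coremltools

# Load the converted model and run a single prediction
model = coremltools.models.MLModel("PriceBoston.mlmodel")
# Feature names must match those used at conversion time: "crime", "rooms"
pred = model.predict({"crime": 0.25, "rooms": 6.2})
print(pred["price"])  # predicted median value, in $1000s (1970s prices)
```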
## CoreML – Boston Prices exploration

In the previous post of this series we described some of the basics of linear regression, one of the most well-known models in machine learning. We saw that we can relate the values of the input parameters $x_i$ to the target variable $y$ to be predicted. In this post we are going to create a linear regression model to predict the price of houses in Boston (based on valuations from the 1970s). The dataset provides information such as the crime rate (CRIM), the proportion of non-retail business in the town (INDUS), the age of the housing stock (AGE) and the average number of rooms (RM), as well as the median value of homes in \$1000s (MEDV), among other attributes.

Let us start by exploring the data. We are going to use Scikit-learn, and fortunately the dataset comes with the module. The input variables are included in the data attribute and the price is given by the target. We are going to load the input variables into the dataframe boston_df and the prices into the array y:

```python
from sklearn import datasets
import pandas as pd

boston = datasets.load_boston()  # the dataset ships with Scikit-learn
boston_df = pd.DataFrame(boston.data)
boston_df.columns = boston.feature_names
y = boston.target
```

We are going to build our model using only a limited number of inputs. In this case let us pay attention to the average number of rooms and the crime rate:

```python
X = boston_df[['CRIM', 'RM']]
X.columns = ['Crime', 'Rooms']
X.describe()
```

The description of these two attributes is as follows:

|       | Crime      | Rooms      |
| ----- | ---------- | ---------- |
| count | 506.000000 | 506.000000 |
| mean  | 3.593761   | 6.284634   |
| std   | 8.596783   | 0.702617   |
| min   | 0.006320   | 3.561000   |
| 25%   | 0.082045   | 5.885500   |
| 50%   | 0.256510   | 6.208500   |
| 75%   | 3.647423   | 6.623500   |
| max   | 88.976200  | 8.780000   |

As we can see, the minimum number of rooms is 3.56 and the maximum is 8.78, whereas for the crime rate the minimum is 0.006 and the maximum value is 88.98, although the median is only 0.26. We will use some of these values to define the ranges offered to our users for finding price predictions. Finally, let us visualise the data:

We shall bear these values in mind when building our regression model in subsequent posts. You can look at the code (in development) in my github site here.

## What Is Artificial Intelligence?

Original article by JF Puget here.

Here is a question I was asked to discuss at a conference last month: what is Artificial Intelligence (AI)? Instead of trying to answer it, which could take days, I decided to focus on how AI has been defined over the years. Nowadays, most people probably equate AI with deep learning. This has not always been the case, as we shall see.

Most people say that AI was first defined as a research field in a 1956 workshop at Dartmouth College. The reality is that it had been defined 6 years earlier, by Alan Turing in 1950. Let me cite Wikipedia here:

The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.
The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give. The test was introduced by Turing in his paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460).[3] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[4] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[5] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[6]

So, the first definition of AI was about thinking machines. Turing decided to test thinking via a chat.

The definition of AI rapidly evolved to include the ability to perform complex reasoning and planning tasks. Early successes in the 50s led prominent researchers to make imprudent predictions about how AI would become a reality in the 60s. The failure of these predictions to materialise led to the funding cuts known as the AI winter in the 70s.

In the early 80s, building on some success in medical diagnosis, AI came back with expert systems. These systems tried to capture the expertise of humans in various domains, and were implemented as rule-based systems. Those were the days when AI focused on the ability to perform tasks at the level of the best human expertise. Successes like IBM Deep Blue beating the chess world champion, Garry Kasparov, in 1997 were the acme of this line of AI research.

Let's contrast this with today's AI. The focus is on perception: can we have systems that recognize what is in a picture, what is in a video, what is said in a sound track? Rapid progress is underway for these tasks thanks to the use of deep learning. Is it still AI? Are we automating human thinking? The reality is that we are working on automating tasks that most humans can do without any thinking effort. Yet we see lots of bragging about AI being a reality when all we have is some ability to mimic human perception. I really find it ironic that our definition of intelligence has become one of mere perception rather than thinking.

Granted, not all AI work today is about perception. Work on natural language processing (e.g. translation) is a bit closer to reasoning than the mere perception tasks described above. Successes like IBM Watson at Jeopardy!, or Google AlphaGo at Go, are two examples of the traditional AI aim of replicating tasks performed by human experts. The good news (to me at least) is that progress on perception is so rapid that it will move from a research field to an engineering field in the coming years. We will then see a re-positioning of researchers onto other AI-related topics such as reasoning and planning. We'll be closer to Turing's initial view of AI.

## Data Science & Augmented Intelligence – Reblog from "Data Science: a new discipline to change the world" by Alan Wilson

This is a reblog of the post by Alan Wilson that appeared in the EPSRC blog. You can see the original here.

====

## Data science – the new kid on the block

I have re-badged myself several times in my research career: mathematician, theoretical physicist, economist (of sorts), geographer, city planner, complexity scientist, and now data scientist.
This is partly personal idiosyncrasy but also a reflection of how new interdisciplinary research challenges emerge. I now have the privilege of being the Chief Executive of The Alan Turing Institute – the national centre for data science. 'Data science' is the new kid on the block. How come? First, there is an enormous amount of new 'big' data; second, this has had a powerful impact on all the sciences; and third, on society, the economy and our way of life. Data science represents these combinations. The data comes from widespread digitisation combined with the 'open data' initiatives of government and the extensive deployment of sensors and devices such as mobile phones. This generates huge research opportunities.

In broad terms, data science has two main branches. First, what can we do with the data? Applications of statistics and machine learning fall under this branch. Second, how can we transform existing science with this data and these methods? Much of the second is rooted in mathematics. To make this work in practice, there is a time-consuming first step: making the data usable by combining different sources in different formats. This is known as 'data wrangling', which coincidentally is the subject of a new Turing research project to speed up this time-consuming process. The whole field is driven by the power of the computer, and computer science. Understanding the effects of data on society, and the ethical questions it provokes, is led by the social sciences. All of this combines in the idea of artificial intelligence, or AI. While the 'machine' has not yet passed the 'Turing test' and cannot compete with humans in thought, in many applications AI and data science now support human decision making. The current buzz phrase for this is 'augmented intelligence'.

## Cross-disciplinary potential

I can illustrate the research potential of data science through two examples, the first from my own field of urban research, the second from medicine – with recent AI research in this field learned, no doubt imperfectly, from my Turing colleague Mihaela van der Schaar.

There is a long history of developing mathematical and computer models of cities. Data arrives very slowly for model calibration – the census, for example, is critical. A combination of open government data and real-time flows from mobile phones and social media networks has changed this situation: real-time calibration is now possible. This potentially transforms both the science and its application in city planning. Machine learning complements, and potentially integrates with, the models. Data science in this case adds to an existing deep knowledge base.

Medical diagnosis is also underpinned by existing knowledge – physiology, cell and molecular biology for example. It is a skilled business, interpreting symptoms and tests. This can be enhanced through data science techniques – beginning with advances in imaging and visualisation and then the application of machine learning to the variety of evidence available. The clinician can add his or her own judgement. Treatment plans follow. At this point, something really new kicks in. 'Live' data on patients, including their responses to treatment, becomes available. This data can be combined with personal data to derive clusters of 'like' patients, enabling the exploration of the effectiveness of different treatment plans for different types of patients. This combination of data science techniques and human decision making is an excellent example of augmented intelligence.
This opens the way to personalised intelligent medicine, which is set to have a transformative effect on healthcare (for those interested in finding out more, reserve a place for Mihaela van der Schaar’s Turing Lecture on 4 May). ## An exciting new agenda These kinds of developments of data science, and the associated applications, are possible in almost all sectors of industry. It is the role of the Alan Turing Institute to explore both the fundamental science underpinnings, and the potential applications, of data science across this wide landscape. We currently work in fields as diverse as digital engineering, defence and security, computer technology and finance as well as cities and health. This range will expand as this very new Institute grows. We will work with and through universities and with commercial, public and third sector partners, to generate and develop the fruits of data science. This is a challenging agenda but a hugely exciting one.
2020-09-26 13:58:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19352060556411743, "perplexity": 1207.1855705614616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244231.61/warc/CC-MAIN-20200926134026-20200926164026-00191.warc.gz"}
http://mathhelpforum.com/algebra/65277-can-some-one-break-down-help-me-solve-im-so-lost.html
# Math Help - Can someone break this down and help me solve. I'm so lost...

1. ## Can someone break this down and help me solve. I'm so lost...

Simplify the root: -18^ sqrt(-8)^18

2. Hi.

Originally Posted by HappyFeet
Simplify the root: -18^ sqrt(-8)^18

Sorry, what exactly is the question? $- 18^{\sqrt{-8}^{18}} = - 18^{{(-8)^{18/2}}} = - 18^{(-8)^{9}}$ $= - \frac{1}{18^{8^{9}}}$ ... Regards, Rapha

3. Hi, I have added an attachment with the actual problem. Thanks!!!

4. Originally Posted by HappyFeet
Hi, I have added an attachment with the actual problem. Thanks!!!

$-\sqrt[18]{ (-8)^{18}} = - \left[ (-8)^{18} \right]^{1/18} = -\left[ 8^{18} \right]^{1/18} = -8$.
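An editorial note, not part of the thread: the sign in the last step deserves emphasis. An even root of an even power produces an absolute value, so the sign of the base cannot survive the root:

$$\sqrt[2n]{x^{2n}} = |x| \ \text{ for real } x, \qquad\text{hence}\qquad -\sqrt[18]{(-8)^{18}} = -\lvert -8\rvert = -8.$$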
2015-05-25 06:22:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430822730064392, "perplexity": 7346.247065075316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928414.45/warc/CC-MAIN-20150521113208-00227-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/math-topics/69533-solved-absolute-distance.html
1. ## [SOLVED] Absolute distance

"Write it down as an inequality: number $\displaystyle x$ is (on a number line, I guess) less distant from $\displaystyle 1$ (one), than it is from $\displaystyle x^2$."

$\displaystyle |x-1|<|x-x^2|=|x(x-1)|$

For $\displaystyle x\geq1$: $\displaystyle x-1<x-x^2 \Rightarrow x^2<1$ FALSE ($\displaystyle x$ is greater than $\displaystyle 1$)

For $\displaystyle 0\leq x<1$: $\displaystyle -(x-1)<-(x-x^2) \Rightarrow x^2>1$ FALSE ($\displaystyle x$ is less than $\displaystyle 1$ and greater than $\displaystyle 0$)

For $\displaystyle x<0$: $\displaystyle -(x-1)<x-x^2 \rightarrow x^2-2x+1<0 \Rightarrow (x-1)^2<0$ FALSE (always positive)

$\displaystyle \Rightarrow$ inequality has no solutions

Is this absolutely(!) correct?

2. Originally Posted by courteous
"Write it down as an inequality: number $\displaystyle x$ is (on a number line, I guess) less distant from $\displaystyle 1$ (one), than it is from $\displaystyle x^2$." $\displaystyle |x-1|<|x-x^2|=|x(x-1)|$ For $\displaystyle x\geq1$: $\displaystyle x-1<x-x^2 \Rightarrow x^2<1$ FALSE ($\displaystyle x$ is greater than $\displaystyle 1$)

If x > 1 then $\displaystyle x-x^2 < 0$ for all x. Therefore you have to solve the inequality: $\displaystyle x\geq1$: $\displaystyle x-1<x^2-x \rightarrow 0 < x^2 -2x +1 ~\implies~ 0< (x-1)^2$ This is true for all x > 1

For $\displaystyle 0\leq x<1$: $\displaystyle -(x-1)<-(x-x^2) \rightarrow x^2>1$ FALSE ($\displaystyle x$ is less than $\displaystyle 1$ and greater than $\displaystyle 0$) For $\displaystyle x<0$: $\displaystyle -(x-1)<x-x^2 \implies x^2-2x+1<0 \rightarrow (x-1)^2<0$ FALSE (always positive) $\displaystyle \Rightarrow$ inequality has no solutions Is this absolutely(!) correct?

I'm not quite sure, but in my opinion the last line of your solution isn't correct either. (Unfortunately I haven't spotted the error yet) By first inspection and playing around with some values I assume that the solution could be: $\displaystyle x\in \left((-\infty,-1) \cup (1,\infty)\right)$

3. Originally Posted by courteous
"Write it down as an inequality: number $\displaystyle x$ is less distant from $\displaystyle 1$ (one), than it is from $\displaystyle x^2$."

If the problem is to find real numbers x such that the distance from x to 1 is less than the distance from x to $\displaystyle x^2$, then the correct expression is $\displaystyle \left| {x - 1} \right| < \left| {x^2 - x} \right|$. It is worth noting that $\displaystyle \left| {x - 1} \right| = \left| {1 - x} \right|\;\& \,\left| {x^2 - x} \right| = \left| {x - x^2} \right|$, that is distance is symmetric. As pointed out above the solution set is $\displaystyle \left( { - \infty , - 1} \right) \cup \left( {1,\infty } \right)$. To see this note that $\displaystyle \left| {x^2 - x} \right| = \left| x \right|\left| {x - 1} \right|$ and if $\displaystyle x \not= 1$ then $\displaystyle \left| {x - 1} \right| < \left| x \right|\left| {x - 1} \right|\; \Rightarrow \;1 < \left| x \right|$

4. Originally Posted by Plato
If the problem is to find real numbers x such that the distance from x to 1 is less than the distance from x to $\displaystyle x^2$, then the correct expression is $\displaystyle \left| {x - 1} \right| < \left| {x^2 - x} \right|$.

So, even as distance is symmetric, then the correct initial expression is only $\displaystyle \left| {x - 1} \right| < \left| {x^2 - x} \right|$. Why (it makes sense, besides, the other way around yields a "no solution" result)?
What if you've had some hard-line initial conditions, not $\displaystyle x^2$?

5. Originally Posted by courteous
Why (it makes sense, besides, the other way around yields a "no solution" result)? What if you've had some hard-line initial conditions, not $\displaystyle x^2$?

$\displaystyle \begin{array}{l} \left| {x - 1} \right| < \left| {x - x^2 } \right| \\ \left| {1 - x} \right| < \left| {x - x^2 } \right| \\ \left| {x - 1} \right| < \left| {x^2 - x} \right| \\ \left| {1 - x} \right| < \left| {x^2 - x } \right| \\ \end{array}$

Each of the above has the solution $\displaystyle ( - \infty , - 1) \cup (1,\infty )$. The order makes no difference.

6. Originally Posted by courteous
$\displaystyle |x-1|<|x-x^2|=|x(x-1)|$

Indeed! The mischievous $\displaystyle |x-x^2|=|x(x-1)|$ has misled me. All clear (now).
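A quick numerical check of that solution set (an editorial addition, not from the thread):

$$x=2:\ |2-1|=1<|2-2^2|=2\ \checkmark\qquad x=-2:\ |-2-1|=3<|-2-(-2)^2|=6\ \checkmark\qquad x=\tfrac{1}{2}:\ |\tfrac{1}{2}-1|=\tfrac{1}{2}\not<|\tfrac{1}{2}-\tfrac{1}{4}|=\tfrac{1}{4},$$

consistent with $(-\infty,-1)\cup(1,\infty)$.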
2018-04-22 03:12:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9029987454414368, "perplexity": 751.534596290254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00192.warc.gz"}
http://www.oxfordmathcenter.com/drupal7/node/636
A linked list is a recursive data structure that is either empty (null) or a reference to a node having a data item and a reference to a linked list. Through this structure, a linked list creates a sequence of nodes chained together, where each node contains a data item and a reference to the next node. Below is an example of a node class for a linked list:

class Node {
    Item item;
    Node next;
}

Note, the node class is self-referential. That is to say, the node class contains a reference to itself. Also, a reference to a linked list can simply be a reference to the first node of that list. That said, one often encapsulates the reference to the first node of a given linked list as an instance variable in some enclosing linked list class.

Linked lists and arrays are the two fundamental ways in which sequential data can be stored. There are advantages and disadvantages to both:

• Arrays store elements contiguously in memory and support indexed access to the items they contain, but suffer from a fixed size.
• Linked lists don't have the advantages of their items being stored contiguously in memory or supporting indexed access, but they do support dynamic sizing (i.e., we can create and insert additional nodes as needed). Related to this, they also have extremely efficient means for inserting or removing elements, as the sketch at the end of this section shows. These advantages come at a cost, however. Linked lists incur additional memory overhead due to the need to store so many references.

## Building and Traversing Linked Lists

To build a linked list, we start with a link (i.e., a "reference to a node") that is null. This reference is often called first, root, or head. Then, we create a node for each item we need to store, set the item field to the desired value, and then set the next field to the next node. The following gives an example of this process, storing strings "one", "two" and "three":

Node first = new Node();   // create first node
first.item = "one";
Node second = new Node();  // create second node
second.item = "two";
first.next = second;
Node third = new Node();   // create third node
third.item = "three";
second.next = third;

If we should need to process the elements of a list in some way -- for example, suppose we need to print the list elements -- we can traverse the list with a loop not unlike the following:

for (Node n = first; n != null; n = n.next) {
    // process n.item
}

Notice how similar this is to how one processes the elements of an array:

for (int i = 0; i < a.length; i++) {
    // process a[i]
}

Some efficiencies can be added by not only maintaining a reference to the first node of the list, but also by maintaining a reference to the last node of the list. This comes at a cost, however, in that all of the methods that change the list in any way must now check whether this additional reference needs to be modified -- and make these modifications, as necessary. To see how some additional methods can be implemented with the addition of a reference to the last node of the list, see Linked List (Double Ended)
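To make the claim about cheap insertion and removal concrete, here is a short sketch in the same Java style as the snippets above. It is an editorial illustration rather than code from this page: the class name LinkedStack and the generic type parameter are invented for the example.

import java.util.NoSuchElementException;

// A minimal singly linked list with constant-time insertion and removal
// at the front -- no elements are shifted, unlike an array.
public class LinkedStack<Item> {

    private static class Node<T> {
        T item;
        Node<T> next;
    }

    private Node<Item> first;  // reference to the first node; null means empty

    // Insert a new item at the front of the list: O(1).
    public void push(Item item) {
        Node<Item> oldFirst = first;
        first = new Node<>();
        first.item = item;
        first.next = oldFirst;
    }

    // Remove and return the item at the front of the list: also O(1).
    public Item pop() {
        if (first == null) throw new NoSuchElementException("list is empty");
        Item item = first.item;
        first = first.next;  // the old first node becomes unreachable
        return item;
    }
}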
2018-03-19 14:14:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35459065437316895, "perplexity": 868.8853929361129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646952.38/warc/CC-MAIN-20180319140246-20180319160246-00491.warc.gz"}
https://zbmath.org/?q=an:1016.11035
Some multi-set inclusions associated with shuffle convolutions and multiple zeta values. (English) Zbl 1016.11035

The authors develop a new method for obtaining combinatorial identities involving shuffle convolutions [see D. Bowman and D. M. Bradley, J. Comb. Theory, Ser. A 97, 43-61 (2002; Zbl 1021.11026)]. As an application, a new proof of the formula $\zeta (3,1,3,1,\ldots ,3,1)=2\pi^{4n}/(4n+2)!$, where $\{ 3,1\}$ is repeated $n$ times, is given. Some new identities for the multiple zeta function are also obtained.

MSC:
11M41 Other Dirichlet series and zeta functions
05A99 Classical combinatorial problems
05E99 Algebraic combinatorics

References:
[1] Borwein, J. M.; Bradley, D. M.; Broadhurst, D. J.: Evaluations of k-fold Euler/Zagier sums: a compendium of results for arbitrary k. Electron. J. Combin. 4, No. 2, #R5 (1997) · Zbl 0884.40004
[2] Borwein, J. M.; Bradley, D. M.; Broadhurst, D. J.; Lisonĕk, P.: Special values of multiple polylogarithms. Trans. Am. Math. Soc. 353, No. 3, 907-941 (2000) · Zbl 1002.11093
[3] Borwein, J. M.; Bradley, D. M.; Broadhurst, D. J.; Lisonĕk, P.: Combinatorial aspects of multiple zeta values. Electron. J. Combin. 5, No. 1, #R38 (1998) · Zbl 0904.05012
[4] Bowman, D.; Bradley, D. M.: Resolution of some open problems concerning multiple zeta evaluations of arbitrary depth. Compositio Math. (in press) · Zbl 1035.11037
[5] Bowman, D.; Bradley, D. M.: The algebra and combinatorics of shuffles and multiple zeta values. J. Combin. Theory Ser. A 97, No. 1, 43-61 (2002) · Zbl 1021.11026
[6] Bowman, D.; Bradley, D. M.: Multiple polylogarithms: a brief survey. Proceedings of a Conference on q-series with Applications to Combinatorics, Number Theory and Physics, Contemporary Mathematics 291, American Mathematical Society, 2001, pp. 71-92 · Zbl 0998.33013
[7] Broadhurst, D. J.; Kreimer, D.: Association of multiple zeta values with positive knots via Feynman diagrams up to 9 loops. Phys. Lett. B 393, No. 3-4, 403-412 (1997) · Zbl 0946.81028
[8] Chen, Kuo-Tsai: Iterated integrals and exponential homomorphisms. Proc. London Math. Soc. 4, No. 3, 502-512 (1954) · Zbl 0058.25603
[9] Chen, Kuo-Tsai: Integration of paths, geometric invariants and a generalized Baker-Hausdorff formula. Ann. Math. 65, No. 1, 163-178 (1957) · Zbl 0077.25301
[10] Goncharov, A. B.: Multiple polylogarithms, cyclotomy and modular complexes. Math. Res. Lett. 5, No. 4, 497-516 (1998) · Zbl 0961.11040
[11] Hoffman, M. E.: Multiple harmonic series. Pacific J. Math. 152, No. 2, 275-290 (1992) · Zbl 0763.11037
[12] Hoffman, M. E.: The algebra of multiple harmonic series. J. Algebra 194, 477-495 (1997) · Zbl 0881.11067
[13] Minh, Hoang Ngoc; Petitot, M.: Lyndon words, polylogarithms and the Riemann $\zeta$ function. Discrete Math. 217, No. 1-3, 273-292 (2000) · Zbl 0959.68144
[14] Radford, D. E.: A natural ring basis for the shuffle algebra and an application to group schemes. J. Algebra 58, 432-454 (1979) · Zbl 0409.16011
[15] Ree, R.: Lie elements and an algebra associated with shuffles. Ann. Math. 62, No. 2, 210-220 (1958) · Zbl 0083.25401
[16] Ryoo, Ji Hoon: Identities for multiple zeta values using the shuffle operation. Master's Thesis, University of Maine, May 2001
[17] Waldschmidt, M.: Valeurs zêta multiples: une introduction. J. Théor. Nombres Bordeaux 12, No. 2, 581-595 (2000) · Zbl 0976.11037
[18] Waldschmidt, M.: Introduction to polylogarithms. Proceedings of the Chandigarh International Conference on Number Theory and Discrete Mathematics in Honour of Srinivasa Ramanujan (to appear) · Zbl 1035.11033
2016-04-29 17:56:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045423626899719, "perplexity": 5425.124938683981}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111392.88/warc/CC-MAIN-20160428161511-00089-ip-10-239-7-51.ec2.internal.warc.gz"}
http://www.allsyntax.com/forums/view.php?f=1&t=436&p=1987&pg=1
Forum Index » General Discussion » Viewing Topic: Help me solve this problem and you will have your reward...

littlewoodpbraullo (9:43 am, Aug 18 2011):
Pls give me your e-mail add and I will send it to you the problem tomorrow August 19, 2011. Hope that you will help me.

bs0d (12:01 am, Aug 19 2011):
Why not just post it on here? If I can't help, perhaps someone else can?

littlewoodpbraullo (4:52 am, Aug 19 2011):
I don't know how to attach a pdf file..it's about a particular problem in math..pls

littlewoodpbraullo (5:03 am, Aug 19 2011):
ok if you insist...this is the problem (use LaTeX to see clearly the encoded word)...

For $n\geq 0$, let $P(z)\in\mathfrak{L}_n=\displaystyle\{P(z)=\sum_{i=0}^n a_iz^i : a_i\in\{\pm 1\}~\mbox{and}~z=e^{i\theta}\}$. Then
$$\displaystyle\frac{1}{2^{n+1}}\sum_{P\in\mathfrak{L}_n}\frac{1}{2\pi}\int_0^{2\pi}\left|P(z)\right|^2 P(\overline{z})^2~d\theta = ?$$
Note: The answer is a function of $n$. Also, one can use any kind of programming.

bs0d (9:18 pm, Aug 21 2011):
You're right, it's difficult to see what's going on. Perhaps you could post a link to the PDF? Sorry I can't be much help.
2018-09-25 01:51:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27648863196372986, "perplexity": 9156.161395855284}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160853.60/warc/CC-MAIN-20180925004528-20180925024928-00022.warc.gz"}
https://pbelmans.ncag.info/blog/2012/03/29/how-to-write-cech-cohomology-groups-in-latex/
Just like writing direct and inverse limits in TeX, the way to write Čech cohomology groups in TeX is something that doesn't come up easily in Google unless you know what to look for (basically a list of math mode accents, but I am not the only person obstinately searching with the wrong keywords, right?). So in text mode you write \v{C}ech, and in case you wish to write down the $n$-th Čech cohomology group $\check{\mathrm{H}}^n(X,\mathcal{F})$ of a topological space $X$ and the sheaf $\mathcal{F}$ you use \check{\mathrm{H}}^n(X,\mathcal{F}). Notice the use of \mathrm{H} for the actual (co)homology object; you could (and should) do this too! Let this fact be known.
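For concreteness, here is a minimal compilable example (an editorial sketch, not from the post; the \Cech shorthand macro is invented for illustration):

\documentclass{article}
\begin{document}
% text mode: \v{C} puts the hacek accent on the C
The \v{C}ech cohomology group of a space $X$ with coefficients in a sheaf
$\mathcal{F}$ is written $\check{\mathrm{H}}^n(X,\mathcal{F})$.

% an optional shorthand keeps the markup readable
\newcommand{\Cech}[2]{\check{\mathrm{H}}^{#1}(#2)}
Via the macro: $\Cech{n}{X,\mathcal{F}}$.
\end{document}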
2022-01-27 14:40:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608989953994751, "perplexity": 1089.2369137895178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00387.warc.gz"}
https://www.physicsforums.com/threads/sequence-of-sum.668269/
# Sequence of sum

1. Jan 31, 2013

### Biosyn

1. The problem statement, all variables and given/known data

Find the sum of $5^1-5^2+5^3-5^4+...-5^{98}$

a. (5/4)(1-5^99)
b. (1/6)(1-5^99)
c. (6/5)(1+5^98)
d. (1-5^100)
e. (5/6)(1-5^98)

2. Relevant equations

3. The attempt at a solution

I feel as though this is actually a simple problem and that I'm not looking at it the right way.

[$5^1 + 5^3 + 5^5....5^{97}$] + [$-5^2-5^4-5^6...-5^{98}$]

Last edited: Jan 31, 2013

2. Jan 31, 2013

### jbunniii

Do you know how to sum $x^n$ in general? What is $x$ here?

3. Jan 31, 2013

### Biosyn

$x$ will be 5?

$$\sum_{i=0}^{48} (5^{2i + 1})$$ + $$\sum_{i=0}^{49} (5^{2i})$$

Never mind, I figured it out!

Last edited: Jan 31, 2013

4. Jan 31, 2013

### jbunniii

Actually, it looks to me like
$$-\sum_{n=1}^{98}(-5)^n$$

5. Feb 1, 2013

### Biosyn

I used Sn = $\frac{a_1(1-r^n)}{1-r}$

Sn = $\frac{5(1-(-5)^{98})}{1-(-5)}$ = $\frac{5(1-(-5)^{98})}{6}$ = (5/6)(1-(-5)^98)
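One last step is worth making explicit (an editorial note, not part of the thread): since 98 is even, $(-5)^{98} = 5^{98}$, so

$$S_{98} = \frac{5\left(1-(-5)^{98}\right)}{1-(-5)} = \frac{5}{6}\left(1-5^{98}\right),$$

which is choice (e).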
2018-02-18 22:38:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4963342845439911, "perplexity": 5577.65678075136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812259.30/warc/CC-MAIN-20180218212626-20180218232626-00757.warc.gz"}
http://mathhelpforum.com/calculus/90705-u-sub-problem-no-u.html
# Thread: "u sub" problem with no "u"

1. ## "u sub" problem with no "u"

I need to know how to integrate 2sqrt(100-x^2) from -10 to 10 by hand. It suggests a u-substitution but no u seems to exist. Please help. Bret Norvilitis Orchard Park HS

2. Originally Posted by bmnorvil
I need to know how to integrate 2sqrt(100-x^2) from -10 to 10 by hand. It suggests a u-substitution but no u seems to exist. Please help. Bret Norvilitis Orchard Park HS

It has been a while since I have done any integration by substitution but I would maybe suggest using $u = 100-x^2$

3. I was never really that sure about substitution but I think this is how it works... When it says to do a u-sub it means set x to be equal to some function containing u. This also means that you should change your limits as well. In this case, set $x = 10\sin(u)$ and hence $dx = 10\cos(u)\,du$, then $x^2 = 100\sin^2(u)$. So put this into your equation and you get...

$\int 2 \sqrt{100-x^2}\,dx = \int 2 \sqrt{100-100\sin^2(u)}\;10\cos(u)\,du = \int 2 \sqrt{100(1-\sin^2(u))}\;10\cos(u)\,du$
$= \int 2 \sqrt{100\cos^2(u)}\;10\cos(u)\,du = \int 200\cos^2(u)\,du$

4. let $10\sin u = x$, so $10\cos u\,du = dx$, $du = dx/(10\cos u)$, and $\cos u = \sqrt{100-x^2}/10$

5. What Amer said. Note that Amer's integral limits were found by setting 10 and -10 to be equal to $10\sin(u)$, hence $\sin(u) = 1$ and $-1$, so $u= \pm \frac{\pi}{2}$ are the new limits
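To finish the computation (an editorial addition, not in the thread): with the limits $u = \pm\frac{\pi}{2}$ from the last post,

$$\int_{-\pi/2}^{\pi/2} 200\cos^2(u)\,du = 100\int_{-\pi/2}^{\pi/2}\left(1+\cos(2u)\right)du = 100\left[u + \tfrac{1}{2}\sin(2u)\right]_{-\pi/2}^{\pi/2} = 100\pi.$$

As a sanity check, $y=\sqrt{100-x^2}$ traces the upper half of a circle of radius 10, so the original integral is twice the area of that semicircle: $2\cdot\tfrac{1}{2}\pi(10)^2 = 100\pi$.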
2016-10-25 07:44:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775758743286133, "perplexity": 983.3889238070595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719960.60/warc/CC-MAIN-20161020183839-00226-ip-10-171-6-4.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/302982/how-to-prove-the-identity-l2-frac-cdot3-frac215-sum-limits-k-1-inf
How to prove the identity $L(2,(\frac{\cdot}3))=\frac2{15}\sum\limits_{k=1}^\infty\frac{48^k}{k(2k-1)\binom{4k}{2k}\binom{2k}k}$? For the Dirichlet character $\chi(a)=(\frac a3)$ (which is the Legendre symbol), we have $$L(2,\chi)=\sum_{n=1}^\infty\frac{(\frac n3)}{n^2}=0.781302412896486296867187429624\ldots.$$ Note that this series converges slowly. In 2014, motivated by my conjectural congruence $$\sum_{k=1}^{p-1}\frac{\binom{4k}{2k+1}\binom{2k}k}{48^k}\equiv\frac5{12}p^2B_{p-2}\left(\frac13\right)\pmod{p^3}\ \ \ \text{for any prime}\ p>3$$ (cf. Conjecture 1.1. of my paper available from http://maths.nju.edu.cn/~zwsun/165s.pdf), I found the following rapidly convergent series for the constant $L(2,(\frac{\cdot}3))$: $$L\left(2,\left(\frac{\cdot}3\right)\right)=\frac2{15}\sum _{k=1}^\infty\frac{48^k}{k(2k-1)\binom{4k}{2k}\binom{2k}k}.\tag{1}$$ As the right-hand side of (1) converges quickly, you will not doubt the truth of (1) if you use Mathematica or Maple to check it. Unlike Ramanujan-type series for $1/\pi$, the summand in (1) just involves a product of two (not three) binomial coefficients. Note that $(1)$ was listed as $(1.9)$ in my preprint List of conjectural series for powers of $\pi$ and other constants. QUESTION: How to prove my conjectural identity $(1)$? I have mentioned this question to several experts at $\pi$-series or hypergeometric series, but none of them could prove the identity $(1)$. Any helpful ideas towards the proof of $(1)$? • In 2010 I conjectured that $$L\left(2,\left(\frac{\cdot}3\right)\right)=\sum_{k=1}^\infty\frac{(15k-4)(-27)^{k-1}}{k^3\binom{2k}k^2\binom{3k}k}$$ which was confirmed by Kh. Hessami Pilehrood and T. Hessami Pilehrood [Electron. J. Combin. 18(2012), #P35]. Using this, we can check (1) numerically. Jun 17 '18 at 14:23 • Both sides should be periods and it might be possible to directly compare the motives and show they are isomorphic. Jun 17 '18 at 14:32 • @Will Sawin: Indeed, Mathematica gives for the sum (1): $\frac{8}{15} \ _4F_3(\frac{1}{2},1,1,2;\frac{5}{4},\frac{3}{2},\frac{7}{4};\frac{3}{4})$. Jun 17 '18 at 15:09 • The paper of Kh. & T. Hessami Pilehrood, cited in the comments above can be found here: combinatorics.org/ojs/index.php/eljc/article/view/v18i2p35 . In their notation, $K$ is the constant of interest in this question. The result is proved on page 10 after Corr. 4, using the following identity involving Hurwitz zeta functions: $9K = \zeta(2,1/3)-\zeta(2,2/3)$. – j.c. Jun 17 '18 at 19:39 • Alternative form $$\int_0^{\pi/3}\frac{\left(2-\sqrt{3} \sin y\right) (y-\sin y\cos y)}{\sin ^3y \sqrt{3-2 \sqrt{3} \sin y}}dy=\frac{5}{4}L(2,\chi)$$ – Nemo Jun 18 '18 at 9:55
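Since the question emphasizes that the right-hand side of (1) converges quickly, a numerical sanity check is easy to script. The following is an editorial sketch (not from the thread), comparing a partial sum of (1) against the decimal value of $L(2,(\frac{\cdot}3))$ quoted at the top; it uses exact rational arithmetic so the large powers and binomials cannot overflow:

from fractions import Fraction
from math import comb

# Partial sum of the right-hand side of (1):
#   (2/15) * sum_{k>=1} 48^k / (k * (2k-1) * C(4k,2k) * C(2k,k))
# The terms decay roughly like (3/4)^k, so ~120 terms suffice for double precision.
total = Fraction(0)
for k in range(1, 121):
    total += Fraction(48**k, k * (2 * k - 1) * comb(4 * k, 2 * k) * comb(2 * k, k))
rhs = float(Fraction(2, 15) * total)

lhs = 0.781302412896486296867187429624  # value of L(2,(./3)) quoted in the question
print(rhs, lhs, abs(rhs - lhs))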
2021-09-21 06:16:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8220891356468201, "perplexity": 458.5388192782253}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00067.warc.gz"}
https://math.stackexchange.com/questions/1288443/roots-of-characteristic-equation-ode
# Roots of Characteristic equation (ODE)

I have a question in regard to the following: considering the ODE $y^{(4)}+2y''+y=0$, we can factor and find that the roots are $$r_1=r_2=i$$ and $$r_3=r_4=-i$$ So, I thought that a solution would be of the form $y(t)=c_1\cos(t)+c_2\sin(t)+c_3t\cos(-t)+c_4 t\sin (-t)$, but the answer given is the same except that it does not include the $-t$ in the latter two cos and sin terms. I know that $\cos(-t)=\cos(t)$, but that does not hold for $\sin$, so what is it I am doing wrong? Thanks

• $c_4\sin(-t) = (-c_4)\sin(t)$. (But shouldn't there be a factor of $t$ in that term too?) – Henning Makholm May 18 '15 at 19:50
• the negative sign is absorbed by the constant $c_4$. – Emilio Novati May 18 '15 at 19:51
• Oh okay thanks, that makes more sense now. Let's say you did not do such, the answer would still be correct, no? – Quality May 18 '15 at 19:51
• No. As noted by @Henning the solution has a term $ct\sin t$. – Emilio Novati May 18 '15 at 19:57
• Oops yea I meant to have that when I wrote it up as well, other than that – Quality May 18 '15 at 19:59

A fundamental set of solutions will be $$\{e^{it},e^{-it},te^{it},te^{-it}\}$$ (i.e. the four functions with the given formulas) as seen by direct examination of the roots of the characteristic equation. Taking linear combinations, (e.g. $\sin(t) = \frac{1}{2i}(e^{it}-e^{-it})$ ), we obtain another set of solutions: $$\{\sin t, \cos t, t\sin t, t\cos t\}$$ which is linearly independent, so the general solution can be expressed as $$y(t) = c_1\sin t + c_2\cos t + c_3 t\sin t + c_4t \cos t\, \quad c_i\in\Bbb C$$ or just $c_i\in \Bbb R$ for real-valued solutions.
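As a quick check (an editorial addition, not in the thread), the repeated roots really do force the factor of $t$; for $y=t\sin t$:

$$y'' = 2\cos t - t\sin t, \qquad y^{(4)} = -4\cos t + t\sin t,$$
$$y^{(4)} + 2y'' + y = (-4\cos t + t\sin t) + (4\cos t - 2t\sin t) + t\sin t = 0.$$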
2019-09-15 22:46:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9133145213127136, "perplexity": 303.34343081263324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572436.52/warc/CC-MAIN-20190915215643-20190916001643-00157.warc.gz"}
https://physics.stackexchange.com/questions/486501/berry-connection-in-a-solid
# Berry connection in a solid

I am having trouble understanding one equality in a derivation involving the Berry connection in a solid,
$$\vec{A}(\vec{R}) = \mathrm{i} \langle \Psi(\vec{R}) \, | \nabla_{\vec{R}} \, | \, \Psi(\vec{R}) \rangle \text{.}$$
Now assuming that
$$H_0 \psi_{\vec{k}}^n (\vec{r}) = E_n(\vec{k}) \psi_{\vec{k}}^n (\vec{r}) \text{,}$$
where $u_{\vec{k}}^n$ denotes the function coming from the Bloch wavefunctions $\psi_{\vec{k}}^n (\vec{r}) = \mathrm{e}^{\mathrm{i} \vec{k} \cdot \vec{r}} u_{\vec{k}}^n(\vec{r})$, it seems (for $\vec{R} \equiv \vec{k}$) to be treated as too obvious to explain why, in
$$\vec{A^n}(\vec{k}) = \mathrm{i} \cdot \left( \mathrm{i} \cdot \langle u_{\vec{k}}^n \, | \vec{r} \, | \, u_{\vec{k}}^n \rangle + \langle u_{\vec{k}}^n \, | \nabla_{\vec{k}} \, | \, u_{\vec{k}}^n \rangle \right) = \mathrm{i} \cdot \langle u_{\vec{k}}^n \, | \nabla_{\vec{k}} \, | \, u_{\vec{k}}^n \rangle$$
the first term vanishes. I would be grateful if someone could help me out.

• How does your final equation follow from your first equation? – d_b Jun 17 at 6:40
• You identify $\langle \vec{r} \, | \, \Psi(\vec{R}) \rangle \equiv \psi_{\vec{k}}^n(\vec{r})$. Then the plane-wave factors cancel out. – Antihero Jun 17 at 23:54
• Do you have a reference? Is it possible that the choice $|\Psi(\mathbf{R})\rangle = |u_n(\mathbf{k})\rangle$ is being made? – d_b Jun 21 at 22:45
• Thank you for your answer. I also considered this identification. :-) My first reference is this arxiv article. arxiv.org/pdf/1509.02295.pdf; Eq. (2.20) is the definition of the Berry connection as above. Looking at eqs (2.39) and (2.40) supports your choice of identification. However... What is bothering me with this interpretation is that the function $| u_n(\vec{k}) \rangle$ is not a physical state in a Hilbert space, is it? Only the full Bloch wavefunction (with plane-wave factor) should be a physical state. (?) – Antihero Jun 22 at 7:50
• I don't see why the latter statement should be true. $\exp\left(-i\mathbf{k}\cdot\mathbf{r}\right)$ is a unitary operator. Applying it to a physical state should give back a physical state. – d_b Jun 22 at 19:52
2019-08-18 00:44:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9941353797912598, "perplexity": 1227.8138551038617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313536.31/warc/CC-MAIN-20190818002820-20190818024820-00270.warc.gz"}
http://cpl.iphy.ac.cn/10.1088/0256-307X/34/6/067301
Chin. Phys. Lett. 2017, Vol. 34 Issue (6): 067301    DOI: 10.1088/0256-307X/34/6/067301

CONDENSED MATTER: ELECTRONIC STRUCTURE, ELECTRICAL, MAGNETIC, AND OPTICAL PROPERTIES

Coulomb-Dominated Oscillations in Fabry–Perot Quantum Hall Interferometers

Yu-Ying Zhu1,2, Meng-Meng Bai1,2, Shu-Yu Zheng1, Jie Fan1, Xiu-Nian Jing1,3, Zhong-Qing Ji1, Chang-Li Yang1,3, Guang-Tong Liu1**, Li Lu1,3
1Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190
2University of Chinese Academy of Sciences, Beijing 100049
3Collaborative Innovation Center of Quantum Matter, Beijing 100871

Citation: Yu-Ying Zhu, Meng-Meng Bai, Shu-Yu Zheng et al 2017 Chin. Phys. Lett. 34 067301

Abstract: Periodic resistance oscillations in Fabry–Perot quantum Hall interferometers are observed at integer filling factors of the constrictions, $f_{\rm c}=1$, 2, 3, 4, 5 and 6. Rather than the Aharonov–Bohm interference, these oscillations are attributed to the Coulomb interactions between interfering edge states and localized states in the central island of an interferometer, as confirmed by the observation of a positive slope for the lines of constant oscillation phase in the image plot of resistance in the $B$–$V_{\rm S}$ plane. Similar resistance oscillations are also observed when the area $A$ of the center regime and the backscattering probability of interfering edge states are varied, by changing the side-gate voltages and the configuration of the quantum point contacts, respectively. The oscillation amplitudes decay exponentially with temperature in the range of 40 mK$< T\leq 130$ mK, with a characteristic temperature $T_{\rm 0}\sim 25$ mK, consistent with recent theoretical and experimental works.

Received: 09 March 2017      Published: 23 May 2017

PACS: 73.43.Jn (Tunneling), 73.23.-b (Electronic transport in mesoscopic systems), 73.43.-f (Quantum Hall effects)

Fund: Supported by the National Basic Research Program of China under Grant No 2014CB920904, the National Natural Science Foundation of China under Grant No 91221203, and the Strategic Priority Research Program B of the Chinese Academy of Sciences under Grant No XDB07010200.
2020-05-27 10:23:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4121621251106262, "perplexity": 13846.707218474061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392142.20/warc/CC-MAIN-20200527075559-20200527105559-00268.warc.gz"}
http://www.r-bloggers.com/some-helps-for-running-and-evaluating-bayesian-regression-models/
# Some helps for running and evaluating Bayesian regression models

September 21, 2012 By (This article was first published on Houses of Stones » R, and kindly contributed to R-bloggers)

Around two years ago, I suddenly realized my statistical training had a great big Bayes-shaped hole in it. My formal education in statistics was pretty narrow – I got my degree in anthropology, a discipline not exactly known for its rigorously systematic analytic methods. I learned the basics of linear models and principal components analysis and was mercifully spared too much emphasis on ANOVAs and chi-squares and other "tests." I developed a large portion of my statistical skills while working for the Department of the Army…not because the Army is really into rigorous analysis (see here and here and here), but because a co-worker introduced me to R. (I'm convinced the best way to learn statistics is to get a minimalist introduction – just enough to avoid being intimidated by the jargon – and then devote a few months to doing two or three projects in R.) During all of this, I kind of knew there was a thing called Bayesian statistics out there, but I'd never really looked into it and didn't have strong opinions about it.

That all changed. Through a lot of experiences I won't detail here, I came to the view that p-values were pretty silly things to focus on, which view eventually turned into near-total disillusionment with the entire concept of statistical significance as it is used in the context of null-hypothesis testing (see here for more). I can appreciate that that stuff has its uses when certain assumptions are met, but I don't happen to be interested in any situations where those assumptions are very realistic. I then happened upon Sharon McGrayne's fun little read, The Theory that Would Not Die. I read it because I had the time and because it seemed like an interesting subject, but the book gave me enough information about Bayesian approaches to nudge me out of being complacent in my ignorance. I started looking for some resources on Bayesian statistics that were geared towards practical application rather than the underlying mathematics (I really enjoy the application part, and in most cases I'm content to trust other people that the math part has been taken care of pretty well). I came across John Kruschke's book, Doing Bayesian Data Analysis. By the end of the first chapter, I was very interested. By the end of chapter 11, I was really mad that no one had ever told me about this stuff before. By the end of the entire book, there was no turning back.

R has a lot of resources for doing all kinds of Bayesian stuff, but it seems that, traditionally, the main tool for doing Bayesian modeling has been BUGS, which can be accessed through R using various packages such as R2WinBUGS, and more recently JAGS, which can be called through packages such as R2jags. But two things kept me off the BUGS/JAGS route. First, they have their own syntax – the way you specify a regression model in nearly every R package is not the way you specify a model in BUGS/JAGS. That's not the fault of those programs – they weren't designed to be R add-ons. It's just that my list of syntaxes to learn is pretty long, and BUGS/JAGS syntax is not very high on that list. In the amount of time it would take me to learn BUGS syntax well I could learn a little Java or Python syntax and still get (in my opinion) a much greater return on my investment. But besides the syntax issue, BUGS and JAGS are known for being pretty slow.
That's not a bad thing in and of itself – some things just take time to do – but if a faster option exists that doesn't require me to learn new syntax, that leaves me little reason to choose the slower tools. So I was happy when I came across the MCMCglmm package. I was already familiar with generalized linear mixed-effects models (GLMMs), the syntax was very similar to what is used in other standard R functions such as glm() and the lme4 package's glmer(), and according to comparisons run by MCMCglmm author Jarrod Hadfield, "on a 2.5Ghz dual core MacBook Pro with 2GB RAM, MCMCglmm took 7.6 minutes and WinBUGS took 4.8 hours to fit the [same] model." Most of my projects have short deadlines, so 8 minutes looked a whole lot better than 5 hours.

I've used MCMCglmm for several projects, most recently a few analyses to inform decisions about market segmentation, and while I'm generally happy with it there are a few little things and one really, really big thing that I would like to be different. The little things have to do with defaults – I want to have options for default priors, because they're useful and save time when exploring, and I don't like the default of not saving the estimates for random effects when modeling and of marginalizing across all random effects when predicting. But like I said, those are really small things. It's not hard to change the settings from the defaults.

What I really, really don't like is that the predict function for MCMCglmm can't handle new data. I can run a model and then use the predict() function to get calculations of what the model says each of the original data points ought to be, but I can't feed it a new set of data containing the same predictor variables as were used in the original model and have the model estimate what the response variable ought to be for those data points. I'm not blaming the package's author for this shortcoming. It's R: I'm already getting tons more than I pay for. I'm sure he'll get around to doing it eventually. But right now, I need to be able to predict new data.

The main reason I need that capability is for cross-validation. There are many ways of evaluating a model. Most of the ways I see tend to focus on individual parameters instead of the model as a whole. It's already very easy to get estimates of how much confidence a parameter estimate warrants and things like that. And it's easy to see how well the model fits the data used to construct the model – but that's the problem: while it's better than nothing, it's not a very rigorous measure of a model's performance to see how well it post-dicts the data that were used to train the model in the first place. That's where cross-validation comes in handy – randomly exclude a small portion of your data, build the model based on the larger portion, and then see how well the model predicts the omitted smaller portion. It's a pretty straightforward way to see how much the model diverges from reality.

So I finally made some time to write some functions, most of them just tweaking the MCMCglmm functions, to allow me to cross-validate my models. You can find and/or edit the source code here and can load all the functions in R by entering:

[sourcecode language="r"]
source("https://raw.github.com/schaunwheeler/tmt/master/R/mcmcglmm.R")
[/sourcecode]

For right now, I'm calling the set of functions mcmcglmm (all lowercase letters) because they're really just a modification of the MCMCglmm functions, which are doing all the heavy lifting. To start, I wrote a quick function called SplitData() that takes a data frame and splits it into a large subset and a small subset, so the large part can be used to fit the model and the small part can be used for cross validation.
To start, I wrote a quick function called SplitData() that takes a data frame and splits it into a large subset and a small subset, so the large part can be used to fit the model and the small part can be used for cross validation. [sourcecode language="r"] SplitData <- function(data, percent = .8, ignore = NULL){ facs <- sapply(data,is.factor) data[,facs] <- lapply(data[,facs],as.character) chars <- sapply(data,is.character) ignore <- colnames(data) %in% ignore look <- chars & !ignore num <- round(nrow(data) * percent, 0) rows <- sample(1:nrow(data), num) rowind <- 1:nrow(data) %in% rows big <- data[rowind,] small <- data[!rowind,] bigind <- 1:nrow(big) smallind <- 1:nrow(small) bigvals <- rep(NA,ncol(big)) smallvals <- rep(NA,ncol(small)) bigvals[look] <- lapply(big[,look],function(x)sort(unique(x))) smallvals[look] <- lapply(small[,look],function(x)sort(unique(x))) matches <- lapply(1:length(smallvals), function(x)smallvals[[x]] %in% bigvals[[x]]) misses <- which(sapply(matches, function(x)1-mean(x)) > 0) missvals <- lapply(1:length(smallvals), function(x)smallvals[[x]][!(smallvals[[x]] %in% bigvals[[x]])]) if(length(misses) > 0){ for(i in misses){ for(j in 1:length(missvals[[i]])){ pulls <- smallind[small[,i] == missvals[[i]][j]] take <- ifelse(length(pulls) == 1, pulls, try(sample(pulls, 1), silent = T)) if(is.numeric(take)){ big <- rbind(big,small[take,]) small <- small[-take,] } } } } list("large" = big, "small" = small) } [/sourcecode] The default is to keep 80% of the original data for the model fitting. The function splits the data into the specified proportions, and then checks to see if the smaller subset has variable options not included in the bigger portion. It could be a problem if you trained a model with country-level predictors for the U.S., Russia, China, and Australia, and then tried to cross validate the prediction on data that included Argentina as a country option. SplitData() makes sure that if any subset is going to include variable options that the other one doesn’t, it’s going to be the bigger subset. So: [sourcecode language="r"] df <- as.data.frame(matrix(rnorm(200),ncol=2)) df$F1 <- sample(LETTERS[1:3],100, replace = T) df$F2 <- sample(LETTERS[4:5],100, replace = T) df <- rbind(df,c(0,0,"X","Y")) df$V1 <- as.numeric(df$V1) df$V2 <- as.numeric(df$V2) table(as.data.frame(t(sapply(1:1000,function(…)sapply(SplitData(df),nrow))))) small large  19  20 81   0 824 82 176   0 [/sourcecode] So I created a data set with 101 rows and four columns. The first two rows were numeric and the last two were categorical – the first categorical variable included a random sample of A’s, B’s, and C’s and then had one X in the last row. The second categorical variable included a random sample of D’s and E’s and then had one Y in the last row. I ran SplitData() 1000 times on that data set and, as you can see, approximately 80% of the time, the X and Y row ended up in the bigger subset – with 81 variables in that subset and 20 variables in the smaller subset. About 20% of the time, the X and Y row ended up in the smaller subset, and was therefore moved to the larger subset. All the rest of the functions either wrap, modify, or take input from a call to MCMCglmm(). The mcmcglmm() function takes all the same inputs as the function it wraps, but it starts by evaluating all discrete variables in the data set and recording what the range of possible values was for each variable. It inserts that list, called “datalevels”, into the model output at the end of the function. 
Most of the wrapper is devoted to creating default priors. The function allows for two variants of each of two default priors on the covariance matrices. The two defaults are “InvW” for an inverse-Wishart prior, which sets the degrees-of-freedom parameter equal to the dimension of each covariance matrix, and “InvG” for an inverse-Gamma prior, which sets the degrees-of-freedom parameter to 0.002 more than one less than the dimension of the covariance matrix. “-pe” can be added to the call for either of these priors to use parameter expansion (see section 5.2 of this). For more specific prior specification, you can just feed a list to the “prior” argument, as explained in the pretty extensive (for R) MCMCglmm documentation.

I also wrote a little function called QuickSummary() that brings together most of my preferred methods for assessing individual parameters. Given the model output, the function calculates the posterior mean, the highest posterior density intervals for a given probability (set through the “prob” option), the “type S” error (the probability that the estimate actually is of the opposite sign of the posterior mean), and the “type M” error (the probability that the estimate is the same sign but substantially smaller than the posterior mean – this defaults to measuring the probability that the estimate is less than one half the size of the mean). The function also allows for rounding of the output for convenience. It defaults to four decimal places.

But the real work was with the predict.MCMCglmm function. I couldn’t just make a wrapper for this function, partly because I had to insert the new data into specific parts of the function, and partly because, as the predict.MCMCglmm function is currently written, this happens:

[sourcecode language="r"]
predict(model, newdata = df2)
Error in predict.MCMCglmm(model, newdata = df2) :
  sorry newdata not implemented yet
In predict.MCMCglmm(model, newdata = df2) :
  predict.MCMCglmm is still developmental – be careful
[/sourcecode]

So I had to go in and take out the line that stops the function whenever new data is inserted. I also removed the warning statement about the function being developmental (I’m tired of seeing it) and changed the marginalization defaults. After that, there was the matter of creating design matrices for the new data that matched the design matrices used in the original model. When just fitting the original data, that’s easy:

[sourcecode language="r"]
object$Sol <- object$Sol[, c(1:object$Fixed$nfl,
  object$Fixed$nfl + keep), drop = FALSE]
W <- cBind(object$X, object$Z)
W <- W[, c(1:object$Fixed$nfl, object$Fixed$nfl + keep), drop = FALSE]
[/sourcecode]

“object” is the placeholder for the model output in general, and “Sol” is a list of the MCMC estimates for each predictor, while “X” is the design matrix for the fixed effects and “Z” is the design matrix for the random effects. So when fitting the original data, the predict function just puts the two design matrices together and then cuts the simulation output and the combined design matrix down to only those variables that were not marginalized. The MCMCglmm() function did all the hard work already.
That’s not the case with new data:

[sourcecode language="r"]
if(!is.null(newdata)){
  chars <- sapply(newdata, is.character)
  newdata[, chars] <- lapply(newdata[, chars], as.factor)
  vars.o <- paste(as.character(object$Fixed[[1]]),
    as.character(object$Random[[1]]), collapse = " ")
  vars.o <- gsub("~|(us|idh|cor)\\(|[+]|\\):|\\b1\\b", " ", vars.o)
  vars.o <- unlist(strsplit(vars.o, split = "\\s+"))
  vars.o <- unique(vars.o[vars.o != ""])
  if(any(vars.o %in% colnames(newdata)) == F){
    stop("'newdata' is missing variables needed for the model")
  }
  facs <- sapply(newdata, is.factor)
  facs.o <- vars.o[facs]
  for(i in 1:length(facs.o)){
    newdata[, facs.o[i]] <- factor(newdata[, facs.o[i]],
      levels = sort(unique(c(levels(newdata[, facs.o[i]]),
        object$datalevels[[facs.o[i]]]))),
      labels = object$datalevels[[facs.o[i]]])
  }
  fixef <- sparse.model.matrix(object$Fixed[[1]], newdata)
  rterms <- split.direct.sum(as.character(object$Random[[1]])[2])
  ranef <- lapply(rterms, function(x, df = newdata){
    covms <- grepl("\\w{2,3}\\([[:print:]]+\\):", x)
    ints <- grepl("\\w{2,3}\\((1 [+])?([[:print:]]+)\\):([[:print:]]+)", x)
    if(covms == T & ints == T){
      full <- sparse.model.matrix(as.formula(
        gsub("\\w{2,3}\\(1 [+] ([[:print:]]+)\\):([[:print:]]+)",
          "~ 0 + \\1 : \\2", x)), df)
      binary <- full != 0
      matching <- vector("logical", length(colnames(df)))
      for(j in 1:length(colnames(df))){
        matching[j] <- grepl(paste(":", colnames(df)[j], sep = ""), x)
      }
      matchvar <- colnames(df)[matching]
      firstvar <- gsub("\\w{2,3}\\(1 [+] ([[:print:]]+)\\):([[:print:]]+)",
        "\\1", x)
      colnames(binary) <- paste(matchvar, "(Intercept)", matchvar,
        sort(object$datalevels[[matchvar]]), sep = ".")
      colnames(full) <- paste(matchvar, firstvar, matchvar,
        sort(object$datalevels[[matchvar]]), sep = ".")
      out <- cBind(binary, full)
    }
    if(covms == T & ints == F){
      full <- sparse.model.matrix(as.formula(
        gsub("\\w{2,3}\\(([[:print:]]+)\\):([[:print:]]+)",
          "~ 0 + \\1 : \\2", x)), df)
      matching <- vector("logical", length(colnames(df)))
      for(j in 1:length(colnames(df))){
        matching[j] <- grepl(paste(":", colnames(df)[j], sep = ""), x)
      }
      matchvar <- colnames(df)[matching]
      firstvar <- gsub("\\w{2,3}\\(1 [+] ([[:print:]]+)\\):([[:print:]]+)",
        "\\1", rterms[i])
      colnames(full) <- paste(matchvar, firstvar, matchvar,
        sort(unique(as.character(object$datalevels[[matchvar]]))), sep = ".")
      out <- full
    }
    if(covms == F & ints == F){
      matchvar <- colnames(df)[colnames(df) %in% x]
      full <- sparse.model.matrix(as.formula(paste("~ 0 +", x, sep = "")), df)
      colnames(full) <- paste(x,
        sort(unique(as.character(object$datalevels[[matchvar]]))), sep = ".")
      out <- full
    }
    out
  })
  ranef <- do.call("cBind", ranef)
  Wn <- cBind(fixef, ranef)
  object$X <- fixef[, match(colnames(object$X), colnames(fixef))]
  object$Z <- ranef[, match(colnames(object$Z), colnames(ranef))]
  object$error.term <- object$error.term[1:nrow(Wn)]
  W <- Wn[, match(colnames(W), colnames(Wn))]
  W <- W[, c(1:object$Fixed$nfl, object$Fixed$nfl + keep), drop = FALSE]
}
[/sourcecode]

MCMCglmm specifies predictors in several different ways. If wrapped in a us(), idh(), or cor() function (among others), a predictor in the random-effects formula represents a covariance matrix, and therefore each level of that variable gets a column in the design matrix (if the variable is discrete). If the variable inside the function is continuous, that represents a random-slope specification and gets only one column. And if the function contains a “1 + {variable}”, that indicates a random-intercept-and-random-slope specification that gets two columns.
So most of the stuff I added pulls apart the model formulas, matches up the pieces with types of specifications, and then constructs the appropriate number of columns by referencing the “datalevels” list that mcmcglmm() added to the MCMCglmm() output. All of this leads up to the new-data design matrices replacing the old-data design matrices, all of which is wrapped up in an object “W”, which is the name of the object the original predict.MCMCglmm function uses to do the rest of the prediction.

The only other part I changed was, I think, an error in the original code. For example, this happens with the original function:

[sourcecode language="r"]
df <- as.data.frame(matrix(rnorm(20), ncol = 2))
df$F1 <- sample(LETTERS[1:3], 10, replace = T)
df$F2 <- sample(LETTERS[4:5], 10, replace = T)
model <- mcmcglmm(V1 ~ V2 + F2, random = ~us(1 + V2):F1 + F2, pr = T, data = df)
predict(model, interval = "prediction", marginal = ~F2)
Error in vpred[, cnt][which(object$error.term == i & object$error.term == :
  subscript out of bounds
[/sourcecode]

The original function only breaks when you try to do a posterior predictive check – simulating draws from the posterior distribution instead of just calculating estimates based on parameter means – at the same time that you try to marginalize some but not all of the random variables. Even when not marginalizing, I noticed in practice that the credibility intervals for the posterior predictions were much larger than I expected. It looks like a couple of lines of code inadvertently cut the random-effects design matrix incorrectly, duplicating some columns and leaving others out. That not only messes up the predictions but also, when marginalizing some but not all random variables, creates matrices that don’t make sense given subsequent subscript calls. So I fixed that.

So here’s how the new function compares. Assuming the same 10-row data frame and mcmcglmm() call that I showed in the last code snippet, and assuming no marginalization, here is what the original predict.MCMCglmm does:

[sourcecode language="r"]
predict(model, marginal = NULL)
          [,1]
1  -0.93093304
2  -0.14137582
3   0.78776897
4   0.46296440
5   0.75096633
6   0.10049595
7   0.20339204
8   0.17401375
9  -0.06092788
10  0.36310427
Warning message:
In predict.MCMCglmm(model) :
  predict.MCMCglmm is still developmental – be careful
[/sourcecode]

And here’s PredictNew():

[sourcecode language="r"]
PredictNew(model)
          [,1]
1  -0.93093304
2  -0.14137582
3   0.78776897
4   0.46296440
5   0.75096633
6   0.10049595
7   0.20339204
8   0.17401375
9  -0.06092788
10  0.36310427
[/sourcecode]

And here’s PredictNew() passing the original data frame, but in reverse order, as a new data frame:

[sourcecode language="r"]
PredictNew(model, newdata = df[10:1, ])
          [,1]
1   0.36310427
2  -0.06092788
3   0.17401375
4   0.20339204
5   0.10049595
6   0.75096633
7   0.46296440
8   0.78776897
9  -0.14137582
10 -0.93093304
[/sourcecode]

So the output of PredictNew() matches the output of predict.MCMCglmm, and passing new data gives predictions that match what they would have been if they had been part of the original data. Calculating confidence intervals also gives consistent results across both functions and with new data. Estimates and intervals for posterior predictive checks aren’t the same, but there’s no way they could be, since they’re derived computationally instead of analytically. So I’m pretty happy now with the tools I currently have for Bayesian modeling.
I do wish I could use a scaled inverse-Wishart or separation-strategy prior (see here) – if there’s a way to do that in MCMCglmm, I haven’t figured it out – and the Stan program/package created by Andrew Gelman and others looks cool enough that it might actually entice me to learn BUGS-esque syntax, but for the time being I feel pretty OK about my regression tools.
2014-12-21 02:36:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4681087136268616, "perplexity": 1594.5404608960234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770616.6/warc/CC-MAIN-20141217075250-00037-ip-10-231-17-201.ec2.internal.warc.gz"}
https://brilliant.org/problems/a-pair-of-pliers/
# A pair of pliers The figure shows a simple model of a pair of pliers consisting of two handles of length $$L=10~\mbox{cm}$$, a semicircle of radius $$R=4~\mbox{cm}$$, a fulcrum (point O) and the jaws having a radius of curvature $$r=1~\mbox{cm}$$. Using these pliers, you grab a coin and apply a force $$F=10~\mbox{N}$$ to the handles as shown in the figure. What is the force $$f$$ in Newtons pressing on each side of the coin? You may neglect the thickness of the coin.
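The figure is not reproduced here, so no numeric answer can be derived from the text alone; the following only sketches the intended method – a torque balance about the fulcrum O for one half of the pliers – with explicitly assumed lever arms: $$F \, d_F = f \, d_f \quad\Rightarrow\quad f = F \, \frac{d_F}{d_f},$$ where $$d_F$$ is the perpendicular distance from O to the line of action of the applied force and $$d_f$$ is the distance from O to the jaw's contact point on the coin. Purely for illustration, if the figure placed the handle force at $$d_F = L + R = 14~\mbox{cm}$$ and the contact at $$d_f = r = 1~\mbox{cm}$$, the balance would give $$f = 10 \times 14/1 = 140~\mbox{N}$$; the actual arms must be read off the figure.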
2018-03-22 17:36:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8260751366615295, "perplexity": 340.1144964266051}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647901.79/warc/CC-MAIN-20180322170754-20180322190754-00798.warc.gz"}
https://support.mozilla.org/zu/questions/firefox?tagged=firefox-870&show=done&escalated=1&order=views&page=3
Showing tagged questions · Show all questions

• Solved • Archived
## Restore all Firefox windows and tabs after closing
Hello, I would like to close all Firefox windows (to free up RAM) and then be able to restore all windows with their tabs without leaving the session. Is it possible? Thanks. Firefox 87.0, Kubuntu 20.04.2 LTS
Asked by Pierre 1 year ago. Answered by Terry 1 year ago.

• Solved • Archived
## How do I get an add-on from Manage Extensions page to overflow menu?
When you search for an add-on, then install it, how do I get it from the Manage Extensions page to the overflow menu?
Asked by Nisaba 1 year ago. Answered by Nisaba 1 year ago.

• Solved • Archived
## Opening Files Automatically
Hi, when I attempt to open files instead of saving them, I get the dialog box with the check mark for "Do this automatically for files like this from now on." But it asks me this every single time for the same file types. Do I have any recourse? Thank you.
Asked by mills886 1 year ago. Answered by jscher2000 - Support Volunteer 1 year ago.

• Solved • Archived
## Back Button Not working after update
Like the title says: updated today to Firefox V 87.0 (64 bit) and the back button doesn't work on any website.
Asked by pjn124 1 year ago. Answered by cor-el 1 year ago.

• Solved • Archived
## Firefox fails to use some local fonts
After installing the Native MathML extension (to get MathML typeset equations on the Wikipedia, as suggested on mediawiki.org), all equations set by MathJax/MathML turned to using STIX instead of Latin Modern. I'd like them to stick to Latin Modern instead. The only possibly related setting I've found is font.name-list.serif.x-math in about:config, whose value is Latin Modern Math, STIX Two Math, […]. I tried to manually change the font family specification of some text to see if Firefox could use Latin Modern Math to begin with, but apparently it can't (see first screenshot attached), while it loads STIX all right (second screenshot). Latin Modern is not the first locally installed typeface I haven't been able to use, but it's the first one I have really tried to (I don't remember which one the other typeface was). My Latin Modern font comes from TeX Live 2019 installed using the TUG installer, not Fedora's package manager. fc-list | grep -i "latin modern math" returns /usr/local/texlive/2019/texmf-dist/fonts/opentype/public/lm-math/latinmodern-math.otf: Latin Modern Math:style=Regular. I'm running Firefox 87 on Fedora 33.
Asked by Fulan 1 year ago. Answered by Fulan 1 year ago.

• Solved • Archived
How can I configure this? Thank you!
Asked by rhonearevyn.roque 1 year ago. Answered by jscher2000 - Support Volunteer 1 year ago.

• Solved • Archived
Hello together, when I want to upload a file, it always opens the Desktop folder. Is it possible to preset another folder? Many greetings, hessline
Asked by hessline 1 year ago. Answered by Sinsang 1 year ago.

• Solved • Archived
## Creating separate Firefox profiles on different user accounts
Hello, I am wondering if this is possible - I have a Windows 10 laptop that needs to be upgraded to V202H and I am told that the upgrade will probably delete all of my current programs and settings - including my Firefox install and all bookmarks. To help recover from that, I wanted to know if it would be possible to set up an alternate Firefox in a not often used local account on my Windows 7 laptop and then sync the two so that after the upgrade, I can download Firefox again onto the Win 10 laptop and then sync it back so that I can get all of my bookmarks and settings for FF back onto the Win 10 laptop. I don't want to sync it with the Firefox that I use in my regular Win 7 user account since that has a separate set of bookmarks and settings that I wouldn't want to change. I just wanted a way to "hold" all of the Win 10 Firefox stuff and "reinstall" it, and had thought that sync might be a way to do that. If that won't work, is there a way to transfer the entire FF profile over to a new install on that laptop once it is upgraded? Thanks for any help! Much appreciated!
Asked by TaylorNorth2 1 year ago. Answered by jscher2000 - Support Volunteer 1 year ago.

• Solved • Archived
## Firefox is showing a 404 error on my new WordPress site homepage while everything is fine on Chrome
I already tried deleting the cache and restarting Firefox with add-ons disabled, so I don't know what's going on. I'm worried and I'm confused. I was in the process of editing and it just suddenly stopped responding and started giving a 404 error. Now I'm editing it on Chrome. Website: https://helplinemedical.com
Asked by Shahid.umair32 1 year ago. Answered by Terry 1 year ago.

• Solved • Archived
## Bookmark import from Firefox Sync
Hi, I have a problem. My PC broke down. I lost all my data. After I reinstalled Firefox on my PC and connected it to Sync, I was not able to get my bookmarks back. I see them on my phone under Bookmarks - Desktop Bookmarks, but I can't see them on my PC. I tried to sync them, but nothing. I remember that when this happened last time, I just logged in and got all my bookmarks back, but now I have lost half a day and got nothing out of it...
Asked by Armands 1 year ago. Answered by cor-el 1 year ago.

• Solved • Archived
## More results per page
I'm trying to find an answer to a question about Thunderbird. I type it into the "Find help..." box. I get a page with a small number (maybe 10?) of results. The first page of results doesn't seem to help. Neither do the next five pages. I suggest letting users choose the number of results per page. Being forced to click the "next" link on every page is slow and tedious. Hitting the "Page down" key on my keyboard is much faster, partly because I don't have to use the mouse.
Asked by jpeek 1 year ago. Answered by Terry 1 year ago.

• Solved • Archived
## Picture in Picture video DISABLE
FF 87.0, Win10. Tried with about:config and that didn't work. Tried disabling in General - Browsing. The videos I watch do not have a settings option. How do I disable this bloody annoyance? Thank you.
Asked by CiaoBella1 1 year ago. Answered by cor-el 1 year ago.

• Solved • Archived
Nothing has worked: starting in Safe Mode; clearing my cache; disabling all add-ons; disabling my firewall; disabling my antivirus; even reinstalling. I can download using other browsers just fine, but not Firefox, for a reason I cannot figure out.
Asked by tyler.fiske00 1 year ago. Answered by tyler.fiske00 1 year ago.

• Solved • Archived
## Firefox auto update no longer works
Firefox auto update has stopped working for me. Neither of the settings (see attached screenshot) does anything; all I get is a popup saying a new version of Firefox is available, with a link to https://www.mozilla.org/en-GB/firefox/. I recently did a clean install of Win 10 Pro x64 and this problem may have started then. I have another PC on the same network where auto download of updates is still working. Hopefully someone can point me to a way to get this working again. Joe
Asked by joecrow2 1 year ago. Answered by joecrow2 1 year ago.

• Solved • Archived
## Since the new version 87 update I am having email log in problems
Hello, I have just updated to version 87 of the browser but now I can't sign onto my email through CenturyLink; I can use Edge or my phone to access it. I have OS Win 7 if that helps. Everything was working fine till my Firefox browser updated. When I go to sign into my email, I get kicked to this blank screen, with a weird address at the top (pic). Any help you can give me in this matter to resolve this problem would be really appreciated. Thank you very much for your time. Sincerely, T. Krivanek
Asked by TOMMYGUN 1 year ago. Answered by TOMMYGUN 1 year ago.

• Solved • Archived
## Potential Security Issue from Most Sites
Today, I started getting this error on many sites, including Google.com and YouTube.com. Example below when trying to connect to Google:

Did Not Connect: Potential Security Issue. Firefox detected a potential security threat and did not continue to www.google.com because this website requires a secure connection. What can you do about it? www.google.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can’t add an exception to visit this site. The issue is most likely with the website, and there is nothing you can do to resolve it. If you are on a corporate network or using anti-virus software, you can reach out to the support teams for assistance. You can also notify the website’s administrator about the problem.
Asked by cdkw26 1 year ago. Answered by TyDraniu 1 year ago.

• Solved • Archived
## Can't delete duplicate bookmarks
I have multiple bookmarks for some sites. When I try to delete one of the duplicates, 'all' the copies disappear. How do I delete the duplicates while keeping one copy? Thanks.
Asked by crogerblair1 1 year ago. Answered by cor-el 1 year ago.

• Locked • Archived
## Primary language changed from English to French on all websites
Previously posted that Firefox changed my primary language from English to French--affecting all websites--and I can't make it recognize English as my language default. After basic troubleshooting, I followed suggestions posted on the forum without success. Any additional ideas?
Asked by larrykirsch 1 year ago. Last reply by jscher2000 - Support Volunteer 1 year ago.

• Solved • Archived
## Remove amazon search from the new tab page
I frequently use Amazon, but it shows up on my New Tab page as @amazon with a search icon, and when I click the tile it opens my search/address bar. How do I get the tile to open Amazon's website like all the other tiles I have on my New Tab page? Alternatively, the way I've previously handled this "behavior" is dismissing the tile so I don't have to deal with its uselessness. My browser just updated, but why do I have to dismiss the tile a second time?
Asked by jon_joy_1999 1 year ago. Answered by cor-el 1 year ago.

• Solved • Archived
## Scrollbar on Youtube lost the UP and DOWN buttons
On all other websites I can see the scrollbar buttons, but not on YouTube. I'm running Windows 10. On Linux Mint MATE 20.1 (and Ubuntu 20.10) the scrollbar also looks different from the default theme. This seems to be a problem caused by youtube.com. Why is Firefox allowing websites access to the scrollbars? Does anyone know why this happens or if there is a solution? Thank you.
Asked by jorgemtds 1 year ago. Answered by cor-el 1 year ago.
2022-07-01 03:22:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33851131796836853, "perplexity": 5652.304416472839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00294.warc.gz"}
http://cudou.com/pages/egbifhfj-generate-list-of-url-paths-that-can-match-a-regular-expression-or-similar.html
Generate list of URL paths that can match a regular expression (or similar)

Say your program can import URL paths for seeding a crawl. A user wants to define a pattern that should function as a seed - e.g.

http://example\.com/mypage-[0-9][0-9][0-9]?/jump(suit|er)/

It could just be a simplified version of regex syntax if need be - but something like the above would be required for the user to enter. From the above, my software would then need to generate a long list like:

http://example.com/mypage-0/jumpsuit/
http://example.com/mypage-0/jumper/
http://example.com/mypage-1/jumpsuit/
http://example.com/mypage-1/jumper/
...
http://example.com/mypage-998/jumper/
http://example.com/mypage-999/jumper/

Is there anything around for Delphi that can do what I want? Or am I missing an obvious way of achieving what I want that does not require writing a regex parser from scratch :)
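The question asks about Delphi, but the approach is language-agnostic, so here is a minimal sketch in Python rather than Delphi (the names `expand` and `options` are made up for illustration, not from any library). It supports only the pieces the example needs – escaped literals, [0-9]-style character classes, one-level (a|b) alternations, and a trailing ? on the previous piece – and simply takes the Cartesian product of each piece's possibilities:

```python
import re
from itertools import product

# one token = an escaped char, a [..] class, or a (..) group, each with an
# optional trailing '?', or else a single literal character with optional '?'
TOKEN = re.compile(r'(\\.|\[[^\]]+\]|\([^)]*\))(\??)|(.)(\??)')

def options(body):
    """Return the list of strings a single token can stand for."""
    if body.startswith('\\'):            # escaped literal, e.g. \.
        return [body[1]]
    if body.startswith('['):             # character class, e.g. [0-9]
        out, inner, i = [], body[1:-1], 0
        while i < len(inner):
            if i + 2 < len(inner) and inner[i + 1] == '-':   # a range a-z
                out += [chr(c) for c in range(ord(inner[i]), ord(inner[i + 2]) + 1)]
                i += 3
            else:                                            # a lone char
                out.append(inner[i]); i += 1
        return out
    if body.startswith('('):             # alternation, e.g. (suit|er)
        return body[1:-1].split('|')
    return [body]                        # plain literal character

def expand(pattern):
    """Generate every string the (restricted) pattern can match."""
    alts = []
    for m in TOKEN.finditer(pattern):
        body = m.group(1) or m.group(3)
        choice = options(body)
        if (m.group(2) or m.group(4)) == '?':   # '?' makes the piece optional
            choice = [''] + choice
        alts.append(choice)
    return [''.join(parts) for parts in product(*alts)]

urls = expand(r'http://example\.com/mypage-[0-9][0-9][0-9]?/jump(suit|er)/')
print(len(urls))   # 2200
print(urls[0])     # http://example.com/mypage-00/jumpsuit/
```

For the example pattern this yields 2,200 seed URLs (10 × 10 × 11 digit combinations times two suffixes); note that since [0-9][0-9][0-9]? requires at least two digits, the mypage-0 entry in the sample listing would actually need a pattern like [0-9][0-9]?[0-9]? instead.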
2017-10-20 10:30:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3905739188194275, "perplexity": 1050.0019752890528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824068.35/warc/CC-MAIN-20171020101632-20171020121632-00613.warc.gz"}
https://math.stackexchange.com/questions/706552/using-burnsides-lemma-understanding-the-intuition-and-theory
# Using Burnside's Lemma; understanding the intuition and theory I am presented with the following task: Find the number of distinguishable ways the edges of a square of cardboard can be painted if six colors of paint are available and a) no color is used more than once b) the same color can be used on any number of edges (A First Course in Abstract Algebra, Fraleigh, 17.7) Now, using elementary knowledge from high-school probability, I know that there are $$6 \cdot 5 \cdot 4 \cdot 3 = 360$$ ways of painting the edges given 6 colors when no color is repeated. Other than this observation, I am having a hard time with the task. Given the chapter the task appears in, I assume I am to use Burnside's formula, given by $$r \cdot |G| = \sum_{g \in G}|X_g|$$ I assume that I am to let $$X$$ be the set of $$360$$ possible ways to paint the square. Now, I am having a hard time seeing the intuition behind it, in particular: Am I to include rotations? Should I use $$G = S_4$$ or $$G = D_4$$? How do I know when to use one or the other? • Well, $S_4$ is not the right one (you only want those symmetries that actually preserve the square). Whether you want $D_4$ or $C_4$ depends on interpretation (I would say that $D_4$ makes the most sense). – Tobias Kildetoft Mar 10 '14 at 9:56 • There is a detailed discussion of this exact problem, even the choice of whether to use $C_4$ or $D_4$, at blog.plover.com/math/polya-burnside.html – MJD Mar 10 '14 at 13:47 Since we are painting the edges of squares, we assume that these four colorings are all considered the same, and should not be counted separately [figure omitted]. (It should be clear that painting triangular wedges is the same as painting edges, since there is a natural bijection between wedges and edges.) It's not completely clear in the question, but we should probably also assume that the squares can be flipped over, so that these two colorings should also be considered the same [figure omitted]. That is the crucial question that decides whether to use $C_4$ or $D_4$: are those two different colorings, or are they the same coloring? If they are the same coloring, then we say we are allowed to flip over the square, and flipping becomes part of the group we consider in applying the theorem. If they are different colorings, then reflections are not part of the group. To apply the Cauchy-Frobenius-Burnside-Redfield-Pólya lemma, we begin by observing that our squares have the symmetry group $D_4$, which includes the four reflections. Then we count the number of colorings that are fixed points under the action of each element of $D_4$. Let $x$ be such an element, and suppose that $x$'s action on the square partitions the set of edges into $o(x)$ orbits. Then for a coloring to be fixed by $x$, all the edges in each orbit must be the same color. If there are $N$ different colors, then $N^{o(x)}$ colorings are left fixed by the action of $x$.
The 8 elements of $D_4$ can be classified as follows:

• 2 orthogonal reflections, which divide the edges into 3 orbits ($2N^3$). (For example, reflecting the square horizontally puts the left and right edges in one orbit, the top edge in a second orbit, and the bottom edge in a third orbit.)
• 2 diagonal reflections, which divide the edges into 2 orbits ($2N^2$). (For example, reflecting the square on the topleft-bottomright axis puts the top and left edges in one orbit, the bottom and right edges in the other.)
• 2 quarter-turns, which put all the edges in a single orbit ($2N$)
• the half-turn, which divides the edges into 2 orbits ($N^2$)
• the identity, which leaves each edge in its own orbit ($N^4$)

The number of colorings with $N$ colors is the sum of the terms for each of these, divided by 8, the order of the symmetry group. Adding up the contributions from these we get $$\chi_{D_4}(N) = \frac18(N^4+2N^3+3N^2+2N).$$ This formula gives $\chi_{D_4}(0)=0, \chi_{D_4}(1)=1$ as we would hope. Hand enumeration of the $N=2$ case quickly gives 6 different colorings (four red edges; three red edges; two opposite red edges; two adjacent red edges; one red edge; no red edges), which agrees with the formula. If reflections are not permitted, we delete the corresponding terms ($2N^3 + 2N^2$) from the enumeration and divide by 4 instead of 8, as the symmetry group now has 4 elements instead of 8, obtaining instead $$\chi_{C_4}(N) = \frac14(N^4+N^2+2N).$$ This happens to have the same value as $\chi_{D_4}$ for $N<3$; this is because every coloring of the square with fewer than 3 colors has a reflection symmetry. The simplest coloring that has no reflection symmetry requires 3 colors [figure omitted]. You asked “Am I to include rotations?” The answer is probably yes, or the question would not have specified the edges of a square, which has a natural rotational symmetry. But suppose you wanted to consider rotations of a coloring to be different colorings. Then rotating a coloring is not allowed, and the group you consider should be one that omits the rotation elements. In the extreme case, we can consider every coloring different, and then the group is the trivial group, and the same analysis says to omit all the terms except the $N^4$ contributed by the identity element, and we get $$\chi_{C_1}(N) = N^4,$$ which is indeed the correct number of colorings. For the sake of brevity, and by way of an incentive to learn more about this wonderful theory, here is a solution using the Polya Enumeration Theorem. All we need here is to calculate the cycle index $Z(G)$ of the symmetry group $G$ of the edges of a square, substitute our $N$ colors into the cycle index, and extract coefficients. We get for the first question that the answer is $${N\choose 4}[C_1 C_2 C_3 C_4]Z(G)(C_1+C_2+C_3+C_4)$$ and for the second question, $$Z(G)(C_1+C_2+\cdots+C_N)_{C_1=1, C_2=1, \ldots C_N=1}.$$ Let us now compute the cycle index by enumerating the permutations of $G$.
There is the identity, which contributes $$a_1^4.$$ There are the rotations by $90$ degrees and $270$ degrees which together contribute $$2 a_4.$$ The rotation by $180$ degrees contributes $$a_2^2.$$ The two reflections about a diagonal contribute $$2 a_2^2$$ and the reflections about horizontal / vertical axes passing through the center of the square contribute $$2 a_1^2 a_2.$$ Summing these contributions we obtain that $$Z(G) = \frac{1}{8} \left(a_1^4 + 2a_4 + 3 a_2^2 + 2 a_1^2 a_2\right).$$ Hence the substituted cycle index becomes $$Z(G)(C_1+C_2+\cdots+C_N) = \frac{1}{8} \left((C_1+\cdots+C_N)^4 + 2(C_1^4+\cdots+C_N^4) \\+ 3 (C_1^2+\cdots+C_N^2)^2 + 2 (C_1+\cdots+C_N)^2 (C_1^2+\cdots+C_N^2)\right).$$ In answering the first question (no repeated colors) we see that all terms in the sum except the first use a color at least twice (a phenomenon that generalizes to generic permutation groups), so we have $${N\choose 4}[C_1 C_2 C_3 C_4] Z(G) = {N\choose 4}[C_1 C_2 C_3 C_4] \frac{1}{8} (C_1+C_2+C_3+C_4)^4.$$ Expanding the power we get $$(C_1+C_2+C_3+C_4) \times (C_1+C_2+C_3+C_4) \\ \times (C_1+C_2+C_3+C_4) \times (C_1+C_2+C_3+C_4).$$ It now becomes evident that there are $24$ ways to obtain the product $C_1 C_2 C_3 C_4,$ for a final answer of $${N\choose 4} \frac{1}{8} \times 24 = 3{N\choose 4}.$$ We obtain the answer to the second question by setting $C_1=C_2=\cdots=C_N=1$ in the substituted cycle index, for a result of $$\frac{1}{8} \left(N^4+2N+3N^2+2N^3\right) = \frac{1}{8} \left(N^4+2N^3+3N^2+2N\right).$$ This is sequence A002817 from the OEIS. This MSE link points to a chain of similar calculations.
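Neither answer includes it, but both closed forms are easy to sanity-check by brute force. Below is a short Python sketch (the helper names are mine, not from the answers) that enumerates all $N^4$ edge colorings, groups them into orbits under the eight symmetries of $D_4$, and compares the orbit count with $\frac18(N^4+2N^3+3N^2+2N)$:

```python
from itertools import product

def distinct_colorings(n_colors):
    """Count colorings of the square's 4 edges up to the action of D4,
    by keeping one representative per orbit."""
    def rot(c):   # rotate 90 degrees; edges are ordered (top, right, bottom, left)
        return (c[3], c[0], c[1], c[2])
    def refl(c):  # reflect left-right: top and bottom stay, left/right swap
        return (c[0], c[3], c[2], c[1])
    seen, count = set(), 0
    for c in product(range(n_colors), repeat=4):
        if c in seen:
            continue
        count += 1          # c starts a new orbit
        d = c
        for _ in range(4):  # the 4 rotations and the 4 reflected rotations
            d = rot(d)
            seen.add(d)
            seen.add(refl(d))
    return count

for n in range(1, 7):
    assert distinct_colorings(n) == (n**4 + 2*n**3 + 3*n**2 + 2*n) // 8
print(distinct_colorings(2), distinct_colorings(6))  # 6 and 231
```

Filtering the same enumeration down to colorings whose four edges receive four distinct colors reproduces part a)'s count $3\binom{N}{4}$, e.g. 45 for $N=6$.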
2020-07-14 13:08:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7677639722824097, "perplexity": 184.6050312705082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880665.3/warc/CC-MAIN-20200714114524-20200714144524-00341.warc.gz"}
https://www.r-bloggers.com/2012/05/discovering-power-laws-and-removing-shit/
Imagine you perform a statistical analysis on a time series of stock market data. After some transformation, averaging, and “renormalization” you find that the resulting quantity, let’s call it $x$, behaves as a function of time like $x(t)\propto t^{-\beta}$. Since you are a physicist, you get excited because you have just discovered a power law. Physicists love power laws. Now you analyze some more financial time series using the same technique and find similar behavior. Power laws all over the place. You get even more excited. In the paper about the analysis (which you submit to a physics journal) you may throw in buzzwords like “scale-free”, “critical phase transition”, and “universality”. You can also add to your CV that you contributed to the understanding of market dynamics.

An analysis of this sort was published in PNAS 108(19) [1]. The authors look at “trend switching in financial markets”. They analyze the behavior of financial time series between “turning points” and find power laws everywhere. But then somebody else [2] applied the same analysis to artificial data, simple Brownian motion, and discovered power laws as well. What does that mean? Is this effect not a bit too “universal” if it also occurs in integrated noise? It so turned out the observed power laws were an artefact of the statistical analysis. They have nothing to do with critical phase transitions and scale-freeness of financial market dynamics.

I reproduced the above-mentioned analysis in R using the following script (feel free to use it to discover your own power laws):

#produce time series
N <- 1e6
x <- cumsum(rnorm(N+1))
dt <- 5
loc.max <- rep(0,N)

#find local maxima of order dt
for (i in seq(dt+1,N-dt)) {
  w <- x[(i-dt):(i+dt)]
  if ( order(w,decreasing=T)[1]==dt+1 ) loc.max[i] <- 1
}

#if w is the time difference between two local
#maxima l1 and l2, save all possible snippets
#l1...(w)...l2...(w)... of length 2*w+1 from x
loc.max.pos <- which(loc.max==1)
loc.max.N <- length(loc.max.pos)
profiles <- list()
for (j in seq(loc.max.N-5)) { #skip last 5 to keep indices below N
  w <- x[loc.max.pos[j]:(2*loc.max.pos[j+1]-loc.max.pos[j])]
  w <- w - min(w) #"normalize"
  profiles[[j]] <- w
}

#transform all profiles so they have the same length len.max
#and average over them
len.max <- 1000
profile.avg <- rep(0,len.max)
for (p in profiles) {
  p.len <- length(p)
  rep.vec <- rep( floor(len.max/p.len), p.len )
  rep.rest <- len.max - sum(rep.vec)
  rep.indplus1 <- sample(seq(p.len),rep.rest)
  rep.vec[rep.indplus1] <- rep.vec[rep.indplus1] + 1
  # now rep(p,rep.vec) returns a vector that "looks like" p
  # but has length len.max
  profile.avg <- profile.avg + rep(p,rep.vec)
}
profile.avg <- profile.avg/length(profiles)

#plot averaged profile
jpeg("avgProfile.jpg",width=400,height=400,quality=100)
t <- seq(0,2,length.out=len.max)
plot(t,profile.avg,type="l",ylab="x")
dev.off()

#plot in loglog-axes
jpeg("powerlaw.jpg",width=400,height=400,quality=100)
plot(NULL,xlim=c(-2.2,-0.5),ylim=c(.5,0.8),
  xlab=expression(log[10]*"|"*t-1*"|"),ylab=expression(log[10]*"x"))
ind.p <- seq(0.5*(len.max+1),len.max)
points(log10(abs(t[ind.p]-1)),log10(profile.avg[ind.p]),
  col="orange",lwd=3,pch=15)
ind.m <- seq(0.5*(len.max+1),1)
lines(log10(abs(t[ind.m]-1)),log10(profile.avg[ind.m]),col="blue",lwd=2)

#fit straight line over range of interest
fit.ind.p <- ind.p[12:150]
beta <- lm( log10(profile.avg[fit.ind.p])~log10(abs(t[fit.ind.p]-1)) )$coefficients
#... and plot it
lines(log10(abs(t[fit.ind.p]-1)),
  beta[1]+.02+beta[2]*log10(abs(t[fit.ind.p]-1)),lwd=2,lty=2)
text(x=-1,y=.75,labels=paste("beta=",round(beta[2],digits=4)))
legend(x=-2.1,y=.65,c("right slope","left slope"),lty=c(1,1),lwd=c(2,2),
  pch=c(15,-1),col=c("orange","blue"))
dev.off()

In the script, I analyzed a time series of Brownian motion of length 1 million. A point $x_i$ in the time series is labelled as a local maximum if it is the largest value in the window $(i-5,\cdots,i+5)$. Then the original time series is cut into snippets: each snippet starts at a local maximum and has length $2w$, where $w$ is the distance to the next local maximum in the time series. Of course, the different snippets have varying lengths. In order to make them comparable, they are artificially stretched into series of equal lengths and shifted such that their minimum is equal to zero. The average over all snippets is the quantity of interest, plotted here:

[averaged-profile figure omitted]

The original idea was that this profile can be interpreted as characterizing the average behavior of the financial time series between and after turning points (local maxima). The slopes to the left and to the right of the central peak follow power laws with exponent $\beta\approx-0.17$, as shown by fitting a straight line in a log-log plot:

[log-log fit figure omitted]

It bothers me a bit that, unlike what is reported in [1] and [2], in my analysis the coefficients corresponding to the two sides of the peak come out the same. But the authors were not very specific about how the “stretching” of the individual snippets was actually performed. It seems my way of doing it (lines 30-34 in the code) made the results even more “universal”.

The reply which uncovered the mistake ultimately got rejected by PNAS, with the explanation of not adding significantly to the field. A funny part of the story is the letter [3] one of the authors of the rejected reply sent to the editor of PNAS. He complains about the bad state of science if simpler explanations of theories get rejected for being too boring. The most remarkable sentence is this:

… a fundamental error can remain published as “truth” in PNAS without the normal debate that should be the domaim of real science. … In other words, we can add “shit” to the field but we cannot correct and remove “shit” from the field

It probably helps to be a famous professor to get away with writing a letter like this to the editor of PNAS. I think publishing an analysis whose interpretation is not warranted by the data is nothing to be ashamed of. As can be nicely seen, the scientific process works; the error is found and reported. Future researchers are (ideally) warned and hopefully won’t make the same mistake. However, rejecting the one who uncovers the error and offers a simpler but less exciting explanation for the phenomenon, on grounds of not contributing anything new, is indeed something to be ashamed of. Shame on you, PNAS.

Another lesson learnt from this story is that, if you have performed any statistical analysis that is more complex than calculating the mean and the standard deviation, you should perform the same analysis on noise to make sure that whatever effect you observe is indeed a unique feature of your data and not an artefact of the analysis.

References:
[1] Preis et al (2011) “Switching processes in financial markets” PNAS doi: 10.1073/pnas.1019484108
[2] Filimonov & Sornette (2011) “Spurious trend switching phenomena in financial markets”, arxiv.org/abs/1112.3868
[3] D. Sornette (2011) Letter to the editor of PNAS
2022-06-28 06:05:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49971479177474976, "perplexity": 1533.4213206536635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00518.warc.gz"}
https://elingrelsson.se/cteen-choice-qqptpb/27194d-lua-random-distribution
Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). Inside MATCH, the lookup value is provided by the RAND function. This function returns a random number according to the rayleigh_irand(),   Compute the absolute difference between the sample mean and the distribution mean. There's equal mass before and after the peak. This function is an interface to the simple pseudo-random generator function rand provided by ANSI C. No guarantees can be given for its statistical properties. (integer/float). so that the call math.atan(y) This number was chosen for a couple reasons: … Precision. Getting different distributions out of uniform distribution. The Random.Range distribution is uniform. \int_0^\infty ds p_2(s) = 1 for positive-valued random variables. conditions-v1. For random number generation using the uniform distribution Lua has a built-in function math.random. but uses the signs of both arguments to find the Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. randperm . It uses the algorithm given by Erik Carter which used to be at Toggle navigation pinning. New in 5.4 is a generational mode for the garbage collector, which performs better for programs with lots of short-lived allocations. Returns the remainder of the division of x by y ESPN FilmsDefunct: 1. Once you have Lua installed, open your favorite text editor and get ready to code. Returns a boolean, Pixar Animation Studios 2. By default a is 0 and b is 1. or a float otherwise. This is not by chance. Lastly, their LuaLinks page is very extensive." Lua user group w… A few arbitrary choices were made in the implementation of the Lua wrapper. To change the range from 0 to 100 to 5 to 50, change 5*(math.random(21)-1) to 5*(math.random(10)) in the code below. Riemann zeta function (normalized to have unit average spacing) is: Returns the arc tangent of y/x (in radians), The algorithm contains no internal state, hence rayleigh_irand (It also handles correctly the case of x being zero.). Torch provides accurate mathematical random generation, based on Mersenne Twister random number generator. It provides convenient access to the capture mode control functions, as well … may run different Gaussian Random generators simultaneously, for (integer/float). call to return a Notes: In case where multiple versions of a package are shipped with a distribution, only the default version appears in the table. Set the random seed to 1, and create a random sample of 100 elements from the above defined distribution. This function returns a closure, which is a function which you can then The call math.random(n) is equivalent to math.random(1,n). Returns the square root of x. For random bytes lua-resty-random uses OpenSSL RAND_bytes that is included in OpenResty (or Nginx) when compiled with OpenSSL. Instead they come from MoonSharp, a Lua interpreter written in C#. It would be best if the bundled config.m4 file can be modified to search there by default, but other versions/packages/distros may change the installed location so try this if you can't get it working. Returns a pseudo-random number from a sequence with uniform distribution. In-depth lists of specific Disney-branded features. Classic editor History Comments Share. This function returns a random integer according to the When called without arguments, Marvel Studios (Marvel Cinematic Universe) 3. 
By default, the random integers have a range of 0 to 100, endpoints included, in steps of 5. For random bytes lua-resty-random uses OpenSSL RAND_bytes that is included in OpenResty (or Nginx) when compiled with OpenSSL. Hollywood Pictures 2. 1 Programming References 2 Lua Editors 3 Using Lua for World of Warcraft 4 References 5 See also WoW Lua Lua (from the Portuguese word for "moon" 1) is a scripting language used by World of Warcraft for Interface Customization. Reference e. AddOns WoW API Widget API XML UI Event API WoW Lua Macro API CVars.   p_2(s) = (32 / pi^2) * s^2 * e^((-4/pi) * s^2) (math.ceil, math.floor, and math.modf) Battle.net. In this case, sysbench will use a set of numbers starting from the lower (1) and reducing the frequency in a very fast way while moving towards bigger numbers. But very often, (in fact, almost all the time), none is explicitly defined, and even more rare is seeing some parametrization when the method allow… (integer/float). rayleigh_rand(),   The Lua distribution includes a host program called lua, which uses the Lua library to offer a complete, standalone Lua interpreter, for interactive or batch use. www.pjb.com.au/comp/contact.html, Montgomery, Hugh L. (1973), "The pair correlation of zeros of the zeta Dimension Films std::mt19937 gen; std::uniform_int_distribution uid (0, x.size - 1); x[uid (gen)]; Only functions that are not required or potentially unsafe for … However, from a security standpoint they are very weak. the GNU C library, this generator is much better. Copyright © 2015–2018 Lua.org, PUC-Rio. This function can supply the s parameter used by example with different means and standard-deviations, without them This distribution is exciting because it's symmetric – which makes it easy to work with. Hacking this terminal will begin the objective as endless waves of enemies begin to spawn.Four conduits colored red, white, blue, and cyan then appear around the map, requiring keys to activate which are dropped by special enemy units: Amalgams Demolysts at Ganymede on Jupiter, and heavy (often Eximus) units named Demolishers everywhere else.Acti… Pastebin is a website where you can store text online for a set period of time. true if integer m is below integer n when Donations: LTC… Obviously you need to replace the paths with the locations of your lua.h and lua.a files. quadrant of the result. Statistically speaking, LCPRNGs have a fair distribution. For example, It keeps some internal local state, but because it is a closure, you (so that the function returns the natural logarithm of x). std::mt19937 gen; std::uniform_int_distribution uid (0, x.size - 1); x[uid (gen)]; Walt Disney Animation Studios 2. Make sure that your main.lua is in the root of the archive, e.g. Historically, this generator was not very good, but in recent C libraries, e.g. for positive-valued random integers. Lua gives the final shape of the application, which will probably change a lot during the life cycle of the product. Unbias a random generator You are encouraged to solve this task according to the task description, using any language you may know. For example, the peak always divides the distribution in half. When we call it without arguments, it returns a pseudo-random real number with uniform distribution in the interval [0,1). 
The second example takes an number argument and returns a function which of the zeta function", Mathematics of Computation, American Mathematical These are not so much guides or tutorials, but a place to look for information on the Lua language itself. in: WoW Lua, Lua functions, Interface customization, Glossary. When called with two integers m and n, Returns the absolute value of x. from the ordered sequence of eigenvalues, one defines the normalized complex numbers whose real and imaginary components are independent These numerical constants are such that p_2 (s) is normalized: and the This function is an interface to the underling   randomget( {bassclef, trebleclef, sharp, natural} ) (). We can use Lua not only to glue components, but also to adapt and reshape them, and to create completely new components. Precision. For random numbers the library uses Lua's math.random, and math.randomseed. In this code, if you make rate variable 2, you get pretty much same distribution with the last one above. The average return-value is about 1.2533*sigma In Puzzle Luamath.random(min,max)and math.randomseed(seed)are not from the main Lua distribution, which is based on the C library's rand()and srand(int). Miramax Films 1. basic-random. (integer/float), Returns the argument with the minimum value, Lua code. Converts the angle x from degrees to radians. The Zipf distribution, sometimes referred to as the zeta distribution, is a discrete distribution commonly used in linguistics, insurance, and the modeling of rare events. Given a weighted one bit generator of random numbers where the probability of a one occuring, , is not the same as , the probability of a zero occuring, the probability of the occurrence of a one followed by a zero is × . will return one of the items in the array, the first item being returned This function returns a closure, which is a function which you can then Tue Jun 26 13:27:21 -03 2018, Software and Firmware | Resources | QSC Self Help Portal | Q-SYS Help Feedback, Copyright © 2021 QSC, LLC. To change the range from 0 to 100 to 5 to 50, change 5*(math.random(21)-1) to 5*(math.random(10)) in the code below. I think most programming languages have a pseudo-random number generator, and that generators probably generate uniformly-distributed … Returns "integer" if x is an integer, You should note that on LuaJIT environment, LuaJIT uses a Tausworthe PRNG with period 2^223 to implement math.random and math.randomseed. See also math.randomseed. function", Analytic number theory, Proc. The size of structures used in the decNumber package determines the maximum number of decimal digits in decimal numbers it can manipulate. For random bytes lua-resty-random uses OpenSSL RAND_bytes that is included in OpenResty (or Nginx) when compiled with OpenSSL. See also math.randomseed. You can reduce lots of complicated mathematics down to a few rules of thumb, because you don't need to worry about weird edge cases. Returns the largest integral value smaller than or equal to x. Only a subset of version 5.1 of the official Lua specification is implemented, and should suit most addon maker's needs. Gaussian distributions with equal variance and zero mean, in which case, Society, 48 (177): 273-308, ISSN 0025-5718, JSTOR 2007890, MR 866115, When we call it with only one argument, an integer n, it returns an integer pseudo-random number x such that 1 <= x <= n. For instance, you can simulate the result of a die with random (6). 
rayleigh_rand(s) returns a random number according to the Rayleigh distribution, a continuous probability distribution for positive-valued random variables with density

    f(x; sigma) = x * exp(-x^2 / (2*sigma^2)) / sigma^2    for x >= 0.

The s argument supplies the sigma parameter; if s is not given it defaults to 1.0. The average return value is about 1.2533*sigma (the Rayleigh mean is sigma*sqrt(pi/2)). The distribution arises naturally for complex numbers whose real and imaginary components are independent Gaussians with equal variance and zero mean: in that case the absolute value of the complex number is Rayleigh-distributed. rayleigh_irand is the corresponding generator for positive-valued random integers, useful for quantities such as MIDI parameters or a number of people; its algorithm contains no internal state, hence rayleigh_irand directly returns an integer instead of returning a closure.

A typical exercise with any of these generators: create a random sample of 100 elements from the distribution, compute the absolute difference between the sample mean and the distribution mean, and show a histogram of the data. (In Python you would use the functions available in numpy and scipy; in Lua a dozen lines suffice.)

A related idiom for stepped uniform integers: 5*(math.random(21)-1) yields random integers with a range of 0 to 100, endpoints included, in steps of 5; to change the range to 5 to 50, change it to 5*math.random(10).
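Since the Rayleigh CDF, F(x) = 1 - exp(-x^2/(2*sigma^2)), inverts in closed form, a sampler is a one-liner. This inverse-transform sketch is again an assumption about method, not the module's code:

    -- Sketch: Rayleigh sampler by inverse transform.
    local function rayleigh(sigma)
      sigma = sigma or 1.0
      local u = 1 - math.random()        -- in (0, 1], avoids log(0)
      return sigma * math.sqrt(-2 * math.log(u))
    end

    -- crude check: the sample mean should approach 1.2533 * sigma
    local n, sum = 100000, 0
    for _ = 1, n do sum = sum + rayleigh(1.0) end
    print(sum / n)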
A more exotic distribution comes from random matrix theory. Consider the eigenvalues of random Hermitian matrices from the Gaussian Unitary Ensemble, the ensemble modeling Hamiltonians lacking time-reversal symmetry. From the ordered sequence of eigenvalues, one defines the normalized spacings

    s = (lambda_{n+1} - lambda_n) / <spacing>,

where <spacing> is the mean spacing. The probability density of these spacings is approximately

    p_2(s) = (32 / pi^2) * s^2 * exp(-(4/pi) * s^2).

These numerical constants are such that p_2(s) is normalized and has unit mean:

    \int_0^\infty p_2(s) ds = 1   and   \int_0^\infty s p_2(s) ds = 1.

Montgomery (1973) conjectured that the pair correlation between pairs of suitably normalized zeros of the zeta function is

    1 - (sin(pi*u) / (pi*u))^2 + delta(u),

which, as Freeman Dyson pointed out to him, is the same as the pair correlation function of random Hermitian matrices; Odlyzko (1987) later gave striking numerical evidence for the agreement. (Full citations are collected in the references at the end.)
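Because p_2 is bounded (its peak, at s = sqrt(pi)/2, is about 0.94, below 1), spacings can be drawn by simple rejection sampling against a uniform envelope. A sketch, truncating the negligible tail beyond s = 5:

    -- Sketch: rejection sampling from the GUE spacing density p_2(s).
    local function p2(s)
      return (32 / math.pi^2) * s^2 * math.exp(-4 * s^2 / math.pi)
    end

    local function gue_spacing()
      while true do
        local s = 5 * math.random()     -- candidate uniform on [0, 5)
        if math.random() < p2(s) then   -- accept with probability p2(s) (< 1)
          return s
        end
      end
    end

    for _ = 1, 5 do print(gue_spacing()) end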
Other Lua environments ship random-number facilities of their own. A typical statistics module provides, for the Cauchy distribution with location a and scale b: cauchy.pdf(x, a, b), the probability density function evaluated at x; cauchy.logpdf(x, a, b), the log of that density; and cauchy.cdf(x, a, b), the cumulative distribution function. In Torch, the random-generation functions take as optional first argument a random number generator; if this argument is not provided, the default global RNG is used, and that generator is provided with a random seed via seed() when torch is being initialized. In the floating-point build of NodeMCU, math.random without arguments likewise returns a random real number with uniform distribution in the interval [0,1). A complementary-multiply-with-carry generator exposes Cmwc.random64(state: number[4097], index: number, min: number, max: number), where max is exclusive. Unity's Random.Range is a well-known trap because its two overloads disagree: Random.Range(0, 10) can return a value between 0 and 9 (the integer max is exclusive), while Random.Range(0.0f, 1.0f) can return 1.0 (the float max is inclusive). Finally, Luaj is a Lua interpreter for the JVM based on the 5.2.x version of Lua; its build includes targets for creating a distribution file and for measuring the code coverage of unit tests.
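The Cauchy distribution also inverts in closed form, so it can be sampled directly from a uniform source. The helper below is a generic sketch, not part of the cauchy.* module mentioned above:

    -- Sketch: Cauchy sampler via the inverse CDF,
    -- F^-1(u) = a + b * tan(pi * (u - 1/2)) for location a, scale b.
    local function cauchy_sample(a, b)
      local u = math.random()
      return a + b * math.tan(math.pi * (u - 0.5))
    end

    print(cauchy_sample(0, 1))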
Benchmarking tools face the same modeling question. sysbench comes with five different methods to generate random numbers, and there are many different ways to extend sysbench to make it what you need: for instance, the easy way to expand or modify the MySQL tests is to use the Lua extension, as is the embedded way it handles random number generation. With its 'special' distribution, sysbench will use a set of numbers starting from the lower bound (1) and reducing the frequency in a very fast way while moving towards bigger numbers; if you raise the rate variable to 2, you get pretty much the same shape, only flattened. But very often (in fact, almost all the time) no distribution is explicitly defined in a test, and it is even rarer to see any parametrization where the method allows it, so the defaults silently apply.

A classic puzzle in this area is to unbias a random generator. Given a weighted one-bit generator of random numbers where the probability of a one occurring, p, is not the same as 1-p, the probability of a zero occurring, the probability of a one followed by a zero is p*(1-p); by symmetry this equals the probability of a zero followed by a one, so reading bits in pairs, emitting the first bit of each unequal pair and discarding equal pairs, yields perfectly unbiased output (von Neumann's trick).
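A sketch of that trick, with a stand-in biased source (P(1) = 0.7 is an arbitrary choice for the demo):

    -- Sketch of von Neumann unbiasing: P(10) = p*(1-p) = P(01),
    -- so map 10 -> 1, 01 -> 0, and discard the equal pairs 00 and 11.
    local function biased()
      return math.random() < 0.7 and 1 or 0   -- stand-in source, P(1) = 0.7
    end

    local function unbiased()
      while true do
        local a, b = biased(), biased()
        if a ~= b then return a end           -- 10 -> 1, 01 -> 0
      end
    end

    print(unbiased())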
On the language side, Lua 5.4 was released at the end of June; it is the fifteenth major version of the lightweight scripting language since its creation in 1993. New in 5.4 are "attributes" on local variables, allowing developers to mark variables as constant or to-be-closed; a generational mode for the garbage collector, which performs better for programs with lots of short-lived allocations; and a reworked math.random that no longer relies on the C library's generator. For scale, Lua version 5.3.4 is implemented in approximately 24,000 lines of C code. The Lua distribution includes a host program called lua, which uses the Lua library to offer a complete, standalone Lua interpreter for interactive or batch use, and a compiler, luac, which compiles a Lua text file into Lua bytecode and saves it as a .lc file. Beyond the reference implementation there are also a Lua interpreter written in C# and the JVM-based Luaj mentioned above.

The math library provides all its functions and constants inside the table math. Functions with the annotation "integer/float" give integer results for integer arguments and float results for float (or mixed) arguments. In brief: math.abs returns the absolute value of x; math.ceil returns the smallest integral value larger than or equal to x, and math.floor the largest integral value smaller than or equal to x; math.modf splits x into integral and fractional parts, and its second result is always a float; math.fmod returns the remainder of the division of x by y that rounds the quotient towards zero; math.exp returns the value e^x (where e is the base of natural logarithms), and math.log's base defaults to e, so that the one-argument form returns the natural logarithm of x; math.sqrt returns the square root of x (you can also use the expression x^0.5 to compute this value); math.sin and math.cos return the sine and cosine of x (assumed to be in radians); math.atan(y [, x]) returns the arc tangent of y/x (in radians), using the signs of both arguments to find the quadrant of the result and handling correctly the case of x being zero, while the call math.atan(y) simply returns the arc tangent of y; math.deg converts the angle x from radians to degrees, and math.rad from degrees to radians; math.max and math.min return the argument with the maximum and minimum value, according to the Lua operator <; math.ult(m, n) returns true if integer m is below integer n when they are compared as unsigned integers; math.tointeger returns that integer if the value x is convertible to an integer, and otherwise returns nil; math.type returns "integer" if x is an integer, "float" if it is a float, and nil otherwise; and math.huge is the float value HUGE_VAL, a value larger than any other numeric value.

For learning and reference: the official reference manual contains syntax and basic commands, but is heavy-going for readers without a programming background, being not so much a guide or tutorial as a place to look up information on the Lua language itself (the math library of Lua 5.1 is at www.lua.org/manual/5.1/manual.html#5.6); Programming in Lua, by Roberto Ierusalimschy, is the definitive Lua guide, and assumes a reader somewhat familiar with programming in general; the lua-users wiki includes a FAQ, tutorials and extended help on many topics, and the group also hosts a mailing list (with past archive) and an IRC channel, #lua at irc.freenode.net.
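A quick demo of a few of those calls (Lua 5.3 or later, since math.ult, math.tointeger and math.type arrived in 5.3):

    -- Small demo of the math-library behavior described above.
    print(math.tointeger(3.0))   -- 3: the float is convertible to an integer
    print(math.tointeger(3.5))   -- nil: not convertible
    print(math.fmod(7, -3))      -- 1: the quotient is rounded toward zero
    print(math.ult(-1, 1))       -- false: as unsigned, -1 is a huge value
    print(math.modf(3.7))        -- integral and fractional parts (second is a float)
    print(math.max(2, 9, 4))     -- 9
    print(math.huge > 1e308)     -- true: larger than any other numeric value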
Lua (from the Portuguese word for "moon") is a lightweight, high-level, multi-paradigm programming language designed primarily for embedded use in applications, and games are where that shows most clearly: Lua was the most-used scripting language in a 2009 survey of the game industry, powering titles from World of Warcraft to Angry Birds. In World of Warcraft it is the language of interface customization (see the WoW API, Widget API, XML UI, Event API, Macro API and CVars references); only a subset of version 5.1 of the official Lua specification is implemented there, omitting functions that are not required or are potentially unsafe, but it should suit most addon makers' needs. More generally, we can use Lua not only to glue components, but also to adapt and reshape them, and to create completely new components; Lua gives the final shape of the application, which will probably change a lot during the life cycle of the product. After all, one of the main strengths of Lua is its extensibility. The practicalities are modest: when building against the C API you need to replace the paths with the locations of your lua.h and lua.a files, and when packaging a pure-Lua application into an archive, make sure that your main.lua is in the root of the archive. Embedded distributions add modules of their own: CHDK, for example, includes a standard Lua module in its full distribution, and a Lua build of the decNumber package handles decimal arithmetic, where the size of the structures used determines the maximum number of decimal digits in the decimal numbers it can manipulate (the default build of the decNumber module is configured for 69 digits).

References:
Montgomery, Hugh L. (1973), "The pair correlation of zeros of the zeta function", Analytic number theory, Proc. Sympos. Pure Math., XXIV, Providence, R.I.: American Mathematical Society, pp. 181-193, MR 0337821.
Odlyzko, A. M. (1987), "On the distribution of spacings between zeros of the zeta function", Mathematics of Computation, American Mathematical Society, 48 (177): 273-308, ISSN 0025-5718, doi:10.2307/2007890, JSTOR 2007890, MR 866115.
John Derbyshire, Prime Obsession, Joseph Henry Press, 2003, p. 288.
Online resources: design.caltech.edu/erik/Misc/Gaussian.html; www.pjb.com.au/comp/lua/test_randomdist.lua; www.pjb.com.au/comp/contact.html; en.wikipedia.org/wiki/Normal_distribution; en.wikipedia.org/wiki/Random_matrix#Gaussian_ensembles; en.wikipedia.org/wiki/Random_matrix#Distribution_of_level_spacings; en.wikipedia.org/wiki/Montgomery%27s_pair_correlation_conjecture; en.wikipedia.org/wiki/Radial_distribution_function; en.wikipedia.org/wiki/Pair_distribution_function; en.wikipedia.org/wiki/Rayleigh_distribution.