http://tex.stackexchange.com/questions/75262/automatic-enumerate-numbering-from-a-specified-item-number?answertab=votes
# Automatic enumerate numbering from a specified item number

Is there a way that I can start an enumerate numbering at, say, 17, then have the succeeding item numbers automatically add 2 to the preceding one? Say I want to typeset the answers to the odd-numbered exercises starting from 17; I want my list to show

    17. answer 17
    19. answer 19
    21. answer 21
    23. answer 23
    ...

I know that this can be done manually, but after a while, typing the item numbers manually becomes bothersome.

**Edit** I put the list in the code environment to prevent automatic renumbering.

## 1 Answer

You can use a custom counter with the enumitem package, and increment this counter each time it is used as the label:

## Code:

    \documentclass{article}
    \newcounter{MyCounter}
    \usepackage{enumitem}
    \begin{document}
    \setcounter{MyCounter}{17}% initial value
    \begin{enumerate}[label={\arabic{MyCounter}\addtocounter{MyCounter}{2}}]
      \item abc
      \item bcd
      \item xyz
    \end{enumerate}
    \end{document}
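For comparison, here is an alternative sketch, not part of the original answer, that derives the printed number from enumitem's own first-level counter `enumi` instead of stepping an auxiliary counter as a side effect of the label. It assumes an e-TeX based engine for `\numexpr` (true of all modern TeX distributions): item $n$ prints $2n+15$, i.e. 17, 19, 21, ...

    \documentclass{article}
    \usepackage{enumitem}
    \begin{document}
    % enumi is stepped by each \item, so no auxiliary counter is needed;
    % the label of item n evaluates to 2n + 15 = 17, 19, 21, ...
    \begin{enumerate}[label={\the\numexpr 2*\value{enumi}+15\relax.}]
      \item answer 17
      \item answer 19
      \item answer 21
    \end{enumerate}
    \end{document}

Note that `\ref` support would need a starred label form; this sketch only addresses the printed labels.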
https://www.springerprofessional.de/artificial-intelligence-in-medicine/3683470
## About This Book

This book constitutes the refereed proceedings of the 13th Conference on Artificial Intelligence in Medicine, AIME 2011, held in Bled, Slovenia, in July 2011. The 42 revised full and short papers presented together with 2 invited talks were carefully reviewed and selected from 113 submissions. The papers are organized in topical sections on knowledge-based systems; data mining; special session on AI applications; probabilistic modeling and reasoning; terminologies and ontologies; temporal reasoning and temporal data mining; therapy planning, scheduling and guideline-based care; and natural language processing.

## Table of Contents

### Understanding Etiology of Complex Neurodevelopmental Disorders: Two Approaches

Complex human phenotypes, such as autism, schizophrenia, and anxiety, undoubtedly partially overlap in genomic variations that predispose to or protect against these maladies. (Genetic overlap of complex phenotypes has gained increasing experimental support and is no longer just an ungrounded scientific hypothesis.) Furthermore, as yet largely unknown shared environmental factors likely tend to trigger the manifestation of more than one phenotype. Although it may seem overly ambitious to target multiple phenotypes jointly, we believe we can obtain much more information from existing data and gain new insights into individual phenotypes by modeling phenotypes jointly. My talk sketches two distinct computational approaches to this problem.

Andrey Rzhetsky

### What BPM Technology Can Do for Healthcare Process Support

Healthcare organizations are facing the challenge of delivering personalized services to their patients in a cost-effective and efficient manner. This, in turn, requires advanced IT support for healthcare processes covering both organizational procedures and knowledge-intensive, dynamic treatment processes. Nowadays, the required agility is often hindered by a lack of flexibility in hospital information systems. To overcome this inflexibility, a new generation of information systems, denoted as process-aware information systems (PAISs), has emerged. In contrast to data- and function-centered information systems, a PAIS separates process logic from application code and thus provides an additional architectural layer. However, the introduction of process-aware hospital information systems must neither result in rigidity nor restrict staff members in their daily work. This keynote presentation reflects on recent developments from the business process management (BPM) domain, which enable process adaptation, process flexibility, and process evolution. These key features will be illustrated with existing BPM frameworks. Altogether, emerging BPM methods, concepts and technologies will contribute to further enhance IT support for healthcare processes.

Manfred Reichert

### Elicitation of Neurological Knowledge with ABML

The paper describes the process of knowledge elicitation for a neurological decision support system. To alleviate the difficult problem of knowledge elicitation from data and domain experts, we used a recently developed technique called ABML (Argument Based Machine Learning). The paper demonstrates ABML's advantage in combining machine learning and expert knowledge. ABML guides the expert to explain critical special cases which cannot be handled automatically by machine learning. This very efficiently reduces the expert's workload, and combines it with automatically learned knowledge.
We developed a decision support system to help the neurologists differentiate between three types of tremors: Parkinsonian, essential, and mixed tremor (co-morbidity). The system is intended to act as a second opinion for the neurologists, and most importantly to help them reduce the number of patients in the "gray area" that require a very costly further examination (DaTSCAN).

Vida Groznik, Matej Guid, Aleksander Sadikov, Martin Možina, Dejan Georgiev, Veronika Kragelj, Samo Ribarič, Zvezdan Pirtošek, Ivan Bratko

### Intelligent Configuration of Social Support Networks Around Depressed Persons

Helping someone who is depressed can be very important to the depressed person. A number of supportive family members or friends can often make a big difference. This paper addresses how a social support network can be formed, taking the needs of the support recipient and the possibilities of the potential support providers into account. To do so, dynamic models about the preferences and needs of both support providers and support recipients are exploited. The outcome of this is used as input for a configuration process of a support network. In a case study, it is shown how such an intelligently formed network results in a reduced long-term stress level.

Azizi A. Aziz, Michel C. A. Klein, Jan Treur

### Argumentation-Logic for Explaining Anomalous Patient Responses to Treatments

The EIRA system has proved to be successful in the detection of anomalous patient responses to treatments in the Intensive Care Unit (ICU). One weakness of EIRA is the lack of mechanisms to describe to the clinicians the rationales behind the anomaly detections. In this paper, we extend EIRA by providing it with an argumentation-based justification system that formalizes and communicates to the clinicians the reasons why a patient response is anomalous. The implemented justification system uses human-like argumentation techniques and is based on real dialogues between ICU clinicians.

Maria Adela Grando, Laura Moss, David Glasspool, Derek Sleeman, Malcolm Sim, Charlotte Gilhooly, John Kinsella

### How to Use Symbolic Fusion to Support the Sleep Apnea Syndrome Diagnosis

The Sleep Apnea Syndrome is a sleep disorder characterized by frequently repeated respiratory disorders during sleep. Diagnosing it requires the simultaneous recording of many physiological parameters. The analysis of these curves is a time-consuming task performed by sleep physicians. First, they detect some physiological events on each curve and then they point out links between respiratory events and their consequences. To support the diagnosis, we used symbolic fusion on elementary events, which connects events to their sleep context - sleep stage and body position - and to the respiratory event responsible for their occurrence. The reference indicator is the Apnea-Hypopnea Index (AHI), defined as the average hourly frequency of apneas or hypopneas while the patient is sleeping. We worked on the polysomnographies of 59 patients, which were first completely analyzed by a sleep physician and then analyzed by our method. We compared the ratio of the AHI obtained by the automatic analysis to the AHI obtained by the sleep physician:
$$\delta=\frac{\mathrm{AHI}(\text{automatic analysis})}{\mathrm{AHI}(\text{sleep physician analysis})}$$
Globally, we overvalued the count of apneas and hypopneas for the group of patients with AHI ≤ 5, who are considered healthy patients. On average, for these patients, δ = 2.71. For patients with mild or moderate Sleep Apnea Syndrome we globally found a similar AHI.
On average, for these patients, δ = 1.04. For patients with severe Sleep Apnea Syndrome, we slightly undervalued the count of respiratory events. On average, for these patients, δ = 0.83. This leads to the same severity class for most of the patients.

Adrien Ugon, Jean-Gabriel Ganascia, Carole Philippe, Hélène Amiel, Pierre Lévy

### Ontology-Based Generation of Dynamic Feedback on Physical Activity

Improving physical activity patterns is an important focus in the treatment of chronic illnesses. We describe a system to monitor activity and provide feedback to help patients reach a healthy daily pattern. The system has shown positive effects in trials on patient groups including COPD and obese patients. We describe the design and implementation of a new feedback generation module which improves interaction with the patient by providing personalised dynamic context-aware feedback. The system uses an ontology of messages to find appropriate feedback, using context information to prune irrelevant paths. The system adapts using derived probabilities about user preferences for certain message types. We aim to improve patient compliance and user experience.

Wilko Wieringa, Harm op den Akker, Valerie M. Jones, Rieks op den Akker, Hermie J. Hermens

### A Case Study of Stacked Multi-view Learning in Dementia Research

Classification of different types of dementia commonly involves examination from several perspectives, e.g., medical images, neuropsychological tests, etc. Thus, dementia classification should lend itself to so-called multi-view learning. Instead of simply combining several views, we use stacking to make the most of the information from the various views (PET scans, MMSE, CERAD and demographic variables). In the paper, we not only show the performance of stacked multi-view learning on classifying dementia data, we also try to explain the factors contributing to its performance. More specifically, we show that the correlation of views on the base and the meta level should be within certain ranges to facilitate successful stacked multi-view learning.

Rui Li, Andreas Hapfelmeier, Jana Schmidt, Robert Perneczky, Alexander Drzezga, Alexander Kurz, Stefan Kramer

### Statistical Machine Learning for Automatic Assessment of Physical Activity Intensity Using Multi-axial Accelerometry and Heart Rate

This work explores the automatic recognition of physical activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total amount of 179.80 h of data divided into: sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning algorithms comprise the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both k-means and GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to be concentrated around transients, which limits their practical impact. Hence, we consider our proposal to be suitable for 24 h-based monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
Fernando García-García, Gema García-Sáez, Paloma Chausa, Iñaki Martínez-Sarriegui, Pedro José Benito, Enrique J. Gómez, M. Elena Hernando

### A Data Mining Library for miRNA Annotation and Analysis

Understanding the key role that miRNAs play in the regulation of gene expression is one of the most important challenges in modern molecular biology. Standard gene set enrichment analysis (GSEA) is not appropriate in this context, due to the low specificity of the relation between miRNAs and their target genes. We developed alternative strategies to gain better insights into the differences in biological processes involved in different experimental conditions. We here describe a novel method to analyze and interpret miRNA expression data correctly, and demonstrate that annotating miRNAs directly to biological processes through their target genes (which is nevertheless the only way possible) is a non-trivial task. We are currently employing the same strategy to relate miRNA expression patterns directly to pathway information, to generate new hypotheses, which may be relevant for the interpretation of their role in the gene expression regulatory processes.

Angelo Nuzzo, Riccardo Beretta, Francesca Mulas, Valerie Roobrouck, Catherine Verfaillie, Blaz Zupan, Riccardo Bellazzi

### Ranking and 1-Dimensional Projection of Cell Development Transcription Profiles

The genome-scale transcription profile is known to be a good reporter of the state of the cell. Much of the early predictive modelling and cell-type clustering relied on this relation and has experimentally confirmed it. We have examined whether this also holds for prediction of a cell's staging, and focused on the inference of stage prediction models for stem cell development. We show that the problem relates to rank learning and, from the user's point of view, to projection of transcription profile data to a single dimension. Our comparison of several state-of-the-art algorithms on 10 data sets from Gene Expression Omnibus shows that rank learning can be successfully applied to developmental cell staging, and that relatively simple techniques can perform surprisingly well.

Lan Zagar, Francesca Mulas, Riccardo Bellazzi, Blaz Zupan

### Comparing Machine-Learning Classifiers in Keratoconus Diagnosis from ORA Examinations

Keratoconus identification has become a step of primary importance in the preoperative evaluation for refractive surgery. As ophthalmological knowledge has improved, corneal physical parameters have come to be considered important to its evaluation. The Ocular Response Analyzer (ORA) provides some physical parameters using an applanation process to measure the cornea's biomechanical properties. This paper presents a study of machine learning classifiers in keratoconus diagnosis from ORA examinations. As a first use of a machine learning approach with ORA parameters, this research work presents a performance comparison of the main machine learning algorithms. This approach improves the analysis of ORA parameters, helping ophthalmologists' efficiency in clinical diagnosis.

Aydano P. Machado, João Marcelo Lyra, Renato Ambrósio, Guilherme Ribeiro, Luana P. N. Araújo, Camilla Xavier, Evandro Costa

### HRVFrame: Java-Based Framework for Feature Extraction from Cardiac Rhythm

Heart rate variability (HRV) analysis can be successfully applied to automatic classification of cardiac rhythm abnormalities. This paper presents a novel Java-based computer framework for feature extraction from cardiac rhythms.
The framework, called HRVFrame, implements more than 30 HRV linear time domain, frequency domain, time-frequency domain, and nonlinear features. Output of the framework in the form of .arff files enables easier medical knowledge discovery via platforms such as RapidMiner or Weka. The scope of the framework facilitates comparison of models for different cardiac disorders. Some of the features implemented in the framework can also be applied to other biomedical time series. The thorough approach to feature extraction pursued in this work is also encouraged for other types of biomedical time series.

Alan Jovic, Nikola Bogunovic

### Lessons Learned from Implementing and Evaluating Computerized Decision Support Systems

A potentially effective IT intervention to implement guidelines and evidence-based practice consists of the use of computerized decision support systems (CDSSs). CDSSs aim at providing meaningful feedback to professionals in order to positively influence their behavior. Intensive care medicine, with its heavy reliance on information and the advanced information infrastructure in intensive care units (ICUs), is an attractive specialty and environment for applying and investigating CDSSs. In particular, antibiotic prescription, control of the tidal volumes in the lungs, and control of glucose levels in the blood form hot topics in intensive care medicine and provide opportunities for decision support applications. However, issues pertaining to the design, implementation, critical success factors, as well as the evaluation of CDSSs are largely still open, especially in these domains. This work describes important issues learned from designing and implementing CDSSs in these domains based on our literature reviews and lessons learned from conducting various trials in our ICU.

Saeid Eslami, Nicolette F. de Keizer, Evert de Jonge, Dave Dongelmans, Marcus J. Schultz, Ameen Abu-Hanna

### CARDSS: Development and Evaluation of a Guideline Based Decision Support System for Cardiac Rehabilitation

Cardiac rehabilitation is a multidisciplinary therapy aimed at recovery and secondary prevention after hospitalization for cardiac incidents (such as myocardial infarctions) and cardiac interventions (such as heart surgery). To stimulate implementation of the national guidelines, an electronic patient record system with computerised decision support functionalities called CARDSS (cardiac rehabilitation decision support system) was developed and made available to Dutch rehabilitation clinics. The system was quantitatively evaluated in a cluster randomised trial at 31 clinics, and qualitatively by interviewing 29 users of the system. Computerised decision support was found to improve guideline concordance by increasing professional knowledge of preferred practice, by reducing inertia to previous practice, and by reducing guideline complexity. It was not effective when organizational or procedural changes were required that users considered to be beyond their responsibilities.

Niels Peek, Rick Goud, Nicolette de Keizer, Mariëtte van Engen-Verheul, Hareld Kemps, Arie Hasman

### Using Formal Concept Analysis to Discover Patterns of Non-compliance with Clinical Practice Guidelines: A Case Study in the Management of Breast Cancer

Clinical decision support systems (CDSSs) may be appropriate tools to promote the use of clinical practice guidelines (CPGs).
However, compliance with CPGs is a multifactorial process that relies on the CPGs to be implemented, the physician(s) in charge of the decision, and the patient to manage. Formal concept analysis (FCA) allows one to derive implicit relationships from a set of objects described by their attributes, based on the principle of attribute sharing between objects. We used FCA to elicit patient-based formal concepts related to the non-conformity of multidisciplinary staff meeting (MSM) decisions with CPGs in the domain of breast cancer management. We developed a strategy for selecting attributes and making lattices manageable. We found that when not using the guideline-based CDSS OncoDoc2, patients with bad prognostic factors were associated with non-compliant decisions. This was corrected when the system was used during MSMs.

Nizar Messai, Jacques Bouaud, Marie-Aude Aufaure, Laurent Zelek, Brigitte Séroussi

### Integrating Clinical Decision Support System Development into a Development Process of Clinical Practice – Experiences from Dementia Care

This paper describes the process of developing the decision-support system DMSS (Dementia Management and Support System) and some lessons learned. An action research and participatory design approach has been adopted during development, with a strong research focus on optimizing support to physicians in dementia diagnosis assessment, involving a number of physicians and clinics in the process. A stand-alone version is currently used in 11 clinics distributed over four countries. Results from evaluation studies show that the system and the physician agree in 84.6% of the patient cases and that the reasons for non-compliance lie primarily in the physician's insufficient knowledge. The impact the system has had on the individual physician's diagnostic procedure in observation studies, identified factors enabling the integration, and obstacles to use are presented and discussed. The system's support for assessing basic cognitive functions is being improved, primarily as a feature for personalization of a future web-based version of DMSS.

Helena Lindgren

### Personalized Techniques for Lifestyle Change

Online delivery of lifestyle intervention programs offers the potential to cost-effectively reach large cohorts of users with various information and dietary needs. Unfortunately, online systems can fail to engage users in the long term, affecting their ability to sustain positive lifestyle change. In this work we present the initial analysis of a large-scale application study of personalized technologies for lifestyle change. We evaluate the stickiness of an eHealth portal which provides individuals with three personalized tools – meal planner, social network feeds, and social comparison – to make change a reality in their lives. More than 5000 Australians took part in a 12-week study and provided solid empirical evidence for how the inclusion of personalized tools can assist and motivate users. Initial results show that the personalized tools boost user interaction with the portal, simplify information access, and assist in motivating users.

Jill Freyne, Shlomo Berkovsky, Nilufar Baghaei, Stephen Kimani, Gregory Smith

### The Intelligent Ventilator Project: Application of Physiological Models in Decision Support

This paper describes progress in a model-based approach to building a decision support system for mechanical ventilation. It highlights that the process of building models promotes the generation of ideas, and describes three systems resulting from this process, i.e.
for assessing pulmonary gas exchange, calculating arterial acid-base status, and optimizing mechanical ventilation. Each system is presented and its current status and impact reviewed.

Stephen E. Rees, Dan S. Karbing, Charlotte Allerød, Marianne Toftegaard, Per Thorgaard, Egon Toft, Søren Kjærgaard, Steen Andreassen

### Clinical Time Series Data Analysis Using Mathematical Models and DBNs

Much knowledge of human physiology is formalised as systems of differential equations. For example, standard models of pharmacokinetics and pharmacodynamics use systems of differential equations to describe a drug's movement through the body and its effects. Here, we propose a method for automatically incorporating this existing knowledge into a Dynamic Bayesian Network (DBN) framework. A benefit of recasting a differential equation model as a DBN is that the DBN can be used to individualise the model parameters dynamically, based on real-time evidence. Our approach provides principled handling of data and model uncertainty, and facilitates integration of multiple strands of temporal evidence. We demonstrate our approach with an abstract example and evaluate it in a real-world medical problem, tracking the interaction of insulin and glucose in critically ill patients. We show that it is better able to reason with the data, which is sporadic and has measurement uncertainties.

### Managing COPD Exacerbations with Telemedicine

Managing chronic disease through automated systems has the potential to both benefit the patient and reduce health-care costs. We are developing and evaluating a monitoring system for patients with chronic obstructive pulmonary disease which aims to detect exacerbations and thus help patients manage their disease and prevent hospitalisation. We have carefully drafted a system design consisting of an intelligent device that is able to alert the patient; collect case-specific, subjective and objective, physiological data; offer a patient-specific interpretation of the collected data by means of probabilistic reasoning; and send data to a central server for inspection by health-care professionals. A first pilot with actual COPD patients suggests that an intervention based on this system could be successful.

Maarten van der Heijden, Bas Lijnse, Peter J. F. Lucas, Yvonne F. Heijdra, Tjard R. J. Schermer

### A Predictive Bayesian Network Model for Home Management of Preeclampsia

There is increasing consensus among health-care professionals and patients alike that many disorders can be managed, in principle, much better at home than in an out-patient clinic or hospital. In the paper, we describe a novel temporal Bayesian network model for the at-home, time-related development of preeclampsia, a common pregnancy-related disorder. The network model drives an Android-based smartphone application that offers patients and their doctor insight into whether or not the disorder is developing positively—no clinical intervention required—or negatively—clinical intervention is definitely required. We discuss design considerations of the model and system, and review results obtained with actual patients.

Marina Velikova, Peter J. F. Lucas, Marc Spaanderman

### Voting Techniques for a Multi-terminology Based Biomedical Information Retrieval

We are interested in retrieving relevant information from biomedical documents according to healthcare professionals' information needs.
It is well known that biomedical documents are indexed using conceptual descriptors issued from terminologies for better retrieval performance. Our attempt to develop a conceptual retrieval framework relies on the hypothesis that there are several broad categories of knowledge that could be captured from different terminologies and processed by retrieval algorithms. With this in mind, we propose a multi-terminology based indexing approach for selecting the best representative concepts for each document. We instantiate this general approach on four terminologies, namely MeSH (Medical Subject Headings), SNOMED (Systematized Nomenclature of Medicine), ICD-10 (International Classification of Diseases) and GO (Gene Ontology). Experimental studies were conducted on large and official document test collections of real-world clinical queries and associated judgments extracted from MEDLINE scientific collections, namely TREC Genomics 2004 & 2005. The obtained results demonstrate the advantages of our multi-terminology based biomedical information retrieval approach over state-of-the-art approaches.

Duy Dinh, Lynda Tamine

### Mapping Orphanet Terminology to UMLS

We present a method for creating mappings between the Orphanet terminology of rare diseases and the Unified Medical Language System (UMLS), in particular the SNOMED CT, MeSH, and MedDRA terminologies. Our method is based on: (i) aggressive normalisation of terms specific to the Orphanet terminology on top of standard UMLS normalisation; (ii) semantic ranking of partial candidate mappings in order to group similar mappings and attribute higher ranking to the more informative ones. Our results show that, by using the aggressive normalisation function, we increase the number of exact candidate mappings by 7.1-9.5% compared to a mapping method based on MetaMap. A manual assessment of our results shows a high precision of 94.6%. Our results imply that Orphanet diseases are under-represented in the aforementioned terminologies: SNOMED CT, MeSH, and MedDRA are found to contain only 35%, 42%, and 15% of the Orphanet rare diseases, respectively.

Maja Miličić Brandt, Ana Rath, Andrew Devereau, Ségolène Aymé

### The FMA in OWL 2

Representing the Foundational Model of Anatomy (FMA) in OWL 2 is essential for semantic interoperability. The paper describes the method and tool used to formalize the FMA in OWL 2. One main strength of the approach is to leverage OWL 2 expressiveness and the naming conventions of the native FMA to make explicit some implicit semantics, meanwhile improving its ontological model and fixing some errors. A second originality is the flexible tool developed. It makes it easy to generate a new version for each Protégé FMA update. While it provides one 'standard' FMA-OWL version by default, many options allow for producing other variants customized to users' applications. To the best of our knowledge, no complete representation of the entire FMA in OWL DL or OWL 2 existed so far.

C. Golbreich, J. Grosjean, S. J. Darmoni

### Improving Information Retrieval by Meta-modelling Medical Terminologies

This work aims at improving information retrieval in a health gateway by meta-modelling multiple terminologies related to medicine. The meta-model is based on meta-terms that gather several semantically related terms. Meta-terms, initially modelled for the MeSH thesaurus, are extended to other terminologies such as ICD-10 or SNOMED Int.
The usefulness of this model and the relevance of information retrieval are evaluated and compared in the case of one and multiple terminologies. The results show that exploiting multiple terminologies contributes to increased recall but lowers precision.

Lina F. Soualmia, Nicolas Griffon, Julien Grosjean, Stéfan J. Darmoni

### Improving the Mapping between MedDRA and SNOMED CT

MedDRA is exploited for the indexing of pharmacovigilance spontaneous reports. But since spontaneous reports cover only a small proportion of the existing adverse drug reactions, the exploration of clinical reports is seriously considered. Through the UMLS, the current mapping between MedDRA and SNOMED CT, the latter being used for indexing clinical data in many countries, is only 42%. In this work, we propose to improve this mapping through an automatic lexical-based approach. We obtained 308 direct mappings of a MedDRA term to a SNOMED CT concept. After segmenting MedDRA terms, we identified 535 full mappings associating a MedDRA term with one or more SNOMED CT concepts. The direct approach resulted in 199 (64.6%) correct mappings, while through segmentation this number rises to 423 (79.1%). On the whole, our method provided interesting and useful results.

Fleur Mougin, Marie Dupuch, Natalia Grabar

### COPE: Childhood Obesity Prevention [Knowledge] Enterprise

This paper presents our work-in-progress on designing and implementing an integrated ontology for widespread knowledge dissemination in the domain of obesity, with emphasis on childhood obesity. The COPE ontology aims to support a knowledge-based infrastructure to promote healthy eating habits and lifestyles, analyze children's behaviors and habits associated with obesity, and prevent or reduce the prevalence of childhood obesity and overweight. By formally integrating and harmonizing multiple knowledge sources across disciplinary boundaries, we will facilitate cross-sectional analysis of the domain of obesity and generate both generic and customized preventive recommendations, which take into consideration several factors, including existing conditions in individuals and communities.

Arash Shaban-Nejad, David L. Buckeridge, Laurette Dubé

### Repeated Prognosis in the Intensive Care: How Well Do Physicians and Temporal Models Perform?

Recently, we devised a method to develop prognostic models incorporating patterns of sequential organ failure to predict the eventual hospital mortality at each day of intensive care stay. In this study, we aimed to understand, in a real-world setting, how these models perform compared to physicians, who are exposed to more information than the models. We found a slightly better discriminative ability for physicians (AUC range over days: 0.73-0.83 vs. 0.70-0.80) and a slightly better accuracy for the models (Brier score range: 0.14-0.19 vs. 0.16-0.19). However, when we combined both sources of predictions we arrived at significantly superior discrimination as well as accuracy (AUC range: 0.81-0.88; Brier score range: 0.11-0.15). Our results show that the models and the physicians draw on complementary information that can be best harnessed by combining both prediction sources. Extensive external validation and impact studies are imperative to further investigate the ability of the combined model.
Lilian Minne, Evert de Jonge, Ameen Abu-Hanna

### Automating the Calibration of a Neonatal Condition Monitoring System

Condition monitoring of premature babies in intensive care can be carried out using a Factorial Switching Linear Dynamical System (FSLDS) [15]. A crucial part of training the FSLDS is the manual calibration stage, where an interval of normality must be identified for each baby that is monitored. In this paper we replace this manual step by using a classifier to predict whether an interval is normal or not. We show that the monitoring results obtained using automated calibration are almost as good as those using manual calibration.

Christopher K. I. Williams, Ioan Stanculescu

### Mining Temporal Constraint Networks by Seed Knowledge Extension

This paper proposes an algorithm for discovering temporal patterns, represented in the Simple Temporal Problem (STP) formalism, that frequently occur in a set of temporal sequences. To focus the search, some initial knowledge can be provided as a seed pattern by a domain expert: the mining process will find those frequent temporal patterns consistent with the seed. The algorithm has been tested on a database of temporal events obtained from polysomnography tests in patients with Sleep Apnea-Hypopnea Syndrome (SAHS).

M. R. Álvarez, P. Félix, P. Cariñena

### A Rule-Based Method for Specifying and Querying Temporal Abstractions

The Knowledge-Based Temporal Abstraction (KBTA) method is a well-established mechanism for representing and reasoning with temporal information. Implementations to date have been somewhat heavyweight, however, and custom tools are typically required to build abstraction knowledge and query the resulting abstractions. To address this shortcoming, we created a lightweight method that allows users to rapidly specify KBTA-based temporal knowledge and to immediately construct complex temporal queries with it. The approach is built on the Web Ontology Language (OWL) and its associated rule and query languages, SWRL and SQWRL. The method is reusable and can serve as the basis of a KBTA implementation in any OWL-based system.

Martin J. O'Connor, Genaro Hernandez, Amar Das

### Web-Based Querying and Temporal Visualization of Longitudinal Clinical Data

We report on work in progress on the development of SWEETInfo (Semantic Web-Enabled Exploration of Temporal Information), a tool for querying and visualizing time-oriented clinical data. SWEETInfo is based on an open-source Web-based infrastructure that allows clinical investigators to import data and to perform operations on their temporal dimensions. The architecture combines Semantic Web standards, such as OWL and SWRL, with advanced Web development software, such as the Google Web Toolkit. User interaction with SWEETInfo creates OWL-based specifications of (1) data operations, such as filtering, grouping, and visualization, and (2) data pipelines for data analyses. Both of these can be shared with and adapted by other users via the Web. Our system meets the functional and nonfunctional specifications derived from the use cases. We will demo how SWEETInfo provides non-technical users the ability to interactively define data pipelines for such complex temporal analyses.

Amanda Richards, Martin J. O'Connor, Susana Martins, Michael Uehara-Bingen, Samson W. Tu, Amar K. Das
### Careflow Planning: From Time-Annotated Clinical Guidelines to Temporal Hierarchical Task Networks

Decision-making, care planning and adaptation of treatment are important aspects of the work of clinicians that can clearly benefit from IT support. Clinical Practice Guideline (CPG) languages provide formalisms for specifying knowledge related to such tasks, such as decision criteria and time-oriented aspects of the patient treatment. In these CPG languages, little research has been directed at efficiently dealing with the integration of temporal and resource constraints for the purpose of generating patient-tailored treatment plans, i.e. care pathways. This paper presents an AI-based knowledge engineering methodology to develop, model, and operationalize care pathways, providing computer-aided support for the planning, visualization and execution of the patient treatment. This is achieved by translating time-annotated Asbru CPGs into temporal HTN planning domains. The proposed methodology is illustrated through a case study based on Hodgkin's disease.

Arturo González-Ferrer, Annette ten Teije, Juan Fdez-Olivares, Krystyna Milian

### An Archetype-Based Solution for the Interoperability of Computerised Guidelines and Electronic Health Records

Clinical guidelines contain recommendations based on the best empirical evidence available at the moment. There is a wide consensus about the benefits of guidelines and about the fact that they should be deployed through clinical information systems, making them available during consultation time. However, one of the main obstacles to this integration is still the interaction with the electronic health record. In this paper we present an archetype-based approach to solve the interoperability problems of guideline systems, as well as to enable guideline sharing. We also describe the knowledge requirements for the development of archetype-enabled guideline systems, and then focus on the development of appropriate guideline archetypes and on the connection of these archetypes to the target electronic health record.

Mar Marcos, Jose A. Maldonado, Begoña Martínez-Salvador, David Moner, Diego Boscá, Montserrat Robles

### Variation Prediction in Clinical Processes

For clinical processes, meaningful variations may be related to care performance or even patient survival. It is imperative that variations be predicted in a timely manner so that the patient care "journey" can be more adaptive and efficient. This study addresses the question of how to predict variations in clinical processes. Given the assumption that a clinical case with low appropriateness between its specific patient state and its applied medical intervention is more likely to be a variation than other cases, this paper proposes a method to construct an appropriateness measure model based on historical clinical cases so as to predict such variations in future cases of clinical processes. The proposed method is demonstrated on a real-life data set from the Chinese Liberation Army General Hospital. The experimental results confirm the given assumption and indicate the feasibility of the proposed method.

Zhengxing Huang, Xudong Lu, Chenxi Gan, Huilong Duan

### A Constraint Logic Programming Approach to Identifying Inconsistencies in Clinical Practice Guidelines for Patients with Comorbidity

This paper describes a novel methodological approach to identifying inconsistencies when concurrently using multiple clinical practice guidelines.
We discuss how to construct a formal guideline model using Constraint Logic Programming, chosen for its ability to handle relationships between patient information, diagnoses, and treatment suggestions. We present methods to identify inconsistencies that are manifested by treatment-treatment and treatment-disease interactions associated with comorbidity. Using an open-source constraint programming system (ECLiPSe), we demonstrate the ability of our approach to find treatments given incomplete patient data and to identify possible inconsistencies.

Martin Michalowski, Marisela Mainegra Hing, Szymon Wilk, Wojtek Michalowski, Ken Farion

### Towards the Formalization of Guidelines Care Actions Using Patterns and Semantic Web Technologies

Computer Interpretable Guidelines (CIGs) have largely contributed to the simplification and dissemination of clinical guidelines. However, the formalization of CIG contents, especially care actions, is still an open issue. In fact, this information, which is the heart of the guideline, is still expressed as free text and therefore prevents the development of intelligent tools for assisting physicians in defining treatments. In this paper, we introduce a framework for formalizing care actions using natural language processing techniques, Semantic Web technologies and medical standards.

Cédric Pruski, Rodrigo Bonacin, Marcos Da Silveira

### Exploiting OWL Reasoning Services to Execute Ontologically-Modeled Clinical Practice Guidelines

Ontology-based modeling of Clinical Practice Guidelines (CPGs) is a well-established approach to computerizing CPGs for execution in clinical decision support systems. Many CPG computerization approaches use the Web Ontology Language (OWL) to represent the CPG's knowledge, but they do not exploit its reasoning services to execute the CPG. In this paper, we present our CPG execution approach that leverages OWL reasoning services to execute CPGs. In this way, both CPG knowledge representation and execution semantics are maintained within the same formalism. We have developed three different OWL-based CPG execution engines using OWL-DL, OWL 2 and SWRL. We evaluate the efficacy of our execution engines by executing an existing OWL-based CPG. We also present a comparison of the execution capabilities of our three CPG execution engines.

Borna Jafarpour, Samina Raza Abidi, Syed Sibte Raza Abidi

### Guideline Recommendation Text Disambiguation, Representation and Testing

This paper describes a knowledge acquisition tool for translating a guideline recommendation into a computer-interpretable format. The novelty of the tool is that it is addressed to the domain experts, and it helps them to disambiguate the natural language by decomposing the recommendation into elements, eliciting tacit and implicit knowledge hidden in a recommendation and its context, mapping patient data, available from the electronic record, to standard terms, and immediately testing the formalised rule using past-case data.

Silvana Quaglini, Silvia Panzarasa, Anna Cavallini, Giuseppe Micieli

### A Token Centric Part-of-Speech Tagger for Biomedical Text

A difficulty with part-of-speech (POS) tagging of biomedical text is accessing and annotating appropriate training corpora. The latter may result in POS taggers trained on corpora that differ from the tagger's target biomedical text. In such cases, where training and target corpora differ, tagging accuracy decreases.
We present a POS tagger that is more accurate than two frequently used biomedical POS taggers (Brill and TnT) when trained on a non-biomedical corpus and evaluated on the MedPost corpus (our tagger: 81.0%, Brill: 77.5%, TnT: 78.2%). Our tagger is also significantly faster than the next best tagger (TnT). It estimates a tag's likelihood for a token by combining prior probabilities (using existing methods) and token probabilities calculated in part using a Naive Bayes classifier. Our results suggest that future work should reexamine POS tagging methods for biomedical text. This differs from the work to date, which has focused on retraining existing POS taggers.

Neil Barrett, Jens Weber-Jahnke

### Extracting Information from Summary of Product Characteristics for Improving Drugs Prescription Safety

Information about medications is critical in supporting decision-making during the prescription process and thus in improving the safety and quality of care. The Summary of Product Characteristics (SPC) represents the basis of information for health professionals on how to use medicines. However, this information is locked in free text and, as such, cannot be actively accessed and elaborated by computerized applications. In this work, we propose a machine learning based system for the automatic recognition of drug-related entities (active ingredient, interaction effects, etc.) in SPCs, focusing on drug interactions. Our approach learns to classify this information in a structured prediction framework, relying on conditional random fields. The classifier is trained and evaluated using a corpus of a hundred SPCs. They have been hand-annotated with thirteen semantic labels that have been derived from a previously developed domain ontology. Our evaluations show that the model exhibits high overall performance, with an average F1

Stefania Rubrichi, Silvana Quaglini, Alex Spengler, Patrick Gallinari

### Automatic Verbalisation of SNOMED Classes Using OntoVerbal

SNOMED is a large description logic based terminology for recording in electronic health records. Often, neither the labels nor the description logic definitions are easy for users to understand. Furthermore, information is increasingly being recorded not just using individual SNOMED concepts but also using complex expressions in the description logic ("post-coordinated" concepts). Such post-coordinated expressions are likely to be even more complex than other definitions, and therefore can have no pre-assigned labels. Automatic verbalisation will be useful both for understanding and quality assurance of SNOMED definitions, and for helping users to understand post-coordinated expressions. OntoVerbal is a system that presents a compositional terminology expressed in OWL as natural language. We describe the application of OntoVerbal to SNOMED-CT, whereby SNOMED classes are presented as textual paragraphs through the use of natural language generation technology.

Shao Fen Liang, Robert Stevens, Donia Scott, Alan Rector

### Evaluating Outliers for Cross-Context Link Discovery

In literature-based creative knowledge discovery, the goal is to identify interesting terms or concepts which relate different domains. We propose to support this cross-context link discovery process by inspecting outlier documents which are not in the mainstream domain literature.
We have explored the utility of outlier documents, discovered by combining three classification-based outlier detection methods, in terms of their potential for bridging-concept discovery in the migraine-magnesium cross-domain discovery problem and in the autism-calcineurin domain pair. Experimental results show that outlier documents represent a small fraction of a domain-pair dataset that is rich in concept-bridging terms. Therefore, by exploring only a small subset of documents, where the great majority of bridging terms are present and more frequent, the effort needed for finding cross-domain links can be substantially reduced.

Borut Sluban, Matjaž Juršič, Bojan Cestnik, Nada Lavrač

### Diagnosis Code Assignment Support Using Random Indexing of Patient Records – A Qualitative Feasibility Study

The prediction of diagnosis codes is typically based on free-text entries in clinical documents. Previous attempts to tackle this problem range from strictly rule-based systems to various classification algorithms, resulting in varying degrees of success. A novel approach is to build a word space model based on a corpus of coded patient records, associating co-occurrences of words and ICD-10 codes. Random Indexing is a computationally efficient implementation of the word space model and may prove an effective means of providing support for the assignment of diagnosis codes. The method is here qualitatively evaluated for its feasibility by a physician on clinical records from two Swedish clinics. The assigned codes were in this initial experiment found among the top 10 generated suggestions in 20% of the cases, but a partial match in 77% of cases demonstrates the potential of the method.

Aron Henriksson, Martin Hassel, Maria Kvist

### Backmatter
https://chemistry.stackexchange.com/questions/16579/balancing-the-redox-reaction/16588
# Balancing the redox reaction

Question: Balance the following redox reaction:
$$\ce{FeS2 + O2 -> Fe2O3 + SO2}$$

My efforts:

1. I tried balancing with the oxidation number method. First of all, I determined the oxidation states as follows:
$$\ce{Fe^{(2)}S2^{(-1)} + O2^{(0)} -> Fe2^{(3)}O3^{(-2)} + S^{(4)}O2^{(-2)}}$$
Here, all the elements are either oxidized or reduced, so how do I move ahead?

2. I tried balancing with the half-reaction method.
$$\ce{Fe^{(2)} -> Fe2^{(3)}}\quad\text{Oxidation}$$
$$\ce{S2^{(-1)} -> 2S^{(4)}}\quad\text{Oxidation}$$
$$\ce{O2^{(0)} -> 2O^{(-2)}}\quad\text{Reduction}$$
Same here: how do I move ahead?

P.S. Oxidation states are mentioned in the brackets. Correct me if I am wrong anywhere.

• A couple things... Are you sure it's FeS2 and not FeS? Also, in the oxidation state method, the charges don't balance in FeS2. – jerepierre Sep 22 '14 at 12:34
• @jerepierre Yes, I am sure it is FeS2. If you can show it by the half-reaction method then that will also work, because I have a test tomorrow. – Freddy Sep 22 '14 at 13:40

$\ce{FeS2}$ has an odd structure; the iron atom has a +2 oxidation number and each of the sulfurs has a -1 oxidation number. This can be balanced by inspection:
$$\ce{4FeS2 + 11O2 -> 2Fe2O3 + 8SO2}$$
Just to check, using oxidation numbers we get: sulfur goes from -1 to +4 (total change from sulfur = +40); iron goes from +2 to +3 (total change from iron = +4); oxygen goes from 0 to -2 (total change from oxygen = -44). (I'm not sure where you got 4 for the oxidation number of iron in FeS2.)

ETA: Half reactions are messy here, since both iron and sulfur are oxidized. Similarly, using oxidation numbers is problematic (except to check the solution) because there are three substances changing oxidation states. You could write a system of equations to describe it, but that's a lot more trouble than it's worth. You know from the structure of iron(II) disulfide that there are twice as many sulfur atoms as iron atoms. That means that the number of sulfur dioxide molecules must be four times the number of iron(III) oxide molecules. With that relationship in mind, the smallest ratio of molecules that fits the pattern is the one I wrote above.

If you really want to use oxidation numbers, here's what I've come up with. Let a equal the number of iron atoms, b equal the number of sulfur atoms, and c equal the number of oxygen atoms. Iron increases its oxidation state by 1, sulfur by 5, and oxygen decreases by 2. So:
$$1a + 5b - 2c = 0$$
We also know that iron and sulfur are in a 1:2 ratio because they come from pyrite:
$$2a = b$$
Substitution yields:
$$1a + 5(2a) - 2c = 0 \quad\Rightarrow\quad 11a - 2c = 0$$
Iron and oxygen are in a 2:11 ratio. The number of oxygen atoms must be even since it comes as O2, so its smallest possible value is 22 atoms. Using the ratios listed above, we get 22 O : 4 Fe : 8 S.

ETA #2: Half reactions
$$\ce{Fe^{+2} -> Fe^{+3} + e-}$$
$$\ce{S^{-1} -> S^{+4} + 5e-}$$
$$\ce{O2^{0} + 4e- -> 2O^{-2}}$$
We know from FeS2 that there must be twice as many sulfur atoms as iron, so the second equation has to be multiplied by two:
$$\ce{2S^{-1} -> 2S^{+4} + 10e-}$$
If we add together all of the species that are being oxidized (Fe and S), then we get an oxidation half reaction of:
$$\ce{Fe^{+2} + 2S^{-1} -> Fe^{+3} + 2S^{+4} + 11e-}$$
This must be multiplied by 4 and the reduction equation by 11 to balance the number of electrons:
$$\ce{4Fe^{+2} + 8S^{-1} -> 4Fe^{+3} + 8S^{+4} + 44e-}$$
$$\ce{11O2^{0} + 44e- -> 22O^{-2}}$$
$$\ce{4Fe^{+2} + 8S^{-1} + 11O2^{0} -> 4Fe^{+3} + 8S^{+4} + 22O^{-2}}$$
Since we know the structure of the molecules, putting it all back together is fairly straightforward, and it gives the same result as listed above.
• I would be thankful if you could show me by using the oxidation number method or by the half-reaction method. Thank you for correcting my mistake. – Freddy Sep 22 '14 at 14:06
• Welcome to Chemistry.SE! Please have a look at this and the documentation for mhchem. – Klaus-Dieter Warzecha Sep 22 '14 at 14:07
• Think of $\ce{FeS2}$ as the $\ce{Fe(II)}$ salt of hydrogen disulfide. In nature, you'll find it as pyrite. Looks nice, unless you're in for the real gold. – Klaus-Dieter Warzecha Sep 22 '14 at 14:39

## Balancing with the half-reaction method

Step 1: Determine the oxidation number of each element.
$$\ce{Fe^{(2)}S2^{(-1)} + O2^{(0)} -> Fe2^{(3)}O3^{(-2)} + S^{(4)}O2^{(-2)}}$$

Step 2: Determine the total increase and decrease in oxidation number. Also maintain the $\ce{Fe}$ to $\ce{S}$ ratio (1:2) on both sides.
$$\ce{Fe^{(2)} -> Fe2^{(3)}}\quad\text{Oxidation (A)}$$
$$\ce{S2^{(-1)} -> 2S^{(4)}}\quad\text{Oxidation (A)}$$
$$\ce{O2^{(0)} -> 2O^{(-2)}}\quad\text{Reduction (B)}$$

Step 3: Balance the oxidation numbers of reactions (A) and (B). In total, 11 electrons are transferred in (A) and 4 electrons in (B), so multiply (A) by 4 and (B) by 11:
$$\ce{4[Fe, S2] -> 4[Fe, 2S]}$$
$$\ce{11[O2] -> 11[2O]}$$

Step 4: Finally, combine the two reactions above:
$$\ce{4FeS2 + 11O2 -> 2Fe2O3 + 8SO2}$$
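As a cross-check of the system-of-equations route described in the first answer, here is a minimal programmatic sketch (not part of either answer; assumes Python 3.9+ for math.lcm) that solves the per-element balance for a FeS2 + b O2 -> c Fe2O3 + d SO2 and scales the result to the smallest whole-number coefficients:

    # Element balance for  a FeS2 + b O2 -> c Fe2O3 + d SO2,  with a = 1 fixed:
    #   Fe: a  = 2c
    #   S : 2a = d
    #   O : 2b = 3c + 2d
    from fractions import Fraction
    from math import lcm

    a = Fraction(1)
    c = a / 2                # from the Fe balance
    d = 2 * a                # from the S balance
    b = (3 * c + 2 * d) / 2  # from the O balance

    # Clear denominators to obtain the smallest integer coefficients.
    m = lcm(a.denominator, b.denominator, c.denominator, d.denominator)
    print([int(x * m) for x in (a, b, c, d)])  # -> [4, 11, 2, 8]

Running it reproduces the coefficients found above: 4 FeS2 + 11 O2 -> 2 Fe2O3 + 8 SO2.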
https://math.stackexchange.com/questions/183513/what-is-aleph-0-powered-to-aleph-0/183515
# What is $\aleph_0$ raised to the power $\aleph_0$?

By definition $\aleph_1 = 2^{\aleph_0}$. And since $2 < \aleph_0$, then $2^{\aleph_0} = \aleph_1 \le \aleph_0^{\aleph_0}$. However, I do not know what exactly $\aleph_0^{\aleph_0}$ is or how I could compute it.

No. By definition $\aleph_1$ is the least uncountable $\aleph$ number. $2^{\aleph_0}$ can be quite a large $\aleph$, or it could be $\aleph_1$. For example, many forcing axioms (e.g. the proper forcing axiom) prove that $2^{\aleph_0}=\aleph_2$. The assertion $2^{\aleph_0}=\aleph_1$ is known as the Continuum Hypothesis and was proven unprovable from the usual axioms of set theory. We can therefore add axioms which decide the continuum hypothesis, e.g. itself or the aforementioned forcing axiom.

As for the computation itself,
$$2^{\aleph_0}\leq\aleph_0^{\aleph_0}\leq (2^{\aleph_0})^{\aleph_0}= 2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0},$$
so by the Cantor–Schröder–Bernstein theorem, $\aleph_0^{\aleph_0}=2^{\aleph_0}$.
2019-09-16 20:38:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918200373649597, "perplexity": 118.42430918969512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572934.73/warc/CC-MAIN-20190916200355-20190916222355-00499.warc.gz"}
https://www.physicsforums.com/threads/minimal-compactification-of-an-infinite-dimensional-space.463382/
# Minimal compactification of an infinite dimensional space

## Main Question or Discussion Point

Wikipedia seems fairly consistent in stating that infinite-dimensional topological vector spaces such as Hilbert space aren't locally compact, which means that they can't have a one-point compactification. As metric spaces they're Tychonoff spaces, and thus can be compactified with the Stone–Čech compactification, but this is the "maximal" construction. Does anyone know of a minimal compactification of such manifolds, in the sense that it obtains the smallest possible compact extension of such a space?

## Answers and Replies

Here's a procedure I just came up with, which seems to suggest that I could compactify the space with at most one extra point. In what follows any "facts" I quote will probably derive from Wikipedia articles; I'd appreciate anything contentious being drawn to my attention.

Let our infinite-dimensional vector space, assumed metrizable, be called X. As a metric space it's compactly generated, and hence we can infer the existence of a locally compact Hausdorff space Y such that X is the quotient space of Y under some map. As Y is a locally compact Hausdorff space, it admits a one-point compactification. Then apply the original quotient relation to obtain X', our original point set with at most one extra point if the point at infinity should prove inequivalent to members of X. As the quotient space of a compact space, X' is compact.

Is there a flaw in this procedure? As it seems that only locally compact Hausdorff spaces admit one-point compactifications, if this does result in a compact space it seems that it must do some great violence to the original topology; would there be a way of showing whether or not properties such as being Hausdorff were preserved by this procedure?

mathwonk (Homework Helper): It's sort of obvious that Hilbert space is not locally compact, since one can presumably find an infinite orthogonal sequence of unit vectors.

I don't know the answer to this interesting question, but seem to recall (from a class 45+ years ago) a relevant fact. An inclusion from a completely regular T1 space X into a compact such space Y induces, by restriction, an injection from the algebra of continuous functions on Y to a uniformly closed subalgebra of bounded continuous point-separating functions on X containing the constants. Conversely, any such subalgebra of BC(X) recovers the compactification Y. The largest compactification Y is the one associated to the full algebra BC(X), and a smallest compactification would come from a smallest such subalgebra, if one exists.

When X is locally compact, one can consider the subalgebra of continuous functions on X having "limits at infinity", i.e. such that there exists L such that for every e>0, |f-L| < e everywhere off some compact set. Then the closure of the embedding of X in the Tychonoff cube defined by these functions gives the one-point compactification.

Just the mumblings of an old man with a kid's memory from math 212.

mathwonk (Homework Helper): you might want to ask this at Stack Exchange, mathematics section, where lots of mathematicians answer these questions.

Thanks for your replies, mathwonk; as you might have noticed from my posts in other subforums here, I'm a theoretical physicist without much of a brain for pure maths. I'm glad you found it interesting anyway!

If you figure out an answer, let us know. It's an incredibly interesting problem!!
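To spell out mathwonk's first remark (my own addition, not from the thread): if $$(e_n)$$ is an orthonormal sequence in a Hilbert space, then for $$n \neq m$$ $$\|e_n - e_m\|^2 = \|e_n\|^2 - 2\operatorname{Re}\langle e_n, e_m\rangle + \|e_m\|^2 = 2,$$ so the closed unit ball contains infinitely many points at mutual distance $$\sqrt{2}$$ and that set has no convergent subsequence. Hence no closed ball is compact, and by translation and scaling no point has a compact neighbourhood.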
The closest answer I can give is to consider the projective space associated with the infinite-dimensional vector space. I have a feeling that this is a rather small compactification. But I didn't check the details yet, so I don't even know if it's a compactification at all...

I was thinking about my attempt yesterday whilst bored in a seminar. If the "facts" I quoted from Wikipedia are indeed facts, then it looks as if my argument constructs a one-point compactification of the original set X; define the equivalence relation on the one-point compactification of the locally compact set Y by the union of the equivalence relation on Y that leads to X with $$\{(\infty,\infty)\}$$; then the equivalence classes form X along with a single addition. I got stuck when it came to thinking about the topology on whatever set it is that has elements of an infinite-dimensional Hilbert space as equivalence classes, but (particularly as a result of mathwonk's post) I'm inclined to say that the end result can't be Hausdorff, whatever it is.
2020-05-30 12:04:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8089614510536194, "perplexity": 381.94185924337773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347409171.27/warc/CC-MAIN-20200530102741-20200530132741-00462.warc.gz"}
https://electronics.stackexchange.com/questions/246704/why-do-we-have-2-representations-for-load-shown-below-in-a-power-amplifier
# Why do we have 2 representations for load (shown below) in a power amplifier?

(Schematic 1: $R_L$ in series with the collector) OR (Schematic 2: $R_L$ AC-coupled from the collector to ground)

I get confused when I see two different ways to connect the load resistor in a power amplifier. Why do we have two such representations ($R_L$ in series with the collector vs. $R_L$ connected across the collector and ground)? The closest answer I find is that, for an AC signal, both representations are equivalent. If so, can we connect (any) load in either of these ways? Is there any other consideration I'm missing? Any help would be appreciated.

As you can see in the little graph included in your first schematic, there is a current offset ("bias level") for the load current. I.e.:

• current is always positive, and
• even if there is no input signal there will be some quiescent current.

If your load is e.g. a speaker, you don't want any current going through the speaker when there is silence. That can be accomplished by AC-coupling the speaker as shown in the second schematic. That way:

• current is centered at 0 A (it may become positive and negative)
• if there is no input signal there will be no current through the load (no quiescent current through the load)

Adding to the good answer of Curd: notice that from a signal (AC) point of view, $R_C$ and $R_L$ are in parallel, since the supply acts like a short for AC. This means that, if $R_L$ is your "real load", it must have a much lower impedance than $R_C$, which is used only to set the quiescent point of the BJT.

Moreover, in some cases you have a load that needs a DC component: take as a simple example an LED used for lighting, assuming the BJT is used as an amplifier and not as a switch, for instance using it as a current regulator (a current sink in this case; inefficient but simple -- it's essentially your first schematic). Note however that the load is not ground-referenced, unless you use a PNP BJT as a current source on the high side of the load (i.e. connected "above"). Some loads may need one of their terminals connected to ground, so this is a distinct disadvantage in this case.

To sum up, there are different design decisions to be made when choosing where to place the load in an amplifier. The fact is that in most textbooks they stick to the very basic case of an AC-coupled, class A, small-signal amplifier for low frequency (the second schematic you posted). That's just to keep things simple for the learners.
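To put a number on the "$R_C$ parallel $R_L$" point above (the component values here are made up for illustration):

# AC collector load when an 8-ohm speaker is AC-coupled as in the second schematic;
# R_C only sets the bias point, while the low-impedance speaker dominates the AC load.
Rc, Rl = 4.7e3, 8.0                 # ohms (illustrative values)
r_ac = Rc * Rl / (Rc + Rl)          # parallel combination seen by the AC signal
print(f"AC load = {r_ac:.2f} ohm")  # ~7.99 ohm, i.e. essentially R_L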
2021-06-15 11:00:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7657385468482971, "perplexity": 699.1277425435621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00281.warc.gz"}
https://socratic.org/questions/what-instrument-is-used-to-measure-air-pressure
# What instrument is used to measure air pressure?

Feb 27, 2017

A barometer

#### Explanation:

A barometer using mercury is common. The atmospheric pressure presses down on a reservoir of mercury and the mercury is forced up the tube a certain distance depending on the pressure. This is often recorded as $\text{mm Hg}$ or (rarely) $\text{inches Hg}$.

Our lab uses meters with a digital display to measure the pressure, and I suspect that they use a method similar to an aneroid barometer, which uses wafers that expand and contract with pressure changes to move an instrument.

More can be read here: http://www.windows2universe.org/earth/Atmosphere/measuring_press.html
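A rough sanity check on how far the mercury rises (my own sketch with standard textbook constants, not part of the original answer): $h = P/(\rho g)$.

# height of a mercury column supporting one standard atmosphere
P = 101_325    # Pa
rho = 13_595   # kg/m^3, mercury near 0 deg C
g = 9.80665    # m/s^2
h = P / (rho * g)
print(f"{h * 1000:.0f} mm Hg")  # ~760 mm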
2019-10-16 22:51:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45982733368873596, "perplexity": 1273.5362103430314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986670928.29/warc/CC-MAIN-20191016213112-20191017000612-00508.warc.gz"}
https://stats.stackexchange.com/questions/21162/generate-random-correlated-variable-from-known-x
# Generate random correlated variable from known $X$

I want to use Excel to generate a random correlated $Y$ from a known $X$. From another thread, I found the equation $Y = r\cdot X + E$, where $X$ is standardized and $E$ is a random variable from a normal distribution having mean $0$ and $\sigma = \sqrt{1-r^2}$. I assume $r$ is the correlation coefficient found using Excel's CORREL function. I also assume I can calculate $E$ by using Excel's NORMDIST function. Are my assumptions correct? If I have a known $X$, how do I "standardize" $X$? Thanks for any help.

• $r$ is the correlation that you want $X$ and $Y$ to have, not something computed via Excel. A standardized $X$, call it $\hat{X}$, is related to $X$ via $$\hat{X} = \frac{X - \mu}{\sigma}$$ where $\mu$ is the mean value of $X$, viz. the average of the $N$ cells if $X$ is stored in an array of $N$ cells, and $\sigma$ is the standard deviation of the $N$ values of $X$. $\hat{X}$ has mean $0$ and standard deviation $1$. Your equation thus is $$Y = r \hat{X} + E,$$ and $Y$ is also a standardized random variable with mean $0$ and standard deviation $1$. $aY+b$ also has correlation $r$ with $X$. – Dilip Sarwate Jan 16 '12 at 16:24
• So is this the equation? (Y-meanY)/sdY = r * (X-meanX)/sdX + E where E is a random variable from a normal distribution with mean 0 and sd sqrt(1-r^2)? Still confused as to what r is in my example. – Charles Isaak Jan 16 '12 at 19:54
• Yes, your equation is correct. As to $r$, you need to look at the specifications given to you when you were told "Create a random variable $Y$ that is correlated with $X$". The statement should have included a specification of $r$, e.g. "... that has correlation $r = 0.8$ with $X$". If your client/professor/boss/colleague did not say what value of $r$ is desired, ask! $r$ should be between $-1$ and $+1$. All else failing, set $r=\sqrt{1-r^2}=1/\sqrt{2} \approx 0.7071$ because I said to do so. Hey, if you can't trust something you read on the Internet, what's the world coming to? – Dilip Sarwate Jan 16 '12 at 21:27
• Thanks. I think I am close now. Here is the equation I am using: Y = (((r*((actualX-meanX)/stdX))+RN)*stdY)+meanY, where RN = a random normal variable with mean 0 and std of sqrt(1-r^2). However, I am still confused about r. This is not an assignment so no one is giving me a target correlation. My goal remains to generate the most accurate possible random Y from a known X using what I have found from regression analysis. When using the above formula, the generated Ys are highly affected by r so it seems to be important to use a proper r. – Charles Isaak Jan 17 '12 at 15:28
• From which thread did you find that formula? I would like to have a look. Thanks. – qed Sep 1 '13 at 9:14

If $X \sim N(0, 1)$ and $Y = rX + \epsilon$, where $\epsilon \sim N(0, 1 - r^2)$, then $Cor(X, Y) = r$. By definition, \begin{align*} Cor(X, Y) &= \frac{E((X - E(X))(Y - E(Y)))}{\sqrt{Var(X)Var(Y)}} \\ &= \frac{E(XY)}{\sqrt{Var(Y)}} \\ &= \frac{E(rX^2 + \epsilon X)}{\sqrt{Var(rX + \epsilon)}} \end{align*} (using $E(X) = E(Y) = 0$ and $Var(X) = 1$). Assuming $X$ and $\epsilon$ are independent, we have \begin{align*} Cor(X, Y) &= \frac{rE(X^2) + E(\epsilon)E(X)}{\sqrt{Var(rX) + Var(\epsilon)}} \\ &= \frac{rE(X^2)}{\sqrt{r^2 + 1 - r^2}} \\ &= rE(X^2) \end{align*} Since $X^2 \sim \chi^2(1)$, whose mean is $1$, we get $E(X^2) = 1$ and hence $Cor(X, Y) = r$.
This can also be verified by a simple simulation in R:

require(foreach)
x   = matrix(rnorm(1000*1000), 1000)                      # 1000 samples of X, one per column
err = matrix(rnorm(1000*1000, 0, sqrt(1 - .1^2)), 1000)   # E with sd sqrt(1 - r^2), here r = 0.1
myd = (.1*x + err)                                        # Y = r*X + E
allr = foreach(i=1:1000, .combine='c') %do% cor(x[, i], myd[, i])
png('a.png')
hist(allr)                                                # the correlations cluster around r = 0.1
dev.off()
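An equivalent sketch in Python/numpy, including the destandardization discussed in the comment thread (all the concrete numbers below are placeholders):

import numpy as np

rng = np.random.default_rng(0)
r = 0.7                                   # target correlation
x = rng.normal(10.0, 2.0, size=100_000)   # the "known X"; mean/sd are arbitrary

xhat = (x - x.mean()) / x.std()           # standardize X
e = rng.normal(0.0, np.sqrt(1 - r**2), size=x.size)
y = (r * xhat + e) * 5.0 + 50.0           # destandardize: aY + b keeps the correlation
print(np.corrcoef(x, y)[0, 1])            # ~0.7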
2020-07-04 18:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977677464485168, "perplexity": 453.1735860393614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886516.43/warc/CC-MAIN-20200704170556-20200704200556-00481.warc.gz"}
https://cs.stackexchange.com/questions/106324/proof-for-optimal-interval-scheduling-using-a-greedy-approach
# Proof for optimal interval scheduling using a greedy approach

You are given a set of $n$ jobs, where each job $j$ is associated with a size $s_j$ (how much time it takes to process the job) and a weight $w_j$ (how important the job is). Suppose you have only one machine that can process one unit of a job per time slot. Assume all jobs are given at time $t = 0$ and are to be processed one by one using this machine. Let $C_j$ be the time at which job $j$ is completed. The goal is to find a schedule (of all the jobs) that minimizes the weighted completion time, i.e. $\sum_{j=1}^{n} w_j C_j$.

• Approach 1: Process jobs in descending order of weight
• Approach 2: Process jobs in ascending order of their size
• Approach 3: Process jobs in descending order of their density ($w/s$)

So basically, I need to find out which approach is optimal and why the other two wouldn't work. My understanding is as follows:

• Approach 1 wouldn't be optimal if the jobs with higher weights ($w$) have a greater size ($s$).
• Approach 3 wouldn't work if the weight were equal to the size for all the jobs. If $w=s$ for all the jobs, you wouldn't be able to determine what to choose first.
• Hence, my answer is that Approach 2 would be the optimal choice out of the 3, as it focuses on minimizing $w \cdot C$.

Is this answer correct? Is there a better way to prove why Approach 2 is the optimal choice in this question?

• Please don't delete your question once it has been answered. Answers are for everyone, even someone who has a similar question in the future. Apr 11 '19 at 14:19

Let's consider two jobs in the sequence you obtained:

• $$A$$, of weight $$w_A$$, begins at $$t_0$$ and finishes at $$t_0 + s_A$$
• $$B$$, coming just after $$A$$, of weight $$w_B$$, begins at $$t_0 + s_A$$ and finishes at $$t_0 + s_A + s_B$$

If we compute only $$K_{A, B}$$, the contribution of $$A$$ and $$B$$ to $$K = \sum_j w_j C_j$$:

$$K_{A, B} = w_A (t_0 + s_A) + w_B (t_0 + s_A + s_B)$$

If $$A$$ and $$B$$ are swapped in the sequence, we have $$K'_{A, B}$$:

$$K'_{A, B} = w_A (t_0 + s_A + s_B) + w_B (t_0 + s_B)$$

The difference is:

$$\Delta K_{A, B} = K'_{A, B} - K_{A, B}$$ $$= w_A s_B - w_B s_A$$ $$= (w_A/s_A - w_B/s_B) \times (s_A s_B)$$

The swap should be done if and only if $$\Delta K_{A, B}$$ is negative, in order to minimize $$K$$. Only Approach 3 gives you a sequence in which no further swap is worthwhile. If two jobs have the same $$w/s$$ ratio, just take them in either order; the final $$K$$ remains unchanged.

• I'm a little confused about your conclusion. I don't seem to understand what you mean. "Only Approach 3 gives you a sequence in which no further swap is worthwhile". Apr 2 '19 at 2:24
• Approach 3 is decreasing $w/s$, thus for any pair of subsequent tasks A and B, $w_A/s_A - w_B/s_B > 0$ => $\Delta K_{A, B} > 0$. Swapping A and B would necessarily increase $K$. Apr 2 '19 at 7:14
• Could you perhaps give me a counterexample where Approach 2 wouldn't work Apr 2 '19 at 12:12
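Regarding the last comment (a sketch of mine, not from the thread): the two-job instance $s = (1, 2)$, $w = (1, 100)$ already separates the approaches. Approach 2 schedules the small job first, giving $1 \cdot 1 + 100 \cdot 3 = 301$, while Approach 3 schedules the denser job first, giving $100 \cdot 2 + 1 \cdot 3 = 203$, which brute force confirms is optimal:

from itertools import permutations

def weighted_completion(order, s, w):
    t = total = 0
    for j in order:
        t += s[j]           # completion time of job j
        total += w[j] * t
    return total

s, w = (1, 2), (1, 100)
by_size    = sorted(range(2), key=lambda j: s[j])          # Approach 2
by_density = sorted(range(2), key=lambda j: -w[j] / s[j])  # Approach 3
best = min(permutations(range(2)), key=lambda o: weighted_completion(o, s, w))
print(weighted_completion(by_size, s, w))     # 301
print(weighted_completion(by_density, s, w))  # 203
print(weighted_completion(best, s, w))        # 203, Approach 3 matches the optimum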
2021-09-23 15:37:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6608468294143677, "perplexity": 434.63850728921227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00423.warc.gz"}
https://www.neetprep.com/question/54212-vertical-Utube-uniform-inner-cross-section-contains-mercury-sidesof-its-arms-glycerin-density---gcm-column-length--cm-isintroduced-one-its-arms-Oil-density--gmcm-poured-theother-arm-until-upper-surfaces-oil-glycerin-samehorizontal-level-Find-length-oil-column-Density-mercury--gcma--cm-b--cmc--cm-d--cm?courseId=8
A vertical U-tube of uniform inner cross-section contains mercury in both of its arms. A glycerin (density = 1.3 g/${\mathrm{cm}}^{3}$) column of length 10 cm is introduced into one of its arms. Oil of density 0.8 g/${\mathrm{cm}}^{3}$ is poured into the other arm until the upper surfaces of the oil and glycerin are at the same horizontal level. Find the length of the oil column. (Density of mercury = 13.6 g/${\mathrm{cm}}^{3}$.)

(a) 10.4 cm               (b) 8.2 cm
(c) 7.2 cm                  (d) 9.6 cm
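A quick pressure-balance check (my own working, not part of the original item): let the oil column length be $x$ cm. Equating pressures at the level of the lower mercury surface, on the glycerin side, gives $1.3 \times 10 = 0.8\,x + 13.6\,(10 - x)$, so $x = 123/12.8 \approx 9.6$ cm, i.e. option (d). The same check in a couple of lines of sympy:

from sympy import Eq, Rational, solve, symbols

x = symbols("x", positive=True)  # oil column length in cm
# pressure balance at the lower mercury surface; g and the unit factors cancel
eq = Eq(Rational(13, 10) * 10, Rational(8, 10) * x + Rational(136, 10) * (10 - x))
print(solve(eq, x))  # [615/64], about 9.61 cm -> option (d)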
2019-03-23 14:47:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.756771445274353, "perplexity": 6442.323631590642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202872.8/warc/CC-MAIN-20190323141433-20190323163433-00379.warc.gz"}
https://domymatlab.com/logical-array-matlab-assignment/
Logical Array Matlab Assignment Manual

Introduction

Many numerical processes, especially computer science research applications, are performed in linear time. There are many things associated with this new type of programmable design. For thousands of code examples, there is a tremendous demand for knowledge in this field for the training of students (often, all of us) in this specific area of numerical analysis. The number of instances required for the training of several candidates is relatively large and typically of considerable order in the course of a programming course. How do we train such programmable designers? First, we need to build them, which involves physical (usually electric or mechanical) processing. This is now well understood. With numerical simulation, carry-delaying is often the common method, and we can also use an object model to introduce an object during the execution of a simulation.

Once all these components have been assembled, we can proceed to the next step. First we must recall the general presentation of concepts presented in the previous section. The technical definition of a "point-source" is introduced and some initial methods of building components are also taken into account. Also, the object model is discussed and a framework is developed which defines a second "object model". These "point-source" methods are the "point-source model". In the case of numerical simulation, the point-source was designed with the aid of a numerical model. For a general finite element model from different numerical simulations we may mention the following.

A FEM (Field Evaluation Method) model (like that used for the simulation of a hyper-infinite element) consists of a set of points placed from a first source (dashed line which can be seen as a horizontal scale bar), a second (or grid) meshgrid (dot) and a mesh of surface layers. Any of the points can be placed between two vertical grids (horizontal level), and it is shown that this does not introduce physical problems. By "form" we also mean there can be any number of grid points which span the line boundary of the mesh grid, such that on each edge of the grid a piece of box is inserted to cut away existing layers of the box. Some of the types of points involved are finite elements, continuous lines whose boundaries will provide features such as nodes or edges which will help us to build various types of objects, and we are able to take it more or less easily. We talk about finite element based "point-source". A discrete line element is a periodic solid of the type shown in Fig. 5.4, 8. The continuous line defines its boundaries; the continuous line is extended between points in the system because the starting square comes to some points from which another "line" will be joined, then "distance" is added (refer to Fig. 5.4's starting point with the continuous line pointing into the right direction). See (8) as the first ingredient to complete a composite object representation. Many points are needed to provide a finite element model, generally involving a shape whose geometry is quite complex. In principle this can be done very simply as shown in our previous remark:

1. The image of the point is represented as the sequence of smooth vector space coordinates given by the square function.
Logical Array Matlab Assignment A: Linear Matrix Sequence Function {#sec_param_sequence_function}
===================================================================

We present the *linear multispectral assignment algorithm* [@schneider1996preprocessing; @schoenecker1996modeling; @schneider2017scascascascascorner] for computing the *full-rank* [generalized linear]{} multiscreading [matrix]{} sequence function (matrix) from the original multiscreading process $S$. The proposed algorithm first finds the multiscaling matrix $k$, and its rank in the rank space, by solving the Jacobian matrix $j$ with the Kullback–Leibler (K-L) divergence. Then, we take the rank of $k$ and perform $\ell_{0}$ iterative gradient minimization. Thus, we obtain matrices of rank $r=m$, where $m$ is the rank of the matrix.

Numerical Experiments {#sec_num_experiments}
==============================================

In this section, we perform numerical experiments on the multiscaling linear multispectral matrix sequence function (generalized linear) ($k(\cdot )=0$) from full-rank multiscreading processes $S$ and $T$, provided standard numerical experiments are run on the selected subset of experiments. For this initial set, the original multiscreading process $S$, but now the multiscreading process $T$, is computed via the Matlab program code. It is implemented on a computationally fast Intel Duo III Processor @3000; the code is implemented in Matlab [^13]. Next, after performing the order-by evaluation, we run the Matlab program code [^14] for constructing the matrix $k$ in a linear cluster (including $T$) and, for finding $X$, we perform the $\ell_0$ iterative gradient minimization. It returns the solution of the $O(N \log N)$ algorithm [@schoenecker1996modeling; @schneider2017scascascorner] for the order $O(n \log n)$ that is equal to $\frac{1}{2}$, since $1/2$ can be obtained from the block matrix of the iteration.

Concordance Matrix Extraction (CEE) {#sec_CEE}
———————————–

An easy CEE algorithm is to partition the input matrices $A$ and $B$ into an orthogonal set and a disjoint symmetric set, in order to obtain a sparse matrix $G$. Next, a kernel for the non-adjacent sparse matrix in the sparse matrix $A$ is called the *concordance matrix (CMA)*. The CMA algorithm will always converge to an asymptotic norm of the CSE in the space of matrices for finite size $m \in \mathbb{R}^2$. It contains all the most general operations from a semilibration algorithm in the sense that the matrix can be reduced to that of the true CSE. The CEMA algorithm receives the CSE and the multiscaling matrix as the inputs. For simplicity, here we omit the CMA for the $^3$-coordinates that are all free from a diagonal. A detailed description of the procedure is given in the Appendix. The first step is to calculate the CEMA[^15] as the block matrix $\mathbf{U} \in \mathbb{R}^{m \times m}$, where $f(A_1,\ldots,A_m) = A_1^{(1)} + \ldots + A_m^{(1)}$ and $f(A_{1:m}^{(1)}) = \frac{\sqrt{2 \pi}}{\sqrt{m}} \sum_{i=1}^{m} \frac{3 m}{k_{1:m}} \left(\frac{\alpha_i}{\nu_i} + \sum_{j=1}^{m} \ldots\right)$.

Logical Array Matlab Assignment

In mathematics, Array is one of the simpler ways to name lists and formulas. This is because the structure of the Array data structure is such that you have a shape-bounded (or "sub-box") list containing cell lists for each of the 6 blocks (the rows of List).
The information you get when trying to represent the Array list by a list of cells is stored in two different containers (the *(4,4) block* and the *(2, 2) block*): the first one is the list of the 6 blocks, and the second one is the list of the 6 cells in the fourth block (the class), where the fourth column represents a cell within the row of the List set. The distinction between the lists of columns provides the possibility of organizing the data structure much the same way you do using HTML and CSS. You have a built-in method of _: all arrays can have a common _ name (by convention this is your _ name). This class provides a structure using _, and you can modify an existing array along the way. So you'd do something like:

| Row array of array of Sort "_**Array(:key =>'sort_type:value')[]**" | Row array of Sort

where: row_type returns your title for this array: sort_type:value, because 2 columns represent the entire Array list. "2 columns in the 12 class" is because the Order class has a built-in _ property, Order.

in [class]: # of Row

Then you would do the same as before, but the Order class is a "standard" class and only has an argument. The Array class then creates a list of row-specific data structures from your _ class list. This list is then used with a for loop to use Array data structures.

in [class]: # of Row

And then you can apply this to the data structure. You can perform many things using Array and append new rows to this list; this is probably the simplest possible way to do it. You can also work with arrays and append fields. You can call your _ data structure:

final Array(Partition) of Array. partition(**.C(n~k**)) [] **= Partition using Tuple **= BEGIN partition() [partition("Name")] C[n-1])) // and so on

If this is the simplest way, and you really want to use ._ as the type of your initial data structure, use it like this: Array(partition(*)[ Partition ]) { [_] = "" } data( 5); [new-row = _Tuple] { partition() partition() partition() partition() partition() partition() partition() partition() data(10.row 10.array 10.array) {} } ~ **Goncalves™ of T~Obsenthetic A**

Partitioned by a T~Obsenthetic A consists of a set of 16 rows and a set of 22 columns and spans 2080 words from the beginning. Such a structure is not very extensive but is very good at describing each row
2023-03-25 04:29:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5165415406227112, "perplexity": 832.098302768278}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00002.warc.gz"}
https://tex.stackexchange.com/questions/493041/how-do-i-test-for-a-unique-string-with-multiple-possibilities
# How do I test for a unique string with multiple possibilities?

I'm writing a macro that takes a string and returns an href based on that string. The string could have 150 different values. My question is how to write this in an efficient way. Using pdflatex. Here's what I've got right now, shortened to two conditions for explanation:

\documentclass{article}
\usepackage{hyperref}
\newcommand{\myref}[1]{%
  \ifnum\pdfstrcmp{#1}{aaa}=0%
    \href{example.com/aaa}{The Title of the AAA Document}%
  \fi
  \ifnum\pdfstrcmp{#1}{bbb}=0%
    \href{example.com/bbb}{The Title of the BBB Document}%
  \fi
}
\begin{document}
Here is \myref{aaa}.
\end{document}

But of course once I get a success (#1=aaa, say), I'm still testing the other 149 conditions for no reason. How do I code this efficiently?

• Are all of the outcomes of the form \href{example.com/<string>}{my <string> link}? – Joseph Wright May 28 at 13:13
• Ah, I simplified too much. It will actually take two args: the first arg goes in the link address, the second provides the link text. Like \href{example.com/aaa}{The Title of the AAA Document} (the link text will be completely different on each invocation of the macro). – Tim A May 28 at 13:16

I would e.g. create command names:

\documentclass{article}
\usepackage{hyperref}
\makeatletter
\@namedef{myref@aaa}{\href{example.com/aaa}{The Title of the AAA Document}}
\@namedef{myref@bbb}{\href{example.com/bbb}{The Title of the BBB Document}}
\newcommand{\myref}[1]{\@nameuse{myref@#1}}
\makeatother
\begin{document}
Here is \myref{aaa}.
\end{document}

Let biber do the searching for you, assuming there is a file test.bib with

@online{tex,
  note={my aaa text},
  url={tex.stackexchange.com}
}
@online{bbb,
  note={my bbb text},
}

then

\documentclass{article}
\usepackage{biblatex}
\addbibresource{test.bib}
\DeclareFieldFormat{url}{\href{#1}{\printfield{note}}}
\DeclareCiteCommand{\myref}{}{\usebibmacro{url}}{}{}
\usepackage{hyperref}
\begin{document}
Here is \myref{tex}
\end{document}

You can use xparse:

\pdfcompresslevel=0
\documentclass{article}
\usepackage{xparse}
\usepackage{hyperref}
\ExplSyntaxOn
\NewDocumentCommand{\newref}{mmm}
 {% #1 = key, #2 = URL, #3 = description
  \prop_gput:Nnx \g_tima_sites_prop { #1 @ url } { \tl_to_str:n { #2 } }
  \prop_gput:Nnn \g_tima_sites_prop { #1 @ desc } { #3 }
 }
\NewDocumentCommand{\myref}{m}
 {% #1 = key
  \tima_href:xx
   { \prop_item:Nn \g_tima_sites_prop {#1 @ url } }
   { \prop_item:Nn \g_tima_sites_prop {#1 @ desc } }
 }
\prop_new:N \g_tima_sites_prop
\cs_new_protected:Nn \tima_href:nn { \href{#1}{#2} }
\cs_generate_variant:Nn \tima_href:nn { xx }
\ExplSyntaxOff

\newref{texworks}{http://profs.scienze.univr.it/~gregorio/introtexworks}{\TeX works intro}
\newref{arara}{http://profs.scienze.univr.it/~gregorio/introarara}{Arara intro}
\newref{tex.sx}{https://tex.stackexchange.com}{Nice site}

\begin{document}
\myref{texworks}

\myref{arara}

\myref{tex.sx}
\end{document}
2019-07-22 14:54:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5363425016403198, "perplexity": 8312.213463992273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528037.92/warc/CC-MAIN-20190722133851-20190722155851-00387.warc.gz"}
http://cms.math.ca/cmb/msc/20D45?fromjnl=cmb&jnl=CMB
Search results

Search: MSC category 20D45 ( Automorphisms )

Results 1 - 2 of 2

1. CMB 2011 (vol 55 pp. 390)
Riedl, Jeffrey M.
Automorphisms of Iterated Wreath Product $p$-Groups
We determine the order of the automorphism group $\operatorname{Aut}(W)$ for each member $W$ of an important family of finite $p$-groups that may be constructed as iterated regular wreath products of cyclic groups. We use a method based on representation theory.
Categories: 20D45, 20D15, 20E22

2. CMB 1997 (vol 40 pp. 266)
Bechtell, H.; Deaconescu, M.; Silberberg, Gh.
Finite groups with large automizers for their Abelian subgroups
This note contains the classification of the finite groups $G$ satisfying the condition $N_{G}(H)/C_{G}(H)\cong \Aut(H)$ for every abelian subgroup $H$ of $G$.
Categories: 20E34, 20D45
2015-08-29 12:39:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.742036759853363, "perplexity": 2341.1378447072893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064445.47/warc/CC-MAIN-20150827025424-00234-ip-10-171-96-226.ec2.internal.warc.gz"}
https://8ch.net/tech/res/733048.html
# /tech/ - Technology

File: 53eac902e05dd53⋯.png (2.82 KB, 200x200, 1:1, questionmark.png)

No.733048

Bring all your hardware, software and other troubles here.

No.746703

>>746600 I'm on Vista, and no matter which browser, I can not access sites like google, ixquick, and bunch of others. I think it's just straight up malware

No.746705

File: 2a9577ab83171e5⋯.jpg (83.91 KB, 465x620, 3:4, fifth element.jpg)

>>746703 i do a full scan weekly with both win Defender and Malwarebyte, nothing found.

No.746715

>>746705 I never did those, my vista-system is already a metaphor for an aged, crippled , STD-ridden whore, I will go full GNU/Linux in the future anyway.

No.746719

File: 00ab6f8c0c392bb⋯.jpg (28.25 KB, 342x285, 6:5, 090128102020396644.jpg)

>>746715 i wish i could move away from M$, sadly the software i use don't have a Linux version and wouldn't work properly with wine

No.746722

>>746719 get a separate, more secure computer then, duh, hardware is dirt cheap these days.

No.746744

>>746681 have you check your /etc/pacman.conf and see if there's anything fishy in there?

No.746772

>>746472 Bumping this question. I literally might have to go to [s]reddit[/s] because everywhere else I've asked hasn't answered.

No.746779

>>746772 You really need to fuck off to plebbit. Cisco books are available everywhere on the net.

No.746780

File: 634041566971f99⋯.jpg (114.06 KB, 496x1307, 496:1307, what fennec.jpg)

>>746540 >ctrl+c & ctrl +v ^C stops terminal programs m8. I'm just going to buy a better mouse anyways so w/e

No.746785

>>746780 It's ctrl + shift + v & ctrl + shift + c you stupid fuck.

No.746787

>>745529 pulseaudio

No.746826

>>733048 I don't normally play games, but I just wanted to spend the rest of my afternoon playing Half-Life 2 in Russian on Linux. How hard is that? I mean I'm launching it from steam. Oh wait, the music doesn't fucking work, like it hasn't for the past FUCKING $$\color{red}{YEAR AND MORE}$$.
I can actually play the game but I really want the music so this is the time I really try to eliminate the sound problem, so I go and verify the integrity of the game files and what do you know 1154 files failed to validate. Sweet, now after waiting for it to finished downloading, I get the error that the update is corrupted. I don't know how to get rid of this error. I have deleted the folder that contains Half-Life 2, I have restarted steam, I deleted the 220 folder in the downloading folder in the steam library for Linux, logged out of steam, uninstalled Half-Life 2. It doesn't work. Sometimes I get the error that the whitelist.cfg is corrupted. I deleted that and the download is still corrupted. I don't know what a healthy whitelist.cfg looks like. Please can someone help. I just want to decompress and play Russian Half-Life 2. No.746828 >>746826 Well fuck me, I didn't format right and now I look like a newfag. No.746864 What are the top 10 most redpilled tips/tricks that you guys can give me for collaborating with programmers who don't speak English? No.746877 >>746826 Anyone? No.746879 >>746826 If no one responds, I'm going to restart. No.746880 >>746466 xdotool click --clearmodifiers 2 No.746896 >>746879 That didn't work. No.746900 >>746705 It is malware you dork. Uninstall every pajeet 3rd party application you installed. Mallwarebytes and other antivirus programs don't help against user stupidity or the problem of Windows in general. I mean, if your antivirus program would really do what it promises it should delete system32 immediately. No.746924 For those familiar with windows, is it possible to install a program with limited permissions (e.g., file and internet access)? Virtual machines are not practical for everything, specially programs that rely heavily on GPU. I'm aware of sandboxie but I've no reason to trust it. I want to know if some of its functionality can achieved natively. No.746925 *especially No.746926 >>746924 >For those familiar with windows, is it possible to install a program with limited permissions (e.g., file and internet access)? Not that I can think of how about a firewall whitelist at least for internet access? >Virtual machines are not practical for everything, specially programs that rely heavily on GPU. How about PCIE passthrough? No.746964 File: 7a74a416c23b22b⋯.jpg (88.75 KB, 299x385, 299:385, 1475602824323.jpg) >>746924 Make a new account with parental restrictions then go back to your regular account, shift+right click an executable, click "run as" then choose to run as the less privileged user. There ought to be a better way but that's the first thing that popped in my head No.746973 Are there any local, preferably open-source, translation applications without a networking component? No.746978 How to get exhentai and H.264 videos working on Pale Moon? No.746997 >>746744 It looks fine to me, is there anything in particular I should look for? No.747038 File: e3edfe14ed26423⋯.png (167.92 KB, 256x362, 128:181, Dark_Corners_of_the_Earth.png) Is there any way to play this game on PC with a 360 controller? No.747092 >>746978 Well for exhentai use an earlier version of the firefox plugin No.747094 >>746826 What distro are you using and is sound working for other things (videos, browser, etc.)? No.747140 how do i remove opengapps from lineage on a note 3? No.747150 File: 57fc461db8c009a⋯.png (2.23 MB, 1466x915, 1466:915, Screenshot_1.png) What is Kobayashi programing? No.747152 No.747155 >>747150 Some kind of web server code for handling session cookies. 
The first bit appears to implement some system where user login sessions can be tied to their IP address. The code under her finger appears to be the part which generates the session tokens. No.747157 What updates do I install on windows 8.1 and 7 to protect myself from the NSA malware leaks? No.747159 >>747157 They released an update before the regular Tuesday patches so they probably merged it. Download the latest security only bundle No.747175 >>747159 My windows update seems to stuck in "Checking for Updates". I remember doing some registry shit to prevent windows 10 from ever appearing on my computer so it might have something to do with it. Could you post the update code/id so I can search it up and install it manually? No.747184 >>747159 >>747175 https://technet.microsoft.com/en-us/library/security/ms17-010.aspx [Source:Third paragraph from the bottom, ars technica.com/security/2017/05/wcry-is-so-mean-microsoft-issues-patch-for-3-unsupported-windows-versions/ There is also a update for Windows update to help make it run better which you may have already installed. No.747186 >>746544 is there an arch wiki page on how to recombile my kernel with usb mass storage/sd card support? No.747188 File: 6a4a780d7b7d8cc⋯.gif (472.15 KB, 508x270, 254:135, ▄█▀ █▬█ █ ▀█▀.gif) >wanna play some klonoa >sudo apt-get install pcsxr (which installs version 1.92) >emulate it on PCSX >audio clipping Using debian with pulse audio No.747198 >>747188 get retroarch instead and download the core for PS1 gaming No.747205 >>747092 How to get one? The ddls have all been taken down, and the previous versions listed on the official addon site are all incompatible. No.747214 >>747175 >My windows update seems to stuck in "Checking for Updates". It's a bug in Windows Update. Download and install patch KB3102810 and KB3172605. Before you can install them you need to manually restart the update service because the bug prevents any update to be installed. No.747233 I have a bunch of multipart files that are the the .001/.002 etc format, problem is for some reason the first file is .000. If I try to extract that .000 file it doesn't work (7zip/hjsplit/bunch of other program), if I extract the .001 I get a truncated file. And if I increment the part number on each part by 1 and I then extract the .001 it works fine. Is there either a program that will deal by default with the .000 file or is there a way to make changing the extension of several hundred files not an absolute pain in the ass? From what I could find it's a usenet thing but that doesn't help me much No.747234 >>747233 This is pretty much exactly what scripting languages were designed for. You could do this in sh script, but my go to language is Python. Just grab all the files with the same name and different extension, go through them in descending order based on the extension, and move them to the increased file number. No.747250 I really need some fucking help. So I went to the nvidia website, put in my driver's information and they say there is a may 9 version and a may 4 version that are newer than the one my nvidia currently uses: >375.39 used by driver >381.22 released may 9 >375.66 released may 4 How do I update to use the 381.22???? Also I'm still having problems with the dll libraries, for example installing spellforce on wine from gog, winehq says "it works flawlessly, platinum rating" but when I installed it the ground textures were invisible so the grass and stone road were replaced by a grey-black patch of color. 
Then stuff like wiki.playonlinux.com is useless and empty. No.747254 >>747234 I somehow managed to pull something out of my ass without knowing anything about making scripts, Oh and yeah it actually took me that long and yeah it actually still took me less time than doing it manually No.747256 >>747254 But you've built up some skills that will help in the future, which is probably a lot more important. No.747257 No.747273 >>747250 Nevermind, I figured it out. https://johners.tech/2017/01/11/installing-the-latest-nvidia-graphics-drivers-on-linux-mint-18/ >software sources >Add a New PPA button. In the text box, type in ppa:graphics-drivers/ppa No.747281 Whenever I start my computer (Ubuntu GNOME), a process called "tracker extract" begins hogging all the memory and lagging my shit until I inevitably kill it. What is it? How do I stop it from doing that, or from starting up in the first place? No.747312 No.747328 I posted about troubleshooting a new pc a few weaks ago. Even though nothing suggested was the problem it was still helpfull in narrowing the possibilities and I was able to get the pc running. Yesterday, I was able to get few programs going, downloaded music, and set a few games to download overnight. This morning I found that everything had installed correctly. I then went to the official site of my graphics card and started to download a driver, then my computer froze. At that point I could get to the desktop but running a program would cause it to freeze. I changed my power settings in windows to give more power and I turned the eco-mode on my psu off. Before that, I could play a song to 20 seconds then it would freeze. After that, I can play a full song without freezing, but I still can't resume my downloads. Wat do? No.747329 >>747328 Fug spelling errors. Forgive my retardation and failure to touchtype. No.747333 File: 2cb161e5a02976b⋯.jpg (355.78 KB, 900x563, 900:563, 80s_5ce2f6_5848577.jpg) for MAXIMUM security and anonymity, do I need to avoid buying a laptop online? Or once I wipe and replace the OS and firmware, am approved by Stallman and /tech/s most paranoid? No.747338 File: 338114d0fa18d7a⋯.jpg (25.46 KB, 337x342, 337:342, 338114d0fa18d7aa6196f4b3a4….jpg) Hey, My Windows 7 installation is complete ass and I want to upgrade to Windows 8.1. I got a really old pre SP1 pirated addition of W7 and its really not working well for me. I have a lot of issues with installing and uninstalling things and issues with ownership over files. How do I upgrade to Windows 8.1 and keep most of my games, photos, documents and such? No.747371 I need to update my BIOS but I'm stuck in trying to figure out what version I currently have, and what I need to download. This is what I have: https://www.msi.com/Motherboard/support/X370-GAMING-PRO-CARBON.html#down-bios Physically on the motherboard itself it says M5 - 7A32 ver 1.1, but when I boot into the bios menu it says BIOS ver E7A32AMS.120, BIOS build date 03/16/2017. Halp No.747413 is real analysis useful for computer scientists? No.747417 What would you recommend for a circuit tester, /tech/? I have this old ass Kenwood R-1000 that a relative pulled out of their attic, and I think there's a dirty contact on the band dial that fucks with it so that it's pretty much permanently stuck at 9 MHz, but it's more likely a problem with the board itself since the frequency doesn't change with the fine adjustment dial either. 
I honestly can't know without tracking down the exact part of the board that's the source of the problem, and I can't do that with an actual tester instead of this shit analog multimeter. >inb4 you should already have one, you're on /tech/ It's only been recently that I took an interest in working with circuitboards at all, and I'm not too worried about learning with this Kenwood since it's free and easy to work with. No.747419 >>747417 Fuck, wrong sticky, should have posted in the hardware thread. Would it still be relevant since it deals with an actual troubleshooting problem that I'm having? No.747422 I do not understand how wine works. When I check to see if a game runs in wine for me it talks about 32-bit WINEPREFIXES and shit and I have no idea what the hell to do to get this stupid game to run No.747424 >>747422 https://wiki.winehq.org/FAQ >ctrl+F "wineprefix" Go from there, and RTFM. No.747493 >>747333 yes. Online purchases can and will be sent to a black chamber before being delivered to you. If you pick up something in a store you can be more confident that it wasn't targeted to you. Especially if it's a Best Buy or Walmart or something where you can see them pull the box and can never lose track of it. No.747498 I've somewhat inadvertently inherited the backend of a website that I like quite a lot. It's written in PHP, and the code is rather messy. I'm not very experienced, so I'm trying to figure out how to go about this and avoid fucking up. I've been looking up best practices and such, my main sources of confusion right now are how to organize the layout of the project and minimize redundancy. Would this be a good case for a PHP framework? I'm looking for a solution that will: >A: allow me to improve upon the organization of the project over time without breaking its existing functionality (I don't want to have to rewrite the whole backend from scratch, I'd have to learn everything too quickly) >B: add additional functionality as needed No.747501 File: 7afff0bbaba3bac⋯.png (42.03 KB, 642x347, 642:347, php.png) >>747498 >PHP Maybe somebody with the patience to work with it would be better suited to helping you, but PHP is kind of the posterchild of "it's not a bug, it's a feature." https://archive.fo/rDRvD No.747507 >>747233 cat file.[0-9][0-9][0-9] >file No.747527 File: 3ca0a304cdc23aa⋯.jpg (50.31 KB, 750x768, 125:128, demichan despair.jpg) Oh boy, I really fucked it up now. >So I've been using this WD 4tb external as supplemental storage for the past year. >Only plugged it in maybe six times to transfer data onto it >Been treating the fucking thing as if it was my newborn baby ever-since I took it out of the box >"No way a HD this new and expensive will fail me anytime soon lel!" >Started lurking /tech/ in recent months only to discover that externals, and especially Seagate and WD, are extremelly unreliable >Decide to double down on an HGST SATA and copy all my shit onto there instead >certain, random files on WD refuse to transfer, freeze entire system even though they work fine when run from external >been searching for solutions for 5 days >chkdsk /r froze at 10% >tried on two different computers with xp/win 7 and same results with same files >tried on raspi multiple times and everything froze but I'm a complete faggot at Linux so I can't count that Man, I wish I wasn't so fucking dumb. Where do I go from here, /tech/? 
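(A minimal sketch of the rename >>747234 describes for the .000 multipart sets in >>747233; the glob pattern and working directory are assumptions, so try it on a copy first:)

from pathlib import Path

# Shift file.000, file.001, ... up by one so the set starts at .001.
# Rename in descending order so nothing gets overwritten along the way.
for p in sorted(Path(".").glob("*.[0-9][0-9][0-9]"), reverse=True):
    n = int(p.suffix[1:]) + 1
    p.rename(p.with_suffix(f".{n:03d}"))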
No.747538
>realize I have the tor daemon running every time i boot
>thought to myself "i should use this somehow"
other than configuring irc clients and what not, is there a way to configure my package manager (apt) to grab packages through tor?

No.747540
>>747527
Try it with a livecd

No.747542
>>747538
run with tsocks. you'll have to configure tsocks first, and make sure you use tsocks AFTER sudo. e.g.
sudo tsocks apt-get install <something>
you may want to do some shell configuration if you don't want to do extra typing (an alternative using apt's native Tor transport is sketched at the end of this run of posts)

No.747560
>>747542
What's the difference between using torsocks and tsocks?

No.747594
Since it's starting to get hot over here, my CPU is scaring me a little. I always heard that I shouldn't care about temperatures until my PC starts to shut down by itself, is that true? I have the habit of having Speccy open 24/7 on my second monitor to check on it. It heats up to ~75°C when I play some games or render things on Premiere.

No.747635
>>747560
Use torsocks. tsocks is not maintained.

No.747640

No.747670
I have Windows 10 and just completely removed OneDrive from my computer. For some reason though, my Documents and Pictures folders are now empty. All the other folders in the "This PC" section (Desktop, Downloads, Music, Videos, etc.) are still there, but Documents and Pictures are now inaccessible. I haven't backed my computer up in a long time and have no recent copies of anything that was lost. If I can't get all that stuff back, I'm probably just going to kill myself because that's years worth of data and personal information that I just unwittingly purged because of my sheer stupidity.

No.747683
>>747670
>win10
>no backups
Anon.... You're making me feel better about myself.

No.747688
>>747683
I'm guessing that that's a "no," then

No.747701
>>747688
nothing gets deleted when it's "deleted", it's just marked as empty space. if you have not written too much data onto the disk, most of your stuff should still be there
https://en.wikipedia.org/wiki/List_of_data_recovery_software

No.747711
I want to stream my laptop to my TV and watch my shit there, not only youtube or browser shit. i was thinking of chromecast, but that's google. so whats a cable-free alternative?

No.747758
File: 951afeeb8dc876c⋯.png (516.06 KB, 486x508, 243:254, 2017-02-22 01_24_56-(21) _….png)
>Download a shitload of .swf file porn
>HAVE NO WAY TO FUCKING PLAY IT IN THE CORRECT ORDER WITH ANY FUCKING PROGRAM
For fucks sakes, is there any player that can play .swf files without error and without me needing to click back and forth? There's no way to do this with my web browser and any extensions available to me over Pale Moon, because it means sometimes the .swf not playing at all due to how flash disablers tend to work. MPC-HC using CCP or KCP does play .swf, but with limitations, which blows ass, and I can't arrange the fucking playlist or make things loop, so for whatever reason, it doesn't let me play .swf outright. I really need something with the frame control like you'd find on .swfchan, ability to pause and play in a playlist, or arranged order. I can't have this thing fucking decide to play .swfs in its own preset order arrangement, like, say you arranged the files in order of release but the fuck program plays them by the likes of "sorted by name", and there's no way to change it. This is driving me fucking nuts.

No.747766
>>747758
Nvm Fixed

No.747767
I have been using Cyberfox for a while now, but its development has stopped. Is there a browser /tech/ recommends?
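On >>747538's question, besides wrapping apt in tsocks/torsocks there is apt's own Tor transport. A sketch, assuming a Debian-based system that packages apt-transport-tor and a stock Tor daemon listening on 127.0.0.1:9050:

sudo apt-get install apt-transport-tor
# then switch sources.list entries to the tor+http scheme, e.g.:
# deb tor+http://ftp.debian.org/debian stable main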
No.747771
>>747767
I use Pale Moon with Greasemonkey 1.5. It works pretty swell, apart from a lack of working addons.

No.747773
File: ef8c702f0814aa3⋯.jpg (18.88 KB, 210x330, 7:11, gallery_1014_16_8285.jpg)
>>747758
NO WAIT FUCK, IT DOES EVERYTHING BUT ALLOW YOU TO LOOP THE .SWFS AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

No.747813
File: fa5575e8a5356c0⋯.gif (134.12 KB, 281x281, 1:1, annoyed cat.gif)
>>747198
>10 gigabytes later still had to download cores
>still has audio clipping issues

No.747814
>>747773
Tried uploading them to >>>/swf/ ?

No.747875
/tech/ approved email client for gnu/linux?

No.747927
How do I create a script that moves files I've downloaded from a source directory to a destination? The script should take two arguments: from and to. The categories could be files with the following characters as the file name's first letters (case insensitive): A to F, G to L, M to R, S to Z, and "-" for the rest. Each of these categories should be represented as a directory in the destination, and creation of these directories should be done interactively (Hint: command "read"). However, ask for creation of these subdirectories only if the destination directory itself exists. If it doesn't exist, create an error report and exit prematurely. Check also the existence of the "from" directory and the number of arguments.

No.747952
>>743860
Blink your eyes in the same rhythm.

No.747953
>>747927
>/tech/ plz write my school assignment for me
no

No.747954
>>745126
Install gstreamer0.10-ugly too.

No.747964
File: ee14cbbbb97500b⋯.jpg (45.15 KB, 500x500, 1:1, 1459301866800.jpg)
Im working on an mksh script to convert all my .hgt files into nice images. it creates $FILE_hillshade.png and $FILE_color.png, then composes them into $FILE.png
if I want to delete the hillshade and color ones, would
rm "${i%.*}_${hillshade|color}.png"
be correct?
the full script so far, in case anyones curious. Ill change it to unzip all the archives then do it for every directory later.

#!/bin/mksh
for i in *.hgt; do
    gdaldem hillshade "$i" "${i%.*}_hillshade.png" -z 1 -s 10000;
    gdaldem color-relief "$i" /home/yuuka/cmap.txt "${i%.*}_color.png";
    gm composite -dissolve 30% "${i%.*}_color.png" "${i%.*}_hillshade.png" "${i%.*}.png";
    rm "${i%.*}_${hillshade|color}.png"
done

No.748017
>>744338
What the other anon said: >>744358
More explicitly, a literal string in double-quotes is essentially a pointer to an array of const char. You can add an integer to a pointer in C (and in C++, which inherited this syntax), which will move the pointer by the number of items specified by the integer. IOW, with variables T *a; int i; the expression a+i is equivalent to &(a[i]). Therefore, a line (parens don't change anything, added for clarity):
output += (i + "\n");
is equivalent to:
output += &("\n"[i]);
which will obviously concatenate to output random garbage that exists in memory i-2 bytes after the end of the two-byte string constant "\n", interpreted as a null-terminated string. Yes, using std::string is a minefield.
Also, the whole exercise is misguided.
>how much faster I can make fizzbuzz by simply concatenating a string so I only have to print it once.
This is exactly what iostreams are doing for you under the hood. Everything you pipe to std::cout is buffered until you explicitly flush it or the buffer overflows. Buffering it yourself once more on top of that only causes an additional unneeded copy of the data for no benefit.
>(About 3 times faster, btw)
Lemme guess. You've passed std::endl for endlines instead of '\n', right?
std::endl appends a newline character and flushes the buffer afterwards. That costs extra time for no reason if you produce lots of lines at once.

No.748023
>>744688
use Pale Meme instead

No.748030
best search engine? qwant? duckduckgo? searx? (((google)))? bing?

No.748032
>>747953
Sourcing information is a skill and it's awesome to see what people that are smarter than me do because im just starting.

No.748046
I have a prebuilt with an Intel Pentium(R) Dual-Core CPU E5500 2.80GHz, 3Gb of memory, and an AMD HD 6670 1Gb GPU that I added to it a long time ago. And I would like to know if it would be worth it to upgrade the graphics card (staying under 200 euros). If yes, what GPU would I get that would improve my gaming without bottlenecking my CPU?

No.748047
File: c0b36362d465472⋯.png (324.12 KB, 1200x800, 3:2, wannacry_05_1024x774.0.png)
I just had a popup on Chrome that claimed to be ransomware. I was prevented from closing the window and my computer was getting very slow. I was afraid it was this WannaCry ransomware shit so I quickly shut down my PC and disconnected the internet cable. I booted my PC back up and everything seems fine (files are accessible, Chrome works just fine, system’s speed is normal) and I’m now performing a full system antivirus scan to be sure. Upon opening Chrome history, I realised the popup page refreshed itself probably 1000 times in a minute, which I think is why my computer was very slow when it happened. I suspect it was just a fake ransomware popup, no different than those security alert popups. Still, I’m afraid of one thing: would it be possible that it was real ransomware but that I shut down my computer so quickly that it didn’t get the chance to fully install itself on my computer? Therefore, would it be possible that it encrypted some of my files without me knowing it? I can’t afford to spend the many weeks it would take to open all of my files one by one to see if everything is alright. Pic not fully related as I don’t remember fully what was written on the popup, as I shut down my computer very quickly.

No.748048
What's a good ebook reader for GNU/Linux? Needs to be able to open .epub files.

No.748100
>>748048
ebook viewer?

No.748101
>>748030
startpage, it's in the fucking "welcome" sticky...

No.748103
>>748100
Thanks m8.

No.748134
>>748046
Get a Radeon RX 460, and some extra RAM as well. You might also want to search your local markets for a used Core 2 Quad (preferably Q9400 or higher) for a cheap and massive CPU upgrade.

No.748160
>>745743
Anyone found a fix for the referer error? Firefox works fine while Pale Moon doesn't work. It used to work fine a few weeks ago.

No.748162
What do you use to chat with family members on Linux? Skype is flaky and discord seems...gay. Chat software is the one thing holding me back from migrating.

No.748164
>>748162
You could host your own XMPP server, even iMessages can handle that.

No.748168
>>748162
Skype is exactly the same as using it on windows tbh, so if you're fine with minor botnet on windows, it's fine for linux
a landline phone is still probably the best way though

No.748178
>>747527
On a related note, does anybody have a good solution for backing up X terabytes of data onto portable HDDs and retaining some level of redundancy against arbitrary disk failure, without relying on X*N terabytes worth of HDDs?

No.748188
>>748178
The proper way of doing this would be using a proper filesystem like ZFS and choosing an acceptable level of redundancy (say 5 drives and 2 being redundant).
You said external drives though, so I'll assume you're not going to have more than two connected at once, and you didn't want a 1:1 backup ratio (mirror). Creating PAR2 files is probably the best solution for you, but you'll have to do it manually (a par2cmdline sketch follows a few posts below).

No.748190
>>748188
http://www.quickpar.org.uk
It uses Reed-Solomon codes. Same error correction scheme used in QR codes.

No.748193
>>748190
Meant to send this link: en.wikipedia.org/wiki/Parchive

No.748210
>>748188
>>748190
>>748193
Thank you very much. I was looking for a way to back up ZFS volumes & disk images in preparation for a server migration without resorting to the expense of a tape drive+tape, and that may be exactly what I'm looking for.

No.748217
My computer lags to fucking shit when I transfer files to an external HDD. I'm running Ubuntu GNOME, but I had the same problem on Fedora GNOME even though it uses a different file transfer program. How to fix?

No.748227
File: 3d7a2e2c48b50f7⋯.png (3.68 MB, 2200x1467, 2200:1467, 1486939893003.png)
I want to set up a dual boot. One will be an operating system I'm familiar with, and on the other I will be working on an LFS. Ideally, I'd be able to set up the familiar partition and work on the LFS in my free time. Is this doable, or does LFS need to be installed in one go?

No.748233
>>748227
>does LFS need to be installed in one go?
It does not, but you have to be careful to replicate the environment if you leave it and come back later. That means all of the partitions need to be mounted as they were before, the right environment variables need to be set again, and the chroot needs to be reentered the same. If you fuck something up, it might not be obvious until you've wasted a bunch of time. It's best to do it in one go. It doesn't really take all that long unless your CPU is ancient or you have way too little RAM. Best to pick a Friday night or something where you don't have anything going on the next day (and, let's face it, if you're installing LFS, your social calendar is probably pretty open), get your fapping out of the way, chug some energy drinks, and just get it done. After it's installed, you can always return to it in shorter intervals later to install additional software and get it set up the way you want it.

No.748235
>>748233
Ok, ty anon, will fap n chug now. If I get tired of it, is it pretty easy to remove that partition completely and let my main os take up the whole drive?

No.748266
>>748235
That depends on the tools available to you in your main OS. Resizing partitions is a pain in the ass with some filesystems/partitioning tools/OSes. With others it's no problem. But LFS having been on the partition(s) you want to recover isn't an issue. It would be no different than another OS having been installed there, or it simply being a blank partition. Good luck.

No.748294
File: f06e76b25e77738⋯.png (1.03 MB, 1500x2670, 50:89, torrent guide.png)
Not sure if I should ask here or start a new thread (I can start a new thread if you think it can generate discussion). How to torrent? What is the best torrent client? Is pic related accurate? Can ISPs cut your internet for magnet links? How do I set the web browser to use a certain torrent program for magnet links (rather than the default one, which is shit)? Are there any trustworthy free VPNs for torrenting? How do you use magnet links with a torrenter?

No.748298
>>747814
>It's dead
WAIT, NO ONE FUCKING REVIVED IT? Also, no, /swf/ wouldn't solve my problem anyway.
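A quick sketch of the PAR2 route from >>748188 and >>748193, assuming the par2cmdline implementation (QuickPar itself is Windows-only); the 10% redundancy figure is just an example:

cd /mnt/backup                      # hypothetical mount point of the external drive
par2 create -r10 recovery.par2 *    # create recovery blocks with 10% redundancy
par2 verify recovery.par2           # later: check the files for damage
par2 repair recovery.par2           # reconstruct damaged blocks if verification fails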
No.748299
>>748294
Deluge. Magnets open the torrent program automatically when clicked or added via copy pasta. Use privoxy with Deluge. Make your own VPN with the smartphone/iPad shit you got for christmas. The first two are barebones if I recall.

No.748300
>>748299
>deluge
I just tried it and qbittorrent seems more user friendly and a very big plus is that qbittorrent has a search option and auto-updates the website source list, but I'd be happy to use deluge if you could tell me any method that replaces the search function.
>privoxy
I just downloaded it, how do I use it? I'm on gnu/linux so I can use a terminal if there are easy commands.
>make your own vpn with smartphone/ipad shit
What do you mean? Do you mean I turn my phone into a wifi hotspot? But I don't have that much monthly internet tbh. It would take me several months to download a 5 gb movie, for example. Or do you mean something else?
>the first two are barebones
What do you mean?

No.748302
>>748300
HTTP 127.0.0.1 and port 8118. if SSL is present in any program option, fill it out the same as well. use it for just about everything.

No.748305
>>748302
http://www.tech-faq.com/127-0-0-1.html
>127.0.0.1 is your localhost
Do you mean this? "If any public switch, router, or gateway receives a packet addressed to the loopback IP address, it is required to drop the packet without logging the information. As a result, if a data packet is delivered outside of the localhost, by design it will not accidently arrive at a computer which will try to answer it. This aspect of the loopback helps ensure network security is maintained, since most computers will answer packets addressed to their respective loopback address which may also unexpectedly activate other services on a machine by responding to a stray data packet."

No.748306
>>748305
Look mate, if any program you use has a proxy option, you pump that shit in, and it'll use Privoxy.

No.748307
>>748306
I see, thanks. So I just set that up for qbittorrent. Do I need to fiddle with privoxy settings in any way? Also, what should the "listening port" be? Should it be "use different port on each start-up"? Also, what does this all mean now? Does it mean I don't need a VPN because of privoxy?

No.748308
>>748307
Hm? No, you're gonna need a VPN. I suggest looking up those tutorials on how to make them using devices at home if you're on a budget, because you can at least mask traffic with them.

No.748310
File: 99b22c3fb7c3508⋯.png (22.79 KB, 670x698, 335:349, bad vpns.png)
File: 29c92c436c65fcd⋯.png (29.33 KB, 926x716, 463:358, best vpns.png)
File: 2f08ec68492e5b1⋯.png (93.04 KB, 1385x729, 1385:729, worst vpns.png)
>>748308
Thanks a lot, I'm just now going to read up on how to set up a home VPN. But, are (any) free VPNs trustworthy? I have those guides but mullvad costs $5 per month, and it seems the most secure. I will have a look at the home vpn set up though.

No.748312
>>748310
There's a /g/ issued list of safe VPNs to get set up with regionally I recall. It's knocking about somewhere, you'll have to check the wiki, or the uncommon /pol/ thread when they're on about anonymity.

No.748319
>>748312
So I got openvpn and also got KVpnc which is a GUI for openvpn, apparently. It allows me to use openvpn but it asks for an openvpn config file. I presume I could set up my PC as a vpn then connect to itself? Or should I instead download privatetunnel from the android app store and transform the phone into a vpn using that app?
I presume both options will require some configuring of a setup file, would that be complicated?

No.748327
Nevermind, I got gadmin which allows me to set up a vpn using openvpn. One slight problem is that I downloaded and installed bridge-utils, but when I start up gadmin server it says "error: bridge-utils are not installed". How do I fix that? (I installed bridge-utils twice, once using the software package manager and again using sudo apt-get install. I still get the error on gadmin server startup).

No.748401
>>733048
Does Libreboot erase my current hard drive contents and wipe the OS? Or does it somehow manage not to screw any of that up and only change firmware? As in, if I install it today, and it succeeds, will my Debian OS still be exactly the same?

No.748402
>>748401
Also, any OS can be just as secure as any other OS, correct? I hear lots of security hype about hardened Gentoo, but assumed I'd be just as private behind GuixSD with full disk encryption, libreboot, etc.

No.748404
>>748402
>>748401
Last question, I see on the wiki that for a true full disk encryption I'd need to boot from a flash drive since /boot can't be encrypted, and is therefore susceptible to evil maid attacks. Is this still the case?

No.748414
do I really need a case for my hard drive?

No.748415
>>748402
>Also, any OS can be just as secure as any other OS, correct?
In theory, with a thorough understanding of the OSes in question, access to the source code, and enough time and skill, sure. In practice, no.
>I hear lots of security hype about hardened Gentoo, but assumed I'd be just as private behind GuixSD with full disk encryption, libreboot, etc..
Security and privacy are not the same thing. Clarify your thinking and reformulate your question.
>>748404
Why would it not still be the case? Yes, to avoid your unencrypted /boot being tampered with in your absence, you need to boot from some external medium instead. Even if you do that, there are still other attacks available to an attacker, however.

No.748416
>>748415
Of course I'd want both privacy and security. But I imagine during the installation, I should be more concerned about security. The question(s) should still be rather obvious. But just in case:
> any OS can be just as secure as any other OS, correct?
> any OS can be just as private as any other OS, correct?

No.748420
>>748416
> any OS can be just as secure as any other OS, correct?
> any OS can be just as private as any other OS, correct?
In theory, with a thorough understanding of the OSes in question, access to the source code, and enough time and skill, sure. In practice, no.

No.748422
>>748047
You're fucked m8, throw your computer in a fire right away. Buy an old thinkpad, libreboot it and install gentoo.

No.748519
File: bd39db7959488a4⋯.jpg (28.65 KB, 414x389, 414:389, 2f9539d36e9d85082d5a6ca91b….jpg)
Do K-series i7 CPUs have VT-d removed/disabled on the actual chip or is that somewhere in the BIOS software? I have an i7 4770K that I'd like to overclock and use in conjunction with GPU passthrough in Gentoo. Sadly, it seems the jews at intel have disabled this functionality on CPUs with unlocked multipliers. I was gifted this system, should I just replace the CPU or is there hope that there will be a workaround for this at some point?

No.748523
>>748420
Where might some of the differences lie, and how might I work to increase my security and privacy (or really, my understanding of them) between operating systems?
I was attracted to Gentoo for the reasons mentioned, but decided to install GuixSD today just for the hell of it; a stateless system was attractive to me, but I'd like its security and my privacy to be top notch, even if it's just for the sake of learning. So far I understand I should be looking at a secure OS and looking for full disk encryption, with my bootloader on a USB. But I'm not sure where to go from there for a /tech/ approved set up, or how to optimize my OS after installation. Any resources or pointers would be great.

No.748542
>>748519
The chip itself is jewed, unfortunately. You can forget about virtualization on this one. The good news is that Intel stopped doing this faggotry since Devil's Canyon. Your CPU is literally the last K model that has VT-d jewed off, kek. If you manage to get your hands on a 4790K, it will work with VT-d and overclock. Make sure your mobo actually supports VT-d before you start shopping for a replacement CPU.

No.748548
>>745815
It references the (first four bytes of) memory occupied by var as if it were of int type.

No.748551
>>745902
Depends on the kind of porn it was.

No.748601
>>733048
Hello,
A while ago (maybe two years) there was a thread about some german hacker that copied emails from a political slave/pedo ring in Germany and was killed by the mafia afterwards. does someone have the name of the hacker and some screenshots of the mails in question?

No.748602
>>747527
Next time, before buying a HDD, read the data sheets. you'll learn useful info and you'll recognize which are the good ones.

No.748603
>>746719
>sadly the software i use don't have a Linux version and wouldn't work properly with wine
>gaymen detected
Like this >>746722 anon said, just buy some cheap hardware to surf the web. And stay offline for your game rig.

No.748605
>>748542
>4790K
So I'm looking at 300+ just for GPU passthrough? kek indeed, I wonder if I could pass this CPU off on a normalfag and upgrade for cheaper. Thanks anon

No.748607
File: 00b63d6c3aac986⋯.png (85.21 KB, 788x607, 788:607, ClipboardImage.png)
>>748542
>>748605
Pic related are the specs. Motherboard seems to support what I need. Not too sure about the GFX card in linux, though everyone says AMD is better as of late. I haven't had a system so close to [current year] hardware in a long time so I'm unsure how support is on the high end. I want to build a computer I can run Gentoo on and still play the one or two games I play that don't work well in WINE. Should I sell this off to a friend and start from scratch or just replace the CPU? Since everything is botnet now I'm not loyal to any company. I just want something to encode video on, play games, and have 4-6 monitors so I can use it on my TV and shitpost while watching/reading/playing stuff on other monitors.

No.748609
>>748607
Also I know the temps are high. It has the stock cooler and needs a good dusting. I don't really like the case either. I bought a good aftermarket cooler because I planned on overclocking it so I'm covered there once I get a free day to install it.

No.748625
>>748523
FDE protects you from theft of your data, and some attacks by certain unsophisticated adversaries with physical access to the machine. Even with FDE + /boot on removable media, a sophisticated attacker can still compromise your machine. Develop a threat model. What/who are you trying to protect against?
Understand various Linux kernel hardening technologies, e.g. SELinux, AppArmor, Tomoyo, SMACK, etc.
There used to be grsec, but he only gives access to the testing version of the patch now unless you pay.
Understand various exploit mitigations. Stack canaries, position independent executables, relro.
Understand various sandboxing technologies, e.g. firejail.
Now look at the different Linux distributions and see what they support. Fedora has out-of-the-box support for SELinux. Ubuntu has AppArmor. More important than the fact that they've enabled these technologies in their default kernels is that they've developed policies for them. Do they compile daemons and binaries that handle arbitrary information from the network as PIE binaries? Do they include firejail or other sandboxing technologies in their repos? If not, are you willing to find a distro that does, or make sure you keep it up to date yourself?
Speaking of repos, does the distro's package manager check package signing? Is the distro good about key management/security? What are their security policies? Debian testing, for example, is garbage for security. They are quite explicit about the fact that Debian testing is very low priority for their security team. They get around to it if they can. Their focus is Debian stable. A lot of Debian testing users don't know that.
So develop your threat model. Then decide which of the available combinations of techniques and technologies will most acceptably mitigate the threats in your threat model. Then find a distro that is close to what you need. You likely won't find a perfect one. But if you find a close one, it will be less work for you to, e.g. keep a custom kernel up to date if that's what you have to do, or recompile certain packages with hardening options.
But as you can tell by what's above, you'll probably end up with some kind of kernel hardening + executable/daemon hardening + sandboxing. Add a good firewall policy. Then don't do dumb stuff like running network sniffing programs or web browsers as a privileged user. Actually, the less time you spend using a modern web browser or, indeed, even connected to the internet at all, the safer you are.

No.748665
Got a new motherboard, CPU, and RAM (8GB) and I keep getting various programs crashing on 32-bit Windows XP and I'm not sure why. I know 32-bit OSs are limited to roughly 3 GB of detected RAM but I ran on 2GB for a decade and never had this sort of problem. I did a memtest a few days ago and my RAM seems fine. Anyone have any ideas? I'm not dumping WinXP. This may also be a problem on linux distros but I haven't had the time to test it yet.

No.748672
trying to install libfpx in the AUR but it fails to install, not sure what to do. Anyone have any suggestions?

No.748680
File: 72397217ed2bb56⋯.png (66.61 KB, 1938x1048, 969:524, stabilitytest3.png)
File: db9dd694ae163cd⋯.png (64.49 KB, 1938x1048, 969:524, stabilitytest2.png)
Under a full synthetic load my CPU temps peak at 90 C on core #1. Is this something I should worry about? I also did not experience throttling. At what temps should I start to see some throttling? 95 degrees C? My motherboard itself also hovered around 90 degrees.

No.748684
I'm looking for an old thread (or the operating system it was about) that was on /tech/. It started with something like "Why doesn't /tech/ talk more about this?", but it wasn't like a bait post. It started with a screenshot of the OS's desktop and it wasn't based on *nix. I only had the time to briefly review the OS's website itself.
I think the website was a dull teal color and the website design itself looked old-fashioned (like a decent looking 90s to early 2000s website). But the OS itself was a recent thing. I think the features that impressed me were it targeting low-end hardware and some focus on security. The thread had a few responses, but couldn't have been more than 10 or 15. Does anyone happen to know what I'm talking about and/or have an archive of that thread? The OS's name, I think, was just 1 word without any acronyms.

No.748702
Is it safe to pay for a VPN using a credit card? If not, is it safe to pay for bitcoin using a credit card?

No.748703
>>748684
Temple OS?

No.748746
hey guys, I have a query. My brother is a doctor and he wants to automate taking in input to generate reports and diagnoses, and keep some sort of database of them. third world country, so he has no money for a computer, so he uses his android smartphone for this (he has a bluetooth keyboard for it). so the problems are the following: excel SHOULD work but he says it has an erratic behavior. digging, he found out he can use macros on the online version of word to do the things he wants, however, he doesn't know how to use vba (and neither do I) and before starting with this I'd like to know if anyone knows about another option for the same (without having to go through microsoft if possible)

No.748755
>>748684
Plan9? BeOS?

No.748759
>>748746
Maybe Google Docs would work?

No.748761
>>748759
he wants to automatize the process of creating the reports, not just storing them

No.748793
>>748625
thanks, screencapped for future reference

No.748817
File: 13061b345e5b675⋯.png (70.47 KB, 472x350, 236:175, cats.png)
How easy would it be to replace some arch partitions on a dual boot drive with windows, with a set of gentoo partitions? Would I have to reinstall windows all over again, essentially completely wiping the drive?

No.748820
>>748746
maybe i should rephrase this: something to automatize the process of data typing (like already having a set of "tags" that would switch every time we press enter, like name + enter + address...) and preferably having a way to access all the data from a given user (in case there are several consults)
>>748817
???you just delete the / partition and use it for gentoo, swap doesn't need to be formatted, and if you have /home that doesn't have to be formatted either (it'll contain your files but also your configuration files, which you might want to delete beforehand to try to not get shit fucked). seriously, if you don't know something like this you probably shouldn't be using gentoo just yet, you'll just get frustrated

No.748821
>>748817
And is Void Linux basically Arch minus Systemd? I see it's a binary release distro, and thats what I'm currently looking for on my laptop, a no fuss quick install binary distro.

No.748822
>>748820
Damn that's awesome, sounds ez. And no I'm gonna use it, I just worry whenever my precious windows dual boot is involved since it took me forever to get that right

No.748823
>>748822
if you dont touch the windows partition then nothing will happen to it. generally dual boot problems arise when you install windows after linux is already on the disk, because microsoft is full of dicks and they replace grub with their own boot manager which ignores that linux even exists (which is still easy to fix but it shouldn't need to be in the first place)

No.748825
>>748823
I'm really offended that you think I shouldn't be installing gentoo, but I know it's because deep down, I too think I'm not ready.
But I will learn, senpai

No.748837
I need your help /tech/. I'm currently studying programming and software development at a shitty institute of technology in my piece of shit third-world country. I've been taught how to do simple business-oriented programs in Java and Visual Basic, but I feel like I'm never going to get anywhere with this kind of knowledge. I'm afraid that once I'm finished I'll have a bunch of useless knowledge compared to other programmers who studied in a university or something. I've no intention of abandoning my studies but I'd like to know if there's some online resources or books that will help me complement the things I'm currently learning and make me into a better programmer. What do? I don't want to end up as a wageslave stuck in a shitty dead-end job.

No.748842
>>748703
>>748755
I combed through the archived catalog and found it. https://web.archive.org/web/20170419120537/http://8ch.net/tech/res/736204.html
I misremembered the website color. But it was Genode.

No.748848
I have a Z77X-UD3H motherboard with a wifi card. I use the wifi card to host wifi in the apartment. The problem is that after it is on for a while, it fucks up the lan internet on the PC - it starts loading pages slower and slower. It seems to be somewhat dns related, but I am not even completely sure about that. I am doing it through cmd commands like "netsh wlan start hostednetwork" on Windows 7. It has a password. Turning it off doesn't immediately fix it. It might require a combination of several "ipconfig /flushdns" and restarts. Any ideas?

No.748850
>>745990
>>748837
Get involved in some Foss projects

No.748858
I fell for the motherfucking website meme and now it looks very black and white. I looked at some free templates and they are being shady (login to download etc.). How can I design a simple website with minimum pajeetscript, which looks modern and good, but not as modern as two words per page or bloated?

No.748860
>>746268
>is linux gaming good yet?
Why do you want to play modern games? They are crap. Just install wine and enjoy old games.

No.748862
>>747767
Palemoon

No.748870

No.748871
>>748858
What kind of website are you trying to make here? Describe what you want.

No.748903
>>748871
Personal website with a top menu and several article pages. A footer will also be good. It should be colorful and pleasing. There can be some images on the top.

No.748938
What's the standard way that people make changes to an existing mysql (or otherwise) db? Use an administrative tool like phpmyadmin to make changes to the tables manually, then generate the DDL for the table's existing structure with something like sqldump so that other contributors can build the db?

No.748955
>>748903
I'm a fan of using CSS buttons for top page navigation. You can make them look nice using borders, shadows, and gradient shading.

No.748974
>>748665
Well at long last the problem actually seems to be that some of the cores on my new CPU. When I was installing the stock cooler onto it I kind of misaligned it at first and had to move it a bit, smudging the square of thermal paste it came with. Is it possible this has caused damage to it? Or is this just a case of a bad factory result and I should be able to easily get this RMA'ed?

No.748984
>>748974
that some of the cores on my new CPU are bad*

No.748989
>>748842
Wow, can't say I've ever heard of that one.

No.749018
how easy is it to get an lcd screen and use it with a beaglebone black?

No.749069
>>748680
>My motherboard itself also hovered around 90 degrees.
Isn't that really hot? I have never seen a motherboard of mine hotter than 30°C...

No.749109
>>749018
I always hear BBB has shit graphics

No.749110
File: 0885b711df2d986⋯.jpg (46.64 KB, 620x413, 620:413, vx4tdwhi4x2q8hqxex8c.jpg)
Is it possible to dual boot as such: shared home partition, shared swap partition and boot partition on USB, and the whole disk encrypted? I imagine I couldn't share a home folder without having some highly cautious configs and encryption set up, in order not to screw the whole thing up. If so, my next question would be: Can I shrink my home directory on an already installed GNU/Linux distro to allow room for a second OS?

No.749111
>>748938
upvoting question, please respond

No.749123
>>748850
Do Foss projects in Java and VB actually exist?

No.749131
Does anyone know if the AMD Opteron 6300 line of CPUs has PSP?

No.749137
>>749131
They don't, and in fact one of the Opteron boards makes for the only set of hardware certified fully libre by the FSF.

No.749138
>>749137
:-) I know. I gotta get me one in a hurry.

No.749139
>>749110
yes to all questions

No.749159
>>733048
Yo /tech/, My Windows 8.1 is fucking up once again, so I've been wondering. Windows 7 or Linux, which is a better alternative? Also, which one is easier for a normalfag to use/install?

No.749203
How do I move my boot partition to USB AFTER install?

No.749204

No.749206
>>749139
Ok, wise guy, let's see if you can answer this one: >>749203

No.749210
>>749138
You must have a lot of money.

No.749211
How do I deal with EFI/UEFI fuckery when trying to install GRUB? I have a bunch of OS partitions that all just got fucked up by NT Loader from a Windows reinstall and now I can't seem to simply run grub-install to fix them.

No.749229
unix pros, I have a very poor understanding when it comes to the mechanics of the shell. would it be possible to execute a command like grep to read from stdin while executing some other script and watching the stdout (from the same terminal)? i would appreciate unix literature recommendations

No.749236
I have a question. How can I link my facebook account to 8chan? Every time I make a post I want it to automatically appear on my timeline

No.749246
>>747670
>>747701
Update: I found a working pirate of Data Recovery Wizard Professional and was able to get back pretty much everything I lost. Thanks a lot to the anon who linked that Wikipedia article with the list of the recovery programs; I feel like I knew about that sort of thing already (nothing being permanently deleted from your computer, I mean) but was just in too much of a panic to know where to go next. Unfortunately, a lot of the recovered files are corrupted and can't be viewed/opened. Are there any reliable programs out there that are capable of repairing the following file types: DOCX*, GIF, JPG, PNG, TXT, XCF and XLSX*? For the non-image based ones, I'm even ok with something that simply takes the text from the original file and transfers it to a working copy as long as I get the base content back.
* = 2010 version

No.749247
>>749229
Think that would depend on the program in question's support for outputting to two things at once.

No.749270
File: 055550d6f4c937d⋯.gif (1.7 MB, 480x270, 16:9, catching bugs.gif)
>>749159
Dual boot both.

No.749277
File: ef4dcd4c8306bdb⋯.gif (1.37 MB, 264x264, 1:1, costanza.gif)
>>749159
Linux Mint. Virtually identical to windows.
>>749270
>recommending windows
>unironically

No.749287
File: 6ae7aeb03d2f754⋯.jpg (15.81 KB, 399x340, 399:340, 1494563928048.jpg)
>>749236
Its called facebook integration. They are working on it. Soon. facebook too will have the privilege of having access to the highest quality memes on the internet

No.749359
>>733048
when did this place become a wonderland? I don't have to fill out a captcha to Tor post any more? Are all Tor posters shadowbanned? What is going on?!

No.749365
>>749229
>would it be possible to execute a command like grep to read from stdin while executing some other script and watching the stdout (from the same terminal)?
Not sure i understand what you are asking, please clarify. But take a look at tee(1).
grep pat | tee /dev/tty | other script
>i would appreciate unix literature recommendations
The Unix Programming Environment, by Brian W. Kernighan and Rob Pike. Some of the specifics have changed in the 33 years since it was published but the ideas it teaches still apply today. I found it an enjoyable read.

No.749367
>>749359
No idea but not filling out captchas is great.

No.749390
File: 95a2f82382a522d⋯.jpg (27.98 KB, 600x800, 3:4, 95a2f82382a522dc3895f5480c….jpg)
I just want to cut a section of an mkv video file, could you please recommend me a simple program, free from premium nonsense like watermarks, that can support mkv? I've looked for lists online and it's always some shill website by the company making the product.

No.749397
>>749390
ffmpeg

No.749398
>>749397
>ffmpeg
thank you.

No.749418
>>749398
you don't even need to re-encode
ffmpeg -i input.mkv -ss 00:05:15 -to 00:07:20 -c:a copy -c:v copy output.mkv

No.749420
>>749359
>when did this place become a wonderland?
yogapig pls go >>>/suicide/

No.749438
Hello Anons. My Mint 18.1 desktop isn't loading. I get to the login screen, login, my wallpaper flashes briefly, disappears and then I'm left with nothing but a mouse cursor (the default mouse cursor, not the one I changed it to). If I right-click then what looks like a Cinnamon menu opens, but it is empty and without buttons. As for the cause, last night I cleaned up the desktop menu, but I don't understand how removing menu shortcuts -- not files or programs -- would somehow break my desktop. Was it that, or something else? And how do I fix it? Thanks.

No.749452
File: e487dcd001158b9⋯.jpg (9.89 KB, 252x366, 42:61, 43263113.jpg)
I want to use fvwm95 as my WM on Slackware Linux, but when I compiled it via sbopkg and I try to startx, xorg says no screens found. xfce works just fine.

No.749479
Is there a program for linux that lets me merge/edit 2 text files using a filter, where it shows conflicts only when there's an entry on the filter list?

No.749494
>>749479
I don't know about the filter but you might want to try kdiff3 and meld.

No.749496
>>749270
Are there any downsides to dual booting both windows and mint?
>>749277
So if I want to play games with friends I am still able to do this on Linux Mint?

No.749501
>>749236
I don't know where you are from, but you have to go back.

No.749528
What will be the Ryzen equivalent of the AMD FX 6300 CPU?

No.749562
Im testing out qutebrowser on windows. and i want to create some shortcuts and aliases (writing 8 and then tech to arrive here, for example) but in settings i cant seem to write or edit them. how do i create those shortcuts on windows?

No.749564
Is anyone familiar with malware called "qatuvdz"?
I recently saw this in my process manager and after a quick search on google it seemed that it tracks personal info and makes your pc slower. I installed malwarebytes but it did not recognise it and I don't know how to get rid of it because all the results I get when googling it seem to be shady sites that try to get you to install software that seems to be malware as well.

No.749565
>>749528
Equivalent in what metric?

No.749566
File: 2e61294187b8bad⋯.png (3.89 KB, 247x204, 247:204, index.png)
Is Void Linux basically Gentoo with the option of binary installs? I'm running on an old x60t with 1G of ram, looking for something to accommodate such light resources, and Arch is discontinuing 32 bit. My only qualm with Gentoo is compile times; since my wifi reception is often poor, I can see this being a massive issue on this particular machine. What would you recommend? No systemd is a plus.

No.749576
>>749566
Gentoo has binary options for some packages. I've been running Gentoo for 2 years on my T60, and compile times aren't that big of an issue. The only packages I've noticed that have outrageous compile time are Firefox and webkit-gtk (fucking hate this piece of shit). Even Wine and GTK+3 manage reasonable times!

No.749578
>>749438
Can someone help me, please? I honestly have no idea what to do beyond installing a new OS.

No.749582
>>749565
In price and role.

No.749584
>So if I want to play games with friends I am still able to do this on Linux Mint?
It depends on the game but yes (with a little tweaking). For example if you want to play dawn of war vanilla, you just install playonlinux, then install dow 1 with dotnet20, directplay and dxfullsetup libraries. Then it works just like in windows. Other games you just double click the .exe and wine auto installs it just like in windows (make sure to download libraries through winetricks first). If you want to play new games (past 1-2 years) then you might want to dual boot, depending on how lazy you are, but it could potentially save you a lot of time (although some games have linux .deb installers nowadays).

No.749587
>Can someone help me, please? I honestly have no idea what to do beyond installing a new OS.
Post pictures/screenshots maybe? Or google some terminal commands and how to boot into safe mode without loading the desktop.

No.749588
>>749587
I can't post screenshots as Cinnamon isn't loading. I can't even open a terminal using Ctrl-Alt-T.

No.749594
>>749576
So then you'd recommend Gentoo on the basis that compile times won't suck dick?

No.749608
>>749562
What kind of shortcuts do you mean exactly? You could bind a keybinding using :bind, or add a quickmark (press m on this page), name it 8tech, and then do o and enter 8tech there.

No.749609
My Dell XPS charger stopped working recently. The cable that connects to the laptop itself has a blue light indicator that no longer turns on with any power socket in the house. It did turn on when I plugged it in to a socket outside my house, though I couldn't test it long enough to check if it would eventually fail. What do? It's still under warranty.

No.749613
>>749608
https://youtu.be/g2RtjO_jXvY?t=1m31s
i want to do what he does there, its pretty cool. i think he edits the .config file, but i dont know where to find that on windows or how to do that

No.749615
>>749609
Sounds like a hardware issue. Get a new one.

No.749618
>>749615
I see, I'll call support to ask for a replacement then. The cable does not show any signs of physical damage. Could it simply be a defective charger?
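Not a confirmed fix for >>749438 / >>749588, but a commonly suggested way to rule out broken desktop settings on Mint 18.x: from a text console (Ctrl+Alt+F1, since the graphical session is dead), log in and reset Cinnamon's dconf tree. This assumes the breakage lives in saved desktop settings, and it will wipe your Cinnamon customizations:

dconf reset -f /org/cinnamon/    # reset all Cinnamon settings to defaults
sudo service mdm restart         # restart the display manager (mdm on Mint 18.1; use lightdm if that's what's installed)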
No.749619
>>749564
I think I got rid of it but I have no idea what the source of it was. Apparently I had a shit-ton of other malware on my pc as well.

No.749635
>>749582
Ryzen 3 will hit this price range when it lands next quarter. You'll need to be more precise with "role".

No.749636
>>749635
I mean about CPU power, relative to the price and type of processor.

No.749638
>>749613
Either just do :set searchengines name url, or edit the configfile in %LOCALAPPDATA%/qutebrowser.

No.749640
>>749636
"CPU power" under what workloads? You know damn well how CPUs' relative performance can change quite dramatically depending on what they're benchmarked in.

No.749641
>>733048
I want certain text files to open upon booting up and logging in. Is this something I can do within some hidden config-like files, or should I write a script for it?

No.749668
yep i found it on /localdata and it works now
do you know how to make the scrolling faster? i mean, im pressing 'j' and 'k' to move up and down, but it barely moves and i need to hold it for a long time or press it a bunch of times. also, browsing 8ch with a mouse you can just hover over the post number and see the reply, which is nice since clicking over all the replies and then going back is annoying. is there a way to do that on qutebrowser?

No.749729
how do i bypass Easy Anti Cheat for an online video game?

No.749732
>>744977
lain idiot

No.749733
>>747493
this includes used items on ebay?

No.749736
>>749566
http://www.connochaetos.org/wiki/home
http://cyti.latgola.lv/ruuni/
>and Arch is discontinuing 32 bit.
Just up the RAM and install 64bit. The X60 can hold up to 8 jiggabits

No.749738
File: 267994f58db6c68⋯.jpg (598.83 KB, 1600x1200, 4:3, Transmission.jpg)
Fags. If I make my torrent client "require" encrypted connections, does this prevent my ISP from seeing what I leech and/or seed? Does it lessen the chances of getting mafiaa'd?

No.749741
Okay so I went to the debugging page for php on the official website and I see people unironically recommending that I debug things by commenting out blocks of code until i isolate the error and all that. This is an incredible waste of productivity. Do other backend languages/frameworks allow for easier debugging?

No.749765
What is the most uncucked x86 CPU?

No.749770
File: 2bd03dfe525b0d4⋯.mp4 (8.43 MB, 640x360, 16:9, Honkstreet Girls - Everybo….mp4)
>>749496
>any downsides
The windows and linux partitions have to be divided, so if you use one more than the other, you'll take up space faster. ganhu is pretty lightweight for the most part, beware the KDE, and on linux it's pretty easy to mount the windows partition. I'd recommend debian over mint but they're nearly the same.
>gaems
Only some. Wine can't handle DirectX11, and you have to compile wine from source if you want the latest updates, such as not being a pain in the ass when installing the .NET framework.
also, give a tiling wm, such as i3, a try sometime, you might like it
Webm unrelated

No.749792
>>749738
It stops the ISP seeing what your traffic is, but remember your IP is exposed to everyone in the torrent. A copyright holder can still get a hold of this, and file a complaint with your ISP. It slightly lessens your chances of getting caught, but you're really protecting yourself against the wrong adversary; your ISP isn't motivated to charge you with copyright violation. Use a VPN instead, or get a seed box.

No.749794
>>749741
>Do other backend languages/frameworks allow for easier debugging?
Erlang certainly does.
You can attach debuggers to running code, and even patch it on the fly without even interrupting service. Really though, you should be doing your testing with unit tests, and pretty much every language on the planet lends itself to better software engineering practises than PHP.

No.749799
I ran this script ~/torchroot-setup.sh unprivileged once and as root the second time:

#!/bin/bash
export TORCHROOT=/opt/torchroot

mkdir -p $TORCHROOT
mkdir -p $TORCHROOT/etc/tor
mkdir -p $TORCHROOT/dev
mkdir -p $TORCHROOT/usr/bin
mkdir -p $TORCHROOT/usr/lib
mkdir -p $TORCHROOT/usr/share/tor
mkdir -p $TORCHROOT/var/lib

ln -s /usr/lib $TORCHROOT/lib
cp /etc/hosts $TORCHROOT/etc/
cp /etc/host.conf $TORCHROOT/etc/
cp /etc/localtime $TORCHROOT/etc/
cp /etc/nsswitch.conf $TORCHROOT/etc/
cp /etc/resolv.conf $TORCHROOT/etc/
cp /etc/tor/torrc $TORCHROOT/etc/tor/
cp /usr/bin/tor $TORCHROOT/usr/bin/
cp /usr/share/tor/geoip* $TORCHROOT/usr/share/tor/
cp /lib/libnss* /lib/libnsl* /lib/ld-linux-*.so* /lib/libresolv* /lib/libgcc_s.so* $TORCHROOT/usr/lib/
cp $(ldd /usr/bin/tor | awk '{print $3}' | grep --color=never "^/") $TORCHROOT/usr/lib/
cp -r /var/lib/tor $TORCHROOT/var/lib/
chown -R tor:tor $TORCHROOT/var/lib/tor

sh -c "grep --color=never ^tor /etc/passwd > $TORCHROOT/etc/passwd"
sh -c "grep --color=never ^tor /etc/group > $TORCHROOT/etc/group"

mknod -m 644 $TORCHROOT/dev/random c 1 8
mknod -m 644 $TORCHROOT/dev/urandom c 1 9
mknod -m 666 $TORCHROOT/dev/null c 1 3

if [[ "$(uname -m)" == "x86_64" ]]; then
  cp /usr/lib/ld-linux-x86-64.so* $TORCHROOT/usr/lib/.
  ln -sr /usr/lib64 $TORCHROOT/lib64
  ln -s $TORCHROOT/usr/lib ${TORCHROOT}/usr/lib64
fi

(https://wiki.archlinux.org/index.php/Tor#Running_Tor_in_a_Chroot)
And it seems to have permanently messed up my install. (For example, /etc/resolv.conf doesn't exist anymore after reboot even if I create it.) Is there any way to fix this other than a complete reinstall?

No.749806
File: d1641cd88930002⋯.webm (2.9 MB, 640x360, 16:9, 2muchWaifuRuinsYourLaifu.webm)
>using cmus
>pirate my music
>sometimes comes out w/borked tags
>use id3v2 to fix them
>want to do it directly in cmus
:run id3v2 -a Dragonforce {}
>segmentation fault
Anything else with :run results in similar results.
:run "id3v2 -a Dragonforce {}"
:run something
:run echo
:run quit
wat do.

No.749807
>>749668
You can do something like :bind -f j run-with-count 5 scroll down (and the same with k and scroll up). Or you could bind it to :scroll-px instead, but that won't work on all websites...
As for hovering links: Either use the mouse, or use ;h to hover links via hints.

No.749916
>>749807
>>749807
it works! im guessing that 5 is the speed number, right? if i use 8 instead it would go faster, right?
>or use ;h to hover links via hints.
10/10 this thing is gr8
thanks men, let me ask you something else, is qutebrowser at least as safe and private as firefox with its addons like https, unblock, selfdestructing cookies, privacy badger, random agent spoofer? is there a way to add firefox addons and greasefork scripts to qutebrowser? also, sometimes the browser gives me a --PASSTHROUGH MODE--
wats dat?

No.749922
>>749916
> im guessing that 5 is the speed number, right? if i use 8 instead it would go faster, right?
Yes - qutebrowser simulates cursor down keypresses, and that 5 means it "presses" that key 5 times.
> men, let me ask you something else, is qutebrowser at least as safe and private as firefox with its addons like https, unblock, selfdestructing cookies, privacy badger, random agent spoofer?
Install the QtWebKit-NG or QtWebEngine backend (start with --backend webengine) if it's available on your distribution for a more secure backend. As for those addons - some of those are probably not possible to implement for qutebrowser currently, others are already implemented, and others will come with https://github.com/qutebrowser/qutebrowser/issues/27 and https://github.com/qutebrowser/qutebrowser/issues/30
> is there a way to add firefox addons and greasefork scripts to qutebrowser
Firefox addons: No. There might be some partial support for WebExtensions in the future, but it's hard to tell what's possible. I haven't looked at it yet.
Greasemonkey/Greasefork: Yes, but not yet - see https://github.com/qutebrowser/qutebrowser/issues/341
> also, sometimes the browser gives me a --PASSTHROUGH MODE--
> wats dat?
A mode (entered with ctrl-v in normal mode) where all keys (apart from escape) are passed through to the website.

No.749930
File: 46b6f4249896f87⋯.jpg (8.87 KB, 250x190, 25:19, __tn_1257639081374.jpg)
I have a WNDR3700 netgear router, should I keep updating the software or can I install something better on it?

No.749947
> infinity watchlist
> loading lag and reply tracking
for god sake people, Dashchan has spoiled me from using 8ch on the PC...is there any browser app that doesn't make you want to kill yourself? how hard is it to get dashchan running on loonix?

No.749989
File: f3653c96d08f39d⋯.jpg (71.85 KB, 400x225, 16:9, cat5.jpg)
What is the best way to extend a Cat 5 cable? What I usually do is to solder each corresponding wire and cover everything in heat shrink tubing like pic related. Is this halal or are my zeros and ones spilling all over the place?

No.749991

No.749992
In my Gentoo installation, I skipped making a passwd and user account. Can I just do it later or did I shoot myself in the foot?

No.749998
>>749992
>Can I just do it later
Yes

No.750004
>>749792
>your ISP isn't motivated to charge you with copyright violation.
Your ISP can not charge you for copyrights they do not hold. That would not make any sense, that is not how copyright works.

No.750007
>>749998
ok perfect, just did it. How long does the install take after updating the @world set?

No.750022
>>750007
>How long does the install take after updating the @world set?
I do not remember, it has been a while since I used Gentoo. But if you are worried about how much time it is taking I would recommend not using Gentoo, it can be a pain.

No.750043
>>749494
kdiff3 and meld have only a blacklist type of text filtering and they don't offer a whitelist filtering variant at all. I tried both of them but neither of those tools helped me with editing 2 huge files for merging. those files contain a large list of one-line items which describe the trader's item sell/buy/supply behavior.

No.750053
My PC shut down from overheating during a gentoo install, how do I pick up where I left off?

No.750057
>>749991
Seriously? Is that the way Certified® 1337 Cisco professionals do permanent cable extensions?

No.750067
is rsync safe if the files that are being copied from could become corrupted on subsequent backups?

No.750071
Sorry if this doesn't belong here, but what kind of TDP overhead should I plan for when searching for parts? https://pcpartpicker.com/list/3gDfcc
Yes, I know what the site says is the TDP, but sites can be wrong. I just want to know if a 500W PSU (and stock case fans) is sufficient for this specific build.

No.750092
>>749922
> if it's available on your distribution
im on windows atm
another question if you dont mind... how can i select some text and then search that text on another tab? lets say i want to select QtWebEngine and then search it on DDG, or use the w command for wikipedia and search it there, for example

No.750097
>>750057
Only up to a limit, after which you need a repeater to boost the signal.

No.750102
>>750004
Urgh, that's the point. At best they could snitch on you, but again, they have no motivation to do so.

No.750105
>>750057
A true professional would rerun the line with premium gold plated Monster® cables.

No.750159
Is there a way to set up an rss feed so it catches new 8chan posts?

No.750164
>>750071
Also, how the fucking hell do I do the CPU+thermal paste+heatsink part of the build? Can I trust the paste that comes on the chip when I get it?

No.750180
>>750071
TDP refers to heat dissipation, not power consumption. If you're planning on overclocking you should get 600w to be safe. Get a Cryorig M9i instead of the 212 Evo, it's newer, cheaper, performs slightly better.
>>750164
Yeah

No.750181
>>750071
>Sorry if this doesn't belong here, but what kind of TDP overhead should I plan for when searching for parts? https://pcpartpicker.com/list/3gDfcc
>Yes, I know what the site says is the TDP, but sites can be wrong. I just want to know if a 500W PSU (and stock case fans) is sufficient for this specific build.
>>750164
>Also, how the fucking hell do I do the CPU+thermal paste+heatsink part of the build? Can I trust the paste that comes on the chip when I get it?
Simply add the total wattage requirements, and get a PSU with a single 12v rail whose max wattage is at least 10-25% higher than your system requirements. Feel free to spend more on a PSU with higher efficiency provided doing so is within your budget.

No.750182
>>750159
>Is there a way to set up an rss feed so it catches new 8chan posts?
I thought 8ch provides RSS already? https://8ch.net/faq.html#can-i-have-a-list-of-all-api-endpoints-for-getting-raw-data-from-8chan
>>750067
>is rsync safe if the files that are being copied from could become corrupted on subsequent backups?
I'm not sure I understand what you're asking, though corrupted files are corrupted files. It is of no concern to think of rsync in this question.

No.750188
>>750182
>Read the goddam FAQ
-_- my bad, thanks.

No.750202
>>749930
That depends on which hardware revision you have there. You can install OpenWRT/LEDE or Gargoyle on revisions 1, 2 and 4. Revs 3 and 5 didn't work completely last time I checked. Your revision should be marked on the packaging and within the web interface.

No.750206
>>750188
>-_- my bad, thanks.
But anon, I was not rude to you. These sorts of questions are why support threads exist.

No.750208
>>750182
suppose that I use rsync on file a, at location A, to sync it with file a at location B. Suppose that i perform this operation again, with file a at location A being corrupted. What happens? Will rsync fuck up my backup?
*scratches chin with paws, gets out a stack of stickies and starts doing very advanced mathematical proofs, but accidentally proves that the set of his poopoop is disjoint from his colon and shits himself*

No.750211
>>750208
>poopoop
Alright little Pajeet, I think I may understand what you are asking. rsync is just going to copy the file.
No.750092 >>749922 > if it's available on your distribution im on windows atm another question if you dont mind... how can i select some text and then search that text on another tab? lets say i want to select QtWebEngine and then search it on DDG, or use w command for wikipedia and search it there, for example No.750097 >>750057 Only up to a limit, after which you need a repeater to boost the signal. No.750102 >>750004 Urgh, that's the point. At best they could snitch on you, but again, they have no motivation to do so. No.750105 >>750057 A true professional would rerun the line with premium gold plated Monster® cables. No.750159 Is there a way to set up an rss feed so it catches new 8chan posts? No.750164 >>750071 Also, how the fucking hell do I do the CPU+thermal paste+heatsink part of the build? Can I trust the paste that comes on the chip when I get it? No.750180 >>750071 TDP refers to heat dissipation, not power consumption. If you're planning on overclocking you should get 600w to be safe. Get a Cryorig M9i instead of the 212 Evo, it's newer, cheaper, performs slightly better. >>750164 Yeah No.750181 >>750071 >Sorry if this doesn't long here, but what kind of TDP overhead should I plan for when searching for parts? https://pcpartpicker.com/list/3gDfcc >Yes, I know what the site says is the TDP, but sites can be wrong. I just want to know if a 500W PSU (and stock case fans) is sufficient for this specific build. >>750164 >Also, how the fucking hell do I do the CPU+thermal paste+heatsink part of the build? Can I trust the paste that comes on the chip when I get it? Simply add the total wattage requirements, and get a PSU with a single 12v rail who's max wattage is at least 10-25% higher than your system requirements. Feel free to spend more on a PSU with higher efficiency provided doing so is within your budget. No.750182 >>750159 >Is there a way to set up an rss feed so it catches new 8chan posts? I thought 8ch provides RSS already? https://8ch.net/faq.html#can-i-have-a-list-of-all-api-endpoints-for-getting-raw-data-from-8chan >>750067 >is rsync safe if the files that are being copied from could become corrupted on subsequent backups? I'm not sure I understand what you're asking, though corrupted files are corrupted files. It is of no concern to think of rsync in this question. No.750188 >>750182 >Read the goddam FAQ -_- my bad, thanks. No.750202 >>749930 That depends on which hardware revision you have there. You can install OpenWRT/LEDE or Gargoyle on revisions 1, 2 and 4. Revs 3 and 5 didn't work completely last time I checked. Your revision should be marked on the packaging and within the web interface. No.750206 >>750188 >-_- my bad, thanks. But anon, I was not rude to you. These sorts of questions are why support threads exist. No.750208 >>750182 suppose that I use rsync on file a, at location A, to sync it with file a at location B. Suppose that i perform this operation again, with file a at location A being corrupted. What happens? Will rsync fuck up my backup? *scratches chin with paws, gets out a stack of stickies and starts doing very advanced mathematical proofs, but accidentally proves that the set of his poopoop is disjoint from his colon and shits himself* No.750211 >>750208 >poopoop Alright little Pajeet, I think I may understand what you are asking. rsync is just going to copy the file. 
If your disk is corrupt, or more accurately the sector of the disk that contains this file is corrupt, then yes, copying a file to it and then from it most likely would not be a good idea. You could try to poo in it, though I'm not sure what that would solve.

No.750213
>>750211
well, rsync works by checking the modification date on both files. So even though the content of file A had changed, if the modification date was still identical between both files, i should be fine, right?

No.750215
>>750213
>well, rsync works by checking the modification date on both files. So even though the content of file A had changed, if the modification date was still identical between both files, i should be fine, right?
If the data was important, I would consider whatever this process you are trying to accomplish to be unsafe and look for an alternative. If the risk of your data being corrupt is not a concern then yea, sure, try it fam.

No.750217
>>750206
Not shitting up the support threads is exactly why FAQs exist.

No.750235
Is it possible to do terminal DNS lookup with Tor? I know programs that support socks5 proxies can use the local Tor service or even use torsocks [program] with mixed results, but either way a program ends up connecting to a service over Tor. Why is it that programs are able to connect but a DNS lookup with host or nslookup fails? Does Tor use its own method of DNS lookup? How can I use it to resolve DNS given a hostname?

No.750236
>>749738
>If I make my torrent client "require" encrypted connections, does this prevent my ISP from seeing what I leech and/or seed?
Yes.
>Does it lessen the chances of getting mafiaa'd?
Not necessarily. Requiring encryption just makes it so the data sent cannot be seen by anyone outside of the torrent swarm. Anyone in the swarm is able to see what you're downloading, and usually companies catch users by being in the swarm to keep track of which peers are downloading/seeding the copyrighted files. There's nothing wrong with requiring encryption, but you're not protecting yourself against the right entity in this manner. A good proxy/VPN is what you should be using.
>>749496
>Are there any downsides to duel booting both windows and mint?
You can only boot into 1 OS at a time. Partitioning isn't too difficult (if you install Windows first) but remember that it cuts down on available disk space. You'd also have to be careful on where to save files if you want them to be accessible to both OS's. (Linux can read NTFS, but Windows cannot read ext4.)
>So if I want to play games with friends I am still able to do this on Linux Mint?
Depends on the game. Think of games you play with friends and look up either "linux [game]" or "wine [game]" to get an idea of compatibility. I got around this by using a gaming VM, but I had to plan my computer build around it.

No.750239
Got an android phone that appears to have been exploited. When I take a photo from my phone and run it through exiv2 it hits the comment block and then overflows. Fortunately the payload doesn't seem to be configured in a way that exploits my linux box so I didn't just fuck up my day completely. Anyway what should I do with this? I've done some searches and learned that EXIF based attacks on Android have been a thing since at least 2007 with a big one found last year. If this is a 0day permutation on an old exploit it would seem irresponsible to just wipe the phone and start over without at least sending a sample to someone. Who's legit and worth submitting reports to for Android stuff?
My phone is on 6.0.1 which is the newest I can get on this device AFAIK. I guess the exploit IS a risk on other platforms, it's just that the payload isn't tailored for this machine. Fuck I hate phones.

No.750241
>>750235
DNS lookups are usually udp but tor only supports tcp. DNS lookups are done by exit nodes.
>How can I use it to resolve DNS given a hostname?
tor-resolve(1)

No.750243
>>750235
>Is it possible to do terminal DNS lookup with Tor?
The tor-resolve program bundled with Tor does exactly that.
>Does Tor use its own method of DNS lookup?
The client program asks Tor to connect to a given target (either hostname or IP address) through the SOCKS protocol, Tor then tunnels the request to an exit node and the exit node performs the necessary DNS resolving to make the final connection. A special case is made for .onion pseudohostnames, which do not have an associated IP but are instead routed entirely within the Tor network.

No.750259
I'd like to dual boot into Windows on my new computer, and I think I might have to make it Windows 10 that I'll install for that... what can I do to minimize its data harvesting? What are some security practices that I should keep in mind?

No.750271
>>750259
I saved this sometime ago, might help:
> Download Windows 10 Enterprise LTSB (no other version can guarantee anything)
> Disable settings for maximum privacy during installation
> Go through settings after installation and disable everything
> Run services.msc, stop and disable:
Bluetooth Support Service
DataCollectionPublishingService
dmwappushsvc
Remote Desktop Services
Remote Registry
Sensor Monitoring Service
Sensor Service
Windows Error Reporting
Xbox Live Auth Manager
Xbox Live Game Save
Xbox Live Networking Service
> Run the Task Scheduler, disable and remove all triggers for:
Microsoft/Windows/Application Experience/Microsoft Compatibility Appraiser
Microsoft/Windows/Application Experience/ProgramDataUpdater
Microsoft/Windows/DiskDiagnostic/Microsoft-Windows-DiskDiagnosticDataCollector
Microsoft/Windows/Customer Experience Improvement Program/Consolidator
Microsoft/Windows/Customer Experience Improvement Program/KernelCeipTask
Microsoft/Windows/Customer Experience Improvement Program/UsbCeip
> Run gpedit.msc, go to Administrative/All Settings/ and configure the following:
Allow Cortana - Disable
Allow input personalization - Disable
Allow search and Cortana to use location - Disable
Allow Telemetry - Enable and set options to "0 - Off [Enterprise Only]"
Allow the use of biometrics - Disable
Configure Automatic Updates - Disable
Disable pre-release features or settings - Disable
Disable Windows Error Reporting - Enable
Do not allow web search - Enable
Do not send additional data - Enabled
Don't search the web or display web results in Search - Enable
Enable/Disable PrefTrack - Disable
Prevent the usage of OneDrive for file storage - Enable
Set what information is shared in Search - Enabled and set options to "Anonymous Info"
Turn off Application Telemetry - Enabled
Turn off Automatic Learning - Enabled
Turn off handwriting personalization data sharing - Enabled
Turn off handwriting recognition error reporting - Enabled
Turn off Inventory Collector - Enabled
Turn off Managing SmartScreen Filter for Internet Explorer 8 - Enable
Turn off Steps Recorder - Enabled
Turn off the Windows Messenger Customer Experience Improvement Program - Enabled
Turn off the Windows Customer Experience Improvement Program - Enabled
Turn off Windows Defender - Enabled

No.750272
>>750271
Thanks, anon

No.750273
>>750241
>>750243
Thanks. I wonder why no search results mentioned tor-resolve as a replacement for nslookup or host. That's exactly what I wanted.

No.750326
How do I uninstall void?

No.750327
>>750326
void linux I mean.

No.750361
>>750092
bump
anon dont get mad at me for asking question pls

No.750388
>>750271
ITP: how to cripple your system with placebo
Why not read the privacy policy and terms of service (y'know, the contract Microsoft asks you to agree or refuse as part of the Windows license you purchase) before even installing the OS?

No.750399
>>733048
Rate my partition scheme, dual booting Gentoo and GuixSD:
/sda1: primary, gentoo root
/sda2: primary, GuixSD root
/sda3: shared drive so I can mount and share files
/sda4: logical
/sda5: Gentoo home
/sda6: GuixSD home
/sda7: Gentoo boot
/sda8: GuixSD boot
/sda9: swap
Now I just gotta work on encrypting the whole damn thing

No.750404
test. test.

No.750417
>>750399
did I fuck up by putting roots at /dev/sda1 & 2? Should they be "farther out" on the disk since that increases the read/write speed since the disk spins faster at its edges?? Or is /dev/sda1 the farthest point out?

No.750418
Hi /tech/ where can I buy this game? https://en.wikipedia.org/wiki/Gender_Wars
>In the future, after an era of "Political Correctness and equality",[1] humanity is divided into two hostile factions. Each faction represents one of humanity's two genders, the Males (who are ruled by a Patriarch) and the Females (who are ruled by a Matriarch), both of which behave in stereotypical manners (for instance, the Males being crude and focusing too much of drinking beer, the Females being easily distracted by fashion-related merchandise), and which may try to eliminate each other and capture each other's rulers. Either faction sometimes conduct raids against the other faction to steal reproductive cells, in order to produce more members for each side.
>The player has to choose between the Male faction (who tries to capture the Matriarch) or the Female faction (who tries to capture the Patriarch). Regardless of the player's initial choice, the victorious faction of the two will put the remaining members of the defeated faction into servitude. The game ends by mentioning a rebellion caused by men and women working together, taking place a few years after the end of the Gender Wars.

No.750419
>>750417
Yes. /root should be kept somewhere in the middle. swap should be first.

No.750423

No.750424
>>750419
Why even care about swap performance? Linux should ideally never swap.

No.750463

No.750466
>>750417
First partition is the most outwards one. The only thing you fucked up is not putting root on a separate SSD.

No.750472
How do I make a youtube account without giving it my phone number?

No.750486
>>750472
Give youtube the phone number of someone else.

No.750493
>>750486
Very underrated post.

No.750496
>>750466
I will next time senpai

No.750499
I have a BIOS machine. Should I go MBR or GPT?

No.750546
Does anyone have any thoughts on Gitgud.io v. Github?

No.750573
>>750499
there's practically no difference

No.750602
File: d35d3e5c3751faa⋯.jpg (72.73 KB, 576x576, 1:1, gnu.jpg)
A question for unix people: How do you attach the input of a command-line / curses program to an IP address so you can telnet into it? For example, let users telnet in to reach an old BBS system or one of the BSD games like mille or robots without going through authentication.

No.750610
>>750546
One is run by chan autists and the other by neon-haired tumblr cancer. Decide which you trust more.
No.750635
>>750602
Create a new user on your system and set its shell to the program you want.

No.750669
>>750472
Google accounts don't require a phone number.

No.750694
Im having a weird problem with my laptop. Im using opera browser with xubuntu and for some reason it wont connect to 8chan. Every other website works fine but 8chan. Could this just be a setting in the browser fucking up? I havent updated anything between this problem occuring and when it worked. Anyone else had one website not work? If so how did you fix it?

No.750699
>>750486
and how would I verify it?
>>750669
Yes, they do.

No.750715
>>750399
>/sda7: Gentoo boot
>/sda8: GuixSD boot
wtf are you doing?

No.750727
>>750699
Blackmail ask that somebody else for the activation code he received.

No.750763
I'm trying to compile linux-libre 4.9 with the last testing grsec available, but am getting errors and don't know what I'm doing. I thought I might as well ask if anyone knows if someone has done it already and made it publicly available somewhere?

No.750798
Where should I upload a 7gb romset? Ideally somewhere that won't sell me out or take it down too quickly

No.750810
>>748160
I found the fix for the referer error. In about:config, set network.http.sendRefererHeader=2

No.750812
>>750327
sudo rm -rf / --no-preserve-root
>>750499
GPT unless you also use Windows

No.750856
>>750602
Sorry, I wasn't clear but I was looking for a solution that works in userspace. How could a normal user do this? I'm looking for something like a telnet server that can run on an arbitrary port and forward everything to whatever program you tell it to. Telnet has control characters that it needs to send for the connection to be interactive by character and not by line. I've looked up some tutorials for netcat and ncurses but I can't get them to do what I want.

No.750869
Does Privatix count as a VPN? I can't afford shit apart from free right now. Short version I'm thinking of applying for moving over-seas, but I'm concerned my own government will fuck me over. What should I be looking into?

No.750870
>>750763
Post the errors.

No.750918
File: a7a1990181d4e90⋯.png (177.39 KB, 316x321, 316:321, 1452743617170.png)
Is there a way to activate my copy of Windows 10 Enterprise Edition without downgrading to Pro or Home? Legit or otherwise. I don't care, was willing to pay for a key at first but they don't even sell Enterprise keys.

No.750919
>>750918
Also, since I'm running Enterprise can I upgrade to LTSB? Or would I still have to do a fresh install?

No.750925
>>745955
yes, librebooting so kiketel and amd cant spy on you.
>>745972
although it is a botnet and i'd use the utmost caution while using it, google has a service where you can make a phone call through their system. you might also be able to text but i'm not sure. doesn't being in central asia basically mean the nsa can't come to get you anyway? thats where all the best cybercriminals come from isnt it?

No.750940
for some reason pacman isn't searching official repositories. Can't update some programs and can't install things like wine. Any idea why this is and how I can fix it? I did look through the arch wiki but I'm kinda puzzled where to even begin...

No.750966
Is there any other site similar to mixtape.moe for uploading files?

No.750976
I'm fucking stumped, been dealing with my old dell n1710 for the past week. It started when the fan failed, CPU started thermal throttling. I ordered a new one and installed it.
Still was thermal throttling, so I took it apart, cleaned off the old paste, applied new paste, and tried to make sure the heat sink was seated properly. It's still slowing to a crawl 5-10 mins after boot. CPU is under 50C, heat sink is working. Ram is fine, only 2/8gb in use. Ran checkdisk recently. Help

No.750977
File: 1b765de75d68310⋯.jpg (125.71 KB, 1194x747, 398:249, archiveis.jpg)
File: cc41b2f3e4d2e18⋯.jpg (50.86 KB, 1097x562, 1097:562, archiveis2.jpg)
archive.is is offline for me, but everyone else is using it as if nothing happened. Any ideas?

No.751045
>>750977
archive.today

No.751079
I can't figure out NetworkManager. wifi-menu worked perfectly fine before but then I enabled NetworkManager and now it won't connect to the internet. Any help?

No.751092
>>750925
Google requires your phone number before they let you use that service.

No.751093
>>750940
Start by checking which mirrors you have enabled in /etc/pacman.d/mirrorlist (I suspect you haven't enabled any) and if that doesn't help, check the enabled repos at the end of /etc/pacman.conf

No.751102
How do i remove retarded emoticons from unicode?

No.751106
File: 321b9b961127e1b⋯.jpg (73.54 KB, 1206x760, 603:380, archivetoday.JPG)
>>751045
Same deal.

No.751107
File: 5c740939b4982a2⋯.jpg (88.13 KB, 1137x733, 1137:733, archivetoday2.JPG)

No.751108

No.751109
>>751108
I tried archive.fo here. >>750977 It's still dead now.

No.751126
>>750812
Thanks.

No.751133
>>751102
McVeigh the Unicode Consortium.

No.751161
trying to play a game and I get this error:
./runner: error while loading shared libraries: libcrypto.so.1.0.0
please help

No.751170
File: 3b34b98cdcf5fce⋯.jpg (53.13 KB, 1016x568, 127:71, fugg tank.jpg)

No.751171
File: d394c42134e3f97⋯.jpg (44.34 KB, 480x455, 96:91, frowning on your conclusio….jpg)
>>751161
Read the fucking error message you nigger.

No.751178
File: f9975ae4d86b41a⋯.jpg (113.01 KB, 808x609, 808:609, sensibleNiggers.jpg)
>>750918
https://wiki.installgentoo.com/index.php/Windows_7
>>750966
>can't look up "pomf clones"
>>750798
IPFS is good if you don't mind keeping your PC on the entire time. Making a torrent is another good option. If it needs to be a "cloud" service, either go with mega or a pomf clone. The largest maximum file size I've seen on a pomf clone is 2gb, so no matter what you'll have to cut the file up into smaller parts. They still have to follow copyright law, but most companies don't really give a fuck about some NEET sharing some old vidya with some tiny vietnamese llama wool appreciation forum, so odds are they won't take it down.

No.751188
>>751178
I figured I'd need to use a loader. Just wasn't sure where to look. Last time I used one I got it from God knows where and very bad things happened.

No.751189
File: 78b7ca6847eed0d⋯.png (6.18 KB, 303x193, 303:193, not supported.png)
>>751178
Also, that link is for Windows 7 so the loader doesn't work. It says to use KMSPico but doesn't provide a link like it did for DAZ

No.751194

No.751198
File: 82f81ce6620d6b4⋯.png (28.69 KB, 1360x768, 85:48, fuckery.png)
>>751194
>>751189
>>751178
>>750918
Everything is working. On a side-note, when I downloaded KMSPico I could not run it. I'd click it, nothing would happen, I knew something was up, and then I noticed this (in the picture). Windows was treating the folder like a "restricted website". Copying and pasting the installer into my general download folder allowed me to open it up from there. Did they think I wouldn't notice and just give up?

No.751212

No.751213
Is it possible to route qutebrowser through tor?
how

No.751228
I downloaded tor browser and followed the instructions on their website but when I run the executable nothing happens. It should show the connection window then launch the browser. Tried both 64 and 32 bit versions, same thing. How can I get it to start?

No.751229
File: da56408d1c45402⋯.jpg (42.57 KB, 640x480, 4:3, shruggeru.jpg)
>>751198
I was suggesting you use windows 7 since there's really no reason to use 10 other than work, and you'd expect your job to give you a key anyways, but congrats m8.

No.751261
>>751228
What operating system? How did you run the executable?

No.751262
>>751261
Check his flag.

No.751264
>>751171
right okay. I tried reinstalling libcrypto and it doesn't find anything.

No.751265
>>751261
Gentoo, fresh install. Firefox standalone seems to work fine. I tried launching it from a terminal with ./start-tor-browser.desktop just like it says to on their website. Nothing happens when I launch it in htop. I extracted it in my home directory so there shouldn't be permission issues. I tried chmod +x on the .desktop file just in case, but it did nothing. Not sure where else to go from here.

No.751308
>>750092
>>750361
>>751212
Sorry, missed your answers. I'm not really following this thread (can I get a mail for replies somehow?), only a search for qutebrowser here. With the next release, the Windows builds are going to use QtWebEngine. There are some test builds here:
https://qutebrowser.org/tmp/qutebrowser-0.11.0-pre-amd64.exe
https://qutebrowser.org/tmp/qutebrowser-0.11.0-pre-win32.exe
As for your other question, copy-paste is your easiest option currently - right click and copy, or ctrl-c in insert mode. Or you could write a userscript doing stuff with the selected text: https://github.com/qutebrowser/qutebrowser/blob/master/doc/userscripts.asciidoc
>>751213

No.751315
what is some actually good antivirus software for windows 7?

No.751319
>>751315
GNU/Linux

No.751373
>>746433
If there is nothing on stack overflow, why don't you contribute to that mess you call a community?

No.751374
>>751373
* wait fuck, I don't know why I said that last part, just try contributing to stack overflow or some shit.

No.751401
>>751308
>Note this won't give you the same amount of fingerprinting protection that the Tor Browser does
Is it safer to use tor like this than to use qutebrowser alone?

No.751425
File: 202d3be04d8a1e5⋯.png (274.49 KB, 800x508, 200:127, 1476481854273-0.png)
I've spent a fuckton more time trying to fix this problem, and narrowed it down to 'sudo'. (I'm >>746681) For some reason, any 'sudo' command takes a massive amount of time to start the first time, then works fine after that. Searching online showed that I needed to add my hostname to /etc/hosts, but it was already there. A Red Hat support ticket showed that having a ton of groups could slow things down significantly, but I've only got 68 (https://access.redhat.com/solutions/430643). Has anyone come across this problem too? Would it be related to the number of packages?

No.751429
File: f2fd3c37af3b4af⋯.mp4 (3.09 MB, 640x360, 16:9, nonfree.mp4)
Someone told me to post it here.
USE="deblob debug ssp bindinst mmx sse sse2 -jit -boundschecking X crypt latex gtk vim-syntax threads python xattr hardened pic pax_kernel chroot secure_delete webrsync-gpg -qt4 perl unicode jpeg png readline icu cryptsetup gnutls -suid clang tcpd pam symlink -systemd -geolocation -sslv3 -tls-heartbeat -binary -mysql networkmanager octave" CFLAGS="-march=native -O3 -fforce-addr -pipe" CXXFLAGS="$CFLAGS" MAKEOPTS="-j4" ACCEPT_KEYWORDS="amd64" CONFIG_PROTECT="/etc" PORTAGE_NICENESS=10 INPUT_DEVICES="endev keyboard mouse" I haven't installed yet, will this configuration fuck anything up? Planning to install i3-gaps, does the i3 from the repos have gaps or do I have to install from github? Will install encrypted lvm. No.751438 >>751429 RUSTFLAGS="-C target-cpu=native" No.751454 >>751425 Check your PAM configuration. You might have some unwanted authentication module enabled that tries to connect to a remote user database or some stupid shit like that. man PAM No.751456 No.751464 File: 54ed9d4d03df433⋯.pdf (2.7 MB, genode-foundations-16-05.pdf) I know about Tor and VPNs, and stuff like that, but I have image about how it all pieces together. I don't know how to tell whether or not I have a gaping hole in my security / privacy set up. Could someone post a comprehensive link that goes all the way from hardware back doors, to networking stuff? My hardware, firmware and OS situation is pretty solid imo, soon to be librebooted with full disk encryption, but I have no fucking idea how to connect to the interet safely, or take care of a malicious program or whatever. Willing to learn, willing to read just looking for resources. No.751475 Is LiveOverflow a chink? http://www.liveoverflow.com/ He got some interesting videos, but I won't watch them if he is a disgusting mongoloid. No.751478 >>751475 topkek yeah he sounds really asian source: I've worked with a bunch before. No.751494 File: b570b3494f2dc50⋯.png (10.52 KB, 574x179, 574:179, 1.png) File: 84cdfc31fc419ee⋯.png (3.71 KB, 274x55, 274:55, 2.png) File: 3bb7d67a1f8da97⋯.png (9.8 KB, 410x174, 205:87, 3.png) How do I fucking get rid of fucking undeletable fucking files fucking fuck AAAAAAAAAAAAAAAAAAAAAAAAAAAAA No.751495 File: 7d42dc6adc60ed5⋯.png (11.38 KB, 415x192, 415:192, 4.png) No.751497 >>751494 >>751495 You run fsck. No.751499 File: acb895f6e95440b⋯.png (23.67 KB, 604x180, 151:45, uh - you dont get to brin….png) No.751508 File: 8c57a2667d2f31c⋯.jpg (15.43 KB, 400x293, 400:293, ahmed peace.jpg) >>751499 Boot into a gparted rescue environment (http://gparted.org/livecd.php) and boot into that, then run fsck. No.751511 >>751508 but it's not about the pc itself, it's about a usb that has some files that I cannot delete from it No.751512 >>751264 Sure it was 1.0.0? why the fuck does a vidya need a crypto library? No.751543 >>751265 There is an overlay called torbrowser. It worked the last time I used it. No.751555 what's a quick and easy way for me to find the UUID of something I've just plugged in No.751558 >>751555 You can run: lsblk -f to get all info from disks, or lsblk -o UUID to get just the UUID. Are you looking to find this out automatically on the 'mount' event? >>751454 fuck, I had PAM on, even though I don't need it, but when I turned it off nothing changed. 
It's definitely a yaourt/pacman issue, so here's the entire result of running the following strace command:
sudo strace -ttT -o st1.txt yaourt -Syu
https://pastebin.com/F3CF31Yz
An interesting thing I found is that the longest syscall is the following line:
20:22:36.491307 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 1390 <13.468984>
It seems like it's waiting on some wifi thing to exit, but why? I've got a good connection, and everything else is working.

No.751565
>>751558
>Are you looking to find this out automatically on the 'mount' event
I wanna make it so it automatically mounts when it recognizes a UUID (my external HDD and some USBs)

No.751576
>>751565
I found a cross-platform tool that can auto-mount usb devices, but I have no clue how it detects that a new device was plugged in, and it also doesn't seem to have support for UUIDs. https://github.com/rbrito/usbmount
It seems that mounting devices (even by UUID) isn't hard (this is some script that looks like it does that: https://access.redhat.com/discussions/1573543), but I don't know how they can detect a "plugged in" event.

No.751589
>>751576
I already have some scripts in my i3 config for mounting and unmounting which I trigger through key commands. I have it check a list of UUIDs, see if it recognizes one of them, and mount it accordingly.
mount my.mixtape.moe/yulrfe.sh
unmount my.mixtape.moe/dqwnbu.sh

No.751591
>Unable to install Chocolate Doom on Raspbian
>"better move on to Corebooting the x220"
>Almost brick Thinkpad when attempting to flash coreboot
>give up after three days
>"better move on to dual booting win7 and linux"
>Run Ancile script and use windows update again after because I forgot to install some shit
>unable to run Ancile a second time
>"better move on to installing an easy Linux distribution instead"
>"Manjaro looks so easy, you'd have to be retarded to fuck it up!"
>Calamares refuses to fucking work properly
I was thinking that my hardware may have some kind of plot against me, but I'm beginning to suspect that I may actually be retarded. This is making me lose my mind.

No.751611
quick question: I fixed my earbuds by gluing something and I want to speed up the glue setting process. My idea is sticking them in my gpu's exhaust fan so they heat up and the glue particles accelerate. Could anything between 40-70 degrees damage an earbud?

No.751614
>>751464
one of you wise guys please halp

No.751616
>>751614
>>751464
you get libreboot and a truly free as in freedom operating system together with non-botnet hardware, if you're behind a router do the same with the router. Then you make sure anything that ever goes through your network is encrypted (for instance use HTTPS, FTPS, etc) and that your web browser doesn't leak information, then you get a VPN. That's basically all you need to prevent losing your privacy in the web and from then on someone has to hack a weak link in all that. It's not rocket science.

No.751618
>>751616
Yeah my hardware is solid too, anything libreboot compatible is basically solid though, right? Running an x60T and looking to get an x200T somewhere down the line, and ideally someday a libreboot compatible custom desktop too. I've got to look at my router, yeah. I know it has potential since it came with FSF stuff. You lose me at HTTPS and FTPS. Where can I read about this part?
I think my webbrowser is set up ok too, there's lots of info and tests out there for that (like fingerprinting tests and such, plus the installgentoo wiki has lots of info on that). Looking to get a VPN and looking to purchase using an anonymous method like Zcash. But I've heard two VPNs and a tor node is better. Where does Tor play into VPNs, and systems like Tails and Qubes? And how can I test my security/privacy?

No.751619
>>751616
Oh, and yeah of course I have a free as in freedom and beer OS. Like I said I think I've got the hardware part down, but my network knowledge sucks. I get scared to connect to the internet on my precious machine. I'm thinking about using something like Genode on top of all that too.

No.751632
>>751499
>no chicken
https://math.hecker.org/2012/11/18/linear-algebra-and-its-applications-exercise-2-5-10/
## Linear Algebra and Its Applications, Exercise 2.5.10

Exercise 2.5.10. Given the incidence matrix

$A = \begin{bmatrix} -1&1&0&0 \\ -1&0&1&0 \\ 0&1&0&-1 \\ 0&0&-1&1 \end{bmatrix}$

draw the graph corresponding to the matrix, and state whether or not it is a tree and whether the rows are linearly independent. Demonstrate that removing a row produces a spanning tree, and describe the subspace of which the remaining rows form a basis.

Answer: The incidence matrix has four nodes, corresponding to the columns, and four edges, corresponding to the rows. The nodes can be arranged in the form of a square. Put node 1 in the upper left corner of the square. Edge 1 runs from node 1 to node 2; put node 2 in the lower left corner of the square, so that edge 1 forms the left side of the square. Edge 2 runs from node 1 to node 3; put node 3 in the upper right corner of the square, so that edge 2 forms the top side of the square. Put the remaining node 4 in the lower right corner of the square. Edge 3 runs from node 4 to node 2, and thus forms the bottom side of the square. Edge 4 runs from node 3 to node 4, and thus forms the right side of the square.

Since the four edges form a loop (in the shape of a square) the graph is not a tree. Also, the rows are not linearly independent, since the first row minus the sum of the second and third rows equals the fourth row (here each row of $A$ is written as a column vector for readability):

$\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} - \left( \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix} \right) = \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} -1 \\ 1 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}$

If we remove the fourth row and the corresponding edge (i.e., the bottom side of the square) then the resulting three edges form a spanning tree, since they touch all four nodes and have no loops. The remaining three rows are linearly independent and form a basis for the row space $\mathcal R(A^T)$.

NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang. If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang's introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang's other books.

### 1 Response to Linear Algebra and Its Applications, Exercise 2.5.10

1. Filipe says:
Very good! Post these questions: 2.5.12, 2.5.16 and 2.6.4, 2.6.10, 2.6.16. I didn't get to do them.
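Postscript: the dependence claims above are easy to verify numerically. The following quick sanity check is my own addition (it is not part of Strang's exercise or the worked solution), assuming only that numpy is installed:

```python
import numpy as np

# the incidence matrix from the exercise
A = np.array([
    [-1, 1, 0, 0],
    [-1, 0, 1, 0],
    [0, 1, 0, -1],
    [0, 0, -1, 1],
])

print(np.linalg.matrix_rank(A))      # 3: the four rows are linearly dependent
print(A[0] - (A[1] + A[2]))          # [ 0  0 -1  1]: exactly the fourth row
print(np.linalg.matrix_rank(A[:3]))  # 3: the first three rows are independent
```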
https://rosenfelder.ai/keras-regression-efficient-net/
# Transfer Learning with EfficientNet for Image Regression in Keras - Using Custom Data in Keras

2020, Oct 19

There are hundreds of tutorials available online on how to use Keras for deep learning. But at least in my impression, 99% of them just use the MNIST dataset and some form of a small custom convolutional neural network or ResNet for classification. Personally, I dislike the general idea of always using the easiest dataset for machine learning and deep learning tutorials, since this leaves many important questions unanswered. Adapting these tutorials to a custom dataset for a regression problem can be a daunting and time-consuming task, with hours of Googling and reading old StackOverflow questions or the official Keras documentation. Through this tutorial, I want to show you how to use a custom dataset and use transfer learning to get great results with very little training time.

The following topics will be part of this tutorial:

• augment your images to improve prediction results
• plot augmentations
• adapt the state-of-the-art EfficientNet to a regression
• use the new Ranger optimizer from tensorflow_addons
• compare the EfficientNet results to a simpler custom convolutional neural network

For this, I have uploaded a custom image dataset of housing prices in New York with a corresponding DataFrame consisting of a handful of columns with additional information about the houses. The dataset consists of 10,900 images that I have already resized to 224x224 pixels. The full code of this tutorial can be found in the GitHub Repository.

## Preliminary Steps

This tutorial requires a few steps of preparation before we can begin coding. I wrote this code with Windows 10 in mind. If you use Linux or macOS, you might have to adapt a few lines regarding the terminal commands.

### Code overview

Since we will write quite a few functions and around 430 lines of Python code, I have prepared a small flowchart to get a first impression of how the code should be structured later. We will spend quite a bit of time on data preprocessing before implementing the EfficientNetB0 model's transfer learning. The visualization steps are optional but help understand the input data and the results in the end. If you are unsure about any stage in the tutorial, you can always look at the final code in the GitHub Repository.

### Real Estate Data

If you have read my previous tutorial on multi-input PyTorch models, you might be familiar with the dataset already. It's basically the same dataset, but with more observations. In total, we will be using 10,900 images this time.

| zpid | price | latitude | longitude | beds | baths | area |
| --- | --- | --- | --- | --- | --- | --- |
| 29777854 | 435000.0 | 40.826804 | -73.917024 | 3.0 | 2.0 | 1728.0 |
| 30742835 | 888000.0 | 40.603546 | -73.938332 | 3.0 | 3.0 | 1264.0 |
| 30742959 | 1160000.0 | 40.599407 | -73.959058 | 3.0 | 2.0 | 1564.0 |
| 5409160 | 257825.0 | 40.760407 | -73.796344 | 4.0 | 3.0 | 2100.0 |

As you can see, the dataset consists of images with a specific zpid and a price and a handful of other tabular features. We won't use the tabular features in this tutorial, except for the price. Each image is already at the target size of 224x224 pixels with 3 RGB color channels. If you are interested in how I prepared the tutorial data, you can take a look into preprocess_dataframe.py.

### Installation and Setup

Before we start the coding process, we need to create a new virtual environment. Adjust the following steps if you are using another package manager, like Anaconda. I used Python 3.8.2 for the tutorial, but other versions will likely work without any modifications.
I use TensorFlow 2.3.0 and Keras 2.4.3. More details on the library versions can be found in the requirements.txt. Enter the following lines into your command line:

```
python -m venv /path/to/new/virtual/env
cd /path/to/new/virtual/env/Scripts/
activate.bat
pip install -r /path/to/requirements.txt
```

Download the dataset and unzip it into your working directory. The data should now be found in ./data/. Since we are already in the terminal, we can also download the newest EfficientNetB0 weights with the Noisy_Student augmentations. To convert the weights for Keras transfer learning applications, we can use the official script from the Keras documentation. You can also find a copy in my repository.

```
wget https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/noisystudent/noisy_student_efficientnet-b0.tar.gz
tar -xf noisy_student_efficientnet-b0.tar.gz
python efficientnet_weight_update_util.py --model b0 --notop --ckpt noisy_student_efficientnet-b0/model.ckpt --o efficientnetb0_notop.h5
```

We can now import all libraries and functions that we will use for the rest of the tutorial.

```python
from typing import Iterator, List, Union, Tuple
from datetime import datetime

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
import tensorflow_addons as tfa  # provides the RectifiedAdam and Lookahead optimizers used for Ranger
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, models, Model
from tensorflow.python.keras.callbacks import TensorBoard, EarlyStopping, ModelCheckpoint
from tensorflow.keras.losses import MeanAbsoluteError, MeanAbsolutePercentageError
from tensorflow.keras.models import Sequential
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import History
```

## Data Preprocessing

Let's start with a few minor preprocessing steps. We load the Pandas DataFrame df.pkl through pd.read_pickle() and add a new column image_location with the location of our images. Each image has the zpid as a filename and a .png extension. If you just want to check that your code is actually working, you can set small_sample to True in the if __name__ == "__main__": part. This will select the first 1,000 observations and reduce the computation time quite a bit.

```python
def run(small_sample=False):
    """Run all the code of this file.

    Parameters
    ----------
    small_sample : bool, optional
        If you just want to check if the code is working, set small_sample to True, by default False
    """
    df = pd.read_pickle("./data/df.pkl")  # load the DataFrame; the path assumes the data was unzipped into ./data/
    df["image_location"] = (
        "./data/processed_images/" + df["zpid"] + ".png"
    )  # add the correct path for the image locations.
    if small_sample == True:
        df = df.iloc[0:1000]  # set small_sample to True if you want to check if your code works without long waiting


if __name__ == "__main__":
    run(small_sample=False)
```

### Splitting the data

Our data needs to be split into training, validation, and test datasets. Additionally, we want to compute a naive baseline, where we assume that our training mean is our prediction value. The basic idea behind this is that anyone could just take the training data's mean to predict new data and might already get good results without any machine learning knowledge. With this, we can later better understand how useful our actual CNN predictions are compared to the naive baseline.

The following two lines of code need to be added to our run function from before.

```python
def run():
    ...
    train, val, test = split_data(df)  # split your data
    mean_baseline = get_mean_baseline(train, val)
```

We can now add the split_data() function to split the data two times, once for a training set and validation set, and afterward to a test set. The resulting ratio is 70/20/10 for training/validation/test.

```python
def split_data(df: pd.DataFrame) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]:
    """Accepts a Pandas DataFrame and splits it into training, testing and validation data. Returns DataFrames.

    Parameters
    ----------
    df : pd.DataFrame
        Your whole dataset.

    Returns
    -------
    Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]
        The training, validation and test DataFrames.
    """
    train, val = train_test_split(df, test_size=0.2, random_state=1)  # split the data with a validation size of 20%
    train, test = train_test_split(
        train, test_size=0.125, random_state=1
    )  # split the data with an overall test size of 10%
    print("shape train: ", train.shape)  # type: ignore
    print("shape val: ", val.shape)  # type: ignore
    print("shape test: ", test.shape)  # type: ignore
    print("Descriptive statistics of train:")
    print(train.describe())  # type: ignore
    return train, val, test  # type: ignore
```

```
shape train:  (7630, 8)
shape val:  (2180, 8)
shape test:  (1090, 8)
```

We can also better understand our data by taking a look into our training DataFrame with df.describe(). The only column of interest for this tutorial is price, ranging from 247,250$ to 1,880,000$ with an average of 707,119$ and a standard deviation of 254,813$.

| | price | latitude | longitude | beds | baths | area |
| --- | --- | --- | --- | --- | --- | --- |
| count | 7.630000e+03 | 7630.000000 | 7630.000000 | 7630.000000 | 7630.00000 | 7630.000000 |
| mean | 7.071194e+05 | 40.652833 | -73.967080 | 3.529489 | 2.58884 | 1785.572215 |
| std | 2.548134e+05 | 0.087778 | 0.159395 | 0.802316 | 0.69349 | 625.659971 |
| min | 2.472500e+05 | 40.498819 | -74.253899 | 3.000000 | 1.25000 | 898.000000 |
| 25% | 5.350000e+05 | 40.590321 | -74.128069 | 3.000000 | 2.00000 | 1323.250000 |
| 50% | 6.400000e+05 | 40.629784 | -73.938199 | 3.000000 | 2.00000 | 1616.000000 |
| 75% | 8.350000e+05 | 40.713520 | -73.819999 | 4.000000 | 3.00000 | 2068.000000 |
| max | 1.880000e+06 | 40.911744 | -73.702905 | 6.000000 | 4.50000 | 4394.000000 |

To make interpretations of our results more straightforward, we will use the mean absolute percentage error (MAPE). The MAPE is defined as

$$MAPE = \frac{1}{n}\sum_{i=1}^{n} \left|\frac{y_i - \hat{y}_i}{y_i}\right| \cdot 100$$

Therefore, each loss from now on will be represented by a percentage of the error. If our actual value is 100$ and our model predicts 110$, we will get a 10% MAPE.

```python
def get_mean_baseline(train: pd.DataFrame, val: pd.DataFrame) -> float:
    """Calculates the mean MAE and MAPE baselines by taking the mean values of the training data as prediction for the
    validation target feature.

    Parameters
    ----------
    train : pd.DataFrame
        Pandas DataFrame containing your training data.
    val : pd.DataFrame
        Pandas DataFrame containing your validation data.

    Returns
    -------
    float
        MAPE value.
    """
    y_hat = train["price"].mean()
    val["y_hat"] = y_hat
    mae = MeanAbsoluteError()
    mae = mae(val["price"], val["y_hat"]).numpy()  # type: ignore
    mape = MeanAbsolutePercentageError()
    mape = mape(val["price"], val["y_hat"]).numpy()  # type: ignore

    print("mean baseline MAE: ", mae)
    print("mean baseline MAPE: ", mape)

    return mape
```

Results in:

```
mean baseline MAPE:  28.71662139892578
```

Our mean baseline MAPE is 28.72%. This is the naive benchmark that we try to beat in the next few sections of the tutorial.

### Create ImageDataGenerators

After finishing the preliminary steps, we can get to the interesting part of implementing a custom dataset into Keras. Let's start by adding the following line to our run() function:

```python
def run():
    ...
    train_generator, validation_generator, test_generator = create_generators(
        df=df, train=train, val=val, test=test, plot_augmentations=True
    )
```

We now need to write a function create_generators() that takes our input data and creates three Keras ImageDataGenerators, one for each split of the data. There are two steps involved in creating ImageDataGenerators: first, create an instance of the ImageDataGenerator class and then let the data flow into it, in our case, through a Pandas DataFrame.

In the first step, we add a few standard data augmentations to our training generator. Augmentations help our CNN training a lot if we have only a small dataset. They generate new observations of the same image with a few minor edits, which a human could clearly identify as the same image. In this case, they are relatively conservative, since it would not really make sense to vertically flip a picture of a house. We don't add any augmentations to our validation and test data, as we would expect new unseen data to also be in a non-augmented format.

To feed data into the generator, we use flow_from_dataframe() for each generator separately. We specify which DataFrame we want to use, which column contains our image data (x_col), and what our desired image size and batch size should be. Later on, we will use EfficientNetB0, which expects an input size of 224x224. If you use any other EfficientNet architecture, you need to change the input image size accordingly. Please either decrease or increase the batch_size according to your GPU. For regressions, we use the class_mode of raw. We visualize the augmentations with another function later to get an impression of how they change our input data.

```python
def create_generators(
    df: pd.DataFrame, train: pd.DataFrame, val: pd.DataFrame, test: pd.DataFrame, plot_augmentations: bool
) -> Tuple[Iterator, Iterator, Iterator]:
    """Accepts four Pandas DataFrames: all your data, the training, validation and test DataFrames. Creates and returns
    keras ImageDataGenerators. Within this function you can also visualize the augmentations of the ImageDataGenerators.

    Parameters
    ----------
    df : pd.DataFrame
        Your whole dataset.
    train : pd.DataFrame
        Your training data.
    val : pd.DataFrame
        Your validation data.
    test : pd.DataFrame
        Your test data.
    plot_augmentations : bool
        If True, plot the image augmentations with the visualize_augmentations() helper below.

    Returns
    -------
    Tuple[Iterator, Iterator, Iterator]
        keras ImageDataGenerators used for training, validating and testing of your models.
""" train_generator = ImageDataGenerator( rescale=1.0 / 255, rotation_range=5, width_shift_range=0.1, height_shift_range=0.1, brightness_range=(0.75, 1), shear_range=0.1, zoom_range=[0.75, 1], horizontal_flip=True, validation_split=0.2, ) # create an ImageDataGenerator with multiple image augmentations validation_generator = ImageDataGenerator( rescale=1.0 / 255 ) # except for rescaling, no augmentations are needed for validation and testing generators test_generator = ImageDataGenerator(rescale=1.0 / 255) # visualize image augmentations if visualize_augmentations == True: visualize_augmentations(train_generator, df) train_generator = train_generator.flow_from_dataframe( dataframe=train, x_col="image_location", # this is where your image data is stored y_col="price", # this is your target feature class_mode="raw", # use "raw" for regressions target_size=(224, 224), batch_size=128, # increase or decrease to fit your GPU ) validation_generator = validation_generator.flow_from_dataframe( dataframe=val, x_col="image_location", y_col="price", class_mode="raw", target_size=(224, 224), batch_size=128, ) test_generator = test_generator.flow_from_dataframe( dataframe=test, x_col="image_location", y_col="price", class_mode="raw", target_size=(224, 224), batch_size=128, ) return train_generator, validation_generator, test_generator #### Visualize Keras Data Augmentations We should look into our data augmentations to make sure that they make sense in a real-world application. Therefore we need to write the visualize_augmentations() function that we used in our create_generators() function above. I pretty much hacked this together to only sample the same image 9 times out of the custom generator by giving the flow_from_dataframe() function a small DataFrame with only two identical observations. There is probably a better way to do this, but it does the job well enough. We create a 3x3 grid of matplotlib plots and sample one image each time from our small generator. Each will have a few augmentations added randomly. def visualize_augmentations(data_generator: ImageDataGenerator, df: pd.DataFrame): """Visualizes the keras augmentations with matplotlib in 3x3 grid. This function is part of create_generators() and can be accessed from there. Parameters ---------- data_generator : Iterator The keras data generator of your training data. df : pd.DataFrame The Pandas DataFrame containing your training data. """ # super hacky way of creating a small dataframe with one image series = df.iloc[2] df_augmentation_visualization = pd.concat([series, series], axis=1).transpose() iterator_visualizations = data_generator.flow_from_dataframe( # type: ignore dataframe=df_augmentation_visualization, x_col="image_location", y_col="price", class_mode="raw", target_size=(224, 224), # size of the image batch_size=1, # use only one image for visualization ) for i in range(9): ax = plt.subplot(3, 3, i + 1) # create a 3x3 grid batch = next(iterator_visualizations) # get the next image of the generator (always the same image) img = batch[0] # type: ignore img = img[0, :, :, :] # remove one dimension for plotting without issues plt.imshow(img) plt.show() plt.close() The resulting plot shows us the same building, sometimes a bit rotated, mirrored, or zoomed in with minor brightness adjustments. Nevertheless, each image is clearly recognizable as the same house and should improve our CNN models, which was our goal in adding the augmentations. 
## Creating the Convolutional Neural Networks

In this tutorial, we want to compare a pre-trained EfficientNet with a simple custom CNN. To avoid writing too much duplicate code, we first write a general fitting function, where we can use any CNN we'd like. This makes sure that we have the same overall setup for our model comparisons later. We also need to write a few callbacks that we add to our models. After that, each model gets its own function with a few custom lines of code.

### Fitting a Keras Image CNN

We start with the general fitting function run_model(). The function gets a model name as a string, a model function, which we will write soon, a learning rate, and all of our data.

```python
def run_model(
    model_name: str,
    model_function: Model,
    lr: float,
    train_generator: Iterator,
    validation_generator: Iterator,
    test_generator: Iterator,
) -> History:
    """This function runs a keras model with the Ranger optimizer and multiple callbacks. The model is evaluated within
    training through the validation generator and afterwards one final time on the test generator.

    Parameters
    ----------
    model_name : str
        The name of the model as a string.
    model_function : Model
        Keras model function like small_cnn() or adapt_efficient_net().
    lr : float
        Learning rate.
    train_generator : Iterator
        keras ImageDataGenerator for the training data.
    validation_generator : Iterator
        keras ImageDataGenerator for the validation data.
    test_generator : Iterator
        keras ImageDataGenerator for the test data.

    Returns
    -------
    History
        The history of the keras model as a History object. To access it as a Dict, use history.history. For an example
        see plot_results().
    """

    callbacks = get_callbacks(model_name)
    model = model_function
    model.summary()
    plot_model(model, to_file=model_name + ".jpg", show_shapes=True)

    radam = tfa.optimizers.RectifiedAdam(learning_rate=lr)
    optimizer = tfa.optimizers.Lookahead(radam)  # Ranger: Rectified Adam wrapped in Lookahead
    model.compile(
        optimizer=optimizer, loss="mean_absolute_error", metrics=[MeanAbsoluteError(), MeanAbsolutePercentageError()]
    )
    history = model.fit(
        train_generator,
        epochs=100,
        validation_data=validation_generator,
        callbacks=callbacks,
        workers=6,  # adjust this according to the number of CPU cores of your machine
    )

    model.evaluate(
        test_generator,
        callbacks=callbacks,
    )
    return history  # type: ignore
```

The first step in our fitting method is to get the callbacks. The get_callbacks() function is explained below. We then get our model through the custom model_function, print a summary, and plot the model to a file. As the optimizer, I chose Ranger, which combines Rectified Adam with Lookahead, as this is pretty much state-of-the-art in CNN optimizers and should generate results as accurate as SGD but as fast as Adam. You can read more about them in the according papers (Lookahead, Rectified Adam, Ranger). We compile our model, use the mean absolute error (MAE) as the loss function, and print the MAPE for each epoch in our metrics. Our model can now be trained with fit(), where we specify the training generator, validation generator, callbacks, and workers. To see how our model performs on unseen data, we evaluate the test_generator.

### Callbacks for Logging, Early Stopping, and Saving

Since our training might take a while, we might want to monitor the training steps with TensorBoard. For this, we create a new directory for each model with the current time and date and then start TensorBoard. To open TensorBoard, you need to enter tensorboard --logdir logs/scalars in your terminal/command line and then open the standard page in your browser, most likely http://localhost:6006/.
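As a small aside (my addition, not part of the original tutorial): if you work inside a Jupyter notebook rather than a plain terminal, TensorBoard can also be started inline with the built-in notebook extension, pointing at the same log directory:

```python
# run these in a Jupyter notebook cell
%load_ext tensorboard
%tensorboard --logdir logs/scalars
```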
Early Stopping will help us decrease overall training time by stopping the training after the model does not improve for a specified amount of epochs (patience) with a minimum improvement of min_delta. In our case, we want our model to improve by at least 1%. If it can't achieve this for 10 epochs straight, the training will end automatically. We also want to return the best epoch; therefore, we set restore_best_weights=True.

```python
def get_callbacks(model_name: str) -> List[Union[TensorBoard, EarlyStopping, ModelCheckpoint]]:
    """Accepts the model name as a string and returns multiple callbacks for training the keras model.

    Parameters
    ----------
    model_name : str
        The name of the model as a string.

    Returns
    -------
    List[Union[TensorBoard, EarlyStopping, ModelCheckpoint]]
        A list of multiple keras callbacks.
    """
    logdir = (
        "logs/scalars/" + model_name + "_" + datetime.now().strftime("%Y%m%d-%H%M%S")
    )  # create a folder for each model.
    tensorboard_callback = TensorBoard(log_dir=logdir)
    # use tensorboard --logdir logs/scalars in your command line to startup tensorboard with the correct logs

    early_stopping_callback = EarlyStopping(
        monitor="val_mean_absolute_percentage_error",
        min_delta=1,  # model should improve by at least 1%
        patience=10,  # amount of epochs with improvements worse than 1% until the model stops
        verbose=2,
        mode="min",
        restore_best_weights=True,  # restore the best model with the lowest validation error
    )

    model_checkpoint_callback = ModelCheckpoint(
        "./data/models/" + model_name,
        monitor="val_mean_absolute_percentage_error",
        verbose=0,
        save_best_only=True,  # save the best model
        mode="min",
        save_freq="epoch",  # save every epoch
    )  # saving eff_net takes quite a bit of time
    return [tensorboard_callback, early_stopping_callback, model_checkpoint_callback]
```

We also want to save our model after each epoch. For this we use ModelCheckpoint(). For EfficientNetB0 this takes quite a while, so you might want to disable saving by removing model_checkpoint_callback from the returned values list.

### Create a small custom CNN

A small custom CNN will help us understand how well transfer learning with EfficientNet actually performs through direct comparison. We add the following code to our run() function:

```python
def run():
    ...
    small_cnn_history = run_model(
        model_name="small_cnn",
        model_function=small_cnn(),
        lr=0.001,
        train_generator=train_generator,
        validation_generator=validation_generator,
        test_generator=test_generator,
    )
```

And create the small_cnn() function with a few convolutional layers, max pooling, and two linear layers in the end. Our input_shape corresponds to our image data size of 224x224 pixels with 3 RGB dimensions for color.

```python
def small_cnn() -> Sequential:
    """A very small custom convolutional neural network with image input dimensions of 224x224x3.

    Returns
    -------
    Sequential
        The keras Sequential model.
    """
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)))
    # the layers below follow the description above ("a few convolutional layers, max pooling, and two
    # linear layers in the end"); the exact filter counts are the standard Keras intro-CNN choices
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation="relu"))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(1))  # single linear output for the regression

    return model
```

This small custom model looks like the image above and is still relatively easy to understand. The model should run each epoch a bit faster than the larger EfficientNetB0 model we implement in the next step.

### Adapt EfficientNetB0 to our Custom Regression Problem

We can now continue and adapt EfficientNetB0 to our data. We add the following lines to our run() function:

```python
def run():
    ...
    eff_net_history = run_model(
        model_name="eff_net",
        model_function=adapt_efficient_net(),
        lr=0.5,
        train_generator=train_generator,
        validation_generator=validation_generator,
        test_generator=test_generator,
    )
```

We can now create the adapt_efficient_net() function. This is very similar to the official Keras tutorial but uses the updated NoisyStudent weights we prepared at the beginning of this tutorial. We also change the final layer of the model to a regression rather than a classification. To achieve this, we use a Dense layer with 1 output and no activation function.

```python
def adapt_efficient_net() -> Model:
    """This function adapts the most up-to-date version of EfficientNet with NoisyStudent weights to a regression
    problem. Most of this code is adapted from the official keras documentation.

    Returns
    -------
    Model
        The keras model.
    """
    inputs = layers.Input(
        shape=(224, 224, 3)
    )  # input shapes of the images should always be 224x224x3 with EfficientNetB0
    model = EfficientNetB0(include_top=False, input_tensor=inputs, weights="efficientnetb0_notop.h5")

    # Freeze the pretrained weights
    model.trainable = False

    # Rebuild top
    x = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
    x = layers.BatchNormalization()(x)

    top_dropout_rate = 0.4
    x = layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = layers.Dense(1, name="pred")(x)

    # Compile
    model = keras.Model(inputs, outputs, name="EfficientNet")

    return model
```

These few lines suffice to implement transfer learning for EfficientNet with Keras. On my personal laptop with a GeForce RTX 2070 mobile, each epoch takes around 1 minute to train. EfficientNetB0 is quite large, the actual model looks like this.

## Results

Let's plot our training results so that we can compare the accuracy of our predictions for each model for the training and validation data:

```python
def run():
    ...
    plot_results(small_cnn_history, eff_net_history, mean_baseline)
```

For the plots, we use each model's history data and create a small sns.relplot() with a few custom labels.

```python
def plot_results(model_history_small_cnn: History, model_history_eff_net: History, mean_baseline: float):
    """This function uses seaborn with matplotlib to plot the training and validation losses of both input models in an
    sns.relplot(). The mean baseline is plotted as a horizontal red dotted line.

    Parameters
    ----------
    model_history_small_cnn : History
        keras History object of the model.fit() method.
    model_history_eff_net : History
        keras History object of the model.fit() method.
    mean_baseline : float
        Result of the get_mean_baseline() function.
""" # create a dictionary for each model history and loss type dict1 = { "MAPE": model_history_small_cnn.history["mean_absolute_percentage_error"], "type": "training", "model": "small_cnn", } dict2 = { "MAPE": model_history_small_cnn.history["val_mean_absolute_percentage_error"], "type": "validation", "model": "small_cnn", } dict3 = { "MAPE": model_history_eff_net.history["mean_absolute_percentage_error"], "type": "training", "model": "eff_net", } dict4 = { "MAPE": model_history_eff_net.history["val_mean_absolute_percentage_error"], "type": "validation", "model": "eff_net", } # convert the dicts to pd.Series and concat them to a pd.DataFrame in the long format s1 = pd.DataFrame(dict1) s2 = pd.DataFrame(dict2) s3 = pd.DataFrame(dict3) s4 = pd.DataFrame(dict4) df = pd.concat([s1, s2, s3, s4], axis=0).reset_index() grid = sns.relplot(data=df, x=df["index"], y="MAPE", hue="model", col="type", kind="line", legend=False) grid.set(ylim=(20, 100)) # set the y-axis limit for ax in grid.axes.flat: ax.axhline( y=mean_baseline, color="lightcoral", linestyle="dashed" ) # add a mean baseline horizontal bar to each plot ax.set(xlabel="Epoch") labels = ["small_cnn", "eff_net", "mean_baseline"] # custom labels for the plot plt.legend(labels=labels) plt.savefig("training_validation.png") plt.show() The results are shown above. The red dotted line represents our mean baseline, the blue line our small custom CNN and the orange line our adapted EfficientNetB0. Quite interestingly, EfficientNetB0 reaches it’s lowest validation error already in the 4th epoch, while our custom model needs 18 Epochs to get to it’s minimum. On my machine, the custom CNN required 17m30s to reach it’s lowest value, while EfficientNet needed only 3m40s to reach an even lower error. In my run of both models, EfficientNetB0 reached an error of 23.9706%, the custom model an error of 27.8397%, which is barely below our baseline of 28.7166%. This shows us that transfer learning can help decrease training time while increasing prediction accuracy on custom data for regressions! As always, you can find the complete code of this tutorial in the according to GitHub Repository. If this tutorial was helpful for your research, you can cite it with @misc{rosenfelderaikeras2020, author = {Rosenfelder, Markus}, title = {Transfer Learning with EfficientNet for Image Regression in Keras - Using Custom Data in Keras}, year = {2020}, publisher = {rosenfelder.ai}, journal = {rosenfelder.ai}, howpublished = {\url{https://rosenfelder.ai/keras-regression-efficient-net/}}, }
2021-03-07 00:26:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19199267029762268, "perplexity": 4299.157513542695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00123.warc.gz"}
https://www.ademcetinkaya.com/2022/09/buy-sell-or-hold-lonchg-stock-forecast.html
Accurate prediction of stock market returns is a very challenging task due to the volatile and non-linear nature of the financial stock markets. With the introduction of artificial intelligence and increased computational capabilities, programmed methods of prediction have proved to be more efficient in predicting stock prices. We evaluate CHEMRING GROUP PLC prediction models with Modular Neural Network (News Feed Sentiment Analysis) and Beta1,2,3,4 and conclude that the LON:CHG stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period: the dominant strategy among the neural networks is to Hold LON:CHG stock.

Keywords: LON:CHG, CHEMRING GROUP PLC, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures.

## Key Points

1. Stock Forecast Based On a Predictive Algorithm
2. What are the main components of a Markov decision process?
3. How do you know when a stock will go up or down?

## LON:CHG Target Price Prediction Modeling Methodology

The search for models to predict the prices of financial markets is still a highly researched topic, despite major related challenges. The prices of financial assets are non-linear, dynamic, and chaotic; thus, they are financial time series that are difficult to predict. Among the latest techniques, machine learning models are some of the most researched, given their capabilities for recognizing complex patterns in various applications.

We consider the CHEMRING GROUP PLC Stock Decision Process with Beta, where S is the set of discrete states, A is the set of discrete actions of LON:CHG stock holders, $P : S \times A \times S \to \mathbb{R}$ is the transition probability distribution, $R : S \times A \to \mathbb{R}$ is the reaction function, and $\gamma \in [0, 1]$ is a move factor for expectation.1,2,3,4

F(Beta)5,6,7 = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ \vdots & \vdots & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & \vdots & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times$ R(Modular Neural Network (News Feed Sentiment Analysis)) $\times$ S(n) $\to$ (n+6 month) $\int e^{x}\,dx$

n: time series to forecast
p: price signals of LON:CHG stock
j: Nash equilibria
k: dominated move
a: best response for target price

For further technical information as per how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?

## LON:CHG Stock Forecast (Buy or Sell) for (n+6 month)

Sample Set: Neural Network
Stock/Index: LON:CHG CHEMRING GROUP PLC
Time series to forecast n: 17 Sep 2022 for (n+6 month)

According to price forecasts for the (n+6 month) period: the dominant strategy among the neural networks is to Hold LON:CHG stock.

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Yellow to Green): *Technical Analysis%

## Conclusions

CHEMRING GROUP PLC is assigned a short-term B2 & long-term B2 forecasted stock rating. We evaluate the prediction models Modular Neural Network (News Feed Sentiment Analysis) with Beta1,2,3,4 and conclude that the LON:CHG stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period: the dominant strategy among the neural networks is to Hold LON:CHG stock.
### Financial State Forecast for LON:CHG Stock Options & Futures

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | B2 | B2 |
| Operational Risk | 68 | 33 |
| Market Risk | 42 | 57 |
| Technical Analysis | 56 | 76 |
| Fundamental Analysis | 49 | 64 |
| Unsystematic Risk | 56 | 39 |

### Prediction Confidence Score

Trust metric by Neural Network: 82 out of 100 with 562 signals.

## References

1. Chamberlain G. 2000. Econometrics and decision theory. J. Econom. 95:255–83
2. Barrett, C. B. (1997), "Heteroscedastic price forecasting for food security management in developing countries," Oxford Development Studies, 25, 225–236.
3. Friedman JH. 2002. Stochastic gradient boosting. Comput. Stat. Data Anal. 38:367–78
4. T. Morimura, M. Sugiyama, M. Kashima, H. Hachiya, and T. Tanaka. Nonparametric return distribution approximation for reinforcement learning. In Proceedings of the 27th International Conference on Machine Learning, pages 799–806, 2010
5. Dudik M, Erhan D, Langford J, Li L. 2014. Doubly robust policy evaluation and optimization. Stat. Sci. 29:485–511
6. A. Tamar, Y. Glassner, and S. Mannor. Policy gradients beyond expectations: Conditional value-at-risk. In AAAI, 2015
7. Bastani H, Bayati M. 2015. Online decision-making with high-dimensional covariates. Work. Pap., Univ. Penn./Stanford Grad. School Bus., Philadelphia/Stanford, CA

Frequently Asked Questions

Q: What is the prediction methodology for LON:CHG stock?
A: LON:CHG stock prediction methodology: We evaluate the prediction models Modular Neural Network (News Feed Sentiment Analysis) and Beta.

Q: Is LON:CHG stock a buy or sell?
A: The dominant strategy among the neural networks is to Hold LON:CHG stock.

Q: Is CHEMRING GROUP PLC stock a good investment?
A: The consensus rating for CHEMRING GROUP PLC is Hold, and it is assigned a short-term B2 & long-term B2 forecasted stock rating.

Q: What is the consensus rating of LON:CHG stock?
A: The consensus rating for LON:CHG is Hold.

Q: What is the prediction period for LON:CHG stock?
A: The prediction period for LON:CHG is (n+6 month).
2022-09-30 09:11:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6956838369369507, "perplexity": 12868.617021231677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00795.warc.gz"}
https://plainmath.net/82924/how-do-you-find-the-dimensions-of-the-re
# How do you find the dimensions of the rectangle of largest area that can be inscribed in an equilateral triangle of side L if one side of the rectangle lies on the base of the triangle?

How do you find the dimensions of the rectangle of largest area that can be inscribed in an equilateral triangle of side L if one side of the rectangle lies on the base of the triangle?

Makenna Lin

Let the upper side $y$ of the rectangle be the segment of a line parallel to the base of the equilateral triangle at an unknown distance $x$ from it. In this way the triangle is divided into two triangles: the original equilateral one, with height $h = L\frac{\sqrt{3}}{2}$, and a smaller one on top, with height $h_1 = L\frac{\sqrt{3}}{2} - x$. These two triangles are similar, so we can write the proportion
$\frac{L}{y} = \frac{L\frac{\sqrt{3}}{2}}{L\frac{\sqrt{3}}{2} - x}.$
By isolating $y$ we obtain $y = L - \frac{2}{\sqrt{3}}x$.
The rectangle area is $S(x,y) = x \cdot y$, so
$S(x) = x\left(L - \frac{2}{\sqrt{3}}x\right) = Lx - \frac{2}{\sqrt{3}}x^2.$
Differentiating $S(x)$ we get $S'(x) = L - \frac{4}{\sqrt{3}}x$, whose root is $x = L\frac{\sqrt{3}}{4}$, and consequently $y = L - \frac{2}{\sqrt{3}} \cdot \frac{\sqrt{3}}{4}L = \frac{L}{2}$.
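For a quick sanity check, here is a small symbolic sketch (my addition, using Python's sympy; not part of the original answer) that reproduces the optimum:

import sympy as sp

x, L = sp.symbols("x L", positive=True)
y = L - 2 * x / sp.sqrt(3)            # upper side of the rectangle
S = x * y                             # rectangle area
x_opt = sp.solve(sp.diff(S, x), x)[0]

print(x_opt)                          # sqrt(3)*L/4
print(sp.simplify(y.subs(x, x_opt)))  # L/2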
2022-08-09 13:09:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7948799133300781, "perplexity": 252.8412924164121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570977.50/warc/CC-MAIN-20220809124724-20220809154724-00270.warc.gz"}
http://math.stackexchange.com/questions/131198/subset-of-mathbbq
# Subset of $\mathbb{Q}$

Let $S= \{x_0,\dots,x_n\}$ be a finite subset of $[0,1]$, with $x_0=0$ and $x_1=1$, such that every distance between pairs of elements of $S$ occurs at least twice, except for the distance $1$. We are to show that $S$ is a subset of $\mathbb{Q}$.

- Source: imomath.com/othercomp/Irn/IrnMO298.pdf problem $3$. – user9413 Apr 13 '12 at 8:12

Let $V$ be the $\mathbb{Q}$-vector space generated by $S$ and let $\leq$ be any total order on $V$ compatible with addition (for all $x,y,z \in V$, if $x \leq y$ then $x+z \leq y+z$; more generally, any reasoning you are used to involving $+$ and $\leq$ is valid). Then, for this particular total order, there is a unique pair $(x_i,x_j) \in S^2$ such that $x_i - x_j$ is the greatest distance between pairs of elements of $S$:

Since $S$ is finite and $\leq$ is total, the set $\{x-y, (x,y)\in S^2\}$ is finite, so it has a maximal element $x_i - x_j$ for some pair $(x_i,x_j) \in S^2$. Then we show it is unique: suppose there is $(x_k,x_l) \in S^2$ such that $(x_i - x_j) = (x_k - x_l)$. Then $(x_i - x_j) + (x_i - x_j) = (x_k - x_l) + (x_i - x_j) = (x_k - x_j) + (x_i - x_l)$. Since $(x_i - x_j)$ is maximal, $(x_k - x_j) \leq (x_i - x_j)$ and $(x_i - x_l) \leq (x_i - x_j)$. If either of those two inequalities were strict, we would get the contradiction $(x_i - x_j) + (x_i - x_j) < (x_i - x_j) + (x_i - x_j)$, so they have to be equalities, which implies that $(x_k,x_l) = (x_i,x_j)$.

Now, the hypothesis on $S$ says that the only pairs $(x_i,x_j)$ such that $x_i-x_j$ is unique are the pairs $(0,1)$ and $(1,0)$. So it implies that for any total order on $V$ compatible with addition, this greatest distance is always $1$ or $-1$. So we only need to show that if $V \neq \mathbb{Q}$, there must exist total orders $\leq$ on $V$ such that the greatest distance for $\leq$ is not $1$ or $-1$:

Since $S$ generates $V$, and $(x_1 = 1)$ is free, we can add elements of $S$ to $(x_1)$ to form a basis $(e_1, \ldots, e_m) = (x_{i_1}, \ldots, x_{i_{m-1}},x_1)$ of $V$.

• Pick the lexicographical order induced by this basis. It is characterized by the property that $0 < \sum a_i e_i$ if and only if the first nonzero coefficient is positive. In particular, $(x_{i_1} - x_0) = x_{i_1} > \pm x_1 = \pm (x_1 - x_0) = \pm 1$, so neither $1$ nor $-1$ can be the greatest distance for $\leq$.

• Define a linear application $f : V \to \mathbb{R}$ with $f(1) = 1$ and $f(x_{i_k}) = \pi^k$. Since $\pi$ is transcendental, this map $f$ is injective. Pick the order defined by $x \leq y \Leftrightarrow f(x) \leq_\mathbb{R} f(y)$, where $\leq_\mathbb{R}$ is the usual order on $\mathbb{R}$. But then, since $f(x_{i_1}) >_\mathbb{R} f(x_1)$, we have again $(x_{i_1} - x_0) = x_{i_1} > \pm x_1 = \pm (x_1 - x_0) = \pm 1$.

- First of all, in the problem it is given that the points are in $[0,1]$, so the greatest number $(x_i-x_j)$ is the unique number $1$; there is nothing to prove there. Secondly, I do not understand: in the lexicographic order, if a number is not a rational number, why will it not be in $[0,1]$? – La Belle Noiseuse Apr 15 '12 at 19:13
- Overall, I have not understood it properly. – La Belle Noiseuse Apr 15 '12 at 20:18
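As a concrete illustration (my addition, not from the thread), here is a small Python check of the hypothesis for a rational example, $S = \{0, 1/4, 1/2, 3/4, 1\}$: every pairwise distance except $1$ occurs at least twice, and $S$ is indeed a subset of $\mathbb{Q}$.

from fractions import Fraction
from collections import Counter
from itertools import combinations

S = [Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1)]
dist = Counter(abs(a - b) for a, b in combinations(S, 2))

print(dist)  # 1/4 occurs 4 times, 1/2 occurs 3 times, 3/4 occurs 2 times, 1 occurs once
assert all(count >= 2 for d, count in dist.items() if d != 1)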
2015-07-06 01:32:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579034447669983, "perplexity": 93.87707092355653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097757.36/warc/CC-MAIN-20150627031817-00090-ip-10-179-60-89.ec2.internal.warc.gz"}
https://planetmath.org/DouadyRabbit
The Douady rabbit is a Julia set (http://planetmath.org/SetDeJulia) produced by $c=-\frac{1}{8}+\frac{3}{4}i.$ As the Mandelbrot set indicates, the real part can be varied to as much as $-0.2$ while the imaginary part can be varied to as much as $0.75$ and still produce a connected set.

Title: Douady rabbit — Canonical name: DouadyRabbit — Type: Example — MSC: 28A80 — Owner: PrimeFan (13766)
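The set is easy to render numerically. Below is a minimal sketch (my addition, not part of the PlanetMath entry) that draws the Douady rabbit by iterating $z \mapsto z^2 + c$ with $c = -1/8 + 3i/4$ on a pixel grid; grid size, iteration cap, and escape radius are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt

c = -0.125 + 0.75j
n, max_iter, radius = 600, 200, 2.0  # grid size, iteration cap, escape radius

x = np.linspace(-1.6, 1.6, n)
y = np.linspace(-1.2, 1.2, n)
z = x[None, :] + 1j * y[:, None]
escape = np.full(z.shape, max_iter)

for i in range(max_iter):
    mask = np.abs(z) <= radius               # points that have not escaped yet
    z[mask] = z[mask] ** 2 + c               # iterate the quadratic map
    escape[mask & (np.abs(z) > radius)] = i  # record the first escape time

plt.imshow(escape, extent=(-1.6, 1.6, -1.2, 1.2), cmap="twilight")
plt.title("Julia set of $z^2 - 1/8 + 3i/4$ (Douady rabbit)")
plt.show()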
2018-12-16 05:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 3, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8650767207145691, "perplexity": 2832.5377282845616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00399.warc.gz"}
https://www.physicsforums.com/threads/bras-and-kets-and-tensors.241586/page-7
# Bras and Kets and Tensors

Hurkyl Staff Emeritus Gold Member

mrandersdk- If |00>, |01>, |10> and |11> (1=up, 0=down) are linearly independent vectors, then <01|01> = 0, rather than <01|01> = <0|0><1|1>, as you suggest.

How do you figure?

You may have an argument in that I implicitly assume that in $R\otimes R$ one is a row vector and the other is a column vector, so an nx1 vector times a 1xn vector is an nxn matrix, but I wouldn't even know how to express a transpose operation at higher ranks without people losing track of the otherwise very simple math.

Regards, Hans

Transposition is more of a notational device, than anything, to keep track of where the rows and columns are. Which elements combine with which elements between two tensors is unchanged by transposition. In higher ranks, you can use labels to keep track of rows, columns, depth, etc., and use a modified Einstein summation to multiply matrices.

$$Y = M^{T} \Rightarrow Y_{cr} = M_{rc}$$

$$(M_{abc\dots f} N_{c\,d\,e\dots z})_{(dp)} \equiv \sum_{d_i,\, p_i,\ i=1\dots n} (M_{abc\dots f} N_{c\,d\,e\dots z})\,, \qquad d \neq p$$

$$L_{abc_{m}e_{m}f_{m}c_{n}e_{n}f_{n}ghi\dots o,\,qrs\dots z} = (M_{abcef} N_{efg\dots z})$$

______________________________________________________________________
Any mistakes now, in the past, or ever, I blame on LaTex, whether I'm using it or not.

no, $$|01>^\dagger = <01|$$

mrandersdk, Hurkyl- I posted:

If |00>, |01>, |10> and |11> (1=up, 0=down) are linearly independent vectors, then <01|01> = 0, rather than <01|01> = <0|0><1|1>, as you suggest.

How do you figure?

I figure, I misread <01|01> as <01|10>. (I wouldn't mind if someone deleted my extra and partially edited post, #152.)
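To see the inner-product rule the thread is arguing about in concrete terms, here is a tiny numerical sketch (my addition, not from the thread) that builds |01> and |10> with the Kronecker product and checks that <01|01> = <0|0><1|1> = 1 while <01|10> = 0:

import numpy as np

ket0 = np.array([1.0, 0.0])  # |0>
ket1 = np.array([0.0, 1.0])  # |1>

ket01 = np.kron(ket0, ket1)  # |01> in the 4-dimensional product space
ket10 = np.kron(ket1, ket0)  # |10>

print(ket01 @ ket01)  # <01|01> = <0|0><1|1> = 1.0
print(ket01 @ ket10)  # <01|10> = <0|1><1|0> = 0.0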
2020-10-30 11:14:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8775689005851746, "perplexity": 2363.3453851527297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107910204.90/warc/CC-MAIN-20201030093118-20201030123118-00247.warc.gz"}
http://zbmath.org/?q=an:1181.34078
Controllability of fractional-order impulsive neutral functional infinite delay integrodifferential systems in Banach spaces. (English) Zbl 1181.34078

Summary: The controllability of fractional impulsive neutral functional integrodifferential systems in a Banach space has been addressed. Sufficient conditions for the controllability are established using fractional calculus, a semigroup of operators and Krasnoselskii's fixed point theorem.

##### MSC:

34K35 Functional-differential equations connected with control problems
34K45 Functional-differential equations with impulses
34K30 Functional-differential equations in abstract spaces
34K40 Neutral functional-differential equations
93B05 Controllability
47N20 Applications of operator theory to differential and integral equations
93C23 Systems governed by functional-differential equations
2014-04-20 23:34:57
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257405161857605, "perplexity": 12543.05308319811}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/blackbody-radiation.934418/
1. Dec 14, 2017

### Arup Biswas

When I study any book of Quantum Mechanics, like Resnick or Beiser etc., all start with blackbody radiation! But how is this radiation produced? Google says it is due to increased collisions of particles causing acceleration and EM waves, but what particles? How are they accelerated, and by what? Like if we heat an iron rod! At first it will become red. Is this due to the fact that electrons jump to the next state in their orbits and then come back, radiating energy? The more heat I give to the rod, the more the frequency shifts towards blue (white)! Can this be a reasonable explanation (i.e. Google/Quora say it is all trash), or am I wrong deep inside?

2. Dec 14, 2017

### Drakkith Staff Emeritus

The particles of the material. When heated, the atoms and molecules tend to oscillate or vibrate back and forth, which means that they are accelerated over and over, generating EM radiation. In conductors, many of the electrons are freed from their atoms by metallic bonding and can move about the material. These electrons are subject to collisions which accelerate them and produce more EM radiation. Electronic transitions (electrons moving up and down between energy levels) still take place, but the majority of the radiation produced is from all of this microscopic motion.

3. Dec 14, 2017

### Drakkith Staff Emeritus

Whoops, forgot to answer this in my previous post: Heating the rod increases the amplitude of the vibrations and the magnitude of the thermal motion of the atoms, molecules, and electrons. As a result, the interactions between the particles tend to produce larger accelerations and thus a higher average frequency of the emitted radiation.

4. Dec 14, 2017

### Arup Biswas

Let us put a lightweight iron ball in a place and heat it! Will it oscillate, producing an EM wave? Then why do atoms or molecules oscillate, and in such a beautiful fashion as to radiate a particular colour?

5. Dec 14, 2017

### Drakkith Staff Emeritus

Heating something adds energy to it. This energy takes the form of several things, one of which is the motion of its constituent atoms and molecules. If you're trying to find the fundamental reason for why this happens, it's because the way this energy is transferred to the material is either through collisions with other particles or EM radiation. Both of these accelerate the particles in the material, adding energy to it and heating it. Iron balls don't oscillate when heated because they are made from huge numbers of atoms whose individual motions average out to near-zero. Hence the ball doesn't jump about like popcorn when heated.

6. Dec 14, 2017

### .Scott

Atoms and electrons. Yes. At any given temperature, the wavelength components of blackbody radiation will follow a curve: intensity vs. wavelength. In 1900, the formula for these curves was proposed by Planck and confirmed experimentally. At that point, the question became what must be going on at the atomic level to create such a curve? The electron energy levels actually interfere with the smooth blackbody radiation curve - and offer more clues about what is going on at the atomic level.

7. Dec 14, 2017

### Delta²

Good question! The reason an iron ball, no matter how light, will not macroscopically oscillate when heated is this: when we offer heat, we increase the average kinetic energy of the molecules.
But each molecule oscillates in a "random" direction; loosely speaking, heat offers kinetic energy in a different random direction for each molecule. If there were a Demon (Maxwell's Demon, for example) that could make the heat we offer go into energy of oscillation in the same direction for all molecules, then we would see the ball oscillate macroscopically as well.

The colours we receive from a heated body are from electrons of atoms changing energy states; the random oscillation motion of the molecules is not responsible for this.

I see your point now: if all random oscillations cancel out on average, how is there EM radiation due to these random oscillations (shouldn't it be cancelled out as well)?

Last edited: Dec 14, 2017

8. Dec 14, 2017

### Arup Biswas

Firstly, I would like to go into more detail on how all the oscillations cancel out. I think oscillation occurs, but the radiation emitted by one of them must be cancelled out by another particle's radiation, if we take light to be a wave! Now, in answer to Delta: should I assume every radiation to be radiated uniformly in all directions? Then in every possible direction there comes a radiation in opposite phase to cancel it out! Here I would like to bring in the liquid's surface tension analogy! The oscillating particles on the surface of the material! Radiation from any one of them cancels out in every possible direction except outside the material, as there is no wave incoming! Thus we get that the radiation occurs from the surface of the material! I don't know what I made of it. Funny

9. Dec 14, 2017

### Delta²

I don't think it matters whether the radiation is uniformly radiated or not. Given 1 cubic mm of a macroscopic object, there is a huge number of molecules in it, so for every molecule oscillating in one direction there will probably be another molecule within a cube of 1 mm that oscillates in exactly the opposite direction, so the radiation will tend to cancel out, though we will not have perfect cancellation (unless both molecules were oscillating in the same exact place, which of course can't be the case).

And no, I don't think your reasoning for the radiation coming only from the thin surface layer of a body is correct. (OK, to be honest, I am not completely sure.)

@Drakkith in his post claims that the majority of blackbody radiation is from the microscopic movement (I suppose he means the oscillations) and not from electron state transitions... Drakkith, what have you got to say about what I said in the first paragraph of this post?

Last edited: Dec 14, 2017

10. Dec 14, 2017

### Khashishi

Not to nitpick, but metals like iron aren't made up of molecules, but of atoms. The outer electrons can move fairly freely between atoms, so it's sort of like a sea of electrons surrounding a lattice of positive ions. For high frequencies of random motion, like optical frequencies, the electrons act like a plasma. Anyways, you have collisions between electrons and ions which cause emission and absorption via bremsstrahlung or some other processes. Typically, the Planck spectrum is derived for a photon gas in a box, so you might wonder why a hunk of metal would have the same spectrum. It doesn't, exactly. There is an emissivity factor in there, which depends on frequency. But the emission at any frequency has to scale with temperature in the same way as the Planck spectrum due to the principle of detailed balance.

11. Dec 14, 2017

### Drakkith Staff Emeritus

Not much really. I don't know enough to say whether it's accurate or not.

12.
Dec 14, 2017

The mechanisms responsible for a high emissivity in a material seem to be quite subtle. Some materials can be transparent and have low emissivity, and others can be reflective, thereby also having low emissivity. Other materials such as some paints can be made to be reflective at some wavelengths and absorbent at others, versus black paint that has a high emissivity throughout the visible region of the spectrum. This can be caused by the molecular and electronic properties of the materials, such as the dye in the paint. Meanwhile, some materials are found to also have changes in their emissivity with temperature. If a material has a high emissivity, sometimes all that is necessary to achieve that high emissivity is a very thin film of that material. Meanwhile, a roughened surface of a reflective material will be less reflective than a highly polished surface, and thereby will have higher emissivity.

Additional comment: Metals, which in some ways can be modeled as plasmas with an atomic lattice, often are found to be highly reflective (reflectivity close to 1.0) throughout much of the spectrum and thereby exhibit low emissivity. (For metals, a large complex term (an absorbent part) in the index of refraction makes them good reflectors. This follows from the formula for reflectivity (at normal incidence) $R=\frac{|\tilde{n}-1|^2}{|\tilde{n}+1|^2}$.) Meanwhile, crystals, with well ordered atomic lattices, are often transparent, and thereby also have low emissivity.

One way of artificially achieving high emissivity is to put a small aperture in an enclosure. Light that enters the aperture, regardless of the material on the inside walls (assumed to be opaque and partially reflective), will undergo multiple reflections and very little will reflect back out. By Kirchhoff's law, the emissivity of the aperture is necessarily very close to 1.0. Thereby, this aperture can be considered to be a nearly ideal blackbody, and if the material (i.e. the inside walls) is heated, the aperture behaves as an ideal blackbody.

Last edited: Dec 14, 2017

13. Dec 15, 2017

### Delta²

You are dead sure that the blackbody radiation comes from the thermal motion of the molecules/atoms and not from electrons switching state within molecules/atoms?

14. Dec 15, 2017

I think it is clearly a combination of both. The electron contributions to some processes can at times be separated from the ion contributions, but here I think both species make contributions to the result.

15. Dec 15, 2017

### Drakkith Staff Emeritus

Yes, I'm quite certain that the majority of the radiation comes from thermal motions and not from electronic transitions. You can read plenty of this on the wiki page. The colors from something like a sodium flame are sometimes from electronic transitions, yes, but in general the color of a hot object like a bar of iron or a star is not heavily influenced by the electrons moving between energy states.

16. Dec 15, 2017

### Cthugha

You are confusing the field and the intensity. As the intensity is proportional to the squared modulus of the field, the average field will be zero, as it can also take negative values, but the average intensity will be larger than zero because at any instant it can only have positive values or a value of 0. In fact you can easily simulate this yourself. Take 100 harmonic oscillators with the same amplitude and random phase, add their vectorial amplitudes and get the instantaneous intensity.
Then simulate a time series, where each oscillator may receive a small phase kick with a certain probability within each time step, and monitor the intensity over time. If the phases are not fluctuating too strongly, the photon number distribution you will get is the Bose-Einstein distribution for blackbody radiation.

17. Dec 15, 2017

### Arup Biswas

Eisberg and Resnick also say that most of the radiation occurs due to the accelerated electrons! They also have a proof of it!

18. Dec 15, 2017

The question still remains: what exactly is the arrangement of the molecules, such as the dye in a paint, that allows thermal effects to set these electrons in motion (i.e. accelerate) and radiate? The arrangement of the ions, and perhaps the thermal motion of the ions, must play an important role, or all substances would show similar emissivity.

19. Dec 15, 2017

### Arup Biswas

I have not checked the detailed proof of what they have done! Probably it would help to know what their arrangement is and how the acceleration occurs! #charles!

20. Dec 19, 2017

### Khashishi

I think the physics is quite different for metals (which tend to be reflective over a large range) and organic dyes (which have strong absorption peaks). Crystals tend to be transparent. Electrons can be accelerated in metals, but in dielectric crystals, they are more like oscillators.
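Cthugha's recipe in post #16 is easy to try. Here is a minimal sketch (my addition, not from the thread) that sums 100 unit-amplitude phasors with random phases and looks at the resulting intensity statistics:

import numpy as np

rng = np.random.default_rng(0)
n_osc, n_samples = 100, 20000

phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_osc))
field = np.exp(1j * phases).sum(axis=1)  # total field of 100 oscillators
intensity = np.abs(field) ** 2           # instantaneous intensity

# For many random-phase oscillators the intensity follows an exponential
# (thermal / Bose-Einstein) distribution, so its standard deviation is
# close to its mean:
print(intensity.mean(), intensity.std())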
2018-04-19 23:33:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.674652636051178, "perplexity": 717.0597943505312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937074.8/warc/CC-MAIN-20180419223925-20180420003925-00037.warc.gz"}
https://zbmath.org/?q=1034.30005
## Sharp estimates of the curvature of some free boundaries in two dimensions. (English) Zbl 1034.30005

Let $\mu$ be a positive measure on the interval $(-1,1) \subset \mathbb R$ which satisfies $\int_{-1}^1 d\mu > 0 \quad \text{and} \quad \int_{-1}^1 \frac{d\mu(t)}{1-t^2} < \infty\,.$ The Cauchy transform of $\mu$ is defined by $f(w) = \int_{-1}^1 \frac{d\mu(t)}{t-w}\,.$ The main result states that $f$ is a univalent function in $\mathbf D^e = \{\, z \in \mathbb C : |z| > 1 \,\} \cup \{\infty\}$, and that $f$ maps $\mathbf D^e$ onto a bounded domain $\Omega$ which can be described as a union of discs centered on the real axis. Moreover, the authors apply their main result to the obstacle problem, partial balayage, quadrature domains and Hele-Shaw flow moving boundary problems, and they obtain sharp estimates of the curvature of free boundaries appearing in such problems.

### MSC:

30C20 Conformal mappings of special domains
31A99 Two-dimensional potential theory
35R35 Free boundary problems for PDEs
26A51 Convexity of real functions in one variable, generalizations
76B07 Free-surface potential flows for incompressible inviscid fluids
76D27 Other free boundary flows; Hele-Shaw flows
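As a quick numerical illustration (my addition, not part of the review), the Cauchy transform can be evaluated directly; for the uniform measure $d\mu = dt$ it has the closed form $f(w) = \log\frac{w-1}{w+1}$ for $|w| > 1$, which a simple trapezoid quadrature reproduces:

import numpy as np

t = np.linspace(-1, 1, 4001)  # quadrature nodes on (-1, 1)
w = 1.5 * np.exp(1j * np.linspace(0, 2 * np.pi, 12, endpoint=False))

dt = t[1] - t[0]
vals = 1.0 / (t[:, None] - w[None, :])                  # integrand on the grid
f = dt * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

print(np.max(np.abs(f - np.log((w - 1) / (w + 1)))))    # ~ quadrature error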
2022-07-06 20:28:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7605430483818054, "perplexity": 470.40010734667436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104676086.90/warc/CC-MAIN-20220706182237-20220706212237-00444.warc.gz"}
http://funlangs.blogspot.com/
## Saturday, February 16, 2013

### A Functional Language

In this post, I will start describing a hypothetical functional programming language that I've been thinking of. I hope to make this into a series of posts, each describing a different aspect of this language. Today, I will explain the type system of this language. I'll start by writing about types and type systems, and then about types in my language, including basic types, and how to combine them to create more complex types. Later, I'll describe type classes and class instances.

Types

Types are one of the most important parts of a statically typed language. They define constraints on the values a variable may hold, and allow you to reason about what data a value has. For example, if you wanted to state that, say, a variable n may only be an integer, that would be a type constraint. You would say n is of type integer. If you wanted to state that the variable index is always nonnegative, you could say that index is of type natural number, and that would provide a constraint on the possible values that index can have.

Type Systems

A type system is (how I think of it) a way to ensure that variables follow the type constraints imposed upon them, and, if possible, to ensure this at compile time (as opposed to runtime, in a dynamically typed language). A compile-time type system should be able to tell if you are, say, assigning an integer value to a variable declared to hold only booleans. That would be an obvious problem, and a type system should flag that. Another thing that type systems can do is infer type constraints, that is, figure out what types a variable can have, and then automatically declare the variable to have those constraints.

Types in the Language

In the hypothetical language I'm thinking of, there would be basically three kinds of types. These types can be constructed recursively (or not) to create more complex types. The following three sections will describe each kind.

Function Types

These are written a -> b and define a constraint on values so that a value of a function type can be applied to an argument (of type a) and produce a value of type b. An example of a function type could be string -> integer and would describe a function that can parse a string into the corresponding integer value.

Variant/Sum Types

These types define a value which can be exactly one of a certain number of other types. For example, if you would like to constrain a value to either an integer or a string, its type would be integer | string. A value of this type can be either an integer or a string, but not both at the same time (what would it even mean? Maybe in a quantum language it would make sense...) Variant types may be recursive; a branch of the variant may include itself. The standard functional definition of a linked list is:

type List T = Cons(T, List T) | Nil

That basically says that a List is either Nil (end of a list) or it contains a T value (the data) and the rest of the List. A value that matches this type could be:

let list : List int = Cons(1, Cons(2, Nil))

The variable list now contains a Cons value, with data of 1, where the rest of the list is another Cons value (data=2), whose rest-of-list value is Nil (end of list). In a more palatable format, the list value above could be written as

let list : List int = [1, 2]

which is obviously much more readable.

Tuple/Product Types

These types define a value which contains several values (of arbitrary types).
Tuple types are the kind of type that allow you to create more complex structures. A tuple type will always declare that a value will hold a certain number of other values. An example of a tuple type is (string, int), which might define a type which contains data about parsing: the current string, and the current location in the string. A value of this type could be

let parseState = ("Hello, World!", 0)

which describes the beginning of a parse on the string "Hello, World!". A tuple type may not be recursive, because otherwise the type itself would be infinitely large. A variant type, however, may be recursive if (and only if) there is at least one branch of the variant that is not recursive.

Generic Types

Any type may be declared to take type parameters (in a curried format). These type parameters may be used in the type definition as concrete types. An example of this is the type List T from above. The "T" is the type parameter, in this case, and List is no longer a concrete type; it is now a type constructor. When List is applied to a type argument (for example, int), it produces another type. In other words, List is a higher-kinded type, of kind * -> *. In English, List takes a type (the type argument), and returns another type (the concrete List type, with type argument set).

Type Classes

A type class basically defines an interface that a certain kind of types must conform to; it defines what operations may be performed on a certain type kind. It really is almost like an OOP interface. They have the same concepts, and both an interface and a type class allow polymorphism. I'll give you an example of a type class and then explain what it means:

class Parseable T =
    tryParse : string -> Option T

decl parse[T] where Parseable T = string -> T
let parse[T] input where Parseable T =
    match tryParse input
    | Some result -> result
    | None -> error "Could not parse input."

In this example, I define a type class Parseable, which takes a single type argument, T. It says that whenever a type T is an instance of the class Parseable, the function tryParse can be given a string, and will return an Option of T. The parse function takes a single type parameter (so that it knows what type to return), and also takes a string (the input), and then calls the tryParse function, which is part of the type class Parseable. This is the part where polymorphism happens (as you will see later). If the tryParse was successful (returned Some value), parse returns that same value. Otherwise, it is an error (the parse was not successful). Now I will write some code that creates a class instance (of Parseable) and then uses the parse method to attempt a parse.

instance Parseable Bool =
    tryParse input =
        match input
        | "true"  -> Some true
        | "false" -> Some false
        | _       -> None

let parsedTrue = parse[Bool] "true"
let parsedFail = parse[Bool] "not a bool..."

This code creates a class instance of Parseable, with the type argument set to Bool. The tryParse method will return an Option Bool value, telling whether or not the input could be parsed, and, if it could, then what the parsed value is. The parsedTrue value will contain the value Some(true), whereas the parsedFail value will contain the value None. The reason for the type constraint in the parse method call is so that the method knows what type to return; otherwise, it would have no idea which class instance to use. In this case, the Parseable Bool instance is used to attempt a parse.
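Since the language above is hypothetical, it may help to see the same dispatch-on-type idea in a runnable form. Here is a rough Python analogue (my sketch; a registry keyed by the target type is just one of several ways to emulate type-class dispatch):

from typing import Callable, Dict, Optional, Type, TypeVar

T = TypeVar("T")
instances: Dict[type, Callable[[str], Optional[object]]] = {}

def try_parse_bool(s: str) -> Optional[bool]:
    return {"true": True, "false": False}.get(s)

instances[bool] = try_parse_bool  # plays the role of "instance Parseable Bool"

def parse(target: Type[T], s: str) -> T:
    result = instances[target](s)  # dispatch on the type argument
    if result is None:
        raise ValueError("Could not parse input.")
    return result

print(parse(bool, "true"))  # True
# parse(bool, "not a bool...") raises ValueError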
instance End String =
    let description () = "

Well, this has taken quite a while to write (over two weeks, because of homework and not working all the time on this). I'll just leave it as is now, and publish it so that you (the general you) may see it and glean all you want from it. Next time on FAIL, I'll describe a new language I'm writing!

"

## Sunday, February 3, 2013

### The Magic is Gone

Last time, I mentioned that I would be writing about Ruby or about a hypothetical new language. Well, I'll do that next time, because I thought of a more interesting (to me, at least) thing to write about right now.

Have you ever felt that you aren't as excited about an activity as you used to be? Does it ever feel like the wonder and the magic are missing? That there is just something absent? Over the past year or so, programming has started to lose its magic.

When I started programming, everything was a new experience. I would learn a new language, and then feel like suddenly the entire computer was at my fingertips. I would feel powerful, amazing, and smart, because I knew more about programming. I remember, when I started learning C#, I saw the "namespace" keyword and thought "that looks so cool! It's so exciting to be able to learn what it is! I wonder if it's something to do with objects, or files, or classes, or something else." I would happily learn about anything there was to learn, because in those early stages (and still now, though not quite as much), all I wanted to do was learn. Learn how to do more things and become more proficient at programming, and read a lot of material.

But now, when I see a new language feature, it doesn't make me excited. Now I think about it in a distant manner. I think of new ideas, now, as simply another thing to use. The magic of learning new things is now gone. I've been thinking about this a lot, and I'm trying to figure out why I feel this way. I think I understand now, so I'll try to write my explanation as best I can:

During the first couple of years of programming, everything was new, exciting, and novel. Because I saw how cool languages were, I decided to learn about how they worked. Because I didn't know how operating systems worked, I decided to try to implement one. Because I didn't know how x, or y, or z worked, I tried to create it. The problem is, now I know how languages, and operating systems, and x, and y, and z, and more, work. It no longer holds any wonder for me.

I've created my own emulated assembly language, and wrote an interpreter for it (although it was buggy, and never quite worked). I started a project to write a mini operating system for my hypothetical assembly language. I never finished it, but after writing parts of the OS, and after reading the Minix Book of Operating Systems, it is no longer magical. I can no longer think of the computer as this fuzzy idea, just something that allows me to type in some text, and then make it run. Now I understand what an operating system does, how it works (in excruciating detail, for many parts), and now it is not magical.

I've created my own programming language, named Prototype. I built my own lexer, parser, bytecode compiler, and bytecode interpreter for it. Now I know how all of these work, and what they do, to create a working language implementation. I no longer see C# as a magical thing, that somehow knows what I mean when I write some text on my computer screen, and then does what I say. Amazing! But now I know. Now I know how all of these work.
They are no longer magical; I can no longer think about them as black boxes, taking in some input and emitting some output. I know how they work, how they are implemented, the mechanisms at work, and the magic is gone. Once upon a time, everything was new and exciting. I couldn't wait to understand how something worked. Now I can no longer wonder, because I know how they work, and I almost wish I could go back to that time, when everything was still wonderful. But I can't. The magic is gone.

## Thursday, January 31, 2013

Yes, monads. Don't go running off yet! I haven't even gotten started! Before you go, I'd at least like to give you my explanation about monads, and several uses of them. So bear with me.

A Monadic Type is basically any type that has two operations that can be performed on it: bind and make. In a Haskell-like syntax, a monad would be defined as:

make : 'a -> m 'a
bind : m 'a -> ('a -> m 'b) -> m 'b

In this type class, the type m is the monadic type, and make and bind are the monadic operations. Notice how m must be a generic type, and is eventually constructed by both of the operations. make basically creates a "container" value from a regular old value, and bind takes a container value, removes the value, and creates another container value from that removed value. (I say "container value" to make things simpler.)

An example of a class instance that allows one to describe failure of computation is an Option monad:

type Option 'T = Some('T) | None

let make val = Some(val)
let bind maybe cont =
    match maybe
    | Some(val) -> cont(val)
    | None      -> None

Notice how make takes any old value and returns an Option value from it; this is the "container value". bind takes a container value and a continuation function and then checks whether the container value actually has a value in it; if it does, that contained value is passed to the computation continuation; if it does not, then the entire computation simply fails by returning None. bind, as an Option monadic operation, acts as a success/failure mechanism; if the previous operation succeeded, then the value is passed on to the continuation, but if the operation failed (returned None), then bind returns None as well, to pass on the failure.

Example of Use

A way to use this Option monad would be like this:

let fail = None

decl failIfBig : int -> Option int
let failIfBig n = if n > 1000 then fail else make n

let sumIfBothSmall x y =
    bind (failIfBig x) (func new_x ->
    bind (failIfBig y) (func new_y ->
    make(new_x + new_y)))

This example shows a failIfBig function; it returns None, meaning failure, if the input is too large. Otherwise, it calls the make function to create a container Option value. The sumIfBothSmall function uses the monadic operation bind, which transmits any failure to the rest of the computation. Once there is failure, there will always be failure (None). The sumIfBothSmall function might make a little more sense when written in a different way:

let sumIfBothSmall x y =
    failIfBig x |> bind (func new_x ->
    failIfBig y |> bind (func new_y ->
    make(new_x + new_y)))

let sumIfBothSmall x y = do {
    let! new_x = failIfBig x
    let! new_y = failIfBig y
    return new_x + new_y
}

The do { ... } syntax there is basically syntax sugar for the previous code. let! var = val in cont transforms into bind val (func var -> cont), and return val transforms into make val. Functional languages that use monads will usually desugar the nice-looking syntax into a series of makes, returns, and continuations.
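Since the post's language is hypothetical, a runnable rendering may help. Here is a minimal Python sketch of the Option monad above (my addition; it uses None for failure, which conflates Some(None) with None — a simplification the real Option type avoids):

from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def make(value: A) -> Optional[A]:
    return value  # wrap a plain value; None plays the role of failure

def bind(maybe: Optional[A], cont: Callable[[A], Optional[B]]) -> Optional[B]:
    return None if maybe is None else cont(maybe)  # thread failure through

def fail_if_big(n: int) -> Optional[int]:
    return None if n > 1000 else make(n)

def sum_if_both_small(x: int, y: int) -> Optional[int]:
    return bind(fail_if_big(x), lambda nx:
           bind(fail_if_big(y), lambda ny: make(nx + ny)))

print(sum_if_both_small(3, 4))     # 7
print(sum_if_both_small(3, 4000))  # None - the failure propagates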
So when you look at the nice syntax, think about what is going on in the background, and then it will make more sense (hopefully).

Next up is the List monad. Define a list type as

type List 'T = Link('T, List 'T) | End

and define several helper functions as

decl singleton : 'T -> List 'T
let singleton x = Link(x, End)

decl append : List 'T -> List 'T -> List 'T
let append list appendage = match list
| End -> appendage
| Link(x, rest) -> Link(x, append rest appendage)

decl concat : List (List 'T) -> List 'T
let concat lists = match lists
| End -> End
| Link(list, rest) -> append list (concat rest)

decl map : ('T -> 'U) -> List 'T -> List 'U
let map transform list = match list
| End -> End
| Link(x, rest) -> Link(transform x, map transform rest)

Then you can define the List monad as:

let make x = singleton x

decl bind : List 'a -> ('a -> List 'b) -> List 'b
let bind list projection = list |> map projection |> concat

let none = End

To make things easier, let us assume that there is a list literal syntax, so that ["a", "b", "c"] stands for Link("a", Link("b", Link("c", End))).

The List monad basically allows you to use functions that return lists, and then pass each return value to another function, and then flatten the resulting list, and then pass each of those values to another function, and on it goes. It makes it easier to deal with functions that can return multiple results, in list form. An example of this monad's use is:

let product listX listY = do {
let! x = listX
let! y = listY
return (x, y)
}

let list1 = ["a", "b", "c"]
let list2 = ["x", "y", "z"]
let listProduct = product list1 list2
# listProduct = [("a","x"), ("a","y"), ("a","z"), ("b","x"), ("b","y"), ...]

Another example is the Log monad. It can be used to keep track of debugging information, for example, or maintain a log string, threaded through a computation. Let's define a Log as:

type Log 'T = ('T, string)

Then define the makeLog function as

decl makeLog : 'T -> string -> ('T, string)
let makeLog value info = (value, info)

And define passLog as

decl passLog : Log 'T -> ('T -> Log 'U) -> Log 'U
let passLog (prev_val, prev_info) compute_log = match (compute_log prev_val)
| (next_val, next_info) -> makeLog next_val (prev_info + next_info)

And then define the Monad instance as

let make value = makeLog value ""
let bind prev_log cont = passLog prev_log cont

(Note that make wraps a plain value together with an empty log, so that its type matches make : 'a -> Log 'a.) This Log monad allows you to keep track of a string description of a computation. When you bind one log into a computation, the passLog function keeps track of the previous information, and then appends it to the next information, to create a full log.

This time, it's the point-free monad. Basically, it allows you to define point-free functions (without reference to formal parameters). Let's define the point-free monad as:

type Function 'I 'O = 'I -> 'O

let const x = func _ -> x
let make x = const x
let bind prev_func cont_func = func arg -> arg |> cont_func (prev_func arg)

We can then use it in this example:

let xTimes2PlusXOver3 = do {
let! times2 = func x -> x * 2
let! over3  = func x -> x / 3
return times2 + over3
}

That example basically uses the point-free monad to multiply a number by 2, and then add to that the number divided by 3.

xTimes2PlusXOver3 15 # (15*2) + (15/3) = 30 + 5 = 35

Closing Words (No More Monads For You)

I hope I explained monads well enough. I showed you the theoretical definition of a monad, described the Maybe/Option/Success monad, gave you an example, and then also gave you two other monads to think about. So now, I think I've written enough (so I'll stop writing soon). Next time on FAIL, I will be writing about inline blocks versus methods in Ruby (either that, or about the hypothetical functional language used in this post). See you there!
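As a footnote, the List monad sketched above corresponds exactly to Haskell's built-in list monad, where flattening-after-mapping (concatMap) plays the role of bind. A brief sketch of my own (product' mirrors the product example in the post):

```haskell
-- The list monad: each bind maps a list-returning function over the
-- list and flattens the result (map projection, then concat).
product' :: [a] -> [b] -> [(a, b)]
product' listX listY = do
  x <- listX
  y <- listY
  return (x, y)

-- product' "abc" "xyz"
--   == [('a','x'),('a','y'),('a','z'),('b','x'),('b','y'),('b','z'), ...]
```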
## Tuesday, January 29, 2013

### Thoughts on Some Languages

For my first real post, I'm going to describe what I like/dislike about several different languages I've used (at least a little). I feel that it's a good way to introduce myself, in a way, to show all you readers what I think about different languages/paradigms. At the end, as a special treat, I'm going to give you some example languages that I'm thinking of creating! So, onto the languages:

C#

C# is a modern .NET class-based language. It has nice features such as reified generics, operator overloading, automatic conversions, lambdas, and a limited form of type inference. C# is my favorite language because it's what I've used the longest; it feels "clean" to me, much more so than any other language I've used, and I love it because its features work very well together.

The reified generics mean, basically, that there is no type erasure, and that you can figure out what the actual type argument is at runtime. I'll go more into this later, but Java's type erasure is very annoying at times, because you cannot even make a generic array!

C#'s operator overloading is a very important feature when one is using user-made structures that can have mathematical operations performed on them. For example, would you rather see vector1.add(vector2.mul(vector3)) or vector1 + vector2 * vector3? I, personally, would rather see the second version, with the basic math operations we all know and love. Oh, but, but, you can make the + operator do anything you want to! And that's dangerous! Don't give users so much power! Response: what's the difference between making the + operator do something weird, and making an add(int, int) function do something weird? It's just naming!

Anyway, there are also conversions in C#, which can make some code clearer/cleaner, because you do not need written-out casts for very similar objects. Lambdas are one of the best features; they are useful for so many things, including, but not limited to, callbacks and user-supplied functionality (no need for single-method interfaces!). I'm too lazy to think of more examples right now...

C# has a limited form of local type inference; you can say

var dict = new Dictionary<string, int>();

instead of

Dictionary<string, int> dict = new Dictionary<string, int>();

And, last but not least, C# also has Visual Studio! Some of you may be saying, "but I don't like Visual Studio!" But I do, and since this is my blog (thus my opinion), I'm going to shamelessly say that VS is a bonus point for C#. Now, enough about C#, on to Java!

Java

Java is, unfortunately, my least favorite language. However, I've got an explanation, so don't start the flames yet, Java-lovers!

First, I don't particularly like the Java standard libraries. I haven't used them as much as the .NET libraries, but to me, at least, .NET feels more orderly. As an example of the difference, using .NET+C# you can write

string[] lines = System.IO.File.ReadAllLines(@"C:\Users\...\...\text-file.txt");

whereas in Java, I'm still not sure how to do it, and I've already browsed the internet and StackOverflow to try to get the answer. Perhaps .NET simply has more convenience methods? I don't know.

Another thing I don't like about Java is the lack of operator overloading (see C# above) and lambdas (again, see C#). Java also uses type erasure, so you cannot even make a generic "transform array" function!
In C#, I would write this function as:

U[] transform<T, U>(T[] array, Func<T, U> convert)
{
    U[] result = new U[array.Length];
    for (int i = 0; i < array.Length; i++)
        result[i] = convert(array[i]);
    return result;
}

In Java, you would get an error on the first line of that function; something along the lines of "cannot create generic array".

Last, but not least, the Java mindset is to create getters and setters for everything, so that you "future-proof" your classes. The idea is that getters and setters allow you to later change the code of these functions without modifying client code. By simply exposing a field, when (if ever) you want to add verification or other changes to the field, you must at that time add the methods, thus breaking client code. C#, however, follows the Uniform Access Principle, at least with regard to properties and fields.

F#

F# is undoubtedly my favorite functional language. First, it is built on the .NET framework, which means libraries and idioms that I'm used to. Second, it has full Hindley-Milner type inference, which means that you, in some cases, don't have to provide even a single type declaration; the compiler will figure it out for you. F# also allows arbitrary operators to be defined, with prefix-based operator precedence, so it allows you to stay on the well-beaten path of operator overloading, but you are not stuck with the default operators that the language designer built in.

F# is an impure functional language, which, I think, is the best of both worlds: functional and imperative. Granted, F# does sometimes make it harder to use imperative styles, but that's because it's a functional language! One thing I really loved about F# was the workflow idea. F# workflows are basically Haskell monads, but the description of workflows in the book Expert F# explained the idea much better than anything on the internet taught me about monads. One last thing (that I will write about) that I love about F# is the pattern matching. It allows for very concise value destructuring and makes it very clear to the code reader what is going on.

Ruby

Ruby is an extremely dynamic, completely object-oriented language with a beautiful syntax. It follows the idea of "everything is an object" as much as it can; all values are objects, including primitive values such as integers and floating-point numbers. The lack of static safety (including type safety) brings ease of use, but also causes me to fear silly mistakes, like typing array.lengt instead of array.length. In most languages, an error such as this would be caught at compile time, but in Ruby, there is no compile time. So do most people get their fears assuaged through unit tests? I've never used one, believe it or not (but I've also not used Ruby very much at all...)

Last, But Not Least: Tcl

Tcl stands for Tool Command Language. It is a word-based language; every command is of the form

word arg1 arg2 arg3 ...

Every word is a string, and every command can interpret its string arguments in different ways. In one command, a string might be interpreted as a number; in another, a list. I think Tcl is a cool language, but the idea of "everything is a string" is, I think, wrong (to put it plainly). Not everything is a string; for example, C#'s Object.

(Almost) The End of the Journey

... in regards to languages I've used/learned. I like something about all of these languages (except Java -- I'm sorry, Java-lovers!), and I hope I've explained why I like certain features.
Now I would like to show you some languages that I've thought about.

A Functional Language

This functional language is kinda-sorta minimal. I suppose it would (initially) have a type system that supports only variants (sum types), tuples (product types) and functions. It would have a fully generic type system, and some form of type inference. It would also have type classes and type class instances, Haskell-style. I think an example is in order:

type Int = Zero | Succ Int

let add Zero b = b
let add Succ(a) b = Succ(add a b)

class Addable T =
    (+) : T -> T -> T
    dec : T -> T
    zero? : T -> Bool  # Bool defined as: type Bool = True | False

instance Addable Int =
    (+) a b = add a b
    dec Succ(n) = n
    dec Zero = Zero
    zero? Zero = True
    zero? _    = False

decl mul : 'a -> 'a -> 'a where Addable 'a
let mul a b when zero? a = b
let mul a b = b + (mul (dec a) b)

This example shows a definition of the Peano integers, an addition function, an Addable class, a class instance, and a function that requires a type class constraint.

A Word-Based Language

This word-based language would be like Tcl, except that not everything is a string; in other words, it would be like typed Tcl. An example:

proc fact n {if [= $n 0] 1 {* $n [fact [- $n 1]]}}
set n [to-int [ask 'Enter an integer: ']]
puts [fact $n]

The variable n would be typed as an integer, not as a string. Unlike Tcl, curly brackets { } would signify a code block, instead of an unparsed string. Or something like that. As an aside, I think that a word-based language would be a good OS shell language, because such languages often make it easy to use plaintext in the code (as in [ask 'Enter an integer: ']), and have a simple, command-based syntax. In this model, if a command could not be found, then an executable with the command's name would be looked up and then executed with the arguments.

Actually The End

Wow, that was a long post. Probably the longest one you'll ever see from me. Unless I feel like writing more... eh. Maybe. I've gotta work on my calc homework now!

I Lied In The Previous Heading

... because I have one more thing to say, before I go: next time is monads! I'll try to describe how I think about them, and hopefully have a better (or at least different) description than the many guides there already are.

## Monday, January 28, 2013

### First Post/Introduction

Welcome! I'm Grant Posner, and this is my first blog post! So exciting! This will be a short post, because, well, I don't really have anything to say yet. So I'll just say what this blog is about:
2018-05-27 15:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3710635304450989, "perplexity": 1929.1238713003181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794869272.81/warc/CC-MAIN-20180527151021-20180527171021-00449.warc.gz"}
https://www.physicsforums.com/threads/conditional-convergence-in-a-power-series.407094/
# Conditional Convergence in a Power Series

1. Jun 1, 2010

### nonequilibrium

I was wondering if there's an example of a power series $$\sum_n^\infty c_n (z-a)^n$$ with radius of convergence R such that for all z with |z-a| = R there is purely conditional convergence? (No divergence, but also no absolute convergence.) Or perhaps a reason why that's impossible?

2. Jun 1, 2010

### Mute

I'm not sure, but I might try taking a conditionally convergent series and putting a (z-a)^n in the summand. For example, try

$$f(z) \equiv -\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(1-z)^n$$

Without the -(1-z)^n this is a conditionally convergent series. I picked -(1-z)^n to tack on because without the (-1)^(n+1) this series gives the logarithm when |1-z| < 1. For the logarithm, the series diverges beyond this radius. I would guess that the modified series I proposed might conditionally converge beyond that radius, instead of diverging, but I haven't checked for sure.

3. Jun 3, 2010

### nonequilibrium

Hm, take z = 2, then you get the harmonic series and it diverges. Thanks for the try though. Apparently one can prove that $$\sum \frac{z^n}{n}$$ converges conditionally FOR ALL |z| = 1 except for z = 1. That's awfully close, but sadly not enough :(

4. Jun 3, 2010

### l'Hôpital

$$f(z) = \sum_{n=1}^{\infty} \frac{(-|z|)^n}{n}$$

Looks to me like it converges for all |z| < 1. Consider the case you care about: |z| = 1 = R. Of course, since |z| = |-z| the sign change won't affect convergence. And of course, plugging in |z| = 1, you get conditional convergence automatically! Does that work for you?

Edit: Oops, misread. You wanted a power series. My bad. : (

5. Jun 3, 2010

### nonequilibrium

Yeah, a power series, sorry :( thanks for the effort though :)
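As an aside on the claim in post #3: the conditional convergence of $$\sum \frac{z^n}{n}$$ on |z| = 1, z ≠ 1, does follow from Dirichlet's test — a sketch, assuming the standard form of the test:

$$\left|\sum_{n=1}^{N} z^n\right| = \left|\frac{z(1-z^N)}{1-z}\right| \le \frac{2}{|1-z|} \qquad (|z| = 1,\ z \neq 1),$$

so the partial sums of $$\sum z^n$$ are uniformly bounded in N while 1/n decreases monotonically to 0; Dirichlet's test then gives convergence of $$\sum z^n/n$$, and the convergence is only conditional because $$\sum \left|z^n/n\right| = \sum 1/n$$ diverges.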
2017-10-22 10:49:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893447518348694, "perplexity": 1301.651079375828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00461.warc.gz"}
http://gate-exam.in/EC/Syllabus/Electronics-Communication-Engineering/Signals-Systems
# Questions & Answers of Signals and Systems

#### Topics of Signals and Systems

117 Question(s) | Weightage 12 (Marks)

Which one of the following is an eigen function of the class of all continuous-time, linear, time-invariant systems ($u(t)$ denotes the unit-step function)?

A continuous-time function x(t) is periodic with period T. The function is sampled uniformly with a sampling period Ts. In which one of the following cases is the sampled signal periodic?

Consider the sequence $x[n]=a^{n}u[n]+b^{n}u[n]$, where $u[n]$ denotes the unit-step sequence and $0<|a|<|b|<1$. The region of convergence (ROC) of the z-transform of $x[n]$ is

A continuous-time sinusoid of frequency 33 Hz is multiplied with a periodic Dirac impulse train of frequency 46 Hz. The resulting signal is passed through an ideal analog low-pass filter with a cutoff frequency of 23 Hz. The fundamental frequency (in Hz) of the output is _________

The Laplace transform of the causal periodic square wave of period T shown in the figure below is

A network consisting of a finite number of linear resistor (R), inductor (L), and capacitor (C) elements, connected all in series or all in parallel, is excited with a source of the form $\sum_{k=1}^{3}a_k\cos(k\omega_0 t)$, where $a_k\neq0$, $\omega_0\neq0$. The source has nonzero impedance. Which one of the following is a possible form of the output measured across a resistor in the network?

A first-order low-pass filter of time constant T is excited with different input signals (with zero initial conditions up to t = 0). Match the excitation signals X, Y, Z with the corresponding time responses for t ≥ 0:

X: Impulse, Y: Unit step, Z: Ramp

P: $1-e^{-t/T}$, Q: $t-T\left(1-e^{-t/T}\right)$, R: $e^{-t/T}$

Consider the signal $x[n]=6\,\delta[n+2]+3\,\delta[n+1]+8\,\delta[n]+7\,\delta[n-1]+4\,\delta[n-2]$. If $X\left(e^{j\omega}\right)$ is the discrete-time Fourier transform of x[n], then $\frac{1}{\pi}\int_{-\pi}^{\pi} X\left(e^{j\omega}\right)\sin^2(2\omega)\,d\omega$ is equal to _________

The energy of the signal is ________

A continuous-time filter with transfer function $H(s)=\frac{2s+6}{s^{2}+6s+8}$ is converted to a discrete-time filter with transfer function so that the impulse response of the continuous-time filter, sampled at 2 Hz, is identical at the sampling instants to the impulse response of the discrete-time filter. The value of k is ________

The Discrete Fourier Transform (DFT) of the 4-point sequence $x[n]=\{x[0],x[1],x[2],x[3]\}=\{3,2,3,4\}$ is $X[k]=\{X[0],X[1],X[2],X[3]\}=\{12,2j,0,-2j\}$. If $X_{1}[k]$ is the DFT of the 12-point sequence $x_{1}[n]=\{3,0,0,2,0,0,3,0,0,4,0,0\}$, the value of $\left|\frac{X_{1}[8]}{X_{1}[11]}\right|$ is ________

Consider the signal $x(t)=\cos(6\pi t)+\sin(8\pi t)$, where t is in seconds.
The Nyquist sampling rate (in samples/second) for the signal $y(t)=x(2t+5)$ is

If the signal $x(t)=\frac{\sin(t)}{\pi t}*\frac{\sin(t)}{\pi t}$, with * denoting the convolution operation, then x(t) is equal to

A discrete-time signal $x[n]=\delta[n-3]+\delta[n-5]$ has z-transform X(z). If Y(z) = X(−z) is the z-transform of another signal y[n], then

A signal $2\cos\left(\frac{2\pi}{3}t\right)-\cos(\pi t)$ is the input to an LTI system with the transfer function $H(s)=e^{s}+e^{-s}$. If $C_k$ denotes the k-th coefficient in the exponential Fourier series of the output signal, then $C_3$ is equal to

The ROC (region of convergence) of the z-transform of a discrete-time signal is represented by the shaded region in the z-plane. If the signal $x[n]=(2.0)^{|n|}$, $-\infty<n<\infty$, then the ROC of its z-transform is represented by

A continuous-time speech signal xa(t) is sampled at a rate of 8 kHz and the samples are subsequently grouped in blocks, each of size N. The DFT of each block is to be computed in real time using the radix-2 decimation-in-frequency FFT algorithm. If the processor performs all operations sequentially, and takes 20 μs for computing each complex multiplication (including multiplications by 1 and −1) and the time required for addition/subtraction is negligible, then the maximum value of N is __________

The direct form structure of an FIR (finite impulse response) filter is shown in the figure. The filter can be used to approximate a

The result of the convolution $x(-t)*\delta(-t-t_0)$ is

The waveform of a periodic signal x(t) is shown in the figure. A signal g(t) is defined by $g(t)=x\left(\frac{t-1}{2}\right)$. The average power of g(t) is ______.

Two sequences [a, b, c] and [A, B, C] are related as, If another sequence [p, q, r] is derived as

$\left[\begin{array}{c}p\\ q\\ r\end{array}\right]=\left[\begin{array}{ccc}1& 1& 1\\ 1& W_{3}^{1}& W_{3}^{2}\\ 1& W_{3}^{2}& W_{3}^{4}\end{array}\right]\left[\begin{array}{ccc}1& 0& 0\\ 0& W_{3}^{2}& 0\\ 0& 0& W_{3}^{4}\end{array}\right]\left[\begin{array}{c}A/3\\ B/3\\ C/3\end{array}\right],$

then the relationship between the sequences [p, q, r] and [a, b, c] is

For the discrete time system shown in the figure, the poles of the system transfer function are located at

The pole-zero diagram of a causal and stable discrete-time system is shown in the figure. The zero at the origin has multiplicity 4. The impulse response of the system is h[n]. If h[0] = 1, we can conclude

The bilateral Laplace transform of a function is

The magnitude and phase of the complex Fourier series coefficients ak of a periodic signal x(t) are shown in the figure. Choose the correct statement from the four choices given. Notation: C is the set of complex numbers, R is the set of purely real numbers, and P is the set of purely imaginary numbers.

Let the signal f(t) = 0 outside the interval [T1, T2], where T1 and T2 are finite. Furthermore, $|f(t)|<\infty$. The region of convergence (RoC) of the signal's bilateral Laplace transform F(s) is

Two causal discrete-time signals x[n] and y[n] are related as $y[n]=\sum_{m=0}^{n}x[m]$. If the z-transform of , the value of x[2] is _______.

The signal $\cos\left(10\pi t+\frac{\pi}{4}\right)$ is ideally sampled at a sampling frequency of 15 Hz.
The sampled signal is passed through a filter with impulse response $\left(\frac{\sin(\pi t)}{\pi t}\right)\cos\left(40\pi t-\frac{\pi}{2}\right)$. The filter output is

Consider the differential equation $\frac{dx}{dt}=10-0.2x$ with initial condition x(0) = 1. The response x(t) for t > 0 is

Input x(t) and output y(t) of an LTI system are related by the differential equation $y''(t)-y'(t)-6y(t)=x(t)$. If the system is neither causal nor stable, the impulse response h(t) of the system is

Consider two real sequences with time-origin marked by the bold value, x1[n] = {1, 2, 3, 0}, x2[n] = {1, 3, 2, 1}. Let X1(k) and X2(k) be 4-point DFTs of x1[n] and x2[n], respectively. Another sequence x3[n] is derived by taking the 4-point inverse DFT of X3(k) = X1(k)X2(k). The value of x3[2] is _____.

Let x(t) = $\alpha$s(t) + s(−t) with $s(t)=\beta e^{-4t}u(t)$, where u(t) is the unit step function. If the bilateral Laplace transform of x(t) is , then the value of $\beta$ is ______.

Consider the function $g(t)=e^{-t}\sin(2\pi t)u(t)$, where u(t) is the unit step function. The area under g(t) is _____.

The value of $\sum\limits_{n=0}^{\infty}n\left(\frac{1}{2}\right)^{n}$ is _____.

The impulse response of an LTI system can be obtained by

Consider a four-point moving average filter defined by the equation $y[n]=\sum_{i=0}^{3}\alpha_{i}x[n-i]$. The condition on the filter coefficients that results in a null at zero frequency is

Suppose x[n] is an absolutely summable discrete-time signal. Its z-transform is a rational function with two poles and two zeroes. The poles are at $z=\pm 2j$. Which one of the following statements is TRUE for the signal x[n]?

A realization of a stable discrete time system is shown in the figure. If the system is excited by a unit step sequence input x[n], the response y[n] is

Let $\tilde{x}[n]=1+\cos\left(\frac{\pi n}{8}\right)$ be a periodic signal with period 16. Its DFS coefficients are defined by $a_k=\frac{1}{16}\sum\limits_{n=0}^{15}\tilde{x}[n]\exp\left(-j\frac{\pi}{8}kn\right)$ for all k. The value of the coefficient $a_{31}$ is _____.

Consider a continuous-time signal defined as $x(t)=\left(\frac{\sin(\pi t/2)}{\pi t/2}\right)*\sum\limits_{n=-\infty}^{\infty}\delta(t-10n)$, where '*' denotes the convolution operation and t is in seconds. The Nyquist sampling rate (in samples/sec) for x(t) is ___________.

Two sequences $x_{1}[n]$ and $x_{2}[n]$ have the same energy. Suppose , where $\alpha$ is a positive real number and $u[n]$ is the unit step sequence. Assume Then the value of $\alpha$ is _______.

The complex envelope of the bandpass signal $x(t)=-\sqrt{2}\left(\frac{\sin(\pi t/5)}{\pi t/5}\right)\sin\left(\pi t-\frac{\pi}{4}\right)$, centered about $f=\frac{1}{2}\,\mathrm{Hz}$, is

C is a closed path in the z-plane given by |z| = 3. The value of the integral $\oint_C\left(\frac{z^2-z+4j}{z+2j}\right)dz$ is

A discrete-time signal $x[n]=\sin\left(\pi^{2}n\right)$, n being an integer, is

Consider two real valued signals, x(t) band-limited to [−500 Hz, 500 Hz] and y(t) band-limited to [−1 kHz, 1 kHz]. For z(t) = x(t)·y(t), the Nyquist sampling frequency (in kHz) is ______.
A continuous, linear time-invariant filter has an impulse response h(t) described by When a constant input of value 5 is applied to this filter, the steady state output is _____.

For a function g(t), it is given that $\int_{-\infty}^{+\infty}g(t)e^{-j\omega t}dt=\omega e^{-2\omega^{2}}$ for any real value $\omega$. If $y(t)=\int_{-\infty}^{t}g(\tau)\,d\tau$, then is

Let $x[n]=\left(-\frac{1}{9}\right)^{n}u[n]-\left(-\frac{1}{3}\right)^{n}u[-n-1]$. The Region of Convergence (ROC) of the z-transform of x[n] is

Consider a discrete-time periodic signal $x[n]=\sin\left(\frac{\pi n}{5}\right)$. Let $a_k$ be the complex Fourier series coefficients of $x[n]$. The coefficients $\{a_{k}\}$ are non-zero when $k=BM\pm1$, where M is any integer. The value of B is ______.

A system is described by the following differential equation, where u(t) is the input to the system and y(t) is the output of the system: $\dot{y}(t)+5y(t)=u(t)$. When y(0) = 1 and u(t) is a unit step function, y(t) is

An FIR system is described by the system function $H(z)=1+\frac{7}{2}z^{-1}+\frac{3}{2}z^{-2}$. The system is

Let x[n] = x[−n]. Let X(z) be the z-transform of x[n]. If 0.5 + j0.25 is a zero of X(z), which one of the following must also be a zero of X(z)?

Consider the periodic square wave in the figure shown. The ratio of the power in the 7th harmonic to the power in the 5th harmonic for this waveform is closest in value to _______.

Consider a discrete-time signal If y[n] is the convolution of x[n] with itself, the value of y[4] is _________.

The input-output relationship of a causal stable LTI system is given as If the impulse response h[n] of this system satisfies the condition $\sum_{n=0}^{\infty}h[n]=2$, the relationship between α and β is

The value of the integral $\int_{-\infty}^{\infty}\mathrm{sinc}^{2}(5t)\,dt$ is ________.

Let $x(t)=\cos(10\pi t)+\cos(30\pi t)$ be sampled at 20 Hz and reconstructed using an ideal low-pass filter with cut-off frequency of 20 Hz. The frequency/frequencies present in the reconstructed signal is/are

For an all-pass system $H(z)=\frac{z^{-1}-b}{1-az^{-1}}$, where $\left|H\left(e^{-j\omega}\right)\right|=1$ for all $\omega$, if $\mathrm{Re}(a)\ne 0$, $\mathrm{Im}(a)\ne 0$, then b equals

The input $-3e^{2t}u(t)$, where $u(t)$ is the unit step function, is applied to a system with transfer function $\frac{s-2}{s+3}$. If the initial value of the output is −2, then the value of the output at steady state is _______.

Let $H_{1}(z)=\left(1-pz^{-1}\right)^{-1}$, $H_{2}(z)=\left(1-qz^{-1}\right)^{-1}$, $H(z)=H_{1}(z)+rH_{2}(z)$. The quantities $p,q,r$ are real numbers. Consider $p=\frac{1}{2}$, $q=-\frac{1}{4}$, $|r|<1$. If the zero of $H(z)$ lies on the unit circle, then r = ________

Let $h(t)$ denote the impulse response of a causal system with transfer function $\frac{1}{s+1}$. Consider the following three statements.
S1: The system is stable.
S2: $\frac{h(t+1)}{h(t)}$ is independent of t for t > 0.
S3: A non-causal system with the same transfer function is stable.
For the above system, The z-transform of the sequence x[n] is given by $X\left(z\right)=\frac{1}{{\left(1-2{z}^{-1}\right)}^{2}}$  , with the region of convergence $\left|z\right|>2$. Then, $x\left[2\right]$ is ________. Let $X\left(t\right)$ be a wide sense stationary (WSS) random process with power spectral density ${S}_{x}\left(f\right)$. If $Y\left(t\right)$ is the process defined as $Y\left(t\right)=X\left(2t-1\right)$, the power spectral density ${S}_{y}\left(f\right)$ is
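Worked answers for two of the fully specified items above (sketches of mine, not from the question bank; for the second one I assume the normalized convention $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$):

$$\sum_{n=0}^{\infty}n\left(\tfrac{1}{2}\right)^{n}=\frac{1/2}{(1-1/2)^{2}}=2, \qquad \text{using } \sum_{n=0}^{\infty}nx^{n}=\frac{x}{(1-x)^{2}},\ |x|<1.$$

$$\int_{-\infty}^{\infty}\mathrm{sinc}^{2}(5t)\,dt=\frac{1}{5}\int_{-\infty}^{\infty}\mathrm{sinc}^{2}(u)\,du=\frac{1}{5}=0.2,$$

by the substitution $u=5t$ and Parseval's relation, since $\mathrm{sinc}$ transforms to a unit rectangle so that $\int\mathrm{sinc}^{2}(u)\,du=1$.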
2018-08-17 01:36:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 172, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301620721817017, "perplexity": 1710.1936460940299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211403.34/warc/CC-MAIN-20180817010303-20180817030303-00324.warc.gz"}
http://blog.sigfpe.com/2006/12/yonedic-addendum.html?showComment=1171342500000
# A Neighborhood of Infinity

## Saturday, December 02, 2006

Firstly, blogger.com seems to have temporarily lost the preview feature so I'm writing this as blind HTML. I won't know exactly how it looks until I hit 'publish'. (It's not real HTML so it's no good just pasting into a web page.)

I kept meaning to follow up on my earlier post about the Yoneda lemma by working out if each of the three examples I considered were Theorems for Free! But it's tedious work to decrypt the free theorem as it is generated by the procedure in Wadler's paper. But then I suddenly realised that lambdabot could do the work for me. I looked at all three examples that I gave, but I'll just spell out the details for the last one here.

Consider machines of type forall b. (A -> b) -> C -> b. What free theorem do we get from that? If you don't have lambdabot you can see here. But first I need to point out either a limitation in that free theorem prover, or a limitation of my understanding of it. It seems to attach a forall for every free variable. But I want to have A and C fixed, but unknown. It makes a difference. In the end I cheated by considering the type

(Float -> b) -> Char -> b

The free theorem is

h1 (f1 x2) = g1 x2 => h1 (t1 f1 x1) = t1 g1 x1

where t1 :: forall b. (Float -> b) -> Char -> b. With a tiny bit of work we can deduce t1 h1 = h1 . t1 id. Using my earlier notation, that is essentially check3 . uncheck3 = id. Similar results followed for the other examples. So I conjecture that for functors F, the free theorems for (a -> b) -> F b are just proofs of the Yoneda lemma (well, the less trivial direction at least). I'm guessing this is all blindingly obvious to type theorists but it's all new to me.

Anyway, a couple of thoughts come to mind. This stuff is all about parametricity, and part of what makes this work is that any polymorphic function (with the same restrictions as in Theorems for Free!) that looks like a natural transformation is a natural transformation in the category of Haskell types. But Theorems for Free! also talks about functions that don't look like natural transformations. For example, consider the type (a->a)->(a->a). The free theorem reflects the fact that any such function must simply raise its argument to some non-negative integer power (i.e., iterate it some fixed number of times under composition). But it seems to me that when the free theorem is written in point-free style, then functions in a general category that satisfy this theorem (i.e. functions that map objects to arrows the way natural transformations do) are also in some sense 'natural'. So is there a wider sense of 'natural' in a category that I should know about?

What I find interesting here isn't necessarily the type theory in itself. What I think is interesting is that the type theory provides nice intuition for other applications of the Yoneda lemma, and indeed other parts of category theory. Up to now, my spare time reading of computer science hasn't really fed back into my understanding of other branches of mathematics. But this time it has. Even though the category of Haskell types and functions looks a lot like Set, it has nice properties of its own that deepen your understanding of categories in general.

Anyway, while I'm on the subject of things Yonedic, here's an application of category theory to the composition of music. The authors claim the Yoneda lemma has applications in this field. Yes, you read that correctly. No, it's not April 1. In fact, I discovered this by doing a google code search on yoneda.

One last thing.
I should credit augustss for getting me to think about the mathematical significance of parametricity in the first place. That was 6 months ago. Trying to do mathematics when you only have a couple of hours free a week is slow going.

BTW I'm in Mexico on vacation for the next week and a half. On the plane I'll be reading (at least) the papers on cake cutting and Venn diagrams mentioned here: Ars Mathematica.

Wouter said...

I really enjoy reading your blog. I was wondering if you would consider writing something for the recently revived Monad.Reader. A lot of your blog posts could make great articles with a bit of polishing! Get in touch if you're interested,

Wouter Swierstra

PS - As far as variable substitution is concerned, you might be interested in free monads - which are basically the same construction you described in your blog post. For some reason, they aren't very well-known in the functional programming community.

Andrew said...

For what it's worth, you can work out free theorems for arbitrary functors using the other one in lambdabot. It goes to a lot of trouble to make the theorems as point-free as possible.

h1 (f1 x2) = g1 x2 => h1 (t1 f1 x1) = t1 g1 x1

can be simplified by noting that the "if" part of the implication merely says that g1 = h1 . f1. And so, dropping the 1's:

h . t f = t (h . f)

Setting f = id gives you the theorem.

The free theorem for test :: (C -> a) -> F a is:

fmap f . test = test . (.) f

This should be unsurprising, because (.) f is fmap f in the functor ((->) C). Once again, all this says is that test is a natural transformation. Applying both sides to id gives:

fmap f (test id) = test f

And a little rearrangement gives you the result you're after.

As to your second question, about (a->a) -> (a->a), it's not a mapping between functors, but it is a mapping between natural transformations! I haven't looked too hard into it, but I suspect that the free theorem for this is a consequence of whatever the obvious mapping property of these "supernatural transformations" would be. There's another way to look at it, but you'll have to read my article in the next Monad.Reader to find out.

Derek Elkins said...

"But Theorems for Free! also talks about functions that don't look like natural transformations. For example, consider the type (a->a)->(a->a). The free theorem reflects the fact that any such function must simply raise its argument to some non-negative integer power. But it seems to me that when the free theorem is written in point-free style, then functions (i.e. functions that map objects to arrows the way natural transformations do) in a general category that satisfy this theorem are also in some sense 'natural'. So is there a wider sense of 'natural' in a category that I should know about?"

Since the type variable occurs both covariantly and contravariantly, a natural transformation is inappropriate (the functors would have to be both covariant and contravariant at the same time). Instead you may want to look at dinaturality; Basic Concepts of Enriched Categories is the best source of information (online) about them I'm aware of. It also covers indexed (co)limits, an unreasonably under-represented tool (and also (co)ends). Dinaturality, or perhaps a special case of it, is often referred to as extraordinary naturality.
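To make the check3/uncheck3 pair discussed in the post concrete, here is a minimal Haskell rendering of the two directions of the Yoneda lemma for these machine types (a sketch of my own, assuming RankNTypes; the names check and uncheck mirror the post's):

```haskell
{-# LANGUAGE RankNTypes #-}

-- One direction: run the machine on the identity to extract a plain function.
uncheck :: (forall b. (a -> b) -> c -> b) -> (c -> a)
uncheck t = t id

-- The other direction: rebuild a machine from a plain function.
check :: (c -> a) -> (forall b. (a -> b) -> c -> b)
check f g = g . f
```

The free theorem t h = h . t id is then precisely the statement that check (uncheck t) behaves like t.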
2016-07-25 17:54:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6399433016777039, "perplexity": 653.1665630038392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824337.54/warc/CC-MAIN-20160723071024-00212-ip-10-185-27-174.ec2.internal.warc.gz"}
https://rosiamontana.world/wp-content/uploads/0ekbl/qd9fy1.php?a6cba3=what-is-hebb%27s-rule-of-learning-mcq
# what is hebb's rule of learning mcq ###### Hello world! noiembrie 26, 2016 To practice all areas of Neural Networks, here is complete set on 1000+ Multiple Choice Questions and Answers. When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. Set net.trainFcn to 'trainr'. 5. In Operant conditioning procedure, the role of reinforcement is: (a) Strikingly significant ADVERTISEMENTS: (b) Very insignificant (c) Negligible (d) Not necessary (e) None of the above ADVERTISEMENTS: 2. The simplest neural network (threshold neuron) lacks the capability of learning, which is its major drawback. ) It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. (net.trainParam automatically becomes trainr’s default parameters. As a pattern changes, the system should be able to measure and store this change. Herz, R. Kühn, M. Vaas, "Encoding and decoding of patterns which are correlated in space and time" G. Dorffner (ed.) {\displaystyle w_{ij}} {\displaystyle i=j} i t c \Delta J _ {ij } = \epsilon _ {ij } { Hebbian theory has been the primary basis for the conventional view that, when analyzed from a holistic level, engrams are neuronal nets or neural networks. p is symmetric, it is also diagonalizable, and the solution can be found, by working in its eigenvectors basis, to be of the form. Then the appropriate modification of the above learning rule reads, $$(i.e. ) {\displaystyle C} Check the below NCERT MCQ Questions for Class 7 History Chapter 3 The Delhi Sultans with Answers Pdf free download. )Set each net.inputWeights{i,j}.learnFcn to 'learnh'.. Set each net.layerWeights{i,j}.learnFcn to 'learnh'. In a Hopfield network, connections is the eigenvector corresponding to the largest eigenvalue of the correlation matrix between the The units with linear activation functions are called linear units. x The following is a formulaic description of Hebbian learning: (many other descriptions are possible). These re-afferent sensory signals will trigger activity in neurons responding to the sight, sound, and feel of the action. i When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell. ⟩ The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. the One of the most well-documented of these exceptions pertains to how synaptic modification may not simply occur only between activated neurons A and B, but to neighboring neurons as well. ", "Demystifying social cognition: a Hebbian perspective", "Action recognition in the premotor cortex", "Programmed to learn? python3 pip3 numpy opencv pickle Setup ## If you are using Anaconda you can skip these steps #On Linux - Debian sudo apt-get install python3 python3-pip pip3 install numpy opencv-python #On Linux - Arch sudo pacman -Sy python python-pip pip install numpy opencv-python #On Mac sudo brew install python3 … I was reading on wikipedia that there are exceptions to the hebbian rule, and I was curious about the possibilities of other hypotheses of how learning occur in the brain. 
{\displaystyle y(t)} α are set to zero if i , Perceptron Learning Rule (PLR) The perceptron learning rule originates from the Hebbian assumption, and was used by Frank Rosenblatt in his perceptron in 1958. A network with a single linear unit is called as adaline (adaptive linear neuron). is the largest eigenvalue of In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning. ⟨ is the weight of the connection from neuron i Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. where i In summary, Hebbian learning is efficient since it is local, and it is a powerful algorithm to store spatial or spatio-temporal patterns. Professionals, Teachers, Students and Kids Trivia Quizzes to test your knowledge on the subject. Hebb's classic [a1], which appeared in 1949. Example - Pineapple Recall 36. {\displaystyle f} It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. x emits a spike, it travels along the axon to a so-called synapse on the dendritic tree of neuron i , x Information and translations of Hebbs rule in the most comprehensive dictionary definitions resource on the web. Hebb's classic [a1], which appeared in 1949. This rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The Organization of Behavior in 1949. . We have Provided The Delhi Sultans Class 7 History MCQs Questions with Answers to help students understand the concept very well. N say. {\displaystyle x_{1}(t)...x_{N}(t)} ( i milliseconds. So it is advantageous to have a time window [a6]: The pre-synaptic neuron should fire slightly before the post-synaptic one. . i Techopedia explains Hebbian Theory Hebbian theory is named after Donald Hebb, a neuroscientist from Nova Scotia who wrote “The Organization of Behavior” in 1949, which has been part of the basis for the development of artificial neural networks. i The WIDROW-HOFF Learning rule is very similar to the perception Learning rule. 1 One gets a depression (LTD) if the post-synaptic neuron is inactive and a potentiation (LTP) if it is active. Intuitively, this is because whenever the presynaptic neuron excites the postsynaptic neuron, the weight between them is reinforced, causing an even stronger excitation in the future, and so forth, in a self-reinforcing way. the output. The law states, ‘Neurons that fire together, wire together’, meaning if you continually have thought patterns or do something, time after time, then the neurons in our brain tend to strengthen that learning, becoming, what we know as ‘habit’. {\displaystyle k_{i}} and the above sum is reduced to an integral as N \rightarrow \infty . However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. f Under the additional assumption that The neuronal activity S _ {i} ( t ) To practice all areas of Neural Networks, here is complete set on 1000+ Multiple Choice Questions and Answers. 
and {\displaystyle w} At this time, the postsynaptic neuron performs the following operation: where Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks and, even if more efficient algorithms have been adopted in … Even tought both approaches aim to solve the same problem, ... Rewriting the expected loss using Bayes' rule and the definition of expectation. Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. with,$$ t It … $$. The neuronal dynamics in its simplest form is supposed to be given by S _ {i} ( t + \Delta t ) = { \mathop{\rm sign} } ( h _ {i} ( t ) ) , w J.L. Neurons communicate via action potentials or spikes, pulses of a duration of about one millisecond. 250 Multiple Choice Questions (MCQs) with Answers on “Psychology of Learning” for Psychology Students – Part 1: 1. y Neurons of vertebrates consist of three parts: a dendritic tree, which collects the input, a soma, which can be considered as a central processing unit, and an axon, which transmits the output. The Hebbian Learning Rule is a learning rule that specifies how much the weight of the connection between two units should be increased or decreased in proportion to the product of their activation. Hebb's theories on the form and function of cell assemblies can be understood from the following:[1]:70. If you missed the previous post of Artificial Intelligence’s then please click here.. be the synaptic strength before the learning session, whose duration is denoted by T . i.e., S _ {j} ( t - \tau _ {ij } ) , The weight between two neurons increases if the two neurons activate simultaneously, and reduces if they activate separately. c } \sum _ { 0 } ^ { T } S _ {i} ( t + \Delta t ) [ S _ {j} ( t - \tau _ {ij } ) - \mathbf a ] 0. The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated' so that activity in one facilitates activity in the other. is active at time t , whose inputs have rates van Hemmen, W. Gerstner, A.V.M. x 5. Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica. the time average of the inputs is zero), we get For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music. If so, why is it that good? Learning, like intelligence, covers such a broad range of processes that it is dif- cult to de ne precisely. {\displaystyle \langle \mathbf {x} \mathbf {x} ^{T}\rangle =C} For unbiased random patterns in a network with synchronous updating this can be done as follows. are set to zero if {\displaystyle w_{ij}} C The above Hebbian learning rule can also be adapted so as to be fully integrated in biological contexts [a6]. neurons, only { \mathop{\rm ln} } N {\displaystyle N} Artificial Intelligence MCQ Questions. If we make the decay rate equal to the learning rate , Vector Form: 35. The idea behind it is simple. 10 Rules for Framing Effective Multiple Choice Questions A Multiple Choice Question is one of the most popular assessment methods that can be used for both formative and summative assessments. 
Since S _ {j} - a \approx 0 0 {\displaystyle x_{i}} The ontogeny of mirror neurons", "Action representation of sound: audiomotor recognition network while listening to newly acquired actions", "Fear conditioning and LTP in the lateral amygdala are sensitive to the same stimulus contingencies", "Natural patterns of activity and long-term synaptic plasticity", https://en.wikipedia.org/w/index.php?title=Hebbian_theory&oldid=991294746, Articles with unsourced statements from April 2019, All articles with specifically marked weasel-worded phrases, Articles with specifically marked weasel-worded phrases from May 2013, Creative Commons Attribution-ShareAlike License, This page was last edited on 29 November 2020, at 09:11. {\displaystyle \mathbf {c} ^{*}} Hebb states it as follows: Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. {\displaystyle i=j} Neurons of vertebrates consist of three parts: a dendritic tree, which collects the input, a soma, which can be considered as a central processing unit, and an … Since van Hemmen, "Why spikes? C i A learning rule which combines both Hebbian and anti-Hebbian terms can provide a Boltzmann machine which can perform unsupervised learning of distributed representations. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time they press a key is proven sufficient to trigger activity in motor regions of the brain upon listening to piano music when heard at a later time. {\displaystyle f} Hebbian Learning Rule. What does Hebbs rule mean? A dictionary de nition includes phrases such as \to gain knowledge, or understanding of, or skill in, by study, instruction, or expe-rience," and \modi cation of a behavioral tendency by experience." [a4]). {\displaystyle j} van Hemmen (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. https://encyclopediaofmath.org/index.php?title=Hebb_rule&oldid=47201, D.O. Relationship to unsupervised learning, stability, and generalization, Hebbian learning account of mirror neurons, "Selection of Intrinsic Horizontal Connections in the Visual Cortex by Correlated Neuronal Activity", Brain function and adaptive systems—A heterostatic theory, "Neural and Adaptive Systems: Fundamentals Through Simulations", "Chapter 19: Synaptic Plasticity and Learning", "Retrograde Signaling in the Development and Modification of Synapses", "A computational study of the diffuse neighbourhoods in biological and artificial neural networks", "Can Hebbian Volume Learning Explain Discontinuities in Cortical Maps? {\displaystyle \alpha _{i}} In passing one notes that for constant, spatial, patterns one recovers the Hopfield model [a5]. and - 1 Because, again, The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. j For the outstar rule we make the weight decay term proportional to the input of the network. Learning rule is a method or a mathematical logic. Hebbian Learning is one the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed through neuroscientific experiments. [11] This type of diffuse synaptic modification, known as volume learning, counters, or at least supplements, the traditional Hebbian model.[12]. 
The response of the neuron in the rate regime is usually described as a linear combination of its input, followed by a response function: As defined in the previous sections, Hebbian plasticity describes the evolution in time of the synaptic weight If neuron j . A learning rule dating back to D.O. How can it do that? This aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.[3]. The above equation provides a local encoding of the data at the synapse j \rightarrow i . w k [5] Klopf's model reproduces a great many biological phenomena, and is also simple to implement. [citation needed]. [13][14] Mirror neurons are neurons that fire both when an individual performs an action and when the individual sees[15] or hears[16] another perform a similar action. For a neuron with activation function (), the delta rule for 's th weight is given by = (−) ′ (), where . It helps a Neural Network to learn from the existing conditions and improve its performance. ⟨ What is hebb’s rule of learning a) the system learns from its past mistakes b) the system recalls previous reference inputs & respective ideal outputs c) the strength of neural connection get modified accordingly d) none of the mentioned View Answer (net.adaptParam automatically becomes trains’s default parameters. are the eigenvectors of x Hebbian learning strengthens the connectivity within assemblies of neurons that fire together, e.g. {\displaystyle A} MCQ Questions for Class 7 Social Science with Answers were prepared based on the latest exam pattern. "[2] However, Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B. i where This page was last edited on 5 June 2020, at 22:10. [1] The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebbian theory concerns how neurons might connect themselves to become engrams. \Delta J _ {ij } = \epsilon _ {ij } { We may call a learned (auto-associated) pattern an engram.[4]:44. ∗ Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows: If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. C should be active. j From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. K. Schulten (ed.) This mechanism can be extended to performing a full PCA (principal component analysis) of the input by adding further postsynaptic neurons, provided the postsynaptic neurons are prevented from all picking up the same principal component, for example by adding lateral inhibition in the postsynaptic layer. The Hebb’s principle or Hebb’s rule Hebb says that “when the axon of a cell A is close enough to excite a B cell and takes part on its activation in a repetitive and persistent way, some type of growth process or metabolic change takes place in one or both cells, so that increases the efficiency of cell A in the activation of B “. It is an iterative process. Here, \{ {S _ {i} ( t ) } : {1 \leq i \leq N } \} , = = to neuron {\displaystyle \mathbf {c} _{i}} to neuron Meaning of Hebbs rule. 
Hebb's postulate has been formulated in plain English (but not more than that), and the main question is how to implement it mathematically. A learning rule dating back to D.O. Hebb's 1949 book answers this as follows. Consider a network of $N$ neurons with activities $\{S_i(t) : 1 \leq i \leq N\}$, and let $J_{ij}$ denote the synaptic efficacy of the connection $j \rightarrow i$, from pre-synaptic neuron $j$ to post-synaptic neuron $i$. The input pattern to be stored is presented during a learning session of duration $T$, and the Hebb rule changes $J_{ij}$ by (one common reconstruction, following the Encyclopedia of Mathematics article cited below)

$$\Delta J_{ij} = \frac{\epsilon_{ij}}{T} \sum_{0 < t \leq T} S_i(t + \Delta t)\, S_j(t),$$

where $\epsilon_{ij}$ is a small plasticity constant and the time unit is $\Delta t = 1$ millisecond, matching a network with synchronous updating. The multiplier $T^{-1}$ makes the change independent of the session length, and the equation provides a local encoding of the data at the synapse $j \rightarrow i$: only pre- and post-synaptic activity enters. If the weights start at zero and a set of pairs of patterns is presented repeatedly during training, the rule simply accumulates a sum of outer products of the patterns. In passing one notes that for constant, spatial, patterns one recovers the Hopfield model [a5]. To store spatio-temporal patterns rather than stationary ones, it is advantageous to give the rule a time window [a6]: the pre-synaptic neuron should fire slightly before the post-synaptic one, giving long-term potentiation (LTP), while the reverse order gives a depression (LTD). It has also been advocated [a4] that an extremely low mean activity $a$ allows efficient storage of stationary data; one then replaces $S_j$ by $S_j - a$, and since $S_j - a \approx 0$ for the many quiescent neurons, they hardly perturb what is stored.

The biology of Hebbian learning has meanwhile been confirmed, and the rule can be integrated with detailed biological contexts [a6]: studies report that fear conditioning and LTP in the lateral amygdala are sensitive to the same stimulus contingencies, and that natural patterns of activity drive long-term synaptic plasticity. Some synaptic modification, however, relies on retrograde signaling to the pre-synaptic neuron, and diffuse, neighbourhood-wide modification, known as volume learning, counters, or at least supplements, the traditional synapse-specific Hebbian model.

Hebbian learning and spike-timing-dependent plasticity have been used in an influential theory of how mirror neurons emerge — neurons that fire both when an individual performs an action and when the individual sees or hears another perform a similar action. Christian Keysers and David Perrett suggested that as an individual performs a particular action, the individual will see, hear, and feel the performing of the action. These re-afferent sensory signals trigger activity in neurons responding to the sight, sound, and feel of the action; because those neurons are repeatedly co-active with the motor neurons producing it, Hebbian learning links the two, forming an audiomotor recognition network. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time they press a key, is sufficient to trigger activity in motor regions of the brain upon later listening to piano music. The same is true while people look at themselves in the mirror. Hebbian learning is also the basis for errorless learning methods used in education and memory rehabilitation.

Two practical notes folded into this page from the tutorial literature: in MATLAB's neural-network toolbox, Hebb weight learning is provided by the function learnh (each weight learning parameter property is automatically set to learnh's default parameters), and a classic student exercise is OCR with Hebb's learning rule that differentiates only between 'X' and 'O'. Finally, the multiple-choice question this page is built around, cleaned up:

What is Hebb's rule of learning?
a) the system learns from its past mistakes
b) the system recalls previous reference inputs and respective ideal outputs
c) the strength of neural connections gets modified accordingly
d) none of the mentioned
Answer: (c) — when two neurons activate simultaneously the weight between them increases; when they activate separately it is reduced.

Sources cited above:
- D.O. Hebb, The Organization of Behavior, Wiley (1949).
- G. Palm, Neural Assemblies: An Alternative Approach to Artificial Intelligence, Springer (1982). [a4]
- J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities" (1982). [a5]
- J.L. van Hemmen, "Why spikes?" [a6]
- J.L. van Hemmen, "Hebb rule", Encyclopedia of Mathematics, ISBN 1402006098, https://encyclopediaofmath.org/index.php?title=Hebb_rule&oldid=47201
- Wikipedia, "Hebbian theory", https://en.wikipedia.org/w/index.php?title=Hebbian_theory&oldid=991294746
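Since for constant spatial patterns the rule reduces to the Hopfield outer-product prescription $J_{ij} = N^{-1}\sum_\mu \xi^\mu_i \xi^\mu_j$, a short sketch (Python/NumPy; the network size, pattern count, and corruption level are arbitrary illustration choices) shows storage and recall:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5                          # N neurons, P random +-1 patterns
xi = rng.choice([-1, 1], size=(P, N))

# Hebb / Hopfield outer-product rule: J_ij = (1/N) * sum_mu xi^mu_i xi^mu_j
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)               # no self-coupling

# Recall: corrupt pattern 0 in 10 places, then iterate S <- sgn(J S)
S = xi[0].copy()
S[rng.choice(N, size=10, replace=False)] *= -1
for _ in range(5):
    S = np.where(J @ S >= 0, 1, -1)    # synchronous update

print("overlap with stored pattern:", (S @ xi[0]) / N)  # ~1.0 on success
```

With only a few stored patterns relative to the network size, the corrupted pattern is attracted back to the stored one, which is exactly the auto-associative "engram" behaviour described above.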
2021-09-24 15:01:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6525960564613342, "perplexity": 3012.0391074357444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00427.warc.gz"}
https://genealogy.math.ndsu.nodak.edu/id.php?id=20652
## Per Sjölin

Ph.D. Uppsala Universitet 1971

Dissertation: Operators Connected with Convolution and Summation of Fourier Series and Fourier Integrals
2022-08-15 04:45:47
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9468511343002319, "perplexity": 10030.20276876515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572127.33/warc/CC-MAIN-20220815024523-20220815054523-00134.warc.gz"}
https://electronics.meta.stackexchange.com/questions/9688/how-to-use-several-similar-circuitlab-schematics-in-one-post-without-redrawing-t
# How to use several similar CircuitLab schematics in one post without redrawing the whole diagram?

Sometimes I want to use two or more circuit diagrams in one post which are only slight variations of each other. Is it possible to "continue" an existing schematic and then insert it as a new, separate schematic?

Here is a way: Create the initial schematic the usual way. Do your stuff on CircuitLab, and when the first version is done, click "Save and Insert". That takes you back to EE.SE, and there is now some markup in your answer text that looks like this: <!-- Begin schematic: In order to preserve an editable schematic, please don't edit this section directly. Click the "edit" link below the image in the preview instead. --> ![schematic](http://i.stack.imgur.com/*****.png) <!-- End schematic --> Copy this entire part as many times as you want in your post. This will create that many instances of your schematic in your post. For each instance, you'll have an "edit the above schematic" link. If you click it, you can modify each instance individually. When you "Save and Insert" from CircuitLab for a given instance, the corresponding link (the *****.png part) will change for the modified instance, but it will not update the other instances, and the original schematic will be preserved. You can even use the same technique to reuse, possibly with some modifications, a schematic from another post. Go to the source post that contains the schematic and click "Edit" as if you wanted to modify the post: that will show you the markup text. From there, copy the schematic block as shown above, cancel the post edit, and paste the block into your destination post. You can use the schematic as is, or modify it (the original schematic, of course, won't change). • Thanks. That last paragraph is especially useful when replying to a question whose schematic you want to annotate or modify. Jun 30, 2022 at 11:54 • A word of warning: I have found that editing a post with multiple CircuitLab schematics can be risky. On more than one occasion, I have lost significant work I had done on the schematics. I believe the interface to be buggy, so I suggest backing up your changes by completing an edit when you are done with each schematic, rather than waiting until you are done with all of them. Jul 7, 2022 at 13:46
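For instance, after copying the block once, the post body contains two independent instances (the `*****` image IDs are whatever CircuitLab generated; each instance gets its own ID once you edit and re-save it):

    <!-- Begin schematic: In order to preserve an editable schematic, please don't edit this section directly. Click the "edit" link below the image in the preview instead. -->
    ![schematic](http://i.stack.imgur.com/*****.png)
    <!-- End schematic -->

    Text comparing the first variant to the second goes here.

    <!-- Begin schematic: In order to preserve an editable schematic, please don't edit this section directly. Click the "edit" link below the image in the preview instead. -->
    ![schematic](http://i.stack.imgur.com/*****.png)
    <!-- End schematic -->

Editing the second instance and saving changes only the second `![schematic](...)` link; the first instance is untouched.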
2023-04-01 09:50:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6223078370094299, "perplexity": 1377.9140714913508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00416.warc.gz"}
http://formafarma.it/ziay/weighted-set-cover-problem.html
# Weighted Set Cover Problem

The set cover problem is a well-studied problem in computer science; more than two decades of research has produced a rich theory around it and its weighted variants.

Problem definition. Given a collection S of sets over a universe U, a set cover C ⊆ S is a subcollection of the sets whose union is U. The set-cover problem is, given S, to find a minimum-cardinality set cover: the classical set covering problem (SCP) asks for a cover with minimum cardinality. In applications there is usually a weight function w from S to R+, which leads to the weighted version.

Definition 1 (Weighted Set Cover). Given a universe U of n elements, a collection of subsets of U, S = {S_1, …, S_k}, and a cost function c : S → Q+, find a minimum-cost subcollection of S that covers all elements of U. Equivalently, we want the cheapest index set I ⊆ {1, …, k} with ∪_{j∈I} S_j = U.

The exact notion of covering differs from problem to problem, yet this abstract setting is common to many classical combinatorial problems in various application areas. Other classical problems in the framework of covering include Vertex Cover, Dominating Set, Facility Location, and the k-Median and k-Center problems, on which hundreds of papers have been written; the (uncapacitated) facility-location problem is a generalization of weighted set cover. Two natural relatives are the maximum coverage problem — identify k sets whose union has maximum (weighted) size, i.e., limit the number of sets we may use and maximize the weight of the elements covered — and multiset multicover, in which an element e occurs in a set S with arbitrary multiplicity, denoted m(S; e), and must be covered a prescribed number of times.

Set cover was one of the original 21 problems proven NP-complete by Karp [11]. Finding the minimum size of a set cover reduces to the decision problem of telling whether there is a set cover of size at most k, and the decision version can be shown NP-complete by a reduction from vertex cover. Since no polynomial-time exact algorithm exists unless P = NP, one turns to approximation. There is a very natural and efficient greedy algorithm for the weighted set cover problem: repeatedly pick the set minimizing cost per newly covered element, discard the covered elements, and continue until all elements are covered. Applying the method of conditional probabilities to the LP relaxation yields Chvátal's greedy algorithm for weighted set cover and a proof that it is an H(n)-approximation, where H(n) = 1 + 1/2 + ⋯ + 1/n; the ratio sharpens to H_d with d = max_i |S_i|, also written ln δ + 1 where δ is the maximum cardinality of the sets in S. The H_n bound was proven for the unweighted problem in [10, 14] and for the weighted problem in [4]. This is essentially optimal:

Theorem (Hardness of Set Cover [Fei98]). For all 0 < ε < 1, it is NP-hard to approximate the set cover problem with approximation ratio (1 − ε) ln n.

In other words, in polynomial time the optimum cannot be approximated more closely than within a factor ln n, while algorithms exploiting linear programming relaxation techniques asymptotically match this lower bound. By contrast, the related independent set problem is far less approximable: by a theorem of Håstad [1], unless P = NP there is no n^{1−ε}-approximation for any ε > 0. In particular, although the complement of a vertex cover is an independent set, a constant-factor approximation for vertex cover does not provide a polynomial-time constant-factor approximation algorithm for the independent set problem.

The problem has also been studied online, where elements arrive one by one and must be covered on arrival; this models settings where there is uncertainty. The first algorithm with a poly-logarithmic competitive ratio for the online set cover problem was proposed by Alon et al. [6], who introduced an online adaptation of the classical LP relaxation technique; a lower bound of Ω(log k log n / (log log k + log log n)) holds for any online algorithm, where k and n denote the size of the universe and the number of sets, respectively. Alon, Azar and Gutner [3] considered the weighted online set-cover problem with repetitions, studied in the bigger context of admission-control problems in general networks, and Naor et al. [7] built further on these techniques.
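To make the greedy rule above concrete, here is a minimal Python sketch (the function and the toy instance are illustrative, not taken from any of the papers cited on this page):

```python
def greedy_weighted_set_cover(universe, sets, cost):
    """Greedy H(n)-approximation: repeatedly take the set with the
    smallest cost per newly covered element.
    universe: iterable of elements; sets/cost: dicts keyed by set name."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: cost[s] / len(sets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance: greedy returns ['S1', 'S4'] of total cost 2.0,
# beating the single big set S3 of cost 2.5.
U = {1, 2, 3, 4, 5}
S = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {1, 2, 3, 4, 5}, "S4": {4, 5}}
c = {"S1": 1.0, "S2": 1.0, "S3": 2.5, "S4": 1.0}
print(greedy_weighted_set_cover(U, S, c))
```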
Relations to other covering problems. MINIMUM SET COVER and MINIMUM HITTING SET are equivalent problems. This is seen by observing that an instance of set covering can be viewed as an arbitrary bipartite graph, with sets represented by vertices on the left, the universe represented by vertices on the right, and edges representing the inclusion of elements in sets; the task is then to find a minimum-cardinality subset of left-vertices covering all right-vertices, and exchanging the two sides turns covering into hitting.

In the weighted vertex cover problem we are given an undirected graph G = (V, E) with vertex weights w_i ≥ 0; a set C ⊆ V is a vertex cover if each edge has at least one endpoint in C, and we ask for a vertex cover C such that the total weight of the cover is minimized: min Σ_{v∈C} w(v). Vertex cover is precisely set cover in which every element (edge) belongs to exactly two sets. Standard algorithms include the maximum-degree greedy heuristic, the maximal-matching heuristic for the unweighted case, and the pricing (primal–dual) method for the weighted case, which gives a 2-approximation; a 2(1 + ln Δ)-approximate variation of the greedy algorithm can be implemented in O(m) time, and a slightly different analysis reduces the approximation by a small constant. For edge-weighted network design, work on the Edge-Weighted Steiner Network problem (with weights on the edges only) developed novel tools for approximating minimum-weight edge-covers of several types of set functions and families, and node-weighted Steiner tree and node-weighted Steiner forest are special cases of the (uniform) single-sink and multi-commodity buy-at-bulk problems, respectively — thus buy-at-bulk problems inherit set-cover-type hardness.

Capacities make the problem harder. Formally, in a capacitated set cover problem with hard capacities we are given a ground set of elements X and a collection of its subsets S, each set able to cover only a bounded number of the elements it contains; the goal is to find a collection I of sets that covers all the elements in X and minimizes Σ_{j∈I} w_j. Chuzhoy and Naor (FOCS, 2002) have shown that the weighted version of vertex cover with hard capacities is at least as hard as set cover; in addition, they developed a 3-approximation algorithm for the unweighted version, and for unweighted graphs a randomized rounding algorithm gives a 3-approximation. From now on, for simplicity, we call the capacitated set cover problem with splittable demands and soft constraints simply the capacitated set cover problem.

Geometric versions form a field of their own — notably, the breakthrough of Bansal and Pruhs (FOCS 2010) reduces a wide class of machine scheduling problems to weighted geometric set cover. Given a set P of n points and a set B of m weighted fat objects in the plane, we are interested in computing a minimum-weight cover of P by a subset of B (in higher-dimensional versions, the dimension d is considered part of the input). An instance of the weighted geometric set cover problem with unit disks is given by a set P of points in the two-dimensional Euclidean plane and a set D of weighted unit disks; known results include approximation algorithms for the weighted unit disk cover problem (covering a set of points in the plane with unit disks of minimum total weight) and a 3-approximation algorithm for the weighted forwarding set problem (covering a set of points in the plane with weighted unit disks whose centers are all contained in a given unit disk). Another weighted geometric set cover problem, called Stabbing, can be motivated by a resource allocation problem, has applications in geometric networks, and is NP-hard.

LP relaxation. The minimum set cover problem can be formulated as an integer linear program, which belongs to the more general class of ILPs for covering problems; its LP relaxation and the dual packing program (maximize Σ_e y_e) underpin most of the algorithms above.
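In standard notation (a reconstruction consistent with the definitions above; the text references this ILP without displaying it):

```latex
\begin{align*}
\text{(ILP)}\quad
  &\min \sum_{S \in \mathcal{S}} c(S)\, x_S \\
  &\text{s.t.}\ \sum_{S \ni e} x_S \ge 1 \quad \text{for every element } e \in U, \\
  &\phantom{\text{s.t.}\ }\, x_S \in \{0,1\} \quad \text{for every } S \in \mathcal{S}.
\end{align*}
```

The LP relaxation replaces $x_S \in \{0,1\}$ by $x_S \ge 0$, and its dual is the packing program

```latex
\begin{align*}
  &\max \sum_{e \in U} y_e \\
  &\text{s.t.}\ \sum_{e \in S} y_e \le c(S) \quad \text{for every } S \in \mathcal{S}, \\
  &\phantom{\text{s.t.}\ }\, y_e \ge 0 \quad \text{for every } e \in U.
\end{align*}
```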
Rounding the relaxation — for instance, repeatedly picking sets with probability proportional to their fractional values from the fractional cover until all elements are covered — gives an O(log n)-approximation, and threshold rounding gives an f-approximation, where f is the maximum frequency of an element. The greedy algorithm can also be analyzed by charging: order the chosen sets as S_π(1), S_π(2), …, and for an element j of the base set let i be the index such that j ∈ S_π(i) and j ∉ S_π(k) for all k < i, so that j pays for the first chosen set covering it. In Chvátal's original paper ("A greedy heuristic for the set-covering problem", V. Chvátal, McGill University) the problem is stated in matrix form: let A be a binary matrix of size m × n, let c^T be a positive row vector of length n, and let e be the column vector all of whose m components are ones; minimize c^T x subject to Ax ≥ e over 0–1 vectors x. Implementations are easy to find — for example, a Java program that solves the weighted set cover problem using three greedy solvers (a greedy coverage algorithm, a greedy cost algorithm, and Chvátal's algorithm) and summarizes and displays the performance of each (azakiio/Set-Cover-Problem-Java). In parallel models, NC approximation algorithms for the unweighted and weighted set cover problems use a linear number of processors and give a cover that is at most log n times the optimal size/weight, matching the performance of the best sequential algorithms [J, Lo, C]; in MapReduce, a randomized local ratio technique gives 2-approximations for weighted vertex cover and weighted matching, and an f-approximation for weighted set cover, all in a constant number of rounds.

Variants with modified covering requirements. It is well known that the natural greedy algorithm is a ln n-approximation algorithm for the weighted set multi-cover problem, and no polynomial-time algorithm can do better (up to lower-order terms), even for unit demands and weights [1]; here each element e has an integer coverage requirement r_e specifying how many times e has to be covered, by distinct sets. A related exercise-level variation is the minimum weighted set double cover, in which every element must be covered at least twice. The b-EDS problem generalizes the edge dominating set problem (EDS) in much the same way that set multicover generalizes set cover [17]; when b_e = 1 for all e ∈ E it is exactly EDS, one of the natural covering problems in graphs alongside edge cover, vertex cover, and dominating set, and a well-known generalization of the multiway cut problem, the multicut problem, fits the same framework. Dominating set itself reduces to set cover directly: D is a dominating set of G if and only if {N[v] | v ∈ D} is a set cover of {N[v] | v ∈ V}. Weighted Connected Set Cover (WCSC), where the chosen sets must additionally be connected, is different from Two-Tier Network Connectivity (TTNC), and the minimum directed tree cover problem (DTCP) — find a directed tree cover of minimum cost — contains the weighted set cover problem (SCP) as a special case, so it inherits its hardness. In the spanning star forest variant with node weights, the objective is instead to maximize the weights of nodes that are leaves in the spanning star forest solution; for one such constrained variant, approximation factors of (1 + H_k) and 2H_k are reported for the unweighted and weighted cases, respectively.

Budgeted coverage. Given a ground set U with a non-negative weight w_i for each i ∈ U, a positive integer k, and a collection of sets S partitioned into a family of disjoint groups G, the goal of the Maximum Coverage problem with Group budget constraints (MCG) is to select k sets from S such that the total weight of the union of the k sets is maximized and at most one set is selected from each group G ∈ G. Without the groups this is the plain weighted maximum coverage problem, for which the greedy algorithm achieves the classic (1 − 1/e) guarantee; a sketch follows.
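A minimal greedy maximum-coverage sketch in Python (the (1 − 1/e) bound is the standard textbook analysis; the instance and names are illustrative):

```python
def greedy_max_coverage(sets, weight, k):
    """Pick k sets greedily by maximum marginal covered weight.
    The classic analysis gives a (1 - 1/e)-approximation.
    sets: dict name -> set of elements; weight: dict element -> float."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: sum(weight[e] for e in sets[s] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, sum(weight[e] for e in covered)

S = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
w = {e: 1.0 for e in range(1, 7)}
print(greedy_max_coverage(S, w, k=2))   # picks A then C, covering weight 6.0
```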
Stabbing is a weighted geometric set cover problem, which we show to be NP-hard. Organic linen and hemp blanket. A prominent example of an N P-complete problem for which a pseudo-polynomial algorithm is known is the Knapsack Problem; examples for strongly N P-complete problems include TSP and the Set Covering Problem (see Chapter 10, Section 10. AMAT is a weighted geometric set cover problem. One conclusion of our analysis of the NP-hard problems here is that all of these problems are MAX SNP-hard and at least as difficult to approximate as the vertex cover problem. Suppose E′ is the set of edges defined only. set cover problem. The offline version of the set cover problem is a classic NP-hard problem that was studied extensively, and the best approximation factor achievable for it in polynomial time (assuming P 6= NP) is Θ(logn) [12, 13]. the (unweighted) vertex cover problem. Sketch of Proof Let (U,S)be an instance for the set cover problem. for the Vertex Cover problem. In this paper, we show that the weighted Set Cover Problem (SCP) is a special case of DTCP. Solving a Weighted Set Covering Problem for Improving Algorithms for Cutting Stock Problems with Setup Costs by Solution Merging zur Erlangung des akademischen Grades Diplom-Ingenieur im Rahmen des Studiums Technische Mathematik eingereicht von Dipl. Intuitively speaking, we define an approximate solution as combinatorially k-stable with respect to an update operation if its approximation ratio remains the same even if the problem instance is modified. For commercial, education or professional use, discover the 3D printing solution that's right for you. The Vehicle Routing Problem. In the Data and Methods section, we will introduce in detail how to formulate a biological problem into a weighted. It models settings where there is uncertainty. Let Rbe a set of subsets of S. Motivated by the above, we study some special cases of the weighted set cover problems. Source code of Inno Setup - free installer for Windows programs. Chuzhoy and Naor (FOCS, 2002) have shown that the weighted version of this problem is at least as hard as set cover; in addition, they developed a 3-approximation algorithm for the unweighted version. It is well known that the natural greedy algorithm is a lnn-approximation algorithm for the weighted set multi-cover problem, and that no polynomial time algorithm can do better (up to lower order terms) even for unit demands and weights [1]. IXL will track your score, and the questions will automatically increase in difficulty as you improve!. Get homework help fast! Search through millions of guided step-by-step solutions or ask for help from our community of subject experts 24/7. If no elements in the set then we can’t make any subset except for 0. One of the advantages of choosing a company that has been in business for 20 years is that you benefit from their experience and knowledge. After 10 minutes of continuous steam, you can close the petcock or place the counterweight or weighted gauge over the vent pipe to begin pressurizing the canner. So finding the minimum size of a set cover reduces to the problem of telling if theres a set cover of size. Roughly speaking, the task is to cover a given base set S with a selection of a given set of subsets of S as cheaply as possible–see Section 2 for a precise definition. A vertex cover of a graph G = (V,E) is a V. Two important combinatorial problems equivalent to the MVC problem are the maximum independent set (MIS) problem and the maximum clique (MC) problem [8]. 
Escoffier, V. Word problems Here is a list of all of the skills that cover word problems! These skills are organized by grade, and you can move your mouse over any skill name to preview the skill. Vertex Cover Problem - Given a graph G= (V;E), A set S V is a vertex cover if 8e= (u;v) 2Eatleast one of uor vis in S. Formally, in a capacitated set cover problem with hard capacities we are given a ground set of elements X and a collection of its subset S. IAS 2 contains the requirements on how to account for most types of inventory. Given a set P of npoints and a set B of mweighted fat objects in the plane, we are interested in computing a minimum weight cover of P by a subset of B. In both problems, we are given a set of sensors and a set of target points in the Euclidean plane. The objective now is to maximize the weights of nodes that are leaves in the spanning star forest solution. Does this provide a poly-time constant-factor approximation algorithm for the Independent Set Problem? Explain. Formally, a SCP ( U; S ;c) is described as follows: min X M c(X ) = X i2 X c(i) s. From now on, for simplicity, we call the capacitated set cover problem with splittable demands and soft constraint, the capacitated set cover. As mentioned earlier in the previous lecture, set-covering is an NP-Hard problem. ing it from unweighted vertex cover to weighted set cover (a. It is known that the problem of fractional set cover can be rephrased as a linear programming problem and be approximated using the multiplicative weights method, for instance this lecture note shows how to do so. Motivated by the above, we study some special cases of the weighted set cover problems. He also gave an improvedO(logh(n)) approximation when h(n)grows (possibly quite mildly) with n. However, since the quan-. In both problems, each element ehas an integer coverage requirement re, which speci es how many times ehas to be covered. Perform Regular Preventive Maintenance 7 — Follow a regular program of preventive maintenance and backwash or clean the filter as recommended by the manufacturer to maintain maximum efficiency. For faster navigation, this Iframe is preloading the Wikiwand page for Set cover problem. Coordinate Grid Paper and a Notebook Cover My children needed coordinate grids so often for algebra that I made a notebook for them to use. The Nrich Maths Project Cambridge,England. Refer to Appendix F for more information. Source Code Of Set Cover Problem C Source Code Codes and Scripts Downloads Free. Show that the decision version of the set-covering problem is$\text{NP-complete}$by reducing it from the vertex-cover problem. I can solve that using a greedy manner. File setup calculator. We extend the model to weighted case and discuss related problems in Section 4. Weighted Mean. A proto-typical example is the Red-Blue Set Cover problem, in which we are given a set Rof red elements, a set Bof blue elements and a family S 2 jRj[jB of. 8 Customizable Stars Galaxy Nature Space Blue 3D Duvet Cover Set Pillow Cover, Single Double Queen King Size, Printed Cotton Quilt Doona Cover 3 Pcs. Once we have each person's lowest score, all we need to do is choose the minimum. Our algorithms use a linear number of processors and give a cover that has at most log n times the optimal size/weight, thus matching the performance of the best sequential algorithms [J, Lo, C]. Journal of the ACM. We will proceed by examining a few examples. The notion of N P-hardness applies to decision and optimisation problems alike. 
The (partial) weighted set cover problem seeks to cover a specified fraction of the entities using a collection of sets with the minimum sum of costs (weights). We cannot expect to write an efficient algorithm to solve this problem, so we present an approximate one. For each S ∈ C we have a vertex v_S in the hypergraph. Therefore, any general polynomial-time algorithm that always outputs the optimal solution to your optimization problem would imply that P = NP (which seems unlikely). Given these amortized bounds on recourse and update time, one may wonder about non-amortized bounds. Since the mutually exclusive maximum set cover problem is a special case of the weighted mutually exclusive maximum set cover problem, Theorem 1 implies that the weighted mutually exclusive maximum set cover problem is NP-hard. The input to the weighted set cover problem is a collection of sets, where each set s is given a cost cost(s) ∈ ℝ₊. An instance of the weighted (geometric) set cover problem with unit disks is given by a set P of points in the two-dimensional Euclidean plane and a set D of weighted unit disks. There are some negative results which suggest that this may be the best possible bound. We will now examine a greedy algorithm that gives a logarithmic approximation. We need at least 5 watchmen to guard the whole city. The minimum directed tree cover problem (DTCP) is to find a directed tree cover of minimum cost.
Each of the distinct sets of objects that can be included in a single observation is given as an input set, and the optimization problem is to minimize the number of sets whose union includes all the objects of interest. In recent years, it has received significant attention in the dynamic algorithms community as well, where the goal is to maintain a set cover I ⊆ S of small cost efficiently under updates. Our randomized local ratio technique gives 2-approximations for weighted vertex cover and weighted matching, and an f-approximation for weighted set cover, all in a constant number of MapReduce rounds. Chvátal (McGill University): let A be a binary matrix of size m × n, let cᵀ be a positive row vector of length n, and let e be the column vector all of whose m components are ones. Application: there are n villages, and the government is trying to figure out at which villages to open schools so that it opens the minimum number of schools. Claim 1: MINIMUM SET COVER and MINIMUM HITTING SET are equivalent problems. Upper bound on the greedy set cover algorithm: in the previous example we saw a case where the greedy algorithm did not produce the optimal solution. It is because the weighted version of VCHC for simple graphs is already as hard as set cover (Chuzhoy and Naor [4]), while the unweighted versions for multigraphs and hypergraphs… In the weighted version every element has a weight. Approximation factors are (1 + H_k) and 2H_k, respectively, for the unweighted and weighted case. By using "good" design points, a weighted set cover problem (WSC) is applied to formulate the combinatorial optimization problem, which maximizes the commonality by minimizing the number of component attributes. The goal is to find a collection I of sets that covers all the elements in X and minimizes $\sum_{j \in I} w_j$. I'm trying to figure out what kind of approximation said algorithm is. The FloydWarshall class represents a data type for solving the all-pairs shortest paths problem in edge-weighted digraphs with no negative cycles. On the one hand, we prove that in polynomial time the optimal solution of the problem cannot be approximated more closely than with a factor ln n.
Any algorithm for the (weighted) stochastic set cover problem must be Ω(√n)-competitive (see Section 3); it models settings where there is uncertainty. Set Cover: given a set S containing n elements and m subsets S₁, …, S_m of S. 1 PROBLEM DEFINITION. Given a collection S of sets over a universe U, a set cover C ⊆ S is a subcollection of the sets whose union is U. Why do the approaches in [13, 4, 32], which yield improved nets for objects such as fat triangles and disks, fail to do so in the weighted case? The problem, which we call Stabbing, can be motivated by a resource allocation problem and has applications in geometric network design. The FordFulkerson class represents a data type for computing a maximum st-flow and minimum st-cut in a flow network. Given a set of elements and a collection of sets, the Set Cover problem is to find the smallest subcollection to cover all the elements. Set Cover: "Tight Bounds for Single-Pass Streaming Complexity of the Set Cover Problem", FOCS 2016. Monnot: "Complexity and approximation results for the connected vertex cover problem in graphs and hypergraphs" (2007). The goal is to cover all the elements with the allowed sets. The performance of each is summarized and displayed to the user (azakiio/Set-Cover-Problem-Java). Bound the quality of the randomly rounded solution by comparing it to the LP solution. Abstract: In this paper we describe a collection of efficient algorithms that deliver approximate solutions to the weighted stable set, vertex cover and set packing problems. The minimum vertex cover problem is to find a vertex cover with the smallest number of vertices.
A Java program that solves the famous weighted Set Cover Problem (SCP) using three greedy solver algorithms: the Greedy Coverage Algorithm, the Greedy Cost Algorithm, and Chvátal's Algorithm (a sketch of this style of greedy appears after this paragraph). To the best of our knowledge, only special cases of this problem have been considered so far. HQCAs have been proposed for constraint optimization problems, such as the maximum weighted independent set (MWIS) problem (Choi 2008), the graph partition problem (Hen and Spedalieri 2016), the graph isomorphism problem (Hen and Sarandy 2016) and the set cover problem (Lucas 2014), as well as its generalization (Cao et al.). We extend the investigations carried out in [1] to the weighted minimum vertex cover problem. In this paper, we consider the weighted vertex cover problem, where in addition weights on the nodes are given and the goal is to find a vertex cover of minimum weight. In this problem, set covers are computed to cover only a fraction of fixed targets and are activated alternately for short durations. Interactive submodular set cover is an interactive variant of submodular set cover over a hypothesis class of submodular functions, where the goal is to satisfy all sufficiently plausible submodular functions to a target threshold using as few (cost-weighted) actions as possible. For a bipartite graph G = (V, E) with a bipartition V = V_Red ∪ V_Blue. Work preceding it considered the Edge-Weighted Steiner Network problem, with weights on the edges only, and developed novel tools for approximating minimum-weight edge-covers of several types of set functions and families. Streaming set cover asks for o(mn) storage and a (hopefully) decent approximation factor; why study it? It is a classic optimization problem with applications in "Big Data": clustering, topic coverage. The main objective of the k-set cover problem is to increase the lifetime of a WSN by constructing a maximum number of k set covers [45–47]. In this form of set cover, choosing a set S has cost c(S). This problem can be considered as a weighted BSMC by setting the weight of every (server, client) pair to be the client's weight. Such an example takes the form of weighted set cover. In the weighted case, each node v ∈ V has an associated non-negative weight w(v) and the goal is to find a maximum-weight independent set. The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory. Definition 1 (Weighted Set Cover): universe of n members U = {1, …, n}. We give a randomized O(log³ n · log k)-approximation algorithm for the group Steiner tree problem on an n-node graph, where k is the number of groups.
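As a concrete illustration of the Chvátal-style greedy named above, here is a minimal sketch in Python (the repository mentioned above is in Java; this function name and toy instance are our own illustration, not code quoted from it). It repeatedly picks the set with the best cost per newly covered element:

```python
def greedy_weighted_set_cover(universe, subsets, costs):
    """Chvatal-style greedy: repeatedly pick the set with the best
    cost per newly covered element. Assumes the subsets jointly
    cover the universe; returns the indices of the chosen sets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # cost-effectiveness of set i = cost_i / (# still-uncovered elements it adds)
        best = min(
            (i for i in range(len(subsets)) if uncovered & subsets[i]),
            key=lambda i: costs[i] / len(uncovered & subsets[i]),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# toy instance over U = {1, ..., 5}
print(greedy_weighted_set_cover(
    {1, 2, 3, 4, 5},
    [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}],
    [5.0, 10.0, 3.0, 1.0],
))  # -> [2, 0]: picks {3, 4, 5} at rate 1.0, then {1, 2, 3}
```

Each round charges cost(S)/|newly covered elements| and takes the cheapest rate; this is exactly the rule behind the H_d (roughly ln d, with d the largest set size) guarantee, matching the ln n-approximation bounds quoted in this extract.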
Approximation ratios are ln δ + 1, where δ is the maximum cardinality of the sets in ℱ. Given a non-negative cost function c : M → ℝ₊, the set cover problem (SCP) is to find a set cover X ⊆ M that minimizes the total cost. It is easy to see that, in a graph G = (V, E), a set C ⊆ V is a vertex cover if and only if its complement V ∖ C is an independent set, and so, from the point of view of exact solutions, the two problems are equivalent: if C is an optimal vertex cover, then V ∖ C is a maximum independent set. "My sets are going to have a few tens of items, at most; is there an exact algorithm for the weighted set cover problem?" – Stavros Korokithakis Jul 19 '13 at 12:21. "@StavrosKorokithakis Yes, branch and bound." Problem: find a minimum-weight subset of nodes S such that every e ∈ E is incident to at least one vertex in S. In the weighted version of the triangle cover problem we have a weight w(v) ≥ 0 associated with each node v and we define w(S) := Σ_{v∈S} w(v). This problem is the natural complement of the weighted minimum dominating set problem. The second hardness result proves that it is NP-hard to approximate d-rs with a ratio of c log d, for some constant c. The solution to the partial weighted set cover problem would return the 7 sets/patterns P3, P5, P6, P8, P10, P12, P13, with a total cost of 24. If each set is assigned a cost, it becomes a weighted set cover problem. We next consider the problem of finding the minimum vertex cover and its generalization, minimum hitting set. The rank r is the maximum number of sets in which any element appears. Here, an element can be presented multiple times and, if the element is presented k times, our goal is to cover it by at least k different sets. This problem is also NP-complete, but it is a problem for which… In the weighted set-cover problem, for each set s ∈ S a weight w_s ≥ 0 is also specified, and the goal is to find a set cover C of minimum total weight, i.e., to minimize Σ_{s∈C} w_s. The set-cover problem is, given S, to find a minimum-cardinality set cover.
This ILP belongs to the more general class of ILPs for covering problems. In Proceedings of the 1993 IEEE 34th Annual Foundations of Computer Science (SFCS '93), IEEE, pp. Exercise 3 (Minimum Weighted Set Double Cover): next, we consider a variation of the weighted set cover problem that we call the minimum weighted set double cover. Given a finite set X and a family F of subsets of X, it asks whether there exists a subset s ⊆ X with cardinality k such that every element of F contains at least one element from s. Figure 1: Diagram of a Set Cover problem. A dominating set D is connected if G[D], the subgraph of G induced by D, is connected. The minimum-weight set cover problem is widely known to be O(log n)-approximable, with no improvement possible in the general case. "On Streaming and Communication Complexity of the Set Cover Problem", Erik D. Demaine et al. Remark: we can examine the problem of Weighted Vertex Cover as a special case of WSC in the following way: {member ↔ edge}, {set ↔ vertex}, with each member belonging to exactly 2 sets. The classic SCP description consists of a set U together with a number of subsets S built from elements of U; the goal is to find a subcollection of S whose members together contain all of the elements while using the fewest subsets. Paschos: "Structures des classes d'approximation : un état de l'art" (2007). min_weighted_vertex_cover(G, weight=None): returns an approximate minimum weighted vertex cover. The minimum vertex cover problem is closely related to the maximum independent set problem.
In weighted Set Cover, there is a nonnegative weight function w : S → ℝ₊, and the cost of C is defined to be its total weight, i.e., Σ_{S∈C} w(S). Another generalization is the case when nodes have weights. Theorem 1 (Håstad [1]): unless P = NP, there is no n^{1−ε}-approximation. Show which set cover $\text{GREEDY-SET-COVER}$ produces when we break ties in favor of the word that appears first in the dictionary. What about the $L_p$ Set Cover problem, where the objective function is $\|C\|_p = (\sum_{e \in C} c_e^p)^{1/p}$ for 1 ≤ p ≤ ∞? We give tight results for this problem: the greedy algorithm simultaneously gives an O(p)-approximation for $L_p$-Set-Cover for all values of 1 ≤ p < ∞ (even for the weighted version). This result shows that the greedy algorithm is not the best possible for approximating the weighted set cover problem. 1 Weighted Set Cover. In the set cover problem, we are given a universe of n elements U = {e₁, e₂, …, e_n} and a family of m subsets of U, F = {S₁, S₂, …, S_m}. It is a problem that is widely taught in approximation algorithms. A polynomial-time approximation scheme must (in the case of minimization) find a solution with value at most (1 + ε)·OPT, quickly. The standard ILP formulation is restated after this paragraph.
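For reference, here is the standard integer program behind the weighted set cover discussion above; this is textbook material rather than a formula quoted from any single one of the excerpted papers:

\[
\begin{aligned}
\min\ & \sum_{S \in \mathcal{S}} w_S\, x_S \\
\text{s.t.}\ & \sum_{S \in \mathcal{S}\,:\,e \in S} x_S \ \ge\ 1 \qquad \forall\, e \in U, \\
& x_S \in \{0, 1\} \qquad \forall\, S \in \mathcal{S}.
\end{aligned}
\]

Relaxing $x_S \in \{0,1\}$ to $0 \le x_S \le 1$ gives the fractional set cover LP mentioned earlier, which randomized rounding converts back to an integral cover at an $O(\log n)$ factor.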
2020-09-20 16:13:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6515703201293945, "perplexity": 1098.9893198271518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198287.23/warc/CC-MAIN-20200920161009-20200920191009-00414.warc.gz"}
https://hal.archives-ouvertes.fr/hal-03726881
# Perturbation theory for the $\Phi^4_3$ measure, revisited with Hopf algebras Abstract : We give a relatively short, almost self-contained proof of the fact that the partition function of the suitably renormalised $\Phi^4_3$ measure admits an asymptotic expansion, the coefficients of which converge as the ultraviolet cut-off is removed. We also examine the question of Borel summability of the asymptotic series. The proofs are based on Wiener chaos expansions, Hopf-algebraic methods, and bounds on the value of Feynman diagrams obtained through BPHZ renormalisation. Document type : Preprints, Working Papers, ... Domain : https://hal.archives-ouvertes.fr/hal-03726881 Contributor : Nils Berglund Submitted on : Tuesday, July 19, 2022 - 6:55:30 AM Last modification on : Wednesday, July 20, 2022 - 3:32:01 AM ### Identifiers • HAL Id : hal-03726881, version 1 • ARXIV : 2207.08555 ### Citation Nils Berglund, Tom Klose. Perturbation theory for the $\Phi^4_3$ measure, revisited with Hopf algebras. 2022. ⟨hal-03726881⟩
2022-08-10 11:42:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5787628293037415, "perplexity": 3411.94986831791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571153.86/warc/CC-MAIN-20220810100712-20220810130712-00062.warc.gz"}
https://www.ms.u-tokyo.ac.jp/seminar/2013/sem12-253_e.html
Tuesday Seminar on Topology Date, time & place: Tuesday 17:00 - 18:30, Room #056 (Graduate School of Math. Sci. Bldg.) KAWAZUMI Nariya, KITAYAMA Takahiro, SAKASAI Takuya 2013/01/22 16:30-18:00 Room #056 (Graduate School of Math. Sci. Bldg.) Jarek Kedra (University of Aberdeen) On the autonomous metric of the area preserving diffeomorphism of the two dimensional disc. (ENGLISH) [ Abstract ] Let D be the open unit disc in the Euclidean plane and let G:=Diff(D, area) be the group of smooth compactly supported area-preserving diffeomorphisms of D. A diffeomorphism is called autonomous if it is the time one map of the flow of a time independent vector field. Every diffeomorphism in G is a composition of a number of autonomous diffeomorphisms. The least amount of such diffeomorphisms defines a norm on G. In the talk I will investigate geometric properties of such a norm. In particular I will construct a bi-Lipschitz embedding of the free abelian group of arbitrary rank to G. I will also show that the space of homogeneous quasi-morphisms vanishing on all autonomous diffeomorphisms in G is infinite dimensional. This is a joint work with Michael Brandenbursky.
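In symbols (our own paraphrase of the norm described in the abstract, not notation taken from the talk):

\[
\|f\|_{\mathrm{aut}} \;=\; \min\bigl\{\, n \ge 1 \;:\; f = h_1 \circ \cdots \circ h_n,\ \text{each } h_i \in G \text{ autonomous} \,\bigr\},
\qquad d_{\mathrm{aut}}(f, g) = \|f g^{-1}\|_{\mathrm{aut}}.
\]

The results quoted above then say that $(G, d_{\mathrm{aut}})$ is a highly non-degenerate metric space: it contains bi-Lipschitz copies of free abelian groups of arbitrary rank.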
2023-03-29 15:18:46
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8162057399749756, "perplexity": 2779.7125050478635}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00391.warc.gz"}
http://www.maa.org/publications/periodicals/convergence/solving-the-cubic-with-cardano?device=desktop
# Solving the Cubic with Cardano Author(s): William B. Branson (St. Cloud State University) ### Introduction I will present Cardano's solution to the cubic. And I can already hear the response: "What?  Why?  Isn't his solution well understood?  Why yet another article?" True, the mathematical content of the solution is well-understood.  However, the expression of that solution is not. Girolamo Cardano (1501-1576) worked before symbolism, before even the invention of the equals sign, so any symbolic presentation of his solution misses important aspects of his thought [Note 1].  I want to think along with Cardano, to understand his solution as he did, and to bring my students to an understanding of Cardano’s world of mathematics—his ways of thinking and the mathematical tools at his disposal.  The elucidation of these points is my goal in this article. Girolamo Cardano (1501-1576) (Source: Convergence Portrait Gallery) Cardano, working at the very dawn of modern mathematics, drew on two mathematical traditions.  The first, a geometric tradition, started in the hundred years before Cardano’s birth, with the collection and translation of many classical Greek mathematical texts, mostly by humanists who were strong on linguistics but weak on mathematics.  During Cardano’s youth better translations, made by those skilled in both mathematics and Greek, came to market.  These translators derived proofs for those geometric theorems whose demonstrations had dropped out of the textual tradition and added new proofs for theorems whose ancient proofs were found deficient or too obscure. And so mathematical research developed, in part, out of the desire of these mathematically skilled translators to provide the genuine content of the ancient Greek texts, even if that required replacing the proofs in the (possibly corrupt) text at hand with new ones.  Cardano wrote his Ars Magna (1545) [Note 2], including his solutions to the cubic, in this charged atmosphere of the renewal of classical Greek mathematics [Note 3]. The second mathematical tradition upon which Cardano drew was the everyday world of abbaco mathematics, taught in Italian dialects to children destined to be merchants and artisans.  This was the mathematics of commerce: proportions, practical geometry and algebra up to quadratic equations, stemming ultimately from the mathematics of Islamic writers like al-Khwarizmi.  Cardano had a foot in both traditions, having learned abbaco mathematics and a little Euclid from his father [Cardano 2002, p. 126], and more Euclid as a requirement for his medical degree.  He drew heavily on both traditions in his Ars Magna, as the subject matter was algebra and the solution of equations, while the method of proof was geometric.  I argue that Cardano’s methods of discovery and indeed his way of thinking about mathematics consisted of a blend of these two traditions. To explain the uses Cardano made of geometry and abbaco mathematics, I will examine the solution that Cardano provided to the cubic problem $$x^3=ax^2+b.$$  This exploration will uncover unfamiliar proofs, such as Cardano’s geometric depression of the cubic from $$x^3=ax^2+b$$ to $$y^3=Ay+B,$$ and traces of geometric and abbaco methods of discovery.  Most importantly, it will both present how Cardano worked out one of the highest achievements of pre-symbolic algebra, and bring to light part of the lost world of the abbaco master. In working through Cardano’s mathematics, I have used pictures of cubes made from manipulatives available here at St.
Cloud State University (in St. Cloud, Minnesota).  In teaching this subject, students have found that physically constructing Cardano’s three-dimensional arguments makes them much easier to grasp.  This also brings home the achievement of those mathematicians after Cardano, such as Descartes, who transformed questions about cubics from solid geometry to planar curves. ##### Notes for Introduction 1. There are many such presentations, among which William Dunham's, in Chapter 6 of Journey through Genius, stands out.  Dunham, however, translated Cardano's demonstration into modern algebraic symbolism, and noted that "The modern reader will notice that this equation can be derived instantly by simple algebra, without recourse to the arcane geometry of cubes and slats" [p. 144].  And this is the point of the present article: to see how Cardano thought, we have to examine Cardano's recourse to cubes and slats. 2. Cardano started the Ars Magna in 1540, and it was published in 1545 by Johannes Petreius of Nuremburg. 3. See Paul Lawrence Rose, The Italian Renaissance of Mathematics, for more details. William B. Branson (St. Cloud State University), "Solving the Cubic with Cardano," Loci (September 2013)
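As a modern-notation aside (this symbolic form is ours, not Cardano's, who worked geometrically and before symbolism): the "depression" of the cubic mentioned in the introduction amounts to the substitution $x = y + a/3$, which eliminates the square term:

\[
x = y + \tfrac{a}{3}: \qquad
x^3 = a x^2 + b
\;\Longleftrightarrow\;
y^3 = \frac{a^2}{3}\,y + \Bigl(b + \frac{2a^3}{27}\Bigr),
\]

i.e. $y^3 = Ay + B$ with $A = a^2/3$ and $B = b + 2a^3/27$, matching the form $y^3 = Ay + B$ quoted above.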
2015-03-28 22:22:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4667079448699951, "perplexity": 3410.4890079770453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131297831.4/warc/CC-MAIN-20150323172137-00101-ip-10-168-14-71.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/522616/how-to-combine-slopes-from-two-separate-analysis-with-same-variables
# How to combine slopes from two separate analyses with same variables? [closed] I am wondering if there is a way to combine slopes/results from two separate linear regression analyses in R. I am not sure if meta-regression applies here, or is there a package in R that will help me do this? Thanks!!

linreg1 <- lm(BMI ~ PA + Stress, data=data1)
linreg2 <- lm(BMI ~ PA + Stress, data=data2)

• Please explain how the two datasets are related and what their sizes are (if known). – whuber May 3 at 20:49
• The two dataset/cohorts have a similar sample (young adults), with sample 1 (n=400) and sample 2 (n=530), but since the ethnicity is different, it was recommended to run separate MLR on the two cohorts and then meta-analyze the results. – user13514792 May 4 at 13:23
• OK. That raises an important question: in what sense do you want to "combine" the results, when they reflect potentially different fits for different ethnicities? – whuber May 4 at 14:12
• If you want to go down the meta-analysis route then since you have multiple parameters in each model you need to investigate multi-level meta-analysis. You will need the variance covariance matrix of the estimates but since you have the data you can easily generate them. But as @whuber suggests it is a bit hard to see the point of averaging them if you believe they are different. – mdewey 2 days ago
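For illustration only, here is a minimal fixed-effect (inverse-variance) pooling of the per-cohort slopes, sketched in Python/statsmodels rather than R; `data1`/`data2` are assumed to be data frames with the same columns as above. Note this simple pooling ignores between-cohort heterogeneity and the cross-coefficient covariances that the multi-level meta-analysis suggested in the comments would account for:

```python
import numpy as np
import statsmodels.formula.api as smf

def pooled_slopes(frames, formula="BMI ~ PA + Stress", terms=("PA", "Stress")):
    """Fit the same OLS model in each cohort, then pool each slope
    across cohorts by inverse-variance (fixed-effect) weighting."""
    fits = [smf.ols(formula, data=df).fit() for df in frames]
    pooled = {}
    for t in terms:
        b = np.array([f.params[t] for f in fits])          # per-cohort slopes
        w = 1.0 / np.array([f.bse[t] for f in fits]) ** 2  # weights = 1 / SE^2
        pooled[t] = (float((w * b).sum() / w.sum()),       # pooled estimate
                     float(np.sqrt(1.0 / w.sum())))        # pooled standard error
    return pooled

# usage, assuming data1 and data2 are pandas DataFrames with BMI, PA, Stress:
# print(pooled_slopes([data1, data2]))
```

Whether pooling is sensible at all depends on the heterogeneity question raised in the comments; if the cohorts genuinely differ, a random-effects or multi-level model is the safer choice.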
2021-05-08 10:42:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3743900656700134, "perplexity": 1003.2616177497141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00069.warc.gz"}
https://intelligencemission.com/free-energy-guide-free-electricity-solar-panels.html
I might have to play with it and see. Perhaps you are part of that group of anti-intellectuals who don't believe the broader established scientific community actually does know its stuff. Ever notice that no one has ever had a paper published on a working magnetic motor in a reputable scientific journal? There are a few patented magnetic motors that curiously have never made it to production. The US patent office no longer approves patents for these devices, so scammers (oops, I mean inventors) have to go overseas shopping for some patent office silly enough to grant one. I suggest if anyone is trying to build one you make one with a decent bearing system. The wobbly system being shown in these recent videos is rubbish. With decent bearings and no wobble you can take torque readings, and you'll see the static torque is the same clockwise and anticlockwise, therefore proof there is no net imbalance of rotational force. Involves a seesaw stator, Free Electricity spiral arrays on the same drum, and two inclines to jump each gate. Seesaw stator acts to rebalance after jumping a gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tried and failed. Others have posted video or more and then fade away, as they have not really created such an amazing device as claimed. I still try every few weeks, my own designs or trying to replicate others. So far, none are working, and those on the web haven't been found to be real either. Perhaps someday my project will work. I have been close a few times, but it still didn't work. It's a lot of fun and a bit expensive for a weekend hobby. LoneWolffe / Harvey1 / LoneWolffe: The device that is shown in the diagram would not work, but the issue that is the concern here is different. The first problem is that people say science is a constant, which in itself is true, but to think that as humans we know all the laws of physics is obnoxious, as our laws of physics have changed constantly through history. The second issue is that too many accept what they are told and don't ask enough questions. Yet the third is the most concerning of all: Free Electricity once stated that by using the magnetic field of the earth it is possible to manipulate electrons in the atmosphere to create electricity. This means that by manipulating electrons you take energy from the air we all breathe to convert it to usable energy. Shortly after this statement, it is knowledge that the government stopped Free Electricity's research, with no reason given as to why. It's all well and good reading books, but you still question them. Harvey1: just because we don't know how something can be done doesn't mean it can't. We're going to explore Gibbs free energy a little bit in this video. And, in particular, its usefulness in determining whether a reaction is going to be spontaneous or not, which is super useful in chemistry and biology. And, it was defined by Josiah Willard Gibbs. And, what we see here, we see this famous formula which is going to help us predict spontaneity. And, it says that the change in Gibbs free energy is equal to the change in enthalpy minus temperature times the change in entropy; this 'H' here is enthalpy. So, this is a change in enthalpy, which you could view as heat content, especially because this formula applies if we're dealing with constant pressure and temperature.
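Restated cleanly (standard thermodynamics, reconstructing the formula the transcript is describing):

\[
\Delta G \;=\; \Delta H \;-\; T\,\Delta S
\qquad \text{(constant temperature and pressure)},
\qquad
\Delta G < 0 \;\Longrightarrow\; \text{spontaneous}.
\]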
So, that's a change in enthalpy minus temperature times change in entropy. So, 'S' is entropy, and it seems like this bizarre formula that's hard to really understand. But, as we'll see, it makes a lot of intuitive sense. Now, Gibbs defined this to think about, well, how much enthalpy is going to be useful for actually doing work? How much is free to do useful things? But, in this video, we're gonna think about it in the context of how we can use the change in Gibbs free energy to predict whether a reaction is going to spontaneously happen, whether it's going to be spontaneous. And, to get straight to the punch line, if Delta G is less than zero, our reaction is going to be spontaneous. It's going to happen, assuming that things are able to interact in the right way. Now, let's think a little bit about why that makes sense. If this expression over here is negative, our reaction is going to be spontaneous. So, let's think about all of the different scenarios. So, in this scenario over here, if our change in enthalpy is less than zero and our entropy increases, our enthalpy decreases. So, this means we're going to release energy here. We're gonna release enthalpy. This is a release of enthalpy over here. ### The free energy released during the process of respiration decreases as oxygen is depleted and the microbial community shifts to the use of less favorable oxidants such as Fe(OH)₃ and SO₄²⁻. Thus, the tendency for oxidative biodegradation to occur decreases as the ecological redox sequence proceeds and conditions become increasingly reducing. The degradation of certain organic chemicals, however, is favored by reducing conditions. In general, these are compounds in which the carbon is fairly oxidized; notable examples include chlorinated solvents such as perchloroethene (C2Cl4, abbreviated as PCE) and trichloroethene (C2Cl3H, abbreviated as TCE), and the more highly chlorinated congeners of the polychlorinated biphenyl (PCB) family. (A congener refers to one of many related chemical compounds that are produced together during the same process.) There are many things out there that are real and amazing. Have fun!!! Hey Geoff – you can now call me Mr Electro Magnet. I have done so much research in the last week. I have got Free Electricity super exotic alloys on the way from the States at the moment for testing as core material. I know all about saturation, coercivity, etc. Anyone ever heard of hiperco or permalloy? Those are some of the materials that I will be testing.
Let me know your thoughts. My magnet motor is simple and the best of all the magnet motors: two disks, with Free Electricity or Free Electricity magnets around the edge of disk AA, fixed permanently on a board, and a second disk BB, also with Free Electricity or Free Electricity magnets around the edge of the disk. When disk BB is put close to disk AA through a simple clutch system, disk BB would spin; couple a generator to the shaft and you'll have ELECTRICITY, with no gas, no batteries, and no outside source. The secret is in the shape of the magnets. I had tried to patent it in the United States but was scammed by crooked-Free Power. This motor would propel a boat, helicopter, submarine, home lighting plant, cars, or electric fans; if used with NEODYMIUM MAGNETS it would be very powerful. This is single-deck only, but built into a multi-deck? IT IS MORE POWERFUL THAN ANY GENERATING PLANT IN THE WORLD; WE DONT NEED GAS OR BATTERIES.
Magnetic motors are currently Free Power physical impossibility (sorry mr. Free Electricity for fighting against you so vehemently earlier). I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages. Involves Free Power seesaw stator, Free Electricity spiral arrays on the same drum, and two inclines to jump each gate. Seesaw stator acts to rebalance after jumping Free Power gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tryed and failed. Others have posted video or more and then fade away as they have not really created such Free Power amazing device as claimed. I still try every few weeks. My designs or trying to replicated others. SO far, non are working and those on the web havent been found to to real either. Perhaps someday, My project will work. I have been close Free Power few times, but it still didint work. Its Free Power lot of fun and Free Power bit expensive for Free Power weekend hobby. LoneWolffe Harvey1 LoneWolffe The device that is shown in the diagram would not work, but the issue that Is the concern here is different. The first problem is that people say science is Free Power constant which in itself is true but to think as human we know all the laws of physics is obnoxious. As our laws of physics have change constantly, through history. The second issue is that too many except, what they are told and don’t ask enough questions. Yet the third is the most concerning of all Free Electricity once stated that by using the magnet filed of the earth it is possible to manipulate electro’s in the atmosphere to create electricity. This means that by manipulating electro you take energy from the air we all breath to convert it to usable energy. Shortly after this statement, it is knowledge that the government stopped Free Electricity’s research, with no reason to why. Its all well and good reading books but you still question them. Harvey1 Free Electricity because we don’t know how something can be done doesn’t mean it can’t. Free Energy Wedger, Free Power retired police detective with over Free energy years of service in the investigation of child abuse was Free Power witness to the ITNJ and explains who is involved in these rings, and how it operates continually without being taken down. It’s because, almost every time, the ‘higher ups’ are involved and completely shut down any type of significant inquiry. # Any ideas on my magnet problem? 
If I can't find the Free Electricity Free Power/Free Power×Free Power/Free Power, then if I can find them, 2x1x1/Free Power N48-Free Electricity magnetized through Free Power″ would work and would be stronger. I have looked at magnet stores and eBay but so far nothing. I have two questions that I think I already know the answers to, but I want to make sure. If I put two magnets on top of each other, will it make a larger, stronger magnet or will it stay the same? I'm guessing the same. If I use a strong magnet against a weaker one, will it work or will the stronger one overtake the smaller one? I'm guessing it will overtake it. Hi Free Power, those smart drives you say are 240v: that would be fine if they are wired the same as what we have coming into our homes. Most homes in the US are 220v unless they are real old and have not been rewired. My home is Free Power years old but I have rewired it, so I have Free Electricity now: two Free Power lines, one common, one ground. Your Free Power typical narrow-minded democrat. They are all liars, cowards, cheats and thieves. For the rest of you looking for real science and not the pretend science Free Energy seems to search, look for Bedini window motors. Those seem to be the route to generating 5kw for your house. Free Power to all: it is becoming obvious to me that the person going under the name of Kimseymd1 is nothing but a vicious TROLL who doesn't even believe in over unity. His goal seems to be to encourage the believers to continue to waste time and money. As a skeptic, my goal is to try and raise the standard of what is believable versus what is fraud. NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels, union thugs and the US government! rychu: Free Energy two books! "Energy from the Vacuum: Concepts and Principles" by Free Power and "Free Energy Generation: Circuits and Schematics" by Bedini-Free Power. Build a window motor which will give you over-unity, and it can be built to 8kw, which has been done so far! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them! The complex that results, i.e. the enzyme–substrate complex, yields a product and a free enzyme. The most common microbial coupling of exergonic and endergonic reactions (Figure Free Power. Free Electricity) by means of high-energy molecules to yield a net negative free energy is that of the nucleotide ATP, with ΔG* = −Free Electricity to −Free Electricity kcal mol⁻¹. A number of other high-energy compounds also provide energy for reactions, including guanosine triphosphate (GTP), uridine triphosphate (UTP), cytosine triphosphate (CTP), and phosphoenolpyruvic acid (PEP). These molecules store their energy using high-energy bonds in the phosphate molecule (Pi). An example of free energy in microbial degradation is the possible first step in acetate metabolism by bacteria:
An example of free energy in microbial degradation is the possible first step in acetate metabolism by bacteria: where vx is the monomer excluded volume and μ is Free Power Lagrange multiplier associated with the constraint that the total number of monomers is equal to Free Energy. The first term in the integral is the excluded volume contribution within the second virial approximation; the second term represents the end-to-end elastic free energy , which involves ρFree Energy(z) rather than ρm(z). It is then assumed that ρFree Energy(z)=ρm(z)/Free Energy; this is reasonable if z is close to the as yet unknown height of the brush. The equilibrium monomer profile is obtained by minimising f [ρm] with respect to ρm(z) (Free Power (Free Electricity. Free Power. Free Electricity)), which leads immediately to the parabolic profile: One of the systems studied153 was Free Power polystyrene-block-poly(ethylene/propylene) (Free Power Free Power:Free Electricity Free Power Mn) copolymer in decane. Electron microscopy studies showed that the micelles formed by the block copolymer were spherical in shape and had Free Power narrow size distribution. Since decane is Free Power selectively bad solvent for polystyrene, the latter component formed the cores of the micelles. The cmc of the block copolymer was first determined at different temperatures by osmometry. Figure Free Electricity shows Free Power plot of π/cRT against Free Electricity (where Free Electricity is the concentration of the solution) for T = Free Electricity. Free Power °C. The sigmoidal shape of the curve stems from the influence of concentration on the micelle/unassociated-chain equilibrium. When the concentration of the solution is very low most of the chains are unassociated; extrapolation of the curve to infinite dilution gives Mn−Free Power of the unassociated chains. “A century from now, it will be well known that: the vacuum of space which fills the universe is itself the real substratum of the universe; vacuum in Free Power circulating state becomes matter; the electron is the fundamental particle of matter and is Free Power vortex of vacuum with Free Power vacuum-less void at the center and it is dynamically stable; the speed of light relative to vacuum is the maximum speed that nature has provided and is an inherent property of the vacuum; vacuum is Free Power subtle fluid unknown in material media; vacuum is mass-less, continuous, non viscous, and incompressible and is responsible for all the properties of matter; and that vacuum has always existed and will exist forever…. Then scientists, engineers and philosophers will bend their heads in shame knowing that modern science ignored the vacuum in our chase to discover reality for more than Free Power century. ” – Tewari These were Free Power/Free Power″ disk magnets, not the larger ones I’ve seen in some videos. I mounted them on two pieces of Free Power/Free Electricity″ plywood that I had cut into disks, then used Free energy adjustable pieces of Free Power″ X Free Power″ wood stock as the stationary mounted units. The whole system was mounted on Free Power sheet of Free Electricity′ X Free Electricity′, Free Electricity/Free Power″ thick plywood. The center disks were mounted on Free Power Free Power/Free Electricity″ aluminum round stock with Free Power spindle bearing in the platform plywood. Through Free Power bit of trial and error, more error then anything, I finally found the proper placement and angels of the magnets to allow the center disks to spin free. 
The magnets mounted on the disks were adjusted to a fixed angle, with the stationary units set to match. The disks were offset from one another by a few degrees in order to keep them spinning without "breaking" as they went. One of my neighbors is a high school science teacher and a good friend of mine. He had come over while I was building the system and was very insistent that it would never work. It seemed to be his favorite pastime to come over for a "progress report" on my project. To his surprise the unit worked, and after seeing it run for as long as it did, he paid me for it so he could use it in his science class. An ex-FBI regional director created a lot of awareness about ritualistic abuse among the global elite. It goes into satanism, pedophilia, and child sex trafficking. A former Marine, CIA case officer, and co-founder of the US Marine Corps Intelligence Activity has also been quite active on this issue, as have many before him. He is part of a group that formed the International Tribunal for Natural Justice (ITNJ), which has been quite active in addressing this problem. Here is a list of the ITNJ's commissioners, and here's a list of their advocates. Wedger, a retired police detective with many years of service in the investigation of child abuse, was a witness to the ITNJ and explains who is involved in these rings and how they operate continually without being taken down. It's because, almost every time, the "higher-ups" are involved and completely shut down any type of significant inquiry. The Q lingo of the "swamp being drained", which Trump has also referenced, is the equivalent of the tear-down of the two-tiered or "insider-friendly" justice system, which for so long has allowed prominent Deep State criminals to be immune from prosecution. The kind of rhetoric we have been hearing, including the foundation CFO Kessel's semi-metaphorical admission, "I know where all the bodies are buried in this place," leads us to believe that things are now different. The demos seem well documented by the scientific community. An admitted problem is the loss of magnetization caused by having to continually "repulse" the permanent magnets for movement, hence the eventual shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see that there are some patent "arguments" about control of the rights by some established companies. There may be truth behind all this "madness." The Pope's right-hand man, Cardinal Pell, is in court for sexual assault, and a massive pedophile ring has been exposed where hundreds of boys were tortured and sexually abused. The brother of one prominent figure was at the forefront of that controversy. You can read more about that here. As far as the military-industrial complex goes, Congresswoman Cynthia McKinney grilled Donald Rumsfeld on DynCorp, a private military contractor with ties to the trafficking of women and children. Involves a seesaw stator, spiral arrays on the same drum, and two inclines to jump each gate.
The seesaw stator acts to rebalance after jumping a gate on either array, driving that side of the stator back down into play. Harvey1 is correct so far. Many, many have tried and failed. Others have posted a video or more and then faded away, as they have not really created such an amazing device as claimed. I still try every few weeks, with my own designs or by trying to replicate others'. So far, none are working, and those on the web haven't been found to be real either. Perhaps someday my project will work. I have been close a few times, but it still didn't work. It's a lot of fun and a bit expensive for a weekend hobby. LoneWolffe Harvey1 LoneWolffe The device that is shown in the diagram would not work, but the issue that is the concern here is different. The first problem is that people say science is a constant, which in itself is true, but to think that as humans we know all the laws of physics is obnoxious, as our laws of physics have changed constantly through history. The second issue is that too many accept what they are told and don't ask enough questions. Yet the third is the most concerning of all: Tesla once stated that by using the magnetic field of the earth it is possible to manipulate electrons in the atmosphere to create electricity. This means that by manipulating electrons you take energy from the air we all breathe and convert it to usable energy. Shortly after this statement, it is known that the government stopped Tesla's research, with no reason given as to why. It's all well and good reading books, but you should still question them. Harvey1 Just because we don't know how something can be done doesn't mean it can't be. "It wasn't long before carriage makers were driving horseless carriages. It wasn't long before people crossing the continent on trains abandoned the railroads for airliners. Natural gas is replacing coal, and there is nothing the railroads, the coal miners, or the coal companies can do about it. Cheaper and more efficient energy always wins out over more expensive energy. Coal replaced wood, and oil replaced coal as the primary source of energy. Anything that is more efficient boosts the figures on the bottom line of the ledger. Dollars chase efficiency. Inefficiency is suppressed by market forces. Efficiency wins in the marketplace." We can make the following conclusions about when processes will have a negative $\Delta \text G_\text{system}$:

$$\begin{aligned} \Delta \text G &= \Delta \text H - \text{T}\Delta \text S \\ &= 6.01\,\dfrac{\text{kJ}}{\text{mol-rxn}} - (293\ \text{K})\left(0.022\,\dfrac{\text{kJ}}{\text{mol-rxn}\cdot\text{K}}\right) \\ &= 6.01\,\dfrac{\text{kJ}}{\text{mol-rxn}} - 6.45\,\dfrac{\text{kJ}}{\text{mol-rxn}} \\ &= -0.44\,\dfrac{\text{kJ}}{\text{mol-rxn}} \end{aligned}$$

Being able to calculate $\Delta \text G$ can be enormously useful when we are trying to design experiments in the lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it's hard to argue with a positive $\Delta \text G$!
Our bodies are constantly active. Whether we're sleeping or whether we're awake, our body is carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is: what allows these chemical reactions to proceed in the first place? You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy-releasing processes, that is to say, chemical reactions that release energy, have something called a negative delta G value, or a negative Gibbs free energy. In this video, we're going to talk about what the change in Gibbs free energy, or delta G as it's most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that's defined for a given reaction or a sum of reactions. So for the purposes of simplicity, let's say that we have some hypothetical reaction where A is turning into a product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So just to phrase this again: the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not a reaction will occur. The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV work needed to "make space for additional molecules" produced by various processes. The Gibbs free energy change therefore equals work not associated with system expansion or compression, at constant temperature and pressure. (Hence its utility to solution-phase chemists, including biochemists.)
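As a quick sanity check on the numbers above, here is a minimal C++ sketch (not part of the original text) that evaluates ΔG = ΔH − TΔS for the ice-melting figures quoted earlier and reports the sign:

```cpp
#include <cstdio>

// Gibbs free-energy change at constant T and P: dG = dH - T*dS.
// A negative result means the process is thermodynamically spontaneous.
double delta_g(double dH_kJ, double T_K, double dS_kJ_per_K) {
    return dH_kJ - T_K * dS_kJ_per_K;
}

int main() {
    // Example from the text: dH = 6.01 kJ/mol-rxn,
    // dS = 0.022 kJ/(mol-rxn*K), T = 293 K.
    double dG = delta_g(6.01, 293.0, 0.022);
    std::printf("dG = %.2f kJ/mol-rxn (%s)\n", dG,
                dG < 0 ? "spontaneous" : "non-spontaneous");
    return 0;
}
```

Running this prints dG = -0.44 kJ/mol-rxn, matching the worked calculation.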
2020-11-24 17:41:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5273001790046692, "perplexity": 2053.7616028268367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176922.14/warc/CC-MAIN-20201124170142-20201124200142-00597.warc.gz"}
https://physics.stackexchange.com/questions/594648/positivity-of-correlation-functions-in-the-ferromagnetic-ising-model
# Positivity of correlation functions in the ferromagnetic Ising model

Is it true that all correlation functions of any even number of spins in the ferromagnetic Ising model with nearest-neighbor interaction are nonnegative in any spatial dimension? In the one-dimensional case, it is easy to check all correlations directly. In the general case, the positivity of all even correlations in the ferromagnetic model looks obvious. Is there a simple yet rigorous proof of this "obvious" fact?

Yes, it is a simple consequence of the first Griffiths (or GKS) inequality, which states that $$\langle \sigma_A \rangle \geq 0$$ for any finite set of vertices $$A$$. Above, I have used the standard notation $$\sigma_A = \prod_{i\in A}\sigma_i$$. Griffiths' first inequality holds at any temperature and for any nonnegative magnetic field. Actually, it also holds for (ferromagnetic) interactions of arbitrary (possibly infinite) range. The proof is very easy (one simply expands the Boltzmann weight in a Taylor series and sums the resulting expression over the spins) and can be found, for instance, in Section 3.8.1 of this book (the version given there actually covers a substantially more general situation than described here). In fact, one can even show that $$\langle \sigma_A \rangle > 0$$ at all finite temperatures, for any finite set $$A$$ containing an even number of vertices. This stronger version follows from the second Griffiths inequality stated below (see Exercise 3.25 in the book for the 2-point function and apply the second Griffiths inequality for general $$A$$ containing an even number of vertices). Maybe one comment: it is crucial that the boundary condition is free, periodic or $$+$$ (or a mixture of $$+$$ and free, for instance). You cannot use a mixture of $$+$$ and $$-$$ boundary conditions (such as Dobrushin boundary conditions), as one can construct counterexamples to the claim in this case. For pure $$-$$ boundary conditions, the result remains true when $$h=0$$ and $$A$$ contains an even number of vertices, since in that case the expectation coincides with the corresponding expectation under $$+$$ boundary conditions by symmetry. Just to be complete, the inequality is called the first Griffiths inequality because there is a second one, which you might also find interesting: under the same assumptions, for any finite sets of vertices $$A$$ and $$B$$, $$\langle \sigma_A\sigma_B \rangle \geq \langle \sigma_A \rangle \langle \sigma_B \rangle,$$ which shows that the random variables $$\sigma_A$$ and $$\sigma_B$$ are positively correlated (their covariance is nonnegative).

• Thank you very much! Your excellent answer is just what I needed. By the way, it might be that you have also answered my next, yet unspoken, question. Is the last inequality valid in the case when $A \cap B \neq \varnothing$? – Gec Nov 18 '20 at 11:30
• Yes, it is also valid in this case (the general proof can again be found in the book). Nov 18 '20 at 12:08
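As a brief aside, the Taylor-expansion argument mentioned in the answer can be sketched in a few lines (my paraphrase, not the book's text; free boundary condition, ferromagnetic couplings $J_B \ge 0$, $N$ spins):

```latex
\begin{aligned}
\langle \sigma_A \rangle
  &= \frac{1}{Z} \sum_{\sigma \in \{\pm 1\}^N} \sigma_A
     \exp\Big(\beta \sum_B J_B \sigma_B\Big) \\
  &= \frac{1}{Z} \sum_{k \ge 0} \frac{\beta^k}{k!}
     \sum_{B_1, \dots, B_k} J_{B_1} \cdots J_{B_k}
     \sum_{\sigma} \sigma_A \, \sigma_{B_1} \cdots \sigma_{B_k}.
\end{aligned}
% Since \sigma_i^2 = 1, each product \sigma_A \sigma_{B_1} \cdots \sigma_{B_k}
% reduces to \sigma_C for some finite set C, and
%   \sum_{\sigma} \sigma_C = 2^N if C is empty, and 0 otherwise,
% so every term in the expansion is nonnegative; since Z > 0,
% it follows that \langle \sigma_A \rangle \geq 0.
```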
2021-12-05 13:46:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8983771204948425, "perplexity": 165.24450453626412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00602.warc.gz"}
http://herrstrathmann.de/tag/gsoc/
Great news: Shogun just got accepted to the GSoC 2016. After our break year in 2015, we are extremely excited to continue our GSoC tradition, started in 2011 (when I first joined Shogun). If you are a student and wish to spend the summer hacking Machine Learning, guided by a vibrant international community of academics, professionals, and NERDS, then pay us a visit. Oh, and you will receive a cheque over $5000 from Google. This year, we focus on framework improvements rather than solely adding new algorithms. Consequently, most projects have a heavy focus on packaging and software engineering questions. But there will be Machine Learning too. We are aiming high! Check out our ideas list and read how to get involved. Shogun 4.0 and GSoC 2014 follow up No, this is not about Fernando's and my honeymoon … The Shogun team just released version 4.0 of their community-driven Machine Learning toolbox. This release most of all features the work of our 8 Google Summer of Code 2014 students, so this blog post is dedicated to them: you guys rock. This also brings an end to yet another very active year of Shogun: we organised a second workshop in Berlin, and I presented Shogun to the public in London, New York, and Berlin. For the 4th time, Shogun participated in Google's wonderful program, which more than anything boosted the team's size and motivation. What else makes people spend sleepless nights hunting bugs for the sake of Machine Learning for everyone? This year was the first time that I organised our participation. This ranged from writing the application at the last second, harassing potential mentors until they said "yes", and making up overly ambitious projects to scare students away, to ending up mentoring too many students on my own. Jokes aside, this was a very challenging (in particular time-wise) but also a very rewarding experience that definitely sharpened my project organisation skills. As in the previous year, I tried to fuse my scientific life with Shogun's GSoC participation; kernel methods and variational learning are things I touch on a daily basis at Gatsby. Many mentors were approached after I had met them at scientific Machine Learning conferences, and having been exposed to ML for some years now, it is also easier to help students implement and write about popular ML algorithms. Here is a list. (Note that all projects come with really nice IPython notebooks, something that we continued to insist on from last year.) Fundamental ML algorithms by Parijat Mazumdar (parijat). Mentor: Fernando. Shogun needs more standard ML algorithms. Parijat implemented some of those: random forests, kernel density estimation, and more. Parijat's code quality is amazing and, together with Fernando's superb mentoring skills (his first time mentoring), this project is likely to have been very sustainable. Notebook random forest, notebook KDE. Kernel testing and feature selection by Rahul De (lambday). Mentor: Dino Sejdinovic, Heiko. Previous year's student lambday continued to rock. First, he massively extended my 2012 project on kernel hypothesis testing to Big Data land. Dino, who was one of the invited speakers at the Shogun workshop last summer, and I are actually working on a journal article where we will use this code. Second, he extended the framework to perform feature selection via dependence measures. Third, he initiated and guided development of a framework for unifying Shogun's linear algebra operations.
This, for example, can be used to change existing algorithms from CPU to GPU with a compile switch, useful also for our deep learning project. Variational Inference for Gaussian Processes by Wu Lin (yorkerlin). Mentor: Heiko, Emtiyaz Khan. In our third GSoC project on GPs, Wu took a couple of state-of-the-art approximate variational inference methods developed by Emtiyaz and put them into Shogun's framework. The result of this very involved and technical project is that we now have large-scale classification using GPs. Emtiyaz also was a speaker at our workshop. Notebook. Shogun missionary by Saurabh. Mentor: Heiko. The idea of this project was to showcase Shogun's abilities, something we definitely need to work on. Saurabh wrote a couple of notebooks that are essentially ML tutorials using Shogun. If you want to know about ML basics, regression, classification, model selection, SVMs, multiclass, or multiple kernel learning, these are for you. He also extended our web-demo framework to, for example, include model selection for GPs. OpenCV integration by Kislay. Mentor: Kevin. Kislay, after writing a very cool notebook on PCA for his application, wrote data structures to bridge between Shogun and OpenCV. The project was supervised by Kevin, who is also one of our former GSoC students. This makes it possible to use the two libraries together in a neat way. Deep learning by Khaled Nasr. Mentor: Theofanis, Sergey. The hype is on! After NIPS, Facebook, and Google DeepMind, Shogun now also joined 😉 Khaled did a very good job in coding up the standard models, and was involved in generalising Shogun's linear algebra on the fly with lambday. This is a project that is likely to have a second part. Check his superb notebooks on deep belief neural networks, convolutional networks, autoencoders, and restricted Boltzmann machines. SO Learning with Approximate Inference by Jiaolong. Mentor: hushell, Thoralf. This was another project that was (co-)mentored by a former GSoC student. With the help of his mentors, Jiaolong implemented various approximate inference methods for structured output (SO) models. Check out his notebook. Large-Scale Multi-Label Classification by Abinash. Mentor: Thoralf. Another project involving our structured output expert Thoralf as mentor. Abinash implemented large-scale multi-label learning, beating scikit-learn's implementation both in runtime and accuracy. The last experiment is described in this notebook. Finally, we sent two of our delegates (Thoralf and Fernando) to the 10th-year jubilee mentor summit in California in late October. Really cool: I got lucky and won Google's lottery for some extra places, so I could also join. The summit once again was overly colourful, bursting with creative minds who have the most diverse set of opinions and approaches, but who are all united by their excitement about open-source. The beauty of this community to me really lies in the people who do work purely driven by their interest in *the thing itself*, independent of competitive and in particular commercial interests, sometimes almost to an extent that is beyond any form of compromise. A wonderful illustration of this came at the mentor summit, during the reception in the Tech museum in San Jose: Google's speaker and head of finance Patrick Pichette (disclaimer: not sure, don't quote me on this), who is the boss of Chris DiBona, who himself organises the GSoC, sought to inspire the audience to "think BIG" and to "change the lives of GSoC students".
Guest speaker Linus Torvalds, 10 minutes later, then contemplated that he could not be a GSoC mentor, as he would scare people away, and that the best way to get involved in open-source is to "start small"; a sentence after which P.P. left the room. Funny enough: in GSoC, this community is then hugged by a super-capitalistic American internet company, and gladly lets it happen: we all love GSoC, and Shogun certainly would not be where it is without it. I also want to mention the day Google rented a whole theme park for us nerds, which made Fernando try a roller-coaster for the first time after being pushed by MLPack maintainer Ryan and myself. After being horrified at first, he even started to talk about C++ by the second or third ride. As you would expect from attending such geeky meetings, Thoralf, Fernando, and I also spent quite some time hacking Shogun, discussing ideas until late at night (of course getting emotional about them 🙂 ). I managed to take a picture of Fernando falling asleep while hacking Shogun's modular interfaces. Some of those ideas are collected on our wiki. • Improve usability • Making Shogun more modular and slim • Improving Shogun's efficiency Some of those ideas are also part of the theme for our GSoC 2015 application and our planned hackathon. We have come to a point where we seriously need to focus on application and stability rather than adding more and more cutting-edge algorithms; Shogun's almost 15-year-old framework needs a facelift. GSoC students will see that this year's project ideas focus on cleaning up the toolbox and implementing ML applications. Meet the Shogun/MLPack crew, as nerdy as it gets 😉 GSoC Interview with Sergey and me Sergey and I gave an interview on Shogun and Google Summer of Code. Here it is: The internet. More specifically, #shogun on irc.freenode.net. Wasn't IRC that thing that our big brothers used as a socialising substitute when they were teenagers back in the 90s? Anyways. We are talking to two of the hottest upcoming figures in machine learning open-source software, the Russian software entrepreneur Sergey Lisitsyn and the big German machine Heiko Strathmann. Hi guys, glad to meet you. Would you mind introducing yourselves? Sergey (S): Hey, I am Sergey. If you ask me what I do apart from Shogun: I am currently working as a software engineer and finishing my Master's studies at Samara State Aerospace University. I joined Shogun in 2011 as a student, and now I am doing my best to help the guys from the Shogun team keep up with GSoC 2014. Heiko (H): Hej, my name is Heiko. I am doing a PhD in Neuroscience & Machine Learning at the Gatsby Institute in London and joined Shogun three years ago during GSoC. I have loved open-source since my days in school. Your project, Shogun, is about Machine Learning. That sounds scary and sexy, but what is it really? H: My grandmother recently sent me an email asking about this "maschinelles Lernen". I replied that it is the art of finding structure in data in an automated way. She replied: Since when are you an artist? And what is this "data"? I showed her the movie PI by Darren Aronofsky, where the main character at some point is able to predict stock prices after realising "the pattern", and said that's what we want to do with a computer. Since then, she is worried about me because the guy puts a drill into his head in the end….. Another cool application is, for example, to model brain patterns to allow people to learn how to use a prosthesis faster. S: Or have you seen how your iPhone detects faces?
That's just a Support Vector Machine (SVM). It employs kernels, which are inner products of non-linear mappings of Haar features into a reproducing kernel Hilbert space, so that it minimizes …. Yeah, okok… What is the history of Shogun in the GSoC? S: The project was started by Sören in his student days around 15 years ago. It was a research-only tool for a couple of years before being made public. Over the years, more and more people joined, but the biggest boost came from GSoC… H: We just got accepted into our 4th year in that program. We have had 5+8+8 students so far, who all successfully did the program with us. Wow, I guess that's a few million dollars. (EDITOR: actually $105,000.) GSoC students forced Shogun to grow up in many ways: GitHub, a farm of buildbots, proper unit testing, a cloud service, web demos, etc. were all set up by students. Also, the diversity of algorithms from the latest research increased a lot. With the GSoC money, we were able to fund our first Shogun workshop in Berlin last summer. How did you two get into Shogun and GSoC? Did the money play a role? H: I was doing my undergraduate project back in 2010, which actually involved kernel SVMs, and used Shogun. I thought it would be a nice idea to put my ideas into it; also, I was lonely coding just on my own. In 2010 Shogun was rejected from GSoC, but I eventually implemented my ideas in 2011. The money was very useful to me, as I was planning to move to London soon. Being totally broke in that city one year later, I actually paid my rent from my second participation's stipend, which I got for implementing ideas from my Master's project at uni. Since 2013, I mentor other students and help organise the project. I think I would have stayed around without the money, but it would have been a bit tougher. S: We were having a really hard winter in Russia. While I was walking my bear and clearing the roof of the snow, I realised I forgot to turn off my nuclear missile system….. H: Tales! S: Okay, so on another cold night I noticed a message about GSoC somewhere, and then I just glanced over the list of accepted organizations, and Shogun's description was quite interesting, so I joined a chat and started talking to people; the whole thing was breathtaking for me. As for the money: well, I was a student and was about to start my first part-time job as a developer. It was like a present for me, but it didn't play the main role! H: To make it short: Sergey suddenly appeared and rocked the house, coding at lightspeed, drinking vodka. But now you are not paid anymore, while still spending a lot of time on the project. What motivates you to do this? S: It just involves you, and you feel like you are participating in something useful. Such appreciation is important! H: Mentoring students is very rewarding indeed! Some of those guys are insanely motivated and talented. It is very nice to interact with a community of people from all over the world sharing the same interest. Trying to be a scientist, I also find GSoC very useful for producing tools that I or my colleagues need, but that nobody has the time to build properly. You see, there are all sorts of synergistic effects between GSoC and my day job at university, such as meeting new people, or getting a job because you know how to code in a team. How does this work? Did you ever publish papers based on GSoC work? S: Yeah, I actually published a paper based on my GSoC 2011 work.
It is called "Tapkee: An Efficient Dimension Reduction Library" and was recently published in the Journal of Machine Learning Research. We started writing it up with my mentor Christian (Widmer), and later Fernando (Iglesias) joined our efforts. It took an enormous amount of time, but we did it! Tapkee, by the way, is a Russian word for slippers. H: I worked on a project on statistical simulation of global ozone data last year. The code is mainly based on one of my last year's students' projects; he is a very clever and productive guy from Mumbai whom I would never have met without the program, see http://www.ucl.ac.uk/roulette/ozoneexample So you came all the way from being a student in GSoC up to being an organisation admin. How does the perspective change along this path? H: I first had too much time, so I coded open-source; then too little money, so I coded open-source; then too much work, so now I mentor people coding it open-source. At some point I realised I like this stuff so much that I would like to help organise Shogun and bring together the students and scientists involved. It is great to give back to the community, which played a major role for me in my studies. It is also sometimes quite amusing to get those emails from applying students who are worried about the same unimportant things that I worried about back then. S: It seems to be quite natural, actually. You could even miss the point when things change and you become a mentor. Once you are in the game, things go pretty fast. Especially if you have a full-time job and studies! Are there any (forbidden) substances that you exploit to keep up with the workload? S: It would sound strange, but I am not addicted to vodka. Although I bet Heiko is addicted to beer and sausages. H: Coffeecoffeecoffeee…… Well, to be honest, GSoC definitely reduces your sleep, no matter whether you are a student, mentor, or admin. By the way, our 3.0 release was labelled: Powered by Vodka, Mate, and beer. Do you crazy nerds actually ever get away from your computers? H: No. S: Once we all met at our workshop in Berlin, but we weren't really away from our computers. Why on earth do that? Any tips for upcoming members of the open-source community? For students? Mentors? Admins? H: Students: Do GSoC! You will learn a lot. Mentors: Do GSoC! You will get a lot. Admins/Mentors: Don't do GSoC, it ruins your health. Rather collect stamps! S: He is kidding. (whispers: "we need this … come on … just be nice to them") H: Okay, to be honest: just have fun with what you are doing! Due to the missing interest from the community, Sergey and Heiko interviewed themselves on their own. GSoC 2013 blog: http://herrstrathmann.de/shogun-blog/110-shogun-3-0.html GSoC 2014 ideas: http://www.shogun-toolbox.org/page/Events/gsoc2014_ideas Sergey: http://cv.lisitsyn.me/ Yeah! Shogun this week got accepted as an organisation participating in the 10th Google Summer of Code. This year, besides mentoring a few projects, I am one of the three project administrators. I am curious how this will be. A first thing to do was to write the application for Shogun; I'm glad it worked! I will also spend a little more time organising things. Apart from trying to find mentors (which requires a lot of talking people into it), I also want to make Shogun (and the students) get more out of the program. Last year, I pushed the team to ask all students • to write a project report in the form of IPython notebooks (link).
These are absolutely great for talking about the GSoC work, impressing people, and giving the students a final piece of work to show. • To fully unit-test every module of their algorithm/framework. This is absolutely essential in order not to lose a student's work a few years later, when a refactoring change breaks their code and nobody knows how to fix it. Those tests have already saved lives many times since last year. • To peer-review each other in pairs of students. This improved documentation here and there and solved some bugs. I want to emphasise this more this year, as I think it is a great way of enabling synergistic effects between students. In addition, we will again screen all the applicants via a set of entrance tasks on our GitHub page (link). I just wrote a large number of such smaller or larger tasks that get students started on a particular project, fix bugs in Shogun, or prepare some larger change. In order to get the students started a bit more easily (contributing to Shogun these days is a non-trivial task), I wrote a little how-to (link) that is supposed to point out our expectations and the first steps towards participating in GSoC. Finally, I wrote descriptions for quite a few possible projects, some of them with a number of interesting co-mentors. The full list is here (link). If you are a talented student interested in any of those topics, consider working with us during the summer. It's usually a lot of fun! • Variational Learning for Recommendation with Big Data. With Emtiyaz Khan, whom I met at last year's workshop on latent Gaussian models. Matrix factorisation and Gaussian Processes; an ultra-cool project. • Generic Framework for Markov Chain Monte Carlo Algorithms and Stan Interface. With Theo Papamarkou, whom I know from my time at UCL Statistics. It's about a modular representation of MCMC within Shogun and a possible interface to Stan for the actual sampling. This would be a major step for Shogun towards probabilistic models. • Testing and Measuring Variable Interactions With Kernels. With Dino, who is a post-doc at Gatsby and co-author of our optimal kernel for MMD paper. This project is to implement all kernel-based interaction measures in Shogun in a unified way. We'll probably use this for research later. • A Meta-Language for Shogun examples. With Sören. Write an example once, press a button to generate it in any modular language binding. This would be so useful to have in Shogun! • Lobbying Shogun in MLPACK's automatic benchmarking system. A joint project with Ryan from MLPACK. He can already compare the speed of different toolboxes. Now let's compare results. • Shogun Missionary & Shogun in Education. With Sören. Write high-quality notebooks and eye-candy examples. A very different project, as this is about creative technical writing and illustrating methods on cool data rather than hacking new algorithms. I would be very excited if this happened! Some of the other projects involve cool buzzwords such as Deep Learning, Structured Output, Kernels, dual solvers, cluster backends, etc. Join us! 🙂 GSoC 2013 brings Shogun 3.0 Shogun's third Google Summer of Code just ended with our participation in the mentor summit at Google's headquarters in Mountain View and the release of Shogun 3.0 (link). What a great summer! But let's start at the beginning… Shogun is a toolbox that offers a unified framework for data analysis, or in buzzwords: machine learning, for a broad range of data types and analysis problems.
Those not only include standard tools such as regression, classification, clustering, etc., but also cutting-edge techniques from recent developments in research. One of Shogun's most unique features is its interfaces to a wide range of mainstream computing languages. In our third GSoC, we continued most of the directions taken in previous years, such as asking students to contribute code during the application process in order to be considered. For that, we created a list of smaller introductory tasks for each of the GSoC projects that would become useful later in the project. While allowing students to get used to our development process and increasing the quality of the applications, this also pushed the projects forward a bit before GSoC even started. The number of applications did not suffer from that (57 proposals from 52 students) but even increased compared to the previous year (48 proposals from 38 students); this seems to be a trend. This summer, we also had former GSoC students mentoring for the first time: Sergey Lisitsyn and me (mentoring two projects). Both of us joined in 2011. In addition, former student Fernando Iglesias participated again, and former student Viktor Gal stayed around to work on Shogun during GSoC (and did some massive infrastructure improvements). These are very nice long-term effects of continuous GSoC participation. Thanks to GSoC, Shogun is growing constantly, both in terms of code and developers. As in 2012, we eventually could give away 8 slots to some very talented students. All of them did an awesome job on some highly involved projects covering a large number of topics. Two projects were extensions of previous ones: Roman Votjakov extended last year's project on the popular Gaussian Processes to handle classification problems, and Shell Hu implemented a collection of algorithms within last year's structured output framework (for example for OCR). Fernando Iglesias implemented a new algorithm called metric learning, which plays well together with existing methods in Shogun. Another new algorithm came from Soumyajit De, who implemented an estimation method for log-determinants of large sparse matrices (needed, for example, for large-scale Gaussian distributions), a framework for linear operators and solvers, and, on the fly, the fundamentals of an upcoming framework for distributed computing (which is used by his algorithm). Evangelos Anagnostopoulos worked on feature hashing and random kitchen sinks, two very cool tricks to speed up linear and kernel-based learning methods in Shogun. Kevin Hughes implemented methods for independent component analysis, which can be used to separate mixtures of signals (for example audio, heartbeats, or images) and are well known in the community. Last but not least, Liu Zhengyang created a pretty web framework for running Shogun demos from the web browser and added support for directly loading data from the mldata website. Evgeniy Andreev improved Shogun's usability by integrating native support for various popular file formats such as CSV and protobuf. You might have noticed the links in the above text (and images). Most of them are the final reports of the students in the form of IPython notebooks, an awesome new open-source tool that we started using for documentation. We are very proud of these. See http://shogun-toolbox.org/page/documentation/notebook/ for a list of all notebooks.
Also check out the web-demo framework at http://www.shogun-toolbox.org/page/documentation/demo/ if you haven't yet. IPython also features Shogun in the cloud: former student Viktor Gal set up http://cloud.shogun-toolbox.org, which is an IPython notebook server run by us. It allows you to play with Shogun-python from any web browser without having to install it. You can try the existing notebooks or write your own. Give it a shot and let us know what you think! This year's GSoC also was the most productive one for us ever. We got more than 2000 commits changing almost 400,000 lines in more than 7000 files since our last release before GSoC. Students! You all did a great job, and we are more than amazed at what you all have achieved. Thank you very much, and we hope some of you will stick around. Besides all the above individual projects, we encouraged students to work together a bit more to enable synergistic effects. One way we tried to implement this was through a peer review where we paired students to check each other's interface documentation and final notebooks. We held the usual meetings with both mentors and students every few weeks to monitor progress and happiness, as well as asking students to write weekly reports. Keeping our IRC channel active every day also helped a lot in keeping things going. My personal experience with mentoring was very positive. It is very nice to give back to the community. I tried to give the students the same useful guidance that I received back then, and probably learned as much as they did along the way. Having participated in GSoC 2011 and 2012, the change of perspective as a mentor was interesting, in particular regarding the selection process. Time-wise, I think Google's official statement of 5 hours per student per week underestimates things quite a bit (if you want to get things done), and of course there is no upper bound on the time you can spend. Our plan of pairing external mentors with internal developers worked smoothly. As most of our mentors are scientists who tend to be very busy, it is sometimes hard for them to review all code on their own. Combining big-picture guidance with the in-depth framework knowledge of the paired core developers allowed for more flexibility when allocating mentors to projects. Keep in mind that Shogun is still being organised by only five people (4 of them former students) plus a handful of occasional developers, which makes it challenging to supervise 8 projects. Another change this year was that writing unit tests was mandatory to get code merged, which made the number of unit tests grow from 50 to more than 600. In past years, we had seen how difficult it is to write tests at the end of projects, or to maintain untested code. Making students do this on the fly drastically increased the stability of their code. A challenging side effect of this was that many bugs within Shogun were discovered (and eventually fixed), which kept students and developers busy. As for Shogun itself, GSoC also boosts our community of users, which became so active this year that we decided to organise the first Shogun workshop in Berlin this summer. We had somewhat over 30 participants from all over the world. The Shogun core team also met for the first time in real life, which was nice! We had a collection of talks, discussions, and hands-on sessions. Click here and here for videos and slides. October brought the mentor summit, which I attended for the first time. This was such a cool event!
There was a hotel with a hot tub, lots of goodies on the Google campus, for example an on-site barista (!), a GSoC mentor with a robot dog, and loads and loads of interesting people from interesting open-source projects. Some of these were new to me; some of them are projects that I have been checking out for more than 10 years now. I attended a few fruitful sessions, for example on open-source software for science. Sören hung out with the people he knew from previous years and the cool Debian guys (for which he is a developer too). After the summit, the Shogun mentor team went hiking in the southern Californian desert; I even climbed a rock. What a great summer! GSoC 2013 Shogun got accepted into the Google Summer of Code 2013! Check out our ideas page. This year, I will be a mentor rather than a student, and I am very excited about this. I'll be offering two projects: • Implement Gaussian process classification (joint with Oliver Stegle). This is an extension of last year's GSoC project and should be quite interesting while not being too complicated (link) • Implement unbiased estimators of likelihoods of very large, sparse Gaussian distributions (joint with Erlend Aune and Daniel Simpson). This one is quite challenging since it involves many different topics. However, it should also be very interesting (link) Shogun blog posts GSoC 2012 is over GSoC 2012 has been over for a few weeks now. It has been a pretty cool summer for me. As last year, I learned lots of things. This year, though, my project was a bit more research-oriented, which is nice since it allowed me to connect my work for SHOGUN with the stuff I do at uni. I even mentioned it in my Master's dissertation (link), which also was about statistical hypothesis testing with the MMD. Working on the dissertation at the same time as on the GSoC was sometimes exhausting. It eventually worked out fine, since both things were closely related. I would only suggest doing other important things alongside GSoC if they are connected to the GSoC project. However, if this condition is met, the reward multiplies due to synergistic effects. The other students working for SHOGUN also did very cool projects. All these are included in the SHOGUN 2.0 release (link). The project now also has a new website, so it's worth taking a closer look. Some of the other (really talented) guys might stay with SHOGUN as I did last year. This once more gives a major boost to development. Thanks to all those guys. I also owe thanks to Sören and Sergey, who organised most things and made this summer so rewarding.
In the near future I will try to put in some extensions to the statistical testing framework that I thought of during the summer but did not have time to implement: online features for the linear-time MMD, a framework for kernel selection which includes all methods investigated in my Master's dissertation, and finally unit tests using SHOGUN's new framework for that. I will update the SHOGUN project page of my website (link). I might as well send some tweets from SHOGUN's new Twitter account (link). 11th GSoC weekly report: Done! This will be my last weekly report for this year's Summer of Code! Last week, I did not write a report since I was very busy with experiments for a rebuttal for the NIPS submission (see 2nd GSoC weekly report). This week was more productive: I continued polishing the new framework for statistical tests, squeezed out some final bugs, and made a few things more efficient. I also created graphical examples for the linear- and quadratic-time MMD and HSIC based tests. These serve the purpose of illustrating how the methods work on simple datasets. They sample the underlying statistic's null and alternative distributions using all the different methods I implemented, and plot the distributions with test thresholds (as well as the data). For the MMD tests, the dataset contains samples from two multivariate Gaussian distributions with unit variance in every component and equal means in all but one component. The HSIC test uses data where dependence is induced via rotation (see last report). Below are screenshots of the output of the examples. These images were also added to the shogun-tutorial. I added a part about independence testing and corrected some mistakes in there. All methods I implemented are now contained within the tutorial. Another documentation-related thing I did was to update the doxygen-based source-code documentation. In particular, I cleaned up the horrible mess in the CStatistics class and replaced all ASCII art with LaTeX. Although there are still things to do, my project is now in the status "done" in terms of GSoC 🙂 It was a nice summer! I guess I will be extending it with some ideas that came up while working with kernel two-sample tests recently. For the last week, I intend to get some unit testing done and start to focus on things that are needed for our upcoming 2.0 release (bug hunting, fixing warnings, implementing things that people request). I will also write an overall summary of the GSoC next month or so. Next month will be busy, since I also have to finish my Master's project. 10th GSoC weekly report: Slowly getting ready Step by step, my project enters its final state 🙂 Last week, I added new data generation methods, which are used by a new example for independence tests with HSIC. It demonstrates that the HSIC-based test is able to capture dependence which is induced by rotating data that has zero correlation; one of the problems from the paper [1]. Here is a picture; the question is: are the two dimensions dependent? Or moreover, is a test able to capture that? (The correlation is almost zero; dependence is induced via rotation.) I also realised that my current class structure had problems doing bootstrapping for HSIC, so I refactored a bit. Bootstrapping is now also available for HSIC, using the same code that does it for two-sample tests. I also removed some redundancy: both independence and two-sample tests are very similar problems, and implementations should share code where possible.
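To illustrate the shared bootstrapping idea, here is a minimal, generic C++ sketch (deliberately not Shogun's actual API) that samples the null distribution of a two-sample statistic by merging and reshuffling the data; a simple difference of means stands in for the MMD:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Stand-in test statistic: difference of sample means. In Shogun the
// statistic would be the (linear- or quadratic-time) MMD instead.
double statistic(const std::vector<double>& x, const std::vector<double>& y) {
    double mx = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    double my = std::accumulate(y.begin(), y.end(), 0.0) / y.size();
    return mx - my;
}

// Sample the null distribution by merging both samples and repeatedly
// reshuffling, which simulates "both samples come from the same
// distribution". A quantile of the sorted result gives the test threshold.
std::vector<double> bootstrap_null(const std::vector<double>& x,
                                   const std::vector<double>& y,
                                   int num_resamples, std::mt19937& rng) {
    std::vector<double> merged(x);
    merged.insert(merged.end(), y.begin(), y.end());
    std::vector<double> null_samples;
    for (int i = 0; i < num_resamples; ++i) {
        std::shuffle(merged.begin(), merged.end(), rng);
        std::vector<double> xs(merged.begin(), merged.begin() + x.size());
        std::vector<double> ys(merged.begin() + x.size(), merged.end());
        null_samples.push_back(statistic(xs, ys));
    }
    std::sort(null_samples.begin(), null_samples.end());
    return null_samples;  // threshold at index (1 - alpha) * size
}
```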
Another thing that was missing so far was computing test thresholds; until now, only p-values could be computed. Since different people have different tastes about this, I added both methods. Checking a test statistic against a threshold is straightforward and gives a binary answer; computing a p-value gives the position of the test statistic in the null distribution, which contains more information. To compute thresholds, one needs the inverse CDF of the null distribution. In the bootstrapping case, this is easy: one simply reports the sample that corresponds to a certain quantile. For cases where a normal or gamma distribution was fitted, I imported some more routines from the nice ALGLIB toolbox. For this week, I plan to continue with finishing touches, documentation, examples/tests, etc. Another idea I had is to make the linear-time MMD test work with SHOGUN's streaming features, since the infinite or streaming data case is the main area for its usage. [1]: Gretton, A., Fukumizu, K., Teo, C., & Song, L. (2008). A kernel statistical test of independence. Advances in Neural Information Processing Systems.
2022-10-05 13:06:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37692785263061523, "perplexity": 1886.204492771824}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00756.warc.gz"}
http://mathhelpforum.com/statistics/84124-standard-deviation-help-please-print.html
• April 16th 2009, 06:07 PM rzperez27
A small-business owner must hire seasonal workers as the need arises. The following list shows the number of employees hired monthly for a 5-month period: 4, 13, 5, 6, 9. If the mean of these data is approximately 7, what is the population standard deviation for these data? I have never done standard deviation in my class before; this question was from a sample of the CAHSEE. Please help and explain how I can do this problem.
• April 16th 2009, 11:37 PM Twig
hi
Do you know how to calculate the variance of a sample? You take every value, subtract the mean from it, and square the result; do this for all the data. That is, $\mbox{Variance } = s^{2} = \frac{1}{n-1} \sum_{i=1}^{n} \, (x_{i} - \mu^{*})^{2}$ The standard deviation is the square root of the variance, so: $\mbox{Standard deviation } = s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \, (x_{i} - \mu^{*})^{2}}$ Dividing by (n-1) gives the sample standard deviation, which corrects for small sample sizes. Note that your question asks for the population standard deviation, which divides by n instead: $\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \, (x_{i} - \mu)^{2}}$
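Worked out for the data in the question, using the given mean of 7 and dividing by $n = 5$ for the population standard deviation:

$(4-7)^{2} + (13-7)^{2} + (5-7)^{2} + (6-7)^{2} + (9-7)^{2} = 9 + 36 + 4 + 1 + 4 = 54$

$\sigma = \sqrt{54/5} = \sqrt{10.8} \approx 3.3$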
2014-09-20 15:41:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7947688698768616, "perplexity": 996.9654654255875}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133417.25/warc/CC-MAIN-20140914011213-00068-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://jalevine.bitbucket.io/csc433-533/s18/subassignments/2018/01/29/A02UG.html
Undergraduates will extend their basic image loading program to be able to load images, process them through a collection of image processing filters, and then save them to a file.

## Objectives

This assignment is designed to teach you techniques that relate to:

• Color spaces and representations of the image range space.
• Processing color spaces to provide adjustments common to how images are displayed.
• Implementing these adjustments through rescaling filters.
• Processing images related to a signal processing framework.
• Implementing convolution filters to better understand the connection between processing regions of data and how these relate to signal processing.

## Part 1: Modifying your image loader

Starting with your previous assignment, you should modify your code so that you can both load and save an image. To do so, you should implement basic file I/O to write a PPM file. You should also modify your main.cpp to give the user the option to save an image to a filename that they specify (specifically, your executable should accept parameters for both a filename to read from and a filename to write to).

## Part 2: Implementing Filters

Next, you will modify your code to support two types of image processing operations:

• Rescaling filters, which adjust the displayed colors on a per-pixel basis. In particular, the user must be able to adjust the gain, bias, and gamma of the displayed image.
• Convolution-based filters, which adjust the displayed colors for each pixel by analyzing a local region. In particular, the user should be able to apply three filters: a box filter, a Gaussian filter, and an unsharp mask. These filters should allow the user to both smooth (via the box or Gaussian) and sharpen (via the unsharp mask) the image. The user should be able to control the extent of these filters by specifying a radius for the filter.

Both of these filter families take a collection of parameters, and the user should be able to adjust these parameters while the program is executing, to dynamically set them before applying the filter. You are encouraged to use whatever interface you like for this, but please make sure your README documents both how to run and how to use your program. After adjusting the image using a combination of the above filters, the user should then be able to save the modified image to the aforementioned filename.

#### Details on Rescaling

As discussed in class, your rescaling filter should modify the resulting RGB values of the input data. It is most straightforward to think about these filters processing data in the range $[0,1]$, so you may want to convert how your image class from Assignment 01 stores the underlying data. After specifying the gain, bias, and gamma, the user should be able to scale all color channels. The easiest method to do this is to compute a scale value that you will multiply each channel with separately. This scale value should be computed based on the luminance, $L$, of the pixel. There are a variety of equations that one could use to go from $RGB$ to $L$, but for this assignment we will use one of the simplest. [luminance equation not preserved in this copy] This exact equation uses somewhat different weights from the Y channel in YUV color or the B in HSB, but provides a good approximation to how humans perceive intensity from color. After computing the luminance, you can use the gain, bias, and gamma to compute an updated luminance, $L'$ (in particular, $L' = (\texttt{gain}*L + \texttt{bias})^\texttt{gamma}$). You can then compute your scale value by $\texttt{scale} = L'/L$.
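A minimal C++ sketch of the per-pixel rescale described above (not the official solution; in particular, the luminance here is a simple channel average standing in for the assignment's exact formula):

```cpp
#include <algorithm>
#include <cmath>

struct Pixel { float r, g, b; };  // channels stored in [0, 1]

// Clamp a channel back into the displayable range [0, 1].
static float clamp01(float v) { return std::min(1.0f, std::max(0.0f, v)); }

// Per-pixel rescale with gain, bias, and gamma, following scale = L'/L.
// NOTE: the luminance below is a simple average, used as a placeholder;
// substitute the exact weighted formula given in the assignment.
Pixel rescale(Pixel p, float gain, float bias, float gamma) {
    float L = (p.r + p.g + p.b) / 3.0f;          // placeholder luminance
    if (L <= 0.0f) return p;                     // avoid dividing by zero
    float base = std::max(0.0f, gain * L + bias); // keep pow well-defined
    float Lp = std::pow(base, gamma);
    float scale = Lp / L;
    return { clamp01(p.r * scale), clamp01(p.g * scale), clamp01(p.b * scale) };
}
```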
#### Details on Convolution

For this family of filters, the user should be able to specify an integer radius that will be used to specify the size of the convolution kernel. In my implementation I used the convention that the kernel size was always $(2*\texttt{radius}+1) \times (2*\texttt{radius}+1)$. Thus, a radius of 1 produced a filter of size $3\times3$, a radius of 2 corresponded to a $5\times5$ filter, etc. Using only odd-sized filters makes coding a bit easier, since you always know precisely the filter center. Your filters should be applied to each of the $R$, $G$, and $B$ channels separately.

You will run into two edge cases that you must handle, and you must document precisely how you handled them. The first is the case where you are working with a pixel near the boundary of the image. For pixels close to the boundary, the kernel centered at that pixel will extend beyond the image extents, so you must implement a boundary condition. There were three different ways discussed in class, and you are welcome to pick any of the three.

The second condition you must deal with is how to weight the filter appropriately. For smoothing filters like the box and Gaussian filter, you need only divide by the sum of the kernel values to keep the range of data within useful bounds. However, without proper weighting, the unsharp mask can produce values for RGB data that are outside of the range $[0,1]$. The typical convention, discussed in class, is to use the same weight value that you would have used for just the smoothing filter. Even with this denominator, you will still have to clamp the RGB values back into the range $[0,1]$. A sketch of the basic convolution loop is given below.
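To illustrate the loop structure only (a sketch under assumptions, not the reference solution): the `Channel` type below is a hypothetical single-channel image, the boundary condition shown is clamp-to-edge (one of several acceptable choices), and the kernel is applied unflipped, which coincides with true convolution for the symmetric kernels used here.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical single-channel image: `data` holds width*height floats in [0, 1].
struct Channel {
    int width = 0, height = 0;
    std::vector<float> data;
    // Boundary condition: clamp-to-edge (one of the three options from class).
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return data[y * width + x];
    }
};

// Build a (2*radius+1) x (2*radius+1) Gaussian kernel; sigma is a free parameter.
std::vector<float> gaussianKernel(int radius, float sigma) {
    const int k = 2 * radius + 1;
    std::vector<float> kern(k * k);
    for (int j = 0; j < k; ++j)
        for (int i = 0; i < k; ++i) {
            float dx = float(i - radius), dy = float(j - radius);
            kern[j * k + i] = std::exp(-(dx * dx + dy * dy) / (2.0f * sigma * sigma));
        }
    return kern;  // normalization happens in convolve() via the weight sum
}

// Convolve one channel with an odd-sized square kernel, dividing by the sum
// of kernel weights as described above, then clamping back into [0, 1].
Channel convolve(const Channel& in, const std::vector<float>& kern, int radius) {
    const int k = 2 * radius + 1;
    float weight = 0.0f;
    for (float w : kern) weight += w;
    if (weight == 0.0f) weight = 1.0f;  // guard for zero-sum (derivative-style) kernels
    Channel out{in.width, in.height, std::vector<float>(in.data.size())};
    for (int y = 0; y < in.height; ++y)
        for (int x = 0; x < in.width; ++x) {
            float sum = 0.0f;
            for (int j = 0; j < k; ++j)
                for (int i = 0; i < k; ++i)
                    sum += kern[j * k + i] * in.at(x + i - radius, y + j - radius);
            out.data[y * in.width + x] = std::clamp(sum / weight, 0.0f, 1.0f);
        }
    return out;
}
```

The box filter reuses the same `convolve()` with a kernel of all ones, and an unsharp mask can then be assembled per channel as original + amount * (original - blurred), clamped back into $[0,1]$.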
## Part 3: Written Questions

Please answer the following written questions. You are not required to typeset these questions in any particular format, but you may want to take the opportunity to include images (either photographed hand-drawings or produced using an image editing tool). These questions are intended both to provide you additional material for considering the conceptual aspects of the course and to provide sample questions in a similar format to the questions on the midterm and final exam. Most questions should be answerable in 100 words or less of text. Please create and commit a separate directory in your repo called written and post all files (text answers and written work) to this directory.

1. What is a pixel? How big is a pixel? Both of these questions have multiple answers; briefly explain yours.
2. 3 × 3 convolution kernels can create a variety of effects. Consider the following three kernels. First, list the appropriate scale factor you would use for this kernel (see the instructions in the slides and the lab for a definition). Next, briefly describe the output image that is produced as a result of convolution with each kernel:
   a. / b. / c. (the three kernel matrices were images in the original page and are not preserved in this copy)
3. Given an image $I$ of $100 \times 200$, and a kernel $K$ of size $7 \times 7$, how many multiplications are required to compute $K \otimes I$? Be sure to state your boundary condition.
4. Draw and label a diagram of the HSV color space. Include a brief description of each variable, its role in the final color, and a possible numeric range.
5. The simplest possible approach to tone mapping is to take the HDR input data and normalize it to produce values between $[0,1]$. What are the potential problems with using this technique?

#### Deductions

| Reason | Value |
| --- | --- |
| Program does not compile. (First instance across all assignments will receive a warning with a chance to resubmit, but subsequent non-compiling assignments will receive the full penalty) | -100 |
| Program crashes due to bugs | -10 each bug, at grader's discretion to fix |

#### Point Breakdown of Features

| Requirement | Value |
| --- | --- |
| Consistent modular coding style | 10 |
| External documentation (README.md), providing a working CMakeLists.txt | 5 |
| Class documentation, internal documentation (block and inline), wherever applicable / for all files | 15 |
| Expected output / behavior based on the assignment specification (50 total), including: | |
| Implementing rescaling filters | 10 |
| Converting the internally represented data range to a displayable representation, implementing clamping so that values do not overflow | 5 |
| Correctly implementing convolution and boundary condition | 5 |
| Providing implementations of all of the required convolution kernels | 10 |
| Correctly computing the weight of the kernel | 5 |
| Allowing the user a mechanism to vary parameters | 10 |
| Supporting writing the output filtered image as a PPM | 5 |
| Written Questions | 20 |
| Total | 100 |
2019-12-08 00:30:01
{"extraction_info": {"found_math": true, "script_math_tex": 25, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41581210494041443, "perplexity": 896.0031159548474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00396.warc.gz"}
https://groupprops.subwiki.org/wiki/Left-inner_implies_intermediate_subgroup_condition
# Left-inner implies intermediate subgroup condition

This article gives the statement, and possibly proof, of an implication relation between two subgroup metaproperties. That is, it states that every subgroup satisfying the first subgroup metaproperty (i.e., left-inner subgroup property) must also satisfy the second subgroup metaproperty (i.e., intermediate subgroup condition).

## Statement

Any left-inner subgroup property satisfies the intermediate subgroup condition.

## Definitions used

### Left-inner subgroup property

Further information: left-inner subgroup property

A subgroup property $p$ is termed left-inner if there exists a property $\alpha$ of functions from a group to itself such that $p$ can be written using the function restriction expression:

inner automorphism $\to$ $\alpha$

In other words, a subgroup $H$ of a group $G$ satisfies property $p$ in $G$ if and only if every inner automorphism of $G$ restricts to a function from $H$ to itself that satisfies $\alpha$.

### Intermediate subgroup condition

Further information: intermediate subgroup condition

A subgroup property $p$ is said to satisfy the intermediate subgroup condition if, for any groups $H \le K \le G$ such that $H$ satisfies $p$ in $G$, $H$ also satisfies $p$ in $K$.

## Facts used

1. Inner is extensibility-stable: an inner automorphism of a subgroup can be extended to an inner automorphism of the whole group.
2. Left-extensibility-stable implies intermediate subgroup condition

## Proof

The proof follows by combining facts (1) and (2): by fact (1), the left side of the function restriction expression, inner automorphism, is extensibility-stable, so any left-inner property is left-extensibility-stable; fact (2) then gives the intermediate subgroup condition.
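For readers who want the combination of the two facts unpacked, here is the argument in symbols (a sketch in our own notation, with $c_k$ for conjugation by $k$; this is not part of the original wiki page):

```latex
% Why H <= K <= G with H satisfying p in G forces H to satisfy p in K.
% Here c_k denotes the inner automorphism x \mapsto k x k^{-1} (notation assumed).
\begin{align*}
&\text{Let } H \le K \le G \text{ with } H \text{ satisfying } p \text{ in } G,
  \text{ and let } k \in K.\\
&\text{By Fact (1), } c_k \in \operatorname{Inn}(K) \text{ extends to an inner
  automorphism of } G \text{ (conjugation by the same element } k\text{).}\\
&\text{Since } H \text{ satisfies } p \text{ in } G, \text{ the restriction }
  c_k|_H \colon H \to H \text{ satisfies } \alpha.\\
&\text{Thus every inner automorphism of } K \text{ restricts to an }
  \alpha\text{-function on } H, \text{ i.e., } H \text{ satisfies } p \text{ in } K.
\end{align*}
```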
2019-04-22 20:24:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9500518441200256, "perplexity": 1178.2931189172089}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422221208-00115.warc.gz"}
https://mran.revolutionanalytics.com/snapshot/2022-04-08/web/packages/LongDat/vignettes/LongDat_cont_tutorial.html
Longitudinal analysis pipeline with longdat_cont()

Introduction

This is an example of running longdat_cont(). Note that the time variable (proxy of treatment) here should be continuous. If the time variable is discrete, please apply longdat_disc() instead.

# Load the packages
library(LongDat)
library(tidyverse)
library(kableExtra)

Explaining the input data frame format

The input data frame (called master table) should have the same format as the example data "LongDat_cont_master_table". If you have metadata and feature (e.g. microbiome, immunome) data stored in separate tables, you can go to the section Preparing the input data frame with make_master_table() below. The function make_master_table() helps you to create a master table from metadata and feature tables.

Now let's have a look at the required format for the input master table. The example below is a dummy longitudinal data set with 2 time points (day 0 and 7). Here we want to see if the treatment has a significant effect on gut microbial abundance or not.

# Read in the data frame. LongDat_cont_master_table is already lazily loaded.
master <- LongDat_cont_master_table
master %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>%
  kableExtra::scroll_box(width = "700px", height = "200px")

Individual  Day  sex  age  DrugA  DrugB  BacteriumA  BacteriumB  BacteriumC
1           0    0    61   0.0    10     11          4           26
1           7    0    61   0.0    10     13          2           22
2           0    0    66   0.0    640    344         0           6
2           7    0    66   0.0    320    3           0           670
3           0    0    63   7.5    100    55          0           10
3           7    0    63   7.5    0      5           0           111
4           0    0    47   0.0    300    60          0           7
4           7    0    47   0.0    200    4           0           100
5           0    1    51   0.0    160    100         20          5
5           7    1    51   0.0    130    3           64          200
6           0    1    53   10.0   0      32          138         4
6           7    1    53   10.0   0      2           0           54
7           0    0    50   0.0    40     22          105         180
7           7    0    50   0.0    20     27          158         49
8           0    1    54   0.0    100    24          0           0
8           7    1    54   0.0    80     0           0           48
9           0    0    44   0.0    160    65          0           20
9           7    0    44   0.0    80     1           0           130
10          0    0    60   0.0    100    19          163         0
10          7    0    60   0.0    25     0           41          38

As you can see, "Individual" is the first column, and the features (dependent variables), which are gut microbial abundances in this case, are at the end of the table. Any column apart from individual, test_var (e.g. Day) and dependent variables will be taken as potential confounders (confounding with the test_var). For example, here the potential confounders are sex, age, drug A and drug B. Please avoid using characters that don't belong to ASCII printable characters for the column names in the input data frame.

Preparing the input data frame with make_master_table()

If you have your input master table prepared already, you can skip this section and go to Run longdat_cont() directly. If your metadata and feature (e.g. microbiome, immunome) data are stored in two tables, you can create a master table out of them easily with the function make_master_table().

First, let's take a look at an example of the metadata table. The metadata table should be a data frame whose columns consist of sample identifiers (sample_ID, unique for each sample), individual, time point and other metadata. Each row corresponds to one sample_ID.

# Read in the data frame. LongDat_cont_metadata_table is already lazily loaded.
metadata <- LongDat_cont_metadata_table
metadata %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>%
  kableExtra::scroll_box(width = "700px", height = "200px")

Sample_ID  Individual  Day  sex  age  DrugA  DrugB
1_0        1           0    0    61   0.0    10
1_7        1           7    0    61   0.0    10
2_0        2           0    0    66   0.0    640
2_7        2           7    0    66   0.0    320
3_0        3           0    0    63   7.5    100
3_7        3           7    0    63   7.5    0
4_0        4           0    0    47   0.0    300
4_7        4           7    0    47   0.0    200
5_0        5           0    1    51   0.0    160
5_7        5           7    1    51   0.0    130
6_0        6           0    1    53   10.0   0
6_7        6           7    1    53   10.0   0
7_0        7           0    0    50   0.0    40
7_7        7           7    0    50   0.0    20
8_0        8           0    1    54   0.0    100
8_7        8           7    1    54   0.0    80
9_0        9           0    0    44   0.0    160
9_7        9           7    0    44   0.0    80
10_0       10          0    0    60   0.0    100
10_7       10          7    0    60   0.0    25

This example is dummy longitudinal metadata with 2 time points for each individual. Besides the sample_ID, individual and day columns, there is also information on sex, age and the drugs that individuals take. Here we want to see if the treatment has a significant effect on gut microbial abundance or not.

Then, let's see what a feature table looks like. The feature table should be a data frame whose columns only consist of sample identifiers (sample_ID) and features (dependent variables, e.g. microbiome). Each row corresponds to one sample_ID. Please do not include any columns other than sample_ID and features in the feature table.

# Read in the data frame. LongDat_cont_feature_table is already lazily loaded.
feature <- LongDat_cont_feature_table
feature %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>%
  kableExtra::scroll_box(width = "700px", height = "200px")

Sample_ID  BacteriumA  BacteriumB  BacteriumC
1_0        11          4           26
1_7        13          2           22
2_0        344         0           6
2_7        3           0           670
3_0        55          0           10
3_7        5           0           111
4_0        60          0           7
4_7        4           0           100
5_0        100         20          5
5_7        3           64          200
6_0        32          138         4
6_7        2           0           54
7_0        22          105         180
7_7        27          158         49
8_0        24          0           0
8_7        0           0           48
9_0        65          0           20
9_7        1           0           130
10_0       19          163         0
10_7       0           41          38

This example is dummy longitudinal feature data. It stores the gut microbial abundance of each sample. To enable the joining process of metadata and feature tables, please pay attention to the following rules.

1. The row numbers of the metadata and feature tables should be the same.
2. Sample_IDs are unique for each sample (i.e. no repeated sample_ID).
3. Metadata and feature tables have the same sample_IDs. If sample_IDs don't match between the two tables, the joining process will fail.
4. As mentioned above, the feature table should include only the columns of sample_ID and features.
5. Avoid using characters that don't belong to ASCII printable characters for the column names.

Now let's create a master table and take a look at the result!

master_created <- make_master_table(metadata_table = LongDat_cont_metadata_table,
                                    feature_table = LongDat_cont_feature_table,
                                    sample_ID = "Sample_ID",
                                    individual = "Individual")
#> [1] "Finished creating master table successfully!"
master_created %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12) %>%
  kableExtra::scroll_box(width = "700px", height = "200px")

Individual  Day  sex  age  DrugA  DrugB  BacteriumA  BacteriumB  BacteriumC
1           0    0    61   0.0    10     11          4           26
1           7    0    61   0.0    10     13          2           22
2           0    0    66   0.0    640    344         0           6
2           7    0    66   0.0    320    3           0           670
3           0    0    63   7.5    100    55          0           10
3           7    0    63   7.5    0      5           0           111
4           0    0    47   0.0    300    60          0           7
4           7    0    47   0.0    200    4           0           100
5           0    1    51   0.0    160    100         20          5
5           7    1    51   0.0    130    3           64          200
6           0    1    53   10.0   0      32          138         4
6           7    1    53   10.0   0      2           0           54
7           0    0    50   0.0    40     22          105         180
7           7    0    50   0.0    20     27          158         49
8           0    1    54   0.0    100    24          0           0
8           7    1    54   0.0    80     0           0           48
9           0    0    44   0.0    160    65          0           20
9           7    0    44   0.0    80     1           0           130
10          0    0    60   0.0    100    19          163         0
10          7    0    60   0.0    25     0           41          38

The table "master_created" is just the same as the table "master" or "LongDat_cont_master_table" in the previous section, with "Individual" as the first column and the features (dependent variables), which are gut microbial abundances in this case, at the end of the table. Any column apart from individual, test_var (e.g. Day) and dependent variables will be taken as potential confounders (confounding with the test_var). For the details of the arguments, please read the help page of this function by using ?make_master_table. OK, now we're ready to run longdat_cont()!

Run longdat_cont()

The input is the example data frame LongDat_cont_master_table (same as "master" or "master_created" in the previous sections), and the data_type is "count" since the dependent variables (features, in this case gut microbial abundances) are count data. The "test_var" is the independent variable you're testing, and here we're testing "Day" (time as the proxy for treatment). The variable_col is 7 because the dependent variables start at column 7, and fac_var marks the columns that aren't numerical. For the details of the arguments, please read the help page of this function by using ?longdat_cont. The run below takes less than a minute to complete. When data_type is "count", please remember to set a seed (as shown below) so that you'll get a reproducible randomized control test.

# Run longdat_cont() on LongDat_cont_master_table
set.seed(100)
test_cont <- longdat_cont(input = LongDat_cont_master_table,
                          data_type = "count",
                          test_var = "Day",
                          variable_col = 7,
                          fac_var = c(1, 3))
#> [1] "Start data preprocessing."
#> [1] "Finish data preprocessing."
#> [1] "Start selecting potential confounders."
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] "Finished selecting potential confounders."
#> [1] "Start null model test."
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] "Finish null model test."
#> [1] "Start confounding model test."
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] "Finish confounding model test."
#> [1] "Start unlisting tables from confounding model result."
#> [1] "Finish unlisting tables from confounding model result."
#> [1] "Finished post-hoc correlation test."
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] "Finished post-hoc correlation test."
#> [1] "Start randomized negative control model test."
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] 1
#> [1] 2
#> [1] 3
#> [1] "Finish randomized negative control model test."
#> [1] "Start removing the dependent variables to be exlcuded."
#> [1] "Finish removing the dependent variables to be exlcuded."
#> [1] "Start generating result tables."
#> [1] "Finished successfully!"

If you have completed running the function successfully, you'll see the message "Finished successfully!" at the end. The results are stored in list format.
Results

The major outputs from longdat_cont() include a result table and a confounder table. If you have count data (data_type is "count"), then there is a chance that you get a third table, the "randomized control table". For more details about the "randomized control table", please read the help page of this function by using ?longdat_cont.

Result table

Let's have a look at the result table first.

# The first dataframe in the list is the result table
result_table <- test_cont[[1]]
result_table %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12, position = "center") %>%
  kableExtra::scroll_box(width = "700px")

Feature     Prevalence_percentage  Mean_abundance  Signal  Effect     EffectSize  Null_time_model_q  Post-hoc_q
BacteriumA  90                     39.50           OK_nc   Decreased  -0.7809864  0.0000011          0.0000482
BacteriumB  45                     34.75           NS      NS         -0.1328821  0.4496104          0.5765105
BacteriumC  90                     84.00           OK_nc   Enriched   0.7112976   0.0012725          0.0004375

The second and third columns show the prevalence and mean abundance of each feature. According to the "Signal" column, treatment is a significant predictor for BacteriumA and BacteriumC, as they show "OK_nc" (which represents "OK and no confounder"): the abundances of BacteriumA and BacteriumC alter significantly through time (proxy of treatment), and there is no potential confounder. If there is a confounding effect in the result, please see the confounder table to find out what the confounders are. As for BacteriumB, time (proxy of treatment) has no effect on its abundance. The following column, "Effect", describes the trend of the dependent variables along time. Here we can tell that BacteriumA and BacteriumC have decreasing and increasing patterns, respectively. From the next column, "EffectSize", we know that the effect sizes are -0.78 and 0.71, respectively. The important and most relevant information for users ends here, spanning the first column through "EffectSize". The remaining columns contain the details of the model test q values ("Null_time_model_q") and the post-hoc test q values ("Post-hoc_q"). For more detailed information on the columns in the result table, please refer to the help page by using ?longdat_cont.

The explanation of each type of "Signal" is listed below.

- NS (Non-significant): There's no effect of time.
- OK_nc (OK and no confounder): There's an effect of time and there's no potential confounder.
- OK_d (OK but doubtful): There's an effect of time and there's no potential confounder; however, the confidence interval of the test_var estimate in the model test includes zero, and thus it is doubtful. Please check the raw data (e.g. plot feature against time) to confirm if there is a real effect of time.
- OK_sd (OK and strictly deconfounded): There are potential confounders; however, there's an effect of time and it is independent of those of the confounders.
- AD (Ambiguously deconfounded): There are potential confounders, and it isn't possible to conclude whether the effect results from time or from the confounders.
- C (Confounded): There's an effect of time, but it can be reduced to the confounding effects.

Confounder table

Next, let's take a look at the confounder table.
# The second dataframe in the list is the confounder table
confound_table <- test_cont[[2]]
confound_table %>%
  kableExtra::kbl() %>%
  kableExtra::kable_paper(bootstrap_options = "responsive", font_size = 12, position = "center") %>%
  kableExtra::scroll_box(width = "700px")

Feature  Confounder1  Confounding_type1  Effect_size1

The columns of this confounder table are grouped every three columns. "Confounder1" is the name of the confounder, "Confounding_type1" is the confounding type of confounder1, and "Effect_size1" is the effect size of the dependent-variable values between different levels of confounder1. If there is more than one confounder, they will be listed along the row of each dependent variable. Since there is no confounding effect found in this example (according to the result table), the confounder table is blank. If you'd like to see a result with confounders, please read the vignette of longdat_disc().

Result interpretation

From the results above, we see that the treatment induces significant changes in the abundance of BacteriumA and BacteriumC, while causing no alteration in that of BacteriumB.

Plotting the result

Finally, we can plot the result with the function cuneiform_plot(). The required input is a result table from longdat_cont() (or any table with the same format as a result table).

test_plot <- cuneiform_plot(result_table = test_cont[[1]], title_size = 15)
#> [1] "Finished plotting successfully!"
test_plot

Here we can see the result clearly from the cuneiform plot. It shows the features whose signals are not "NS". The left panel displays the effects in each time interval: red represents a positive effect size while blue denotes a negative one (colors can be customized by users). Significant signals are indicated by solid shapes, whereas insignificant signals are denoted by transparent ones. The right panel displays the confounding status of each feature, and users can remove it by specifying confound_panel = FALSE. For more details of the arguments, please read the help page of this function by using ?cuneiform_plot.

Wrap-up

This tutorial ends here! If you have any further questions and can't find the answers in the vignettes or help pages, please contact the author.
2023-02-08 22:55:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26254671812057495, "perplexity": 2265.1728036410077}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00070.warc.gz"}
https://www2.cms.math.ca/Events/winter20/schedule_daily
2020 CMS Winter Meeting
Montreal, December 4 - 7, 2020

# Schedule - by day

Detailed session schedules will be posted on the web site beginning in late April. Once the schedules are made available to us by the organizers, we will post them as quickly as possible. Please note that schedules are subject to change without notice. All scientific sessions will be held online. All times listed below are Eastern Standard Time (EST).

Tuesday December 1

13:00 - 17:00  CMS Executive Committee

Thursday December 3

9:30 - 10:00  Marc Masdeu (Universitat Autònoma de Barcelona), Computations with Arithmetic Groups, Quaternionic rigid meromorphic cocycles
10:00 - 10:30  Graham Ellis (National University of Ireland, Galway), Computations with Arithmetic Groups, An algorithm for computing Hecke operators
10:45 - 11:15  Angelica Babei (Centre de recherches mathématiques), Computations with Arithmetic Groups, Zeros of period polynomials for Hilbert modular forms
11:00 - 12:00  CMS Development Group Meeting
11:15 - 11:45  Ben Breen (Clemson University), Computations with Arithmetic Groups, A trace formula for Hilbert modular forms
12:00 - 12:30  Avner Ash (Boston College), Computations with Arithmetic Groups, Cohomology of congruence subgroups of $SL_3(Z)$ and real quadratic fields
12:30 - 17:30  CMS Board of Directors Meeting

Friday December 4

1:36 - 2:06  Marina Iliopoulou (Kent), Discrete Analysis, A discrete Kakeya-type inequality
2:12 - 2:42  Aled Walker (CRM Montreal), Discrete Analysis, Effective results on the structure of sumsets
2:48 - 3:18  Sarah Peluse (IAS), Discrete Analysis, Modular zeros in the character table of the symmetric group
3:24 - 3:54  Fernando Shao (University of Kentucky), Discrete Analysis, Gowers uniformity of primes in arithmetic progressions
12:45 - 13:00  Break
12:45 - 13:00  Equity, Diversity and Inclusiveness Committee Breakout - COVID19 Panel Discussion
13:00 - 13:20  Fernando Peruani (CY Cergy Paris Université), Mathematical biology, A mathematical approach to bacterial infections: models for bacterial exploration and infection
13:00 - 13:30  Kristine Bauer (University of Calgary), Homotopy Theory
13:00 - 13:30  Sam Chow (Warwick), Discrete Analysis, Bohr sets in diophantine approximation
13:00 - 13:30  Craig Fraser (IHSPT-Toronto), History and Philosophy of Mathematics, Henri Poincaré's Development of Hamilton-Jacobi Theory
13:00 - 13:30  Stefan Glock (ETH Zurich), Combinatorial Designs, Approximate Steiner triple systems of large girth
13:00 - 13:30  Dimitris Koukoulopoulos (Montréal), Probability in Number Theory, How concentrated can the divisors of a typical integer be?
13:00 - 13:30  Lucas Mol (University of Winnipeg), Graph Theory, The Threshold Dimension of a Graph
13:00 - 13:30  Alexei Oblomkov (UMass Amherst), Algebraic Geometry of Integrable Systems, 3D sigma models with defects and knot homology
13:00 - 13:30  Romain Petrides (Université Paris Diderot), Geometric and Computational Spectral Theory, Free boundary minimal surfaces of any topological type in euclidean balls via shape optimization (Part 1)
13:00 - 13:30  Neha Prabhu (Chennai Mathematical Institute), Arithmetic Statistics, A joint distribution theorem with applications to extremal primes for elliptic curves
13:00 - 13:40  Zhang Jun (Montreal), Symplectic Topology, Quantitative Lagrangian embeddings
13:00 - 14:00  Alan Thompson (Loughborough University), Fibrations and Degenerations in Algebraic Geometry
13:20 - 13:40  Grant Lythe (University of Leeds), Mathematical biology, How many TCR clonotypes does a body maintain?
13:30 - 14:00  Ricardo Alonso (Texas A&M University at Qatar, Qatar), Nonlinear PDEs and kinetic problems, Brief Intro to Dissipative Particle Systems and the role of self-similarity
13:30 - 14:00  Emma Bailey (CUNY), Probability in Number Theory, Random matrices and $L$-functions: moments of moments, branching, and log-correlation
13:30 - 14:00  Curtis Bright (Waterloo), Combinatorial Designs, A Resolution of Lam's Problem via Satisfiability Solvers
13:30 - 14:00  Ben Cameron (Guelph), Graph Theory, The mean subtree order of a graph under edge addition
13:30 - 14:00  Anup Dixit (Chennai Mathematical Institute), Arithmetic Statistics, On the classification problem for general Dirichlet series
13:30 - 14:00  Henrik Mathiesen (Chicago), Geometric and Computational Spectral Theory, Free boundary minimal surfaces of any topological type in Euclidean balls via shape optimization (Part 2)
13:30 - 14:00  Ruxandra Moraru (University of Waterloo), Algebraic Geometry of Integrable Systems, Moduli spaces of stable bundles on complex nilmanifolds
13:30 - 14:00  Apurva Nakade (University of Western Ontario), Homotopy Theory, Discrete Chern-Simons via 2-group bundles on elliptic curves
13:30 - 14:00  Yelda Nasifoglu (Oxford), History and Philosophy of Mathematics, The changing nature of mathematical diagrams in the seventeenth century
13:40 - 14:00  Bard Ermentrout (University of Pittsburgh), Mathematical biology, A model for the inflammatory response to SARS-CoV-2 in the upper- and lower-respiratory tracts
14:00 - 14:20  Sam Jamaleddine (McGill University), Mathematical biology, Investigating the effects of T cell avidity distributions on acute vs. chronic viral infection dynamics
14:00 - 14:30  Iain Beaton (Dalhousie University), Graph Theory, The Average Order of Dominating Sets of a Graph
14:00 - 14:30  Francesco Cellarosi (Queens), Probability in Number Theory
14:00 - 14:30  Gong Chen (Fields Institute and University of Toronto, Canada), Nonlinear PDEs and kinetic problems
14:00 - 14:30  Iren Darijani (Memorial), Combinatorial Designs, Colourings of star systems
14:00 - 14:30  Lucile Devin (Chalmers University of Technology and University of Gothenburg), Arithmetic Statistics, Chebyshev's bias and sums of two squares
14:00 - 14:30  Jack Ding (University of Toronto), Algebraic Geometry of Integrable Systems
14:00 - 14:30  Juan Fernández González and Dirk Schlimm (McGill), History and Philosophy of Mathematics, From a doodle to a theorem: a case study in mathematical discovery
14:00 - 14:30  Sacha Ikonicoff (University of Calgary), Homotopy Theory, Unstable algebras over an operad
14:00 - 14:30  Rosa Orellana (Dartmouth College), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
14:00 - 14:30  Jeffrey Ovall (Portland State U.), Geometric and Computational Spectral Theory, Exploring Eigenvector Localization Using Filtered Subspace Iteration (FEAST)
14:00 - 15:00  Elana Kalashnikov (Harvard University), Fibrations and Degenerations in Algebraic Geometry
14:10 - 14:50  Marcelo Atallah (Montreal), Symplectic Topology, Hamiltonian no-torsion
14:20 - 14:40  Jürgen Reingruber (Institut de Biologie École Normale Supérieure), Mathematical biology, Monitoring and predicting the Covid-19 epidemic and its implications for hospitals
14:30 - 15:00  Katharine Adamyk (University of Western Ontario), Homotopy Theory, Lifting A(1)-Modules
14:30 - 15:00  Yakine Bahri (University of Victoria, Canada), Nonlinear PDEs and kinetic problems
14:30 - 15:00  Graham Cox (Memorial), Geometric and Computational Spectral Theory, Defining the spectral position of a Neumann domain
14:30 - 15:00  Alexandra Florea (Columbia University), Arithmetic Statistics, Non-vanishing for cubic $L$-functions
14:30 - 15:00  Jeannette Janssen (Dalhousie University), Graph Theory, Simultaneous embeddings of nested interval graphs
14:30 - 15:00  Sacha Mangerel (CRM), Probability in Number Theory, Arrangements of Consecutive Values of Real Multiplicative Functions
14:30 - 15:00  Davesh Maulik (MIT), Algebraic Geometry of Integrable Systems, Cohomology of the moduli of Higgs bundles and the Hausel-Thaddeus conjecture
14:30 - 15:00  Margaret E. Schotte (York), History and Philosophy of Mathematics, 'Demonstrate all this with diagrams': Recovering mathematical practice from early modern navigation exams
14:30 - 15:00  Sophie Spirkl (University of Waterloo), Algebraic Combinatorixx (Women in Algebraic Combinatorics), A complete multipartite basis for the chromatic symmetric function
14:40 - 15:00  Becca Asquith (Imperial College London), Mathematical biology
15:00 - 15:30  Break
15:30 - 16:00  Samantha Dahlberg (Arizona State University), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
15:30 - 16:00  Suresh Eswarathasan (Dalhousie), Probability in Number Theory, Counting tangencies of nodal domains
15:30 - 16:00  Daniel Horsley (Monash), Combinatorial Designs, An Evans-style result for block designs
15:30 - 16:00  Margaret-Ellen Messinger (Mount Allison), Graph Theory, Reconfiguration for Dominating Sets
15:30 - 16:00  Boris Mordukhovich (Wayne State), Variational Analysis: Theory and Applications, A Generalized Newton Method for Subgradient Systems
15:30 - 16:00  Nathan Ng (University of Lethbridge), Arithmetic Statistics, Mean values of long Dirichlet polynomials
15:30 - 16:00  Iosif Polterovich (Montréal), Geometric and Computational Spectral Theory, The Dirichlet-to-Neumann map, the boundary Laplacian and an unpublished paper of Hörmander
15:30 - 16:00  Dayton Preissl (University of Victoria, Canada), Nonlinear PDEs and kinetic problems, The Hot, Magnetized Relativistic Maxwell Vlasov System
15:30 - 16:00  Junliang Shen (MIT), Algebraic Geometry of Integrable Systems, Cohomological $\chi$-independence for moduli of 1-dimensional sheaves and moduli of Higgs bundles
15:30 - 16:00  David Waszek (McGill), History and Philosophy of Mathematics
15:30 - 16:30  Daniel Lopez (IMPA), Fibrations and Degenerations in Algebraic Geometry
16:00 - 16:30  Robert Bailey (Grenfell Campus, MUN), Graph Theory, On the 486-vertex distance-regular graphs of Koolen-Riebeek and Soicher
16:00 - 16:30  William Dou (University of Hawaii-Manoa), History and Philosophy of Mathematics, What Does "Aligning" Mean? Practices of Justification across Chinese Logic and Mathematics
16:00 - 16:30  Tao Feng (BJTU), Combinatorial Designs, Novák's conjecture on cyclic Steiner triple systems and its generalization
16:00 - 16:30  Iva Halacheva (Northeastern University), Algebraic Geometry of Integrable Systems, Lagrangian correspondences in Schubert calculus
16:00 - 16:30  Alia Hamieh (University of Northern British Columbia), Arithmetic Statistics, Mean squares of long Dirichlet polynomials with the divisor function $\tau_2(n)$
16:00 - 16:30  Megumi Harada (McMaster University), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
16:00 - 16:30  Walaa Moursi (Waterloo), Variational Analysis: Theory and Applications
16:00 - 16:30  Aled Walker (CRM & Cambridge), Probability in Number Theory, Triple correlations of dilates squares modulo 1
16:30 - 17:30  Alicia Carriquiry (Iowa State University), Public Lecture
16:30 - 17:30  Tokio Sasaki (University of Miami), Fibrations and Degenerations in Algebraic Geometry
17:30 - 18:30  Student Social
19:00 - 19:30  Hiroaki Kikuchi (Tsuda University, Japan), Nonlinear PDEs and kinetic problems, Existence of a ground state and blowup problem for a class of nonlinear Schrödinger equations
19:30 - 20:00  Takafumi Akahori (Shizuoka University, Japan), Nonlinear PDEs and kinetic problems, Uniqueness of ground states for combined power-type nonlinear scalar field equations
20:00 - 20:30  Kai Koike (Kyoto University, Japan), Nonlinear PDEs and kinetic problems, Refined pointwise estimates for the solutions to a system of a 1D viscous compressible fluid and a moving point mass
20:30 - 21:00  Tong Yang (City University of Hong Kong, Hong Kong), Nonlinear PDEs and kinetic problems, Some recent progress on the Boltzmann equation without angular cutoff
21:00 - 21:30  I-Kun Chen (National Taiwan University, Taiwan), Nonlinear PDEs and kinetic problems

Saturday December 5

1:30 - 2:00  Oleksiy Klurman (Bristol), Discrete Analysis, Zeros of Fekete polynomials
2:00 - 2:30  Matthew Colbrook (Cambridge University), Spectral Methods and Singular Integral Equations, A Mathieu function boundary spectral method for acoustic scattering
2:06 - 2:36  Zane Li (Indiana University), Discrete Analysis, Connections between decoupling and efficient congruencing
2:30 - 3:00  Travis Askham (NJIT), Spectral Methods and Singular Integral Equations, Fast multipole methods for continuous charge distributions
2:42 - 3:12  Larry Guth (MIT), Discrete Analysis, Incidence estimates for well spaced rectangles
3:00 - 3:30  Dan Fortunato (Harvard University), Spectral Methods and Singular Integral Equations, The ultraspherical spectral element method
3:18 - 3:48  Hong Wang (IAS), Discrete Analysis, Small cap decouplings
3:30 - 4:00  Andrew Horning (Cornell University), Spectral Methods and Singular Integral Equations, Twice is enough for dangerous eigenvalues
3:54 - 4:24  Ruxiang Zhang (IAS), Discrete Analysis, Local smoothing for the wave equation in 2+1 dimensions
4:00 - 4:30  Jim Bremer (UC Davis), Spectral Methods and Singular Integral Equations, A fast algorithm for simulating scattering from a radially symmetric potential
4:30 - 5:00  Dominique Kemp (Indiana University), Discrete Analysis
4:30 - 5:00  Nilima Nigam (Simon Fraser University), Spectral Methods and Singular Integral Equations, Steklov eigenfunctions: how and why to compute them
9:00 - 9:30  Karen Strung (Czech Academy of Sciences), Operator algebras, (semi)groups, and dynamics, Constructions in minimal amenable dynamics and applications to classification of C*-algebras
9:30 - 10:00  Kristin Courtney (University of Münster), Operator algebras, (semi)groups, and dynamics, C*-structure on images of completely positive order zero maps
10:00 - 10:30  Jamie Gabe (University of Southern Denmark), Operator algebras, (semi)groups, and dynamics, Classification of embeddings
10:00 - 10:30  Eloise Hamilton (IMJ-PRG, University of Paris), Algebraic Geometry of Integrable Systems, Moduli spaces for unstable Higgs bundles of rank 2 and their geometry
10:30 - 11:00  Peter Crooks (Northeastern University), Algebraic Geometry of Integrable Systems, Hessenberg varieties and Poisson slices
10:30 - 11:00  Aaron Tikuisis (University of Ottawa), Operator algebras, (semi)groups, and dynamics, Classification of embeddings II
11:00 - 12:00  Yvan Saint Aubin (Université de Montréal), Plenary Lecture, Teaching modeling in first year - Un cours de modélisation en première année
12:00 - 12:30  Break
12:30 - 13:30  Veselin Jungic (Simon Fraser University), Adrien Pouliot Award, Teaching and Preaching Mathematics: Reflections on the Past and Thoughts on the Future
13:30 - 14:00  Break
14:00 - 14:20  Simon Girel (Université Côte d'Azur), Mathematical biology, Mathematical modeling of the CD8 T-cells immune response
14:00 - 14:30  Montaz Ali (University of the Witwatersrand), Optimization and Data Science, Convex Formulation for Planted Quasi-Clique Recovery
14:00 - 14:30  Ana Balibanu (Harvard University), Algebraic Geometry of Integrable Systems, Steinberg slices in quasi-Poisson varieties
14:00 - 14:30  Mariya Boyko (Independent scholar), History and Philosophy of Mathematics, Socialist competition and its role in Soviet mathematics education
14:00 - 14:30  Patrick Combettes (NCSU), Variational Analysis: Theory and Applications, Multivariate Monotone Inclusions in Saddle Form I: Theory and Algorithms
14:00 - 14:30  Brandon Doherty (University of Western Ontario), Homotopy Theory, Cubical models of (infinity,1)-categories
14:00 - 14:30  Ahmet Guloglu (Bilkent University), Arithmetic Statistics, Non-vanishing of Cubic Twists of L-functions
14:00 - 14:30  Adam Harper (Warwick), Probability in Number Theory, Large fluctuations of random multiplicative functions
14:00 - 14:30  Pamela Harris (Williams College), Algebraic Combinatorixx (Women in Algebraic Combinatorics), Kostant's partition function and magic multiplex juggling sequences
14:00 - 14:30  Melissa Huggan (Ryerson), Graph Theory, The Orthogonal Colouring Game
14:00 - 14:30  Matjaž Konvalinka (University of Ljubljana), Enumerative Combinatorics, Some natural extensions of the parking space
14:00 - 14:30  Robert McCann (University of Toronto), Optimal Transport and Applications, Inscribed radius bounds for lower Ricci bounded metric measure spaces with mean convex boundary
14:00 - 14:30  Joanna Niezen (Victoria), Combinatorial Designs, Sarvate-Beam Group Divisible Designs
14:00 - 14:30  David Sher (DePaul U.), Geometric and Computational Spectral Theory, Inverse Steklov spectral problem for curvilinear polygons
14:00 - 14:40  Ilia Kirillov (Toronto), Symplectic Topology, Classification of coadjoint orbits for symplectomorphism groups of surfaces with boundary
14:00 - 15:00  Matt Kerr (Washington University at St. Louis), Fibrations and Degenerations in Algebraic Geometry
14:20 - 14:40  Jacques Bélair (Université de Montréal), Mathematical biology, Waning immunity in a two-strain disease model
14:30 - 15:00  Minh Bui (NCSU), Variational Analysis: Theory and Applications, Multivariate Monotone Inclusions in Saddle Form II: Applications
14:30 - 15:00  Claire Burrin (CRM), Probability in Number Theory, Higher moment formulas for discrete lattice orbits in the plane
14:30 - 15:00  Nancy Clarke (Acadia University), Graph Theory, Surrounding Cops and Robber
14:30 - 15:00  Luigi De Pascale (Università di Pisa), Optimal Transport and Applications, The relaxation of the Coulomb multi-marginal optimal transport cost and applications
14:30 - 15:00  Olivia Dumitrescu (UNC Chapel Hill), Algebraic Geometry of Integrable Systems
14:30 - 15:00  Suresh Eswarathasan (Dalhousie), Geometric and Computational Spectral Theory, Entropy of $\epsilon$-logarithmic quasimodes
14:30 - 15:00  Kevin Halasz (SFU), Combinatorial Designs, Near transversals in group-based latin squares
14:30 - 15:00  Lucy Martinez (Stockton University), Algebraic Combinatorixx (Women in Algebraic Combinatorics), Minimum Rank of Regular Bipartite Graphs
14:30 - 15:00  Courtney Paquette (McGill University), Optimization and Data Science, Halting Time is Predictable for Large Models: A Universality Property and Average-case Analysis
14:30 - 15:00  Dorette Pronk (Dalhousie University), Homotopy Theory, Three approaches toward orbifold mapping objects
14:30 - 15:00  Vasu Tewari (University of Pennsylvania), Enumerative Combinatorics, Refined mixed Eulerian numbers
14:30 - 15:00  Maryam Vulis (St Johns University), History and Philosophy of Mathematics, The Life and Work of Zygmunt Janiszewski (1888-1920)
14:30 - 15:00  Asif Zaman (Toronto), Arithmetic Statistics, An approximate form of Artin's holomorphy conjecture and nonvanishing of Artin L-functions
14:40 - 15:00  Eric Foxall (University of British Columbia), Mathematical biology, Bifurcation theory of well-mixed stochastic population models
15:00 - 15:20  Paul Francois (McGill University), Mathematical biology, Information in cytokine dynamics: robotic mapping and machine learning
15:00 - 15:30  Oscar Bruno (Caltech), Geometric and Computational Spectral Theory, Domains Without Dense Steklov Nodal Sets
15:00 - 15:30  Sunita Chepuri (University of Michigan), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
15:00 - 15:30  Coen del Valle (Victoria), Combinatorial Designs, Block designs of dimension three
15:00 - 15:30  Tom Drucker (University of Wisconsin-Whitewater), History and Philosophy of Mathematics
15:00 - 15:30  Hao Hu (Waterloo), Variational Analysis: Theory and Applications
15:00 - 15:30  Lisa Jeffrey (University of Toronto), Algebraic Geometry of Integrable Systems, The triple reduced product and Higgs bundles
15:00 - 15:30  Youness Lamzouri (Lorraine), Probability in Number Theory, Zeros of linear combinations of $L$-functions near the critical line
15:00 - 15:30  Tongseok Lim (Purdue University), Optimal Transport and Applications, Geometry of interaction energy minimizers
15:00 - 15:30  Zhaosong Lu (University of Minnesota), Optimization and Data Science, First-Order Augmented Lagrangian Methods for Convex Conic Programming
15:00 - 15:30  Amita Malik (AIM), Arithmetic Statistics, Bias statistics for the zeros of L-functions
15:00 - 15:30  Nicholas Meadows (Carleton University), Homotopy Theory, Spectral Sequences in $(\infty, 1)$-categories
15:00 - 15:30  Todd Mullen (University of Saskatchewan), Graph Theory, Recent Results in Diffusion
15:00 - 15:30  Svetlana Poznanovikj (Clemson University), Enumerative Combinatorics, Hecke insertion and maximal increasing and decreasing sequences in fillings of polyominoes
15:00 - 15:40  Jeremy Lane (McMaster), Symplectic Topology, Canonical bases, toric degenerations, and collective integrable systems
15:00 - 16:00  Sukjoo Lee (University of Pennsylvania), Fibrations and Degenerations in Algebraic Geometry
15:20 - 15:40  Nathanael Hozé (Institut Pasteur), Mathematical biology
15:30 - 16:00  Yankai Cao (UBC), Optimization and Data Science, A Global Optimization Algorithm for Clustering Problems
15:30 - 16:00  Vesselin Dimitrov (Toronto), Probability in Number Theory
15:30 - 16:00  Danny Dyer (MUN), Graph Theory, Gracefully labelling triangular cacti using Skolem sequences
15:30 - 16:00  Kimon Fountoulakis (Waterloo), Variational Analysis: Theory and Applications
15:30 - 16:00  Yash Jhaveri (Columbia University), Optimal Transport and Applications, On the (in)stability of the identity map in optimal transportation
15:30 - 16:00  Allysa Lumley (CRM), Arithmetic Statistics, Primes in short intervals: Heuristics and calculations
15:30 - 16:00  Trent Marbach (Ryerson University), Combinatorial Designs, The localization number of designs
15:30 - 16:00  Niny Arcila Maya (University of British Columbia), Homotopy Theory, Decomposition of topological Azumaya algebra with involution
15:30 - 16:00  Braxton Osting (Utah), Geometric and Computational Spectral Theory, Maximal Spectral Gaps for Periodic Schroedinger Operators
15:30 - 16:00  Brent Pym (McGill University), Algebraic Geometry of Integrable Systems, Beauville-Bogomolov-Weinstein splitting for Poisson varieties
15:30 - 16:00  Sandra Visokolskis (National University of Cordoba, Argentina), History and Philosophy of Mathematics, Fourier's Resolution of the Heat Equation by Transduction: A Contemporary Approach
15:30 - 16:00  Nancy Wallace (UQAM), Algebraic Combinatorixx (Women in Algebraic Combinatorics), Toward a Schurification of Schröder path formulas
15:40 - 16:00  Johannes Textor (Radboud University Medical Center), Mathematical biology, A tipping point in cancer-immune dynamics leads to divergent immunotherapy responses and hampers biomarker discovery
16:00 - 16:30  Ahmad Alkasasbeh (MUN), Graph Theory, Graceful Labellings of Variable Windmills Using Skolem Sequences
16:00 - 16:30  Brenda Davison (SFU), History and Philosophy of Mathematics
16:00 - 16:30  Rachel Hardeman (University of Calgary), Homotopy Theory
16:00 - 16:30  Young-heon Kim (University of British Columbia), Optimal Transport and Applications, Optimal transport for dendritic structures
16:00 - 16:30  Kirsten Nelson (Carleton), Combinatorial Designs, Interleaved Sequences
16:00 - 16:30  Ibrahim Numanagić (University of Victoria), Optimization and Data Science, Optimization in Pharmacogenomics
16:00 - 16:30  Anna Pun (University of Virginia), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
16:00 - 16:30  Colleen Robichaux (University of Illinois Urbana-Champaign), Enumerative Combinatorics, An Efficient Algorithm for Deciding the Vanishing of Schubert Polynomial Coefficients
16:00 - 16:30  Will Sawin (Columbia), Arithmetic Statistics, Measures from moments for random groups
16:00 - 16:30  Shiyu Shen (University of Toronto), Algebraic Geometry of Integrable Systems, Topological mirror symmetry for parabolic Higgs bundles
16:00 - 16:30  Mohamed Tawhid (TRU), Variational Analysis: Theory and Applications, Improved Salp Swarm Optimization Algorithm for Data Clustering
16:00 - 16:30  Asif Zaman (Toronto), Probability in Number Theory, Low moments of random power series
16:00 - 16:30  Xuwen Zhu (North Eastern), Geometric and Computational Spectral Theory, Spectral properties of spherical conical metrics
16:00 - 16:40  Jordan Payette (Montreal), Symplectic Topology, Mean value inequalities for the Poisson bracket invariant
16:00 - 17:00  Ursula Whitcher (Mathematical Reviews), Fibrations and Degenerations in Algebraic Geometry
16:30 - 17:00  Salihah Alwadani (UBCO), Variational Analysis: Theory and Applications, Resolvents and Yosida approximations of displacement mappings of isometries
16:30 - 17:00  Maritza Branker (Niagara University), History and Philosophy of Mathematics, Euphemia Lofton Haynes: her forgotten legacy
16:30 - 17:00  Jacques Hurtubise (McGill University), Algebraic Geometry of Integrable Systems, Moduli of bundles and degenerations of curves
16:30 - 17:00  Seonghyeon Jeong (Michigan State University), Optimal Transport and Applications, Equivalence of the synthetic MTW conditions
16:30 - 17:00  David Keating (University of California, Berkeley), Enumerative Combinatorics, A Vertex Model for LLT Polynomials
16:30 - 17:00  Seoyoung Kim (Queen's University), Arithmetic Statistics, From the Birch and Swinnerton-Dyer conjecture to Nagao's conjecture
16:30 - 17:00  Kyle MacKeigan (Dalhousie University), Graph Theory, Orthogonal Colourings of Graphs
16:30 - 17:00  Olya Mandelshtam (Brown University), Algebraic Combinatorixx (Women in Algebraic Combinatorics)
16:30 - 17:00  Mahsa Nasrollahi (Regina), Combinatorial Designs, The Erdős-Ko-Rado theorem for 2-intersecting families of perfect matchings
16:30 - 17:00  Brad Rodgers (Queens), Probability in Number Theory, The distribution of sums of two squares in short intervals
16:30 - 17:00  Luis Scoccola (Michigan State University), Homotopy Theory, Homotopy coherence in applied topology
16:30 - 17:00  Jabed Tomal and Jan Ciborowski (Thompson River University, University of Calgary), Optimization and Data Science, Detection of environmental thresholds by assessing discontinuities in slopes and variances via a Bayesian regression model
17:00 - 17:30  Chris Duffy (University of Saskatchewan), Graph Theory, Homomorphisms to Reflexive Oriented and Edge-Coloured Graphs
17:00 - 17:30  Stanley Xiao (University of Toronto), Arithmetic Statistics, The number of quartic-$D_4$ fields having monogenic cubic resolvent ordered by conductor
19:00 - 19:30  Quyuan Lin (Texas A&M, US), Nonlinear PDEs and kinetic problems, The Inviscid Primitive Equations and the Effect of Rotation
19:30 - 20:00  Ikkei Shimizu (Kyoto University, Japan), Nonlinear PDEs and kinetic problems
20:00 - 20:30  Yanxia Deng (Sun Yat-sen University), Nonlinear PDEs and kinetic problems, Global existence and singularity of the Hill's type lunar problem
20:30 - 21:00  Leslie Chen (University of Massachusetts Dartmouth, US), Nonlinear PDEs and kinetic problems, Multiscale Convergence Properties for Spectral Approximation of a Model Kinetic Equation
21:00 - 21:30  Razvan Fetecau (Simon Fraser University, Canada), Nonlinear PDEs and kinetic problems, Aggregation with intrinsic interactions on Riemannian manifolds
21:30 - 22:00  Shugo Yasuda (University of Hyogo, Japan), Nonlinear PDEs and kinetic problems, Numerical analysis of the instability and aggregation in a kinetic transport equation with internal state

Sunday December 6

1:30 - 2:00  Michael Curran (Oxford), Discrete Analysis, Khovanskii's Theorem and Effective Results on Sumset Structure
2:00 - 2:30  Timon Gutleb (Imperial College London), Spectral Methods and Singular Integral Equations, Computing Equilibrium Measures with Power Law Kernels
2:06 - 2:36  Amita Malik (AIM), Discrete Analysis, Partitions into primes in arithmetic progression
2:30 - 3:00  Sheehan Olver (Imperial College London), Spectral Methods and Singular Integral Equations, Sparse spectral methods for singular integral and fractional differential equations
2:42 - 3:12  Jose Madrid (UCLA), Discrete Analysis, Improving estimates for discrete polynomial averages and related problems
3:00 - 3:30  Manas Rachh (Flatiron Institute), Spectral Methods and Singular Integral Equations, Towards automatically adaptive solvers for Maxwell's equations in three dimensions
3:18 - 3:48  Felipe Ramirez (Wesleyan University), Discrete Analysis, Remarks about inhomogeneous pair correlations
3:30 - 4:00  Richard Mikael Slevinsky (University of Manitoba), Spectral Methods and Singular Integral Equations, Fast associated classical orthogonal polynomial transforms
3:54 - 4:24  Ayla Gafni (University of Mississippi), Discrete Analysis, Asymptotics of Restricted Partition Functions
4:00 - 4:30  Alex Townsend (Cornell University), Spectral Methods and Singular Integral Equations, Computing the spectra of differential operators
4:30 - 5:00  Freddie Manners (UC San Diego), Discrete Analysis
4:30 - 5:00  Tom Trogdon (University of Washington), Spectral Methods and Singular Integral Equations, On arbitrary-precision enabled inverse scattering for the 1-dimensional Schrödinger operator
11:00 - 12:00  Irene Fonseca (Carnegie Mellon's Center for Nonlinear Analysis (CNA)), Plenary Lecture, Geometric Flows and Phase Transitions in Heterogeneous Media
12:00 - 12:30  Break
12:30 - 13:30  Jacopo De Simoi (University of Toronto), Coxeter-James Prize, Dynamical spectral rigidity and determination
13:30 - 14:00  Break
14:00 - 14:20  Arthur Sherman (National Institutes of Health), Mathematical biology, Clinical Insights from a Diabetes Progression Model
14:00 - 14:30  Paula Fermín Cueto (University of Edinburgh), Optimization and Data Science, Machine learning and statistical methods for characterising and predicting capacity degradation of Li-ion cells
14:00 - 14:30  Justine Falque (Université Paris-Sud), Enumerative Combinatorics, 3-dimensional Catalan objets: a (partial) overview and a new bijection
14:00 - 14:30  Alfred Galichon (New York University), Optimal Transport and Applications, Equilibrium transport with entropic regularization
14:00 - 14:30  Paul Gauthier (Université de Montréal), Recent Advances in Harmonic and Complex Analysis, Asymptotic first boundary value problem for holomorphic functions of several complex variables
14:00 - 14:30  Mikhail Karpukhin (Caltech), Geometric and Computational Spectral Theory, Continuity of eigenvalues with applications to eigenvalue optimization
14:00 - 14:30  Sander Kupers (University of Toronto), Homotopy Theory, The rational homotopy type of certain diffeomorphism groups
14:00 - 14:30  Esther Lamken, Combinatorial Designs, Applications of incomplete pairwise balanced designs
14:00 - 14:30  Arul Shankar (University of Toronto), Arithmetic Statistics
14:00 - 14:30  Levent Tuncel (Waterloo), Variational Analysis: Theory and Applications
14:00 - 14:40  Francisco Torres de Lizaur (Toronto), Symplectic Topology, Knots and links in Beltrami fields
14:00 - 15:00  Jordon Kostiuk (Brown University), Fibrations and Degenerations in Algebraic Geometry
14:20 - 14:40  Anmar Khadra (McGill University), Mathematical biology, Excitable media in fish keratocytes model: Canard explosion, traveling waves and beyond
14:30 - 15:00  Emilia Alvarez (University of Bristol), Arithmetic Statistics, Moments of the logarithmic derivative of characteristic polynomials from $SO(N)$ and $USp(2N)$
14:30 - 15:00  Marzieh Bayeh (University of Ottawa), Homotopy Theory, Higher Equivariant and Invariant Topological Complexities
14:30 - 15:00  Galia Dafni (Concordia University), Recent Advances in Harmonic and Complex Analysis, Extension domains for bmo
14:30 - 15:00  Gonçalo dos Reis (University of Edinburgh), Optimization and Data Science, State of Health for the capacity and internal resistance of Li-ion cells: A machine learning approach with knees and elbows
14:30 - 15:00  Sam Hopkins (University of Minnesota), Enumerative Combinatorics, Promotion of Kreweras words
14:30 - 15:00  Jean Lagacé (UCL), Geometric and Computational Spectral Theory, Geometric homogenisation theory and spectral shape optimisation
14:30 - 15:00  Michel Pain (NYU), Probability in Number Theory, Extrema of branching random walks and log-correlated fields
14:30 - 15:00  David Pike (Memorial), Combinatorial Designs, Colourings of Group Divisible Designs
14:30 - 15:00  Steve Vavasis (Waterloo), Variational Analysis: Theory and Applications
14:30 - 15:00  Shuangjian Zhang (École normale supérieure, Paris), Optimal Transport and Applications, Wasserstein Control of Mirror Langevin Monte Carlo
14:40 - 15:00  Thomas Hillen (University of Alberta), Mathematical biology, Non-local Models for Cellular Adhesion
15:00 - 15:20  Khoren Ponsin (McGill University), Mathematical biology, Mathematical Modeling of Cellular Phagocytosis During Embryogenesis of the Urogenital System
15:00 - 15:30  Emma Bailey (University of Bristol), Arithmetic Statistics
15:00 - 15:30  Paul Bourgade (NYU), Probability in Number Theory, The Fyodorov-Hiary-Keating Conjecture
15:00 - 15:30  Ryan Gibara (Université Laval), Recent Advances in Harmonic and Complex Analysis, Boundedness and continuity of rearrangements on spaces defined by mean oscillation
15:00 - 15:30  Maria Gillespie (Colorado State University), Enumerative Combinatorics, Parking functions and a projective embedding of $\overline{M}_{0,n}$
15:00 - 15:30  Lukasz Golab (University of Waterloo), Optimization and Data Science, Explanation Tables
15:00 - 15:30  Dima Jakobson (McGill), Geometric and Computational Spectral Theory, Zero and negative eigenvalues of conformally covariant operators, and nodal sets in conformal geometry
15:00 - 15:30  Ivan Limonchenko (University of Toronto), Homotopy Theory, On homotopy theory of polyhedral products with Golod face rings
15:00 - 15:30  Mateja Sajna (Ottawa), Combinatorial Designs, Bipartite 2-factorizations of complete multigraphs via layering
15:00 - 15:30  Hristo Sendov (Western), Variational Analysis: Theory and Applications, A unified approach to operator monotone functions
15:00 - 15:30  Adrian Tudorascu (West Virginia University), Optimal Transport and Applications, On the convexity condition for the semi-geostrophic system
15:00 - 15:40  Dominique Rathel-Fournier (Montreal), Symplectic Topology, Unobstructed Lagrangian cobordism groups of surfaces
15:00 - 16:00  Adrian Clingher (University of Missouri - St. Louis), Fibrations and Degenerations in Algebraic Geometry
Louis), Fibrations and Degenerations in Algebraic Geometry 15:20 - 15:40 Lisanne Rens (TU Delft), Mathematical biology, Computational models for feedback between cell shape, cell signaling and extracellular matrix 15:30 - 16:00 Farhan Abedin (Michigan State University), Optimal Transport and Applications, Exponential Convergence of Parabolic Optimal Transport on Bounded Domains 15:30 - 16:00 Steven Amelotte (University of Rochester), Homotopy Theory, The homotopy type of the fibre of the $p^\text{th}$ power map on loop spaces of spheres 15:30 - 16:00 Sedi Bartz (UM Lowell), Variational Analysis: Theory and Applications, Open questions in multi-marginal monotonicity and convex analysis 15:30 - 16:00 Antoine Comeau-Lapointe (Concordia University), Arithmetic Statistics, One-level density of the family of twists of an elliptic curve over function fields 15:30 - 16:00 Peter Danziger (Ryerson), Combinatorial Designs, Directed cycle decompositions of complete digraphs 15:30 - 16:00 Emily Dryden (Bucknell), Geometric and Computational Spectral Theory, Heat content of polygons 15:30 - 16:00 Adi Glucksam (University of Toronto), Recent Advances in Harmonic and Complex Analysis, Computability of harmonic measures 15:30 - 16:00 Maksym Radziwill (Caltech), Probability in Number Theory 15:30 - 16:00 Mark Schmidt (UBC), Optimization and Data Science, Faster Algorithms for Deep Learning? 15:40 - 16:00 Stephanie Portet (University of Manitoba), Mathematical biology, Intracellular transport driven by antagonistic motor proteins 16:00 - 16:30 Andrea Burgess (UNB), Combinatorial Designs, On the Oberwolfach Problem for single-flip 2-factors via graceful labellings 16:00 - 16:30 Martin Cech (Concordia University), Arithmetic Statistics, Mean values of real Dirichlet characters and double Dirichlet series 16:00 - 16:30 Katy Craig (University of California, Santa Barbara), Optimal Transport and Applications, A blob method for spatially inhomogeneous degenerate diffusion and applications to sampling and two layer neural networks. 
16:00 - 16:30 Tim Hoheisel (McGill), Variational Analysis: Theory and Applications, From perspective maps to epigraphical projections 16:00 - 16:30 Yu-Ru Liu (Waterloo), Probability in Number Theory, Number of Prime Factors with a Given Multiplicity 16:00 - 16:30 Ali Assem Mahmoud (University of Ottawa), Enumerative Combinatorics, On the Enumerative Structures in QFT 16:00 - 16:30 Kate Poirier (New York City College of Technology), Homotopy Theory, Polyhedra for V-infinity algebras, string topology, and moduli spaces 16:00 - 16:30 Tamon Stephen (SFU), Optimization and Data Science, Minimal Cuts Set and Computing with Monotone Boolean Functions 16:00 - 16:30 Daniel Stern (Chicago), Geometric and Computational Spectral Theory, Shape optimization in spectral geometry via variational methods for harmonic maps 16:00 - 16:30 Malik Younsi (University of Hawaii), Recent Advances in Harmonic and Complex Analysis, Holomorphic motions, capacity and conformal welding 16:00 - 16:40 Jean-Philippe Chassé (Montreal), Symplectic Topology, The impact of metric constraints on the behavior of shadow metrics 16:30 - 17:00 Alexander Brudnyi (University of Calgary), Recent Advances in Harmonic and Complex Analysis, On nonlinear Runge approximation problems 16:30 - 17:00 René Cabrera (University of Massachusetts Amherst), Optimal Transport and Applications, The Monge-Kantorovich Optimal Transportation of Mass Problem on Rectifiable Continuous Paths 16:30 - 17:00 Karl Dilcher (Dalhousie), Probability in Number Theory, General Convolution Identities for Bernoulli and Euler Polynomials 16:30 - 17:00 Hadi Kharaghani (Lethbridge), Combinatorial Designs, On Equiangular Tight Frames 16:30 - 17:00 Brad Rodgers (Queen's University), Arithmetic Statistics, Primes in short intervals in number fields 16:30 - 17:00 Nathan Williams (University of Texas, Dallas), Enumerative Combinatorics, Strange Expectations in Affine Weyl Groups 16:30 - 17:00 Jane Ye (Victoria), Variational Analysis: Theory and Applications, Second-order optimality conditions for non-convex set-constrained optimization problems 16:30 - 17:00 Xuekui Zhang (University of Victoria), Optimization and Data Science, The Optimal Design of Clinical Trials with Potential Biomarker Effects, A Novel Computational Approach 17:00 - 17:30 Ludovick Bouthat (Université Laval), Recent Advances in Harmonic and Complex Analysis, The norm of an infinite L-matrix 17:00 - 17:30 Wanlin Li (CRM), Arithmetic Statistics, The Central Value of Dirichlet L-functions over Rational Function Fields 17:30 - 18:00 Wenbo Li (University of Toronto), Recent Advances in Harmonic and Complex Analysis, Conformal dimension and minimality of stochastic objects 18:00 - 18:30 Frédéric Morneau-Guérin (Université TÉLUQ), Recent Advances in Harmonic and Complex Analysis, La $\ast$-stabilité de l’espace pondéré des suites de carré sommable sur la somme directe de groupes abéliens finis Monday December 7 9:00 - 9:30 Takuya Takeishi (Kyoto Institute of Technology), Operator algebras, (semi)groups, and dynamics, Partition functions as C*-dynamical invariants and actions of congruence monoids 9:30 - 10:00 Xin Li (University of Glasgow), Operator algebras, (semi)groups, and dynamics, K-theory for semigroup C*-algebras and partial crossed products 10:00 - 10:30 Nadia Larsen (University of Oslo), Operator algebras, (semi)groups, and dynamics, Equilibrium states on C*-algebras of right lcm monoids 10:30 - 11:00 Camila Fabre Sehnem (Victoria University of Wellington), Operator algebras, (semi)groups, and dynamics, 
Nuclearity for partial crossed products by exact discrete groups 11:00 - 12:00 Nicolas Bergeron (École normale supérieure), Plenary Lecture, Linking in torus bundles and Hecke L functions 12:00 - 12:30 Break 12:00 - 12:30 Equity, Diversity and Inclusiveness Committee Breakout - Challenges Faced by Mathematicians from Underrepresented Groups 12:00 - 12:30 Equity, Diversity and Inclusiveness Committee Breakout - Challenges Faced by Parents of Young Children 12:30 - 13:30 Duncan Dauvergne (Princeton), Doctoral Prize, The Archimedean limit of random sorting networks 13:30 - 14:00 Break 13:30 - 14:00 Equity, Diversity and Inclusiveness Committee Breakout - Supporting Early Career Researchers 13:30 - 14:00 Equity, Diversity and Inclusiveness Committee Breakout - Supporting LGBTQ+ Mathematicians 14:00 - 14:20 Laurent Mackay (McGill University), Mathematical biology 14:00 - 14:30 Amenda Chow and Iain Moyles (York), Creative Assessments in the COVID-19 times, Choose your own adventure in a multi-variable calculus course for engineering students 14:00 - 14:30 Asma Hassanezhad (Bristol), Geometric and Computational Spectral Theory, Eigenvalue and multiplicity bounds for the mixed Steklov problem 14:00 - 14:30 Winston Heap (Max Planck), Probability in Number Theory, Random multiplicative functions and a model for the Riemann zeta function 14:00 - 14:30 Abdelmonem Ibrhaim (Alzahr University), Optimization and Data Science, Binary whale optimization algorithm for feature selection 14:00 - 14:30 Matthew Kennedy (University of Waterloo), Operator algebras, (semi)groups, and dynamics, Amenability, proximality and higher order syndeticity 14:00 - 14:30 Amir Mohammadi (University of California, San Diego), Equidistribution on Arithmetic Manifolds, Effective results in homogeneous dynamics 14:00 - 14:30 Thomas Ransford (Université Laval), Recent Advances in Harmonic and Complex Analysis, A Gleason-Kahane-Żelazko theorem for reproducing kernel Hilbert spaces. 
14:00 - 15:00 Tony Pantev (Penn), Derived Categories and (Non)commutative Algebraic Geometry 14:20 - 14:40 Marc Roussel (University of Lethbridge), Mathematical biology, Dynamics-preserving model reduction using bipartite-graph representations of biochemical systems 14:30 - 15:00 Aleksandr Aravkin (University of Washington), Optimization and Data Science, A Robust Risk Score for Evaluating Evidence in Global Health 14:30 - 15:00 Almaz Butaev (University of Calgary), Recent Advances in Harmonic and Complex Analysis, On geometric preduals of jet spaces on subsets of $\mathbb{R}^n$ 14:30 - 15:00 Carolyn Gordon (Dartmouth), Geometric and Computational Spectral Theory, Comparing Hodge spectra of manifolds and orbifolds: Part 1 14:30 - 15:00 Asaf Katz (University of Michigan), Equidistribution on Arithmetic Manifolds, An application of Margulis’ inequality to effective equidistribution 14:30 - 15:00 Frédéric Ouimet (Caltech), Probability in Number Theory 14:30 - 15:00 Dan Wolczuk and Paul McGrath (Waterloo), Creative Assessments in the COVID-19 times, Using Virtual Escape Rooms to Promote Student-Student Interactions 14:30 - 15:00 Dilian Yang (University of Windsor), Operator algebras, (semi)groups, and dynamics, Zappa-Sz\'ep Actions of Groups on Product Systems 14:40 - 15:00 Khanh Dao Duc (University of British Columbia), Mathematical biology, A study of stochastic dynamics of mRNA translation and their impact across biological scales 15:00 - 15:20 Brian Merchant (University of British Columbia), Mathematical biology, Using a Rho GTPase based model of cell polarization to explain group advantage in chemotaxis 15:00 - 15:30 Shai Evra (Princeton University), Equidistribution on Arithmetic Manifolds, Ramanujan Conjecture and the Density Hypothesis 15:00 - 15:30 Sean Fitzpatrick (Lethbridge), Creative Assessments in the COVID-19 times, Deconstructing Exams for Remote Learning 15:00 - 15:30 Elizabeth Gillaspy (University of Montana), Operator algebras, (semi)groups, and dynamics, Homotopy of product systems, and K-theory for higher-rank graphs 15:00 - 15:30 Katie Gittins (Durham), Geometric and Computational Spectral Theory, Comparing Hodge spectra of manifolds and orbifolds: Part 2. 15:00 - 15:30 Warren Hare (UBC), Optimization and Data Science, Imaginary Derivative Free Optimization 15:00 - 15:30 Pierre-Olivier Parisé (Université Laval), Recent Advances in Harmonic and Complex Analysis, Cesàro summability of Taylor series in weighted Dirichlet spaces 15:00 - 15:30 Cameron Stewart (Waterloo), Probability in Number Theory, Counting solvable S-unit equations 15:00 - 16:00 Katrina Honigs (Oregon), Derived Categories and (Non)commutative Algebraic Geometry 15:20 - 15:40 Justin Tzou (Macquarie University), Mathematical biology, Localized patterns and narrow escape problems in more general geometries 15:30 - 16:00 Samantha-Jo Caetano (Toronto), Creative Assessments in the COVID-19 times, Trump vs. Biden - who will win? 
15:30 - 16:00 Sebastian Dominguez (Simon Fraser), Geometric and Computational Spectral Theory 15:30 - 16:00 Anna Duwenig (University of Wollongong), Operator algebras, (semi)groups, and dynamics, Cartan subalgebras for non-principal twisted groupoid C*-algebras 15:30 - 16:00 Mikolaj Fraczyk (The University of Chicago), Equidistribution on Arithmetic Manifolds, Density hypothesis in horizontal families 15:30 - 16:00 Richard Gottesman (Queens), Probability in Number Theory 15:30 - 16:00 Larissa Richards (University of Toronto), Recent Advances in Harmonic and Complex Analysis, On the rate of convergence of discrete interfaces to SLE. 15:30 - 16:00 Xiaoping Shi (Thompson River University), Optimization and Data Science, Graph-based change-point test 16:00 - 16:30 Jean-Marie de Koninck (Laval), Probability in Number Theory, Consecutive integers divisible by a power of their largest prime factor 16:00 - 16:30 Ben Hayes (University of Virginia), Operator algebras, (semi)groups, and dynamics, A random matrix approach to the Peterson-Thom conjecture 16:00 - 16:30 Thomas Humphries (University of Washington Bothell), Optimization and Data Science, Unrolled iterative algorithm for CT image reconstruction with learned penalty term 16:00 - 16:30 Antoine Metras (Montréal), Geometric and Computational Spectral Theory, Steklov extremal metrics in higher dimension 16:00 - 16:30 Nicholas Miller (University of California, Berkeley), Equidistribution on Arithmetic Manifolds, Geodesic submanifolds of hyperbolic manifolds 16:00 - 16:30 Jerrod Smith (Calgary), Creative Assessments in the COVID-19 times, Peer and Open-ended Assessment in Linear Algebra and Intro Proof Courses 16:00 - 16:30 Ignacio Uriarte-Tuero (Michigan State University), Recent Advances in Harmonic and Complex Analysis, Two weight norm inequalities for singular integrals in $\mathbb{R}^n$ 16:00 - 17:00 Sabin Cautis (UBC), Derived Categories and (Non)commutative Algebraic Geometry 16:30 - 17:00 Monica Gabriela Cojocaru (University of Guelph), Optimization and Data Science 16:30 - 17:00 Tyrone Crisp (University of Maine), Operator algebras, (semi)groups, and dynamics, An imprimitivity theorem for Hilbert modules 16:30 - 17:00 Alex Kontorovich (Rutgers University), Equidistribution on Arithmetic Manifolds, Applications of Thin Orbits 16:30 - 17:00 Anton Mosunov (Waterloo), Creative Assessments in the COVID-19 times, Let’s Think Together: Using Oral Assessments to Develop Students’ Thought Process 16:30 - 17:00 Ram Murty (Queens), Probability in Number Theory, An "all-purpose" Erdos-Kac theorem 16:30 - 17:00 William Verreault (Université Laval), Recent Advances in Harmonic and Complex Analysis, Nonlinear Oscillatory Expansions of holomorphic functions 17:00 - 18:00 Equity, Diversity and Inclusiveness Committee Panel / Social 17:00 - 17:30 James Wilson (University of Vermont), Recent Advances in Harmonic and Complex Analysis, Discretization of adapted functions 17:30 - 18:00 Javad Mashreghi (Université Laval), Recent Advances in Harmonic and Complex Analysis, Outer Functions and the Schur Class Tuesday December 8 9:00 - 9:30 Dan Ursu (University of Waterloo), Operator algebras, (semi)groups, and dynamics, Characterizing traces on crossed products of noncommutative C*-algebras 9:30 - 10:00 Cecile Armana (Université de Franche-Comté), Computations with Arithmetic Groups, Sturm bounds for Drinfeld-type automorphic forms over function fields 9:30 - 10:00 Hung-Chang Liao (University of Ottawa), Operator algebras, (semi)groups, and dynamics, Almost 
finiteness, comparison, and tracial Z-stability 10:00 - 10:30 Neil Dummigan (University of Sheffield), Computations with Arithmetic Groups, Congruences involving non-parallel weight Hilbert modular forms 10:00 - 10:30 Maria Grazia Viola (Lakehead University), Operator algebras, (semi)groups, and dynamics, Regularities properties of Cuntz-Pimsner algebras associated to C*-correspondences over commutative C*-algebras 10:30 - 11:00 Johannes Christensen (KU Leuven), Operator algebras, (semi)groups, and dynamics, A new approach to describing KMS states on $C^{*}$-algebras. 10:45 - 11:15 Fang-Ting Tu (Louisiana State University), Computations with Arithmetic Groups, A Geometric Interpretation of a Whipple's $_7F_6$ Formula 11:00 - 11:30 Arvind Ayyer (Indian Institute of Science), Enumerative Combinatorics, Toppleable permutations and excedances 11:00 - 11:30 Kari Eifler (Texas A&M University), Operator algebras, (semi)groups, and dynamics, Non-local games and quantum metric spaces 11:00 - 11:30 Lam Pham (Hebrew University), Equidistribution on Arithmetic Manifolds, Arithmetic Groups and the Lehmer conjecture 11:00 - 11:30 Orit Raz (The Hebrew University of Jerusalem), Additive Combinatorics and Discrete Geometry, Dimension-expanding polynomials and the discretized Elekes-R\'onyai theorem 11:00 - 11:30 Mario Schulz (Quenn Mary U. of London), Geometric and Computational Spectral Theory, Free boundary minimal surfaces in the unit ball 11:00 - 11:40 Xiudi Tang (Toronto), Symplectic Topology, Symplectic ray removal 11:00 - 12:00 Ellen Kirkman (Wake Forest), Derived Categories and (Non)commutative Algebraic Geometry, Degree bounds for Hopf actions on Artin-Schelter regular algebras 11:15 - 11:45 Mark McConnell (Princeton University), Computations with Arithmetic Groups 11:30 - 12:00 Benjamin Bogosel (Polytechnique Paris), Geometric and Computational Spectral Theory, Shape optimization of the Steklov eigenvalues under various constraints 11:30 - 12:00 Ilse Fischer (University of Vienna), Enumerative Combinatorics, Bijective proofs of (skew) Schur polynomial factorizations 11:30 - 12:00 Arie Levit (Yale University), Equidistribution on Arithmetic Manifolds, Quantitative weak uniform discreteness 11:30 - 12:00 Boyu Li (University of Victoria), Operator algebras, (semi)groups, and dynamics, The Zappa-Szép product of a Fell bundle by a groupoid 11:30 - 12:00 Alexia Yavicoli (University of St Andrews), Additive Combinatorics and Discrete Geometry, Patterns in thick compact sets 11:50 - 12:30 Lara Suarez Lopez (Bochum), Symplectic Topology, On the rigidity of Legendrian cobordisms 12:00 - 12:30 Jeffrey Galkowski (UCL), Geometric and Computational Spectral Theory, Geodesic beams and Weyl remainders 12:00 - 12:30 Mathilde Gerbelli-Gauthier (McGill University), Computations with Arithmetic Groups, Limit multiplicity of non-tempered representations and endoscopy. 12:00 - 12:30 Mathilde Gerbelli-Gauthier (McGill University), Equidistribution on Arithmetic Manifolds, Limit multiplicity of non-tempered representations and endoscopy. 12:00 - 12:30 Helen Jenne (Université de Tours), Enumerative Combinatorics, Double-dimer condensation and the dP3 Quiver 12:00 - 12:30 Sophie Stevens (Johann Radon Institute for Computational and Applied Mathematics), Additive Combinatorics and Discrete Geometry, The Elekes-Szabó Problem and the Uniformity Conjecture 12:30 - 13:00 Break 13:00 - 13:20 John Rinzel (New York University), Mathematical biology, A neuronal model for learning to keep a rhythmic beat. 
13:00 - 13:20 Peter Taylor (peter.taylor@queensu.ca), The legacy of Mindstorms, Let’s invite Seymour into our calculus classroom. 13:00 - 13:30 Daniel Di Benedetto (University of British Columbia), Additive Combinatorics and Discrete Geometry, Discretised point-line incidences and the dimension of Besicovitch sets 13:00 - 13:30 Dave Hewett (UCL), Geometric and Computational Spectral Theory, Acoustic scattering by fractal screens 13:00 - 13:30 Marni Mishna (Simon Fraser University), Enumerative Combinatorics, Enumerating excursions on Cayley graphs 13:00 - 13:30 Will Sawin (Columbia University), Equidistribution on Arithmetic Manifolds, The mixing conjecture over function fields 13:00 - 14:00 Colin Ingalls (Carleton), Derived Categories and (Non)commutative Algebraic Geometry, Explicit coverings of families of elliptic surfaces by squares of curves 13:10 - 13:50 Qun Wang (Toronto), Symplectic Topology, Choreographies in the N-Vortex Problem 13:20 - 13:40 Alfonso Gracia-Saz (alfonso@math.toronto.edu), The legacy of Mindstorms, Playing with Desmos in the classroom 13:20 - 13:40 David Holcman (Institut de Biologie École Normale Supérieure), Mathematical biology 13:30 - 14:00 Brandon Hanson (University of Georgia), Additive Combinatorics and Discrete Geometry, A better-than-Plunnecke bound for $A + 2A$ 13:30 - 14:00 Junehyuk Jung (Brown University), Equidistribution on Arithmetic Manifolds, Intersections of geodesics on the modular surface 13:30 - 14:00 Chiu-Yen Kao (Claremont Mckenna College), Geometric and Computational Spectral Theory, Computation of free boundary minimal surfaces via extremal Steklov eigenvalue problems 13:30 - 14:00 Joel Lewis (George Washington University), Enumerative Combinatorics, Hurwitz numbers for reflection groups 13:40 - 14:00 Andrew McEachern (andrewm6@yorku.ca), The legacy of Mindstorms, Tournaments in a Proofs Class 13:40 - 14:00 Lawrence Oprea (McGill University), Mathematical biology, Simulation and analysis of white matter in a variably hypomyelinated transgenic mouse model 14:00 - 14:30 Break 14:00 - 14:20 Bernardo Galvao-Sousa (beni@math.toronto.edu), The legacy of Mindstorms, Open ended modelling problems 14:00 - 14:20 Charles S. Peskin (New York University – Courant), Mathematical biology, Interaction of Facilitation and Depression in Synaptic Transmission 14:00 - 14:30 Lindsay Dever (Bryn Mawr College), Equidistribution on Arithmetic Manifolds, Ambient prime geodesic theorems on compact hyperbolic 3-manifolds 14:00 - 14:30 Alexandre Girouard (Laval), Geometric and Computational Spectral Theory, Planar domains with prescribed perimeter and large Steklov spectral gap must collapse to a point 14:00 - 14:30 Jongchon Kim (University of British Columbia), Additive Combinatorics and Discrete Geometry, Estimates for some geometric maximal functions associated with a set of directions 14:00 - 15:00 Alicia Lamarche (Utah), Derived Categories and (Non)commutative Algebraic Geometry 14:10 - 14:50 Shira Tanny (Tel Aviv), Symplectic Topology, The Poisson bracket invariant: elementary and hard approaches. 
14:20 - 14:40 Saeed Farjami (Univeristy of Surrey), Mathematical biology, Non-sequential Spike Adding in Cerebellar Stellate Cells 14:20 - 14:40 Sarah Mayes-Tang (smt@math.toronto.edu), The legacy of Mindstorms, Using Stories to Learn Math in A First-Year Seminar 14:30 - 15:00 Jonathan Tidor (Massachusetts Institute of Technology), Additive Combinatorics and Discrete Geometry, Joints of Varieties 14:30 - 15:00 Matthew Young (Texas A&M University), Equidistribution on Arithmetic Manifolds, Moments and hybrid subconvexity for symmetric-square L-functions 14:40 - 15:00 Igor Belykh (Georgia State University), Mathematical biology, When repulsive coupling promotes synchronization of bursting neurons 14:40 - 15:00 General Discussion, The legacy of Mindstorms 15:00 - 15:20 Romain Veltz (INRIA-Sophia Antipolis), Mathematical biology, Mean field study of stochastic spiking neural networks 15:30 - 16:00 Wenyu Pan (The University of Chicago), Equidistribution on Arithmetic Manifolds, Exponential mixing of geodesic flows for geometrically finite hyperbolic manifolds with cusps 15:30 - 16:10 Pranav Chakravarthy (Western Ontario), Symplectic Topology, Homotopy type of equivariant symplectomorphisms of rational ruled surfaces. 15:30 - 16:30 Dylan Allegretti (UBC), Derived Categories and (Non)commutative Algebraic Geometry 16:00 - 16:30 Thomas Hille (Northwestern University), Equidistribution on Arithmetic Manifolds, Bounds for the Least Solution of Homogeneous Quadratic Diophantine Inequalities. 16:00 - 16:30 Tongou Yang (University of British Columbia), Additive Combinatorics and Discrete Geometry, Uniform decoupling in l2 for polynomials 16:20 - 17:00 Cheng Yang (Toronto), Symplectic Topology, Symplectic reduction and perturbation theory 16:30 - 17:00 Alireza Salehi Golsefidy (University of California, San Diego), Equidistribution on Arithmetic Manifolds, Two new concepts for compact groups: Spectral independence and local randomness 16:30 - 17:00 Caroline Terry (Ohio State University), Additive Combinatorics and Discrete Geometry 16:30 - 17:30 Max Lieblich (Washington), Derived Categories and (Non)commutative Algebraic Geometry 17:00 - 17:30 Weikun He (Korea Institute of Advanced Study), Additive Combinatorics and Discrete Geometry, Sum-product in representations of Lie groups
https://christopherdare.com/blogposts/manifoldparti
# Manifolds I

#### Introduction

When someone uses the term 'theoretical mathematics', the first thing that pops into many people's minds is either a blackboard riddled with dozens of equations or the idea of some higher-dimensional amorphous shape. The first idea could not be more true, but the second is a bit of a stereotype, since it typically describes manifolds. So what really is a manifold? A topological manifold of dimension $$n$$ (also called an $$n$$-manifold) is a topological space $$X$$ such that:

1. $$X$$ is locally Euclidean
2. $$X$$ is Hausdorff
3. $$X$$ is second-countable

I realize that probably 80% of the words in that definition mean absolutely nothing to the average reader, so I decided it would be a good idea to dedicate the first post in this series to explaining the basics. Now it turns out that basically every technical term in that definition is part of a branch of mathematics known as topology. Therefore, the process of explaining the basics is essentially familiarizing the reader with topology. For mathematicians and physicists, the significance of a topology is that it generalizes the concept of what open sets and continuity are over an arbitrary space. To shed a little light on what I'm getting at: what exactly makes the set $$(-1, 1)$$ open? Moreover, what makes the set $$[-1, 1]$$ closed? Is the set $$\{1, 2\}$$ (i.e. the set containing only the numbers 1 and 2) open, closed, or neither? Unfortunately, the vast majority of the population would say the first is open because it's surrounded by parentheses and the second is closed because it is surrounded by brackets (I'm not really sure what the general population's consensus is for the third, but you'll find out if you manage to retain interest throughout the article). So why do you or should you care about the study of open sets? Well, for the first part, you probably don't (yet), but for the second part my answer is this: elementary topology by itself is a unique area of mathematics whose logic dances between the elegance of algebra and the tenacity of analysis. Alone, elementary topology seems like a trial-and-error of reasoning to deduce the greatest structure from the fewest assumptions (well, that's actually all of pure mathematics, but this blog post is simply focusing on one branch). However, with a little more development, topology leads to paramount subjects such as algebraic topology, differential geometry, and algebraic geometry, which are crucial to understanding advanced topics in relativity and even string theory. I would love the opportunity to discuss string theory and enumerative geometry with you all, but it's going to be a bit of a long road to get there. Topology must come before manifolds, manifolds are required before relativity, somewhere along the way there should be a discussion of Chern classes, and after that — well, who's really reading at that point? This is likely going to be a series of posts for this blog, so when it comes to answering questions I may choose to answer some in the comments while answering others in subsequent posts. For those of you who are truly acquainted with the subject: feel free to speak up and correct me where I'm wrong. For those of you who have no experience in the subject: everyone who ever told you there's no such thing as a stupid question was probably born before search engines became relevant (seriously though, Google first then ask — I'm a 22-year-old graduate student, not a professor).
#### The Problem with Infinity

Most of the content I plan to talk about later in this article will likely be unfamiliar to the reader; however, in order to correctly introduce such topics, I must first delve deeper into a concept that many people know well: infinity. The reason I say concept and not number is because that's exactly what it is — infinite is an adjective which describes asymptotic behavior for functions (however, an infinite set is defined to be a set with a proper subset having the exact same cardinality). It's an unfortunate reality that many high school teachers or freshman calculus professors introduce infinity as if it were a number through notations like $$\lim_{x \to \infty} \sqrt{x} = \infty$$. The equals sign in the equation often leads people to believe that infinity can be treated numerically, and thus interchanged with other numbers in many equations. This is not the case for $$\mathbb{R}$$. A more accurate notation that I prefer instead is $$\sqrt{x} \to \infty$$ as $$x \to \infty$$, since it emphasizes the fact that the function trends along an asymptote. As a brief aside, one can bypass the conceptual behavior of infinity described above by adopting the extended real line, $$\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, \infty\}$$, or the Riemann sphere $$\overline{\mathbb{C}} = \mathbb{C} \cup \{ \infty\}$$. For both of these sets, the symbol $$\infty$$ no longer describes asymptotic behavior but an actual element of the set; in $$\overline{\mathbb{R}}$$ it satisfies $$x \lt \infty$$ for all $$x \in \mathbb{R}$$. In this case, arithmetic operations such as $$x \cdot \infty = \infty$$ (for $$x \gt 0$$) and, on the Riemann sphere, $$\frac{x}{0} = \infty$$ (for $$x \neq 0$$) are well-defined. It is a bit difficult for me to answer why, in general, a high school or college student learning calculus should not simply always use the extended real line. The extended real line is widely accepted in measure theory, since many sets seem to naturally have 'infinite' volume, while the Riemann sphere is the primary set of focus in complex analysis. I can say, however, that the general topology student should not get $$\mathbb{R}$$ and $$\overline{\mathbb{R}}$$ confused — one is compact and one is not. Though only a concept at this point, there must be some sort of way for us to tell which sets are 'infinitely more infinite' than others; the set $$\mathbb{R}$$ is clearly larger than the set of whole numbers $$\mathbb{N}$$, yet both are infinite. In a sense, we need some sort of number system to keep track of our infinities. This is where cardinality comes in. For finite sets, cardinality is simply a natural number no different from the size of a set. For infinite sets, however, it no longer makes sense for us to count upwards to determine the number of elements. Instead, what mathematicians use are bijections. Think about it — if a function $$f: X \to Y$$ is surjective, every element of the codomain $$Y$$ has a preimage in $$X$$, so the cardinality of $$X$$ must be greater than or equal to the cardinality of $$Y$$. On the other hand, if a function is injective, no two distinct elements of the domain $$X$$ map to the same element in $$Y$$, so the cardinality of $$Y$$ must be greater than or equal to that of $$X$$. Therefore, if we can find a bijection between two sets, then we know the sets must have equal cardinality. By convention, the baseline cardinality for infinite sets is the cardinality of the natural numbers $$\mathbb{N} = \{1, 2, 3, \dots\}$$, denoted by the character $$\aleph_0$$ (pronounced aleph null).
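Since everything in this post is paper-and-pencil, here is a minimal Python sketch (my own addition, not the author's) of the injective/surjective/bijective bookkeeping just described, checked by brute force on finite sets:

```python
def is_injective(f, domain):
    """No two domain elements share an image."""
    images = [f[x] for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Every codomain element is hit by some domain element."""
    return {f[x] for x in domain} == set(codomain)

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

# A bijection between two three-element sets witnesses equal cardinality:
f = {1: 'a', 2: 'b', 3: 'c'}
print(is_bijective(f, {1, 2, 3}, {'a', 'b', 'c'}))   # True

# Dropping injectivity breaks the pairing:
g = {1: 'a', 2: 'a', 3: 'c'}
print(is_bijective(g, {1, 2, 3}, {'a', 'b', 'c'}))   # False
```

For infinite sets one of course cannot enumerate, which is exactly why the explicit bijections in the next paragraphs matter.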
If a set is finite or has a bijection with $$\mathbb{N}$$, it is said to be countable. For example, take the integers $$\mathbb{Z} = \{\dots, -2, -1, 0, 1, 2, \dots\}$$. Then we can easily find a bijection $$f: \mathbb{N} \to \mathbb{Z}$$ defined by $$n \mapsto (-1)^n \lfloor n/2 \rfloor$$, which basically just bounces ad infinitum in both directions (i.e. $$0, 1, -1, 2, -2, 3,\dots$$). Another example that is important for me to discuss is the countability of the rational numbers $$\mathbb{Q}$$ (i.e. numbers which can be represented as fractions). To start off, consider $$\mathbb{N} \times \mathbb{N}$$ and think of the first coordinate as the numerator and the second coordinate as the denominator. If we try to enumerate the positive rationals starting with the first row and going to the right in anticipation of a snake pattern (i.e. winding back around at the end of the row), we will never make it to the second row! In fact, it is impossible to tackle the entirety of $$\mathbb{N} \times \mathbb{N}$$ with any sort of snake pattern, as tackling even a single infinite row or column would already use up the entire enumeration. In the late nineteenth century, the father of set theory, Georg Cantor, devised a way to traverse the positive rationals without getting lost in the infinity of any one row or column: proceed along the diagonal lines connecting the coordinates $$(n, 1)$$ to $$(1, n)$$ for each $$n \in \mathbb{N}$$. Although the sizes of the diagonal lines grow linearly with each iteration, for a fixed $$n \in \mathbb{N}$$ the length is always finite; therefore, we are able to cover $$\mathbb{N} \times \mathbb{N}$$ as $$n \to \infty$$. Now if we combine this logic with the countability of the integers, the transitive property tells us that there exists a bijection from $$\mathbb{N}$$ to $$\mathbb{Z} \times \mathbb{N}$$. This is effectively all we need, as the rationals are a set of equivalence classes over $$\mathbb{Z} \times \mathbb{N}$$ (i.e. notice how the diagonal elements along $$(n, n)$$ all represent the same number, namely $$1$$). The last thing I want to do is show that the continuum, $$\mathfrak{c}$$, has greater cardinality than $$\aleph_0$$. This problem was also solved by Georg Cantor in the late nineteenth century. Instead of even considering the full set of real numbers $$\mathbb{R}$$, simply consider the set of infinite binary expansions of decimals in $$[0, 1]$$. If we can show that the set $$[0, 1] \subset \mathbb{R}$$ is larger than $$\mathbb{N}$$, then obviously the result will follow for $$\mathbb{R}$$. Before considering the case of infinite binary decimal expansions, think about $$n$$ distinct binary decimals that terminate after $$n$$ digits. If we line them all up, take the $$i^{th}$$ bit from the $$i^{th}$$ decimal, and negate it (i.e. $$0$$ becomes $$1$$ and $$1$$ becomes $$0$$), then we have created a completely new decimal! Whenever we negate a single bit from one of our decimals, no matter what our new number is, it must be distinct from the decimal we just negated. Since we are negating information from each and every one of the decimals available, the new decimal is distinct from all others. The idea for the infinite case is the same as it is for the finite case: suppose we had managed to line up all of the infinite decimal expansions into an infinite list. If we take the $$n^{th}$$ bit from our $$n^{th}$$ decimal, negate it, and proceed as $$n \to \infty$$, then the resulting decimal could not possibly be contained in the original collection we were looking at.
Therefore, every attempted enumeration of the infinite binary decimal expansions will fail to cover all of $$[0,1]$$, and thus $$\mathbb{R}$$ is uncountable. I will not show the proof here, but it turns out that the power set (set of all subsets) of $$\mathbb{N}$$ has a bijective correspondence with $$\mathbb{R}$$. For a finite set $$X$$ with $$n$$ elements, the cardinality of the power set of $$X$$, $$\mathcal{P}(X)$$, is exactly $$2^n$$; therefore, it is common to see the continuum denoted by $$\mathfrak{c} = 2^{\aleph_0}$$. This brings us to one of the most famous problems in the foundations of mathematics: the continuum hypothesis. Originally introduced in 1878 by our friend Georg Cantor, the continuum hypothesis states that there does not exist any set whose cardinality is strictly between $$\aleph_0$$ and $$2^{\aleph_0} = \mathfrak{c}$$. (Gödel and Cohen later showed that this statement can neither be proved nor disproved from the standard ZFC axioms of set theory.) That's pretty cool, huh? Through some rudimentary set theory tricks and a few diagrams, we have proved that the philosophical idea of being 'infinitely more infinite' corresponds to the mathematical idea of power sets and exponentiation.

#### What is a Topology?

The study of topological spaces (as they are defined today) became a large area of interest around the mid-nineteenth century, when mathematicians such as Gauss and Riemann built upon Euler's earlier studies on surfaces. This work would eventually spark a huge interest, leading to a vast amount of research on homology and cohomology groups in the mid-twentieth century by mathematicians such as Heinz Hopf, Armand Borel, and Frank Adams. The idea of a topological space was conceived as an archetype of Euclidean space, so that there is just enough structure to support continuous functions; that is, a topological space was constructed to be the bare minimum and crux of continuity. Before I go any further, this seems like an appropriate time to introduce an important theorem:

###### Theorem: A map $$f: \mathbb{R}^n \to \mathbb{R}^m$$ is continuous if and only if $$f^{-1}(V)$$ is open in $$\mathbb{R}^n$$ for every open set $$V \subset \mathbb{R}^m$$.

To give a brief aside for those of you who are not familiar with theoretical mathematics: there is no substantial reason for you to look at the proofs or become too worried when you do not understand the mechanics of a proof; however, proofs are the mortar and pestle of higher-level mathematics. The beauty of theoretical mathematics lies in assuming the least amount of structure and deducing staggering truths, which ultimately are innate to the laws of everyday logic. Sure, you'll likely admire the finished product by the end of the blog series — but you will not appreciate the work that was put into building it. Addressing the theorem above, I withheld an important property of open sets in Euclidean space: a set $$U$$ is open if and only if for every point in that set, there exists a ball of positive radius centered at that point also contained in the set. For example, take the set $$(0, 1)$$. You could pick a small number incredibly close to $$0$$, say $$10^{-1000}$$; yet, I can always pick a radius even smaller, say $$\frac{1}{3}\cdot 10^{-1000}$$, and the ball of such radius centered at $$10^{-1000}$$ is still contained in $$(0, 1)$$, since its boundary is $$\frac{2}{3}\cdot 10^{-1000}$$ away from $$0$$. This is the idea of open sets in Euclidean space that the average reader is probably familiar with — I can creep as close to the boundary as I want, but I will never be able to look off the edge.
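To make the 'creep toward the boundary' picture concrete, here is a small Python sketch (my own illustration, not part of the original post) that produces an explicit witness radius for any point of $$(0, 1)$$:

```python
def witness_radius(x, a=0.0, b=1.0):
    """For a < x < b, return r > 0 such that (x - r, x + r) stays inside
    (a, b): half the distance from x to the nearest endpoint always works."""
    assert a < x < b, "x must lie strictly inside (a, b)"
    return min(x - a, b - x) / 2

x = 1e-300                              # absurdly close to the left endpoint
r = witness_radius(x)
print(0.0 < x - r and x + r < 1.0)      # True: the ball stays inside (0, 1)
```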
Alright, now it's time to formally define a topology. Suppose we have some mathematical set $$X$$. In its purest form, a topology $$\tau$$ is a collection of subsets of $$X$$ which satisfies the three topological axioms:

1. The null set, $$\emptyset$$, and the entire set, $$X$$, are both elements of $$\tau$$.
2. The arbitrary union of (i.e. combination of however many) sets in $$\tau$$ is also in $$\tau$$.
3. The intersection of finitely many elements of $$\tau$$ is in $$\tau$$ (if you were to actually look the third property up in most textbooks, it would only say the intersection of two sets, but finitely many is a direct result by induction).

A set $$V \subset X$$ is said to be open if $$V \in \tau$$. Lastly, to generalize the theorem above, a function $$f: X \to Y$$ between two topological spaces $$(X, \tau_X)$$ and $$(Y, \tau_Y)$$ is said to be continuous if for every $$U \in \tau_Y$$, the preimage $$f^{-1}(U)$$ is an element of $$\tau_X$$. For example, say we have two spaces $$X = \{a, b, c \}$$ and $$Y = \{ d, e, f \}$$, along with topologies assigned to each of them: $$\tau_X = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, X \}$$ and $$\tau_Y = \{ \emptyset, \{ e \}, Y \}$$ (note that both of these topologies satisfy the topological axioms). Then every mapping $$h: X \to Y$$ is continuous except when $$h^{-1}(e) = \{c\}$$, $$h^{-1}(e) = \{b, c\}$$, or $$h^{-1}(e) = \{a, c \}$$ (note that if no element of $$X$$ maps to $$e$$ then we are fine, since that would mean $$h^{-1}(e) = \emptyset$$, which is also open). So how does this new definition of discontinuity fit into our old definition of discontinuity? Consider the typical unit jump function $$f: \mathbb{R} \to \mathbb{R}$$ defined by $$f(x) = \begin{cases}0, & x \lt 0 \\ 1, & x \geq 0 \end{cases}$$ Take the open ball of radius $$0.5$$ centered at $$1$$, $$B_{0.5}(1)$$. If you visualize this open set lying on the vertical axis (since we want to consider this open set in the codomain, not the domain), then the preimage is $$f^{-1}\big(B_{0.5}(1)\big) = [0, \infty)$$, which is not an open set. Alright, so now that you are seeing that open sets are pretty much whatever you define them to be (as long as the topology satisfies the three axioms), where do closed sets fit into all this? Simple: the complement of an open set is a closed set. Referring to the example above with $$X = \{a, b, c \}$$ and $$\tau_X = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, X \}$$, we just need to take the complement of every open set: $$\{ X - \emptyset, X - \{ a \}, X - \{ b \}, X - \{a, b\} , X - X\} \\= \{ X, \{ b, c \}, \{ a, c \}, \{ c \}, \emptyset \}$$ But hold on — this would mean that the entire set $$X$$ and the empty set $$\emptyset$$ are both open and closed, which doesn't make sense whatsoever! That would be correct. To make sense of it, a nice property of closed sets is that sequences which converge inside a closed set must also have their limit point contained in that set. Consider the real numbers $$\mathbb{R}$$, which the everyday reader has been working with their whole life. If you have a sequence in $$\mathbb{R}$$ and you know that sequence converges, then obviously the limit point is going to be in $$\mathbb{R}$$ also; hence, $$\mathbb{R}$$ must be closed. However, $$\mathbb{R}$$ also has the property that I mentioned before, where I can try to creep close to the edge but can never look over (i.e. I can always find an open set surrounding me). Therefore, $$\mathbb{R}$$ must also be open.
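Because the spaces in the example above are finite, every claim in this section can be checked by exhaustive search. The following Python sketch (mine, not the author's) encodes the two toy topologies, verifies the axioms, and counts the continuous maps $$h: X \to Y$$ by brute force:

```python
from itertools import product

X, Y = frozenset('abc'), frozenset('def')
tau_X = {frozenset(), frozenset('a'), frozenset('b'), frozenset('ab'), X}
tau_Y = {frozenset(), frozenset('e'), Y}

def is_topology(space, tau):
    """Check the three axioms; for a finite tau, pairwise unions and
    intersections suffice (arbitrary ones then follow by induction)."""
    if frozenset() not in tau or space not in tau:
        return False
    return all(U | V in tau and U & V in tau for U, V in product(tau, repeat=2))

def is_continuous(h, tau_dom, tau_cod):
    """h is a dict; continuity = every open set has an open preimage."""
    return all(frozenset(x for x in h if h[x] in U) in tau_dom for U in tau_cod)

print(is_topology(X, tau_X), is_topology(Y, tau_Y))   # True True

maps = [dict(zip('abc', vals)) for vals in product('def', repeat=3)]
n_cont = sum(is_continuous(h, tau_X, tau_Y) for h in maps)
print(n_cont, 'of', len(maps))   # 19 of 27: exactly the 8 maps with a bad
                                 # preimage of {e} fail, as claimed above
```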
#### Connectedness

Now let me introduce the idea of a separation: imagine we took the real number line $$\mathbb{R}$$ and cut it in half at some point $$x$$ — what is left over could then be represented as the union of two open sets, i.e. $$(-\infty, x) \cup (x, \infty)$$. We went from a connected line to a disconnected line, and the only thing that changed is that we could represent the entire space as the union of two nonempty open sets. This is exactly the generalization of a space being connected or disconnected: a space is disconnected if there exists a separation (i.e. the space can be represented as the union of two nonempty open sets which do not intersect). A space is connected if it is not disconnected (big surprise there). Pulling the two previous paragraphs together brings us to an important theorem:

###### Theorem: A topological space $$X$$ is connected if and only if the only sets that are both open and closed are the empty set $$\emptyset$$ and the entire space $$X$$.

Personally, I think that's pretty cool; we started out with only three axioms for what a topological space is, and we built up enough to define what it means for any space to be connected. I just want to introduce a few more theorems about connectedness and then we'll be good to move on to the next section:

###### Theorem: The image of a connected space $$X$$ under a continuous map $$f: X \to Y$$ is connected.

We've gone over almost ALL elements of this proof, except a very subtle bit of elementary set theory that I snuck in there: the preimage of a union is the union of preimages, the preimage of an intersection is the intersection of preimages, and the image of a union is the union of the images (however, it is not true in general that the image of an intersection is the intersection of the images). At this point, we surprisingly have enough information to prove the Intermediate Value Theorem for real-valued functions (you could technically generalize the IVT a little more so that it maps into a totally ordered set endowed with the order topology, but the proof is literally the same).

###### Intermediate Value Theorem: Let $$X$$ be a connected topological space and let $$f: X \to \mathbb{R}$$ be continuous. Choose $$a , b \in X$$ such that $$f(a) \lt f(b)$$ and pick some real number $$d$$ with $$f(a) \lt d \lt f(b)$$. Then there exists some point $$c \in X$$ such that $$f(c) = d$$.

###### Lemma: The union of connected sets with a point in common is connected.

We now introduce a fundamental concept in the study of topology: a homeomorphism. The definition of a homeomorphism is fairly straightforward: it is a continuous bijection with continuous inverse. To break it down, recall that a continuous function has the property that preimages of open sets are open; well, since the inverse is continuous as well, the function sends open sets to open sets in both directions. Moreover, since the function is a bijection, we know that we have a nice pairing between open sets! In other words, a homeomorphism says that two topological spaces essentially have the same structure. It is now time for us to introduce our last theorem on connectedness:

###### Theorem: The product of connected sets is connected.

And there you have it, basically all you've ever wanted to know about what makes a space connected and where that gets you.
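A separation can also be hunted for mechanically. Here is a short Python sketch (again my own, reusing the finite toy space from the topology section) that decides connectedness of a finite topological space directly from the definition:

```python
from itertools import product

def is_connected(space, tau):
    """Search for a separation: two nonempty, disjoint open sets covering space."""
    return not any(U and V and not (U & V) and (U | V) == space
                   for U, V in product(tau, repeat=2))

X = frozenset('abc')
tau_X = {frozenset(), frozenset('a'), frozenset('b'), frozenset('ab'), X}
print(is_connected(X, tau_X))   # True: the only open set containing c is X itself

tau_D = {frozenset(), frozenset('a'), frozenset('bc'), X}
print(is_connected(X, tau_D))   # False: {a} and {b, c} form a separation
```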
Later on (not in this post) I'll introduce the concept of path connectedness for the sake of the fundamental group and homology; though this is not a difficult idea per se, the realm of homology certainly requires a solid background in elementary topology.

#### Compactness

This is a somewhat hard concept to introduce, but it would be almost impossible for me to go forward in the study of topology without it. Consider a singleton element $$\{ a \}$$ in $$\mathbb{R}$$. Recall that a set is closed if its complement is open. Well, the complement of $$\{ a \}$$ is simply the set $$(-\infty, a) \cup (a, \infty)$$. Since the union of open sets is open by our topological axioms, we see that a singleton is closed. Now, under the laws of basic set theory (specifically De Morgan's Law), the complement of a union is the intersection of complements and the complement of an intersection is the union of complements. Hence, the axioms change a little for the closed sets of a topological space $$X$$:

1. The empty set $$\emptyset$$ and the ambient space $$X$$ are both closed
2. The union of finitely many closed sets is closed
3. The arbitrary intersection of closed sets is closed

Again, if you were to look at most textbooks, ($$2$$) would say something more like the union of two closed sets is closed — the result simply follows by induction. So what makes closed sets like $$[0, 1]$$ and $$\{ 2^{-100}, 2^{-99}, \dots, 2^{0} \}$$ so much different from a closed set like $$\mathbb{R}$$ itself? The answer is compactness (though if you answered boundedness you would also be correct; for closed subsets of $$\mathbb{R}^n$$ the two answers agree, by the Heine-Borel theorem mentioned below). Back when mathematicians studied metric spaces (which generalize things like Hilbert spaces and Banach spaces), they thought that the unique property of the set $$[a, b]$$ in $$\mathbb{R}$$ was something that we now call Limit Point Compactness. However, the concept of Hausdorff spaces (which I will introduce later) began to confuse mathematicians as to whether the nice properties came from Limit Point Compactness, Sequential Compactness, or regular compactness. Eventually, mathematicians would stumble upon the most generalized form of compactness in a topological space. The first thing I need to define is the concept of an open cover. Let $$X$$ be my ambient space. A collection $$\{ V_\alpha \}$$ is simply called a cover if $$\bigcup_{\alpha}V_\alpha = X$$. This makes sense — we have a bunch of sets, and we say that the collection is a cover if it LITERALLY covers the space. We call such a cover an open cover if the collection is comprised of only open sets (shocker). Alright, enough with the foreplay. Let $$X$$ be a topological space — we call $$X$$ compact if every open cover has a finite subcover. The first time I saw this definition, my initial thought was 'what does throwing a collection of open sets on top of my initial set have to do with the interval $$[a, b]$$?' I didn't really get a good answer until about the fifth time I saw the Heine-Borel Theorem, which basically meanders along a heuristic path until you've finally dealt with enough contradictions to be convinced that compactness means closed and bounded in $$\mathbb{R}$$. As a brief aside, a difficult thing for people to understand outside of mathematics or philosophy is the use of existential quantifiers and universal quantifiers. When I say that a space is compact if every open cover has a finite subcover, that does not mean I can just go and choose some random open cover for a set to show it's compact.
In fact, to show a space is not compact, I simply have to exhibit one open cover that does not have a finite subcover. For that reason, it is orders of magnitude easier to show a set is not compact than it is to show that a set is compact. I'll now introduce a few important theorems:

###### Theorem: Every closed subset of a compact space is compact.

###### Theorem: The image of a compact space under a continuous map is compact.

Most of the additional compactness theorems begin to go beyond our scope; however, after we introduce Hausdorff spaces in the next section, we will briefly see compactness come into play. For those who wish to study analysis or partial differential equations, you will find that compactness begins to play a huge role in terms of compact support.

#### Hausdorff Spaces

If you remember from earlier, I discussed that an important property of open sets in Euclidean space was that I could pick any point in the open set and know that I can still find some ball of positive radius around that point which is still contained in my open set. Now imagine that instead of looking at a point inside an open set, I'm looking at two distinct points, $$x$$ and $$y$$, on the real line. As long as $$x \neq y$$, I can always find an open ball around each point such that the two open balls do not intersect. For example, suppose I have the points $$x = 0$$ and $$y = 10^{-1000}$$. Then I can simply take my radius to be $$r = \frac{1}{3} \cdot 10^{-1000}$$ so that the balls $$B_r(x)$$ and $$B_r(y)$$ are disjoint. This is the idea behind a Hausdorff space. Before I go any further, I'm going to have to change my vocabulary a bit for you guys; as much as I love talking about balls, topologists generally refer to an open set containing a specified element as a neighborhood. One reason we switch from balls to neighborhoods is that, topologically at least, it does not make a difference whether a set is centered at a point or merely contains it. Alright, with that behind us, it's time to formally define a Hausdorff space. We say that a space $$X$$ is Hausdorff if given any two distinct points $$x \neq y$$, there exist disjoint neighborhoods $$U, V \subset X$$ such that $$x \in U$$, $$y \in V$$, and $$U \cap V = \emptyset$$. We are now beginning to build up a bit of structure. A Hausdorff space is still a long shot from where we want to be, but it gives us just enough to work with to introduce two new theorems:

###### Theorem: A compact subspace of a Hausdorff space is closed.

###### Theorem: Let $$f: X \to Y$$ be a continuous bijection. If $$X$$ is compact and $$Y$$ is Hausdorff, then $$f$$ is a homeomorphism.

The Hausdorff property will prove to be a useful asset when it comes time to introduce manifolds next chapter. In fact, the notion of homeomorphisms will also allow us to define useful local properties of manifolds. With that said, all that's really left is the concept of a basis.

#### Bases

When topologies were introduced earlier in this article, it became apparent that many open sets in a topology were merely unions of smaller open sets. Consider the discrete topology on the set $$X = \{ a, b, c, d \}$$: $$\tau_D = \{\emptyset, \{a\}, \{b\}, \{c\}, \{d\}, \{a, b\}, \{a, c\}, \{a, d\}, \{b, c\}, \{b, d\}, \{c, d\}, \\ \{a, b, c\}, \{a, b, d\}, \{a, c, d\}, \{b, c, d\}, X\}$$ There are only four elements (not including the empty set) that cannot be broken down any further: the singletons.
Every open set in $$X$$ can be formed by combining some number of elements from $$\{a\}, \{b\}, \{c\}, \{d\}$$ — this is the idea behind a basis. Formally, a basis $$\mathcal{B}$$ for a topology $$\tau$$ over $$X$$ is a collection of sets in $$\tau$$ that satisfies:

1. $$\mathcal{B}$$ covers $$X$$
2. If $$B_1 \cap B_2 \neq \emptyset$$ for $$B_1, B_2 \in \mathcal{B}$$ and $$x \in B_1 \cap B_2$$, then there exists some $$B_3 \in \mathcal{B}$$ with $$B_3 \subseteq B_1 \cap B_2$$ and $$x \in B_3$$

Property $$2$$ may seem a bit obscure, but it essentially ensures that our basis captures the smallest elements possible so that it can actually generate the topology as desired (in other words, you can represent big sets with small sets, but you can't represent small sets with big sets). Bases become powerful tools used to streamline proofs which would otherwise be burdensome to tackle. It is much easier to prove a fact for a small family of basis elements and observe how the result holds under unions than it is to prove that fact for a huge family of open sets. For example, if I wanted to prove the statement that every open set in $$\mathbb{R}$$ is measurable, I would simply prove the statement for an arbitrary open ball $$B_\epsilon(x)$$ (every open set in $$\mathbb{R}$$ is a countable union of such balls). Recall from earlier that a set is said to be countable if it is finite or has a bijective correspondence with $$\mathbb{N}$$. When it comes to topological spaces, we no longer care about the underlying number of elements in our set but the number of open sets in our topology. Note, however, that many of our open sets are merely just unions of smaller open sets. Therefore, what we REALLY care about is the size of our basis. We call a topological space second countable if there exists a countable basis for the topology. For example, $$\mathbb{R}$$ is second-countable if you consider the basis made up of open balls centered at rational points with rational radii: $$\mathcal{B} = \{ B_q(p) : p, q \in \mathbb{Q},\ q \gt 0 \}$$. We have one final proof and then we're done. To start off, given a topological space $$X$$ and a set $$S \subset X$$, we say that a point $$p \in X$$ is a limit point of $$S$$ if every neighborhood containing $$p$$ intersects $$S$$ (that is, any open set containing $$p$$ must also contain a point of $$S$$). The closure of $$S$$, denoted $$\overline{S}$$, is defined to be the union of $$S$$ along with its limit points. We say that $$S$$ is dense in $$X$$ if $$\overline{S} = X$$. For example, the rational numbers $$\mathbb{Q}$$ are dense in $$\mathbb{R}$$ since between every two real numbers there exists a fraction (and thus every open ball must contain a rational number between its center and its boundary).

###### Theorem: If $$X$$ is second-countable, then there exists a countable subset of $$X$$ that is dense in $$X$$.

With that said and done, let me reiterate our definition of a manifold. A topological manifold of dimension $$n$$ (also called an $$n$$-manifold) is a topological space $$X$$ such that:

1. $$X$$ is locally Euclidean
2. $$X$$ is Hausdorff
3. $$X$$ is second-countable

Cool stuff — what are we gonna do with it? Well, nothing yet. I've gone over a bunch of topics in this blog post, so I'm going to rest up a bit and wait until the next post in this series to talk about things like differential forms, bundles, Lie Groups, and everything else you've been hoping for years that someone would blog about. Thanks for reading! 😁
https://crypto.stackexchange.com/questions/67220/how-can-i-determine-the-hill-cipher-key-in-this-case
# How can I determine the Hill cipher key in this case?

I have been struggling with a Hill cipher problem for many days, without any luck. I have the following ciphertext, which I know is an encrypted excerpt from an Edgar Allan Poe story:

hcrjrg--dizj lt mcne lmisne dsdi sqqznbld bvt idyl ry dlt vftpj df dbbrf omydb, (vl np oag utg cuudij) lmv txms bvh anaoxzie, iltc xt clmfmzj hcfjlmg idp jd bvy ulmw, lby di nau koessyd jd tvxjxw bvi shjilqslndie yxxy dbvx cwmyd--ffq zk ezgp pn n jzmj, nam nuusket, wfuusirsu kq zzg y’wlrui, di utt cbone jly ef dbbrf omydv, rmv veznm ty wboiaeofjvbgc jd vnl nj dwx jd utg cuudiu. Bltsq dcztldimzgzf kszn, ssv zna xe ffisdpwk pvydc or hwxgas kn n pvvnfjihp ilqltk jvrn n pbrycgnnihy oiekjpi xe utx wagy. Dltpv si rhutlmh gi mtkl ogczhsqn evri bpvvlhjw. “Eltpv utpis ky uqsp nzld,” eg csmn, “eltpv si sqgfmo naf abvld bbrf ginba fbqlmj”--lr hxrpilzlndi degap nl wdntvveoxa ofgpvwkgc bvh anaoxzio’g pbvbf, avrz flt ksuec o jfvbdy auomztmvek xe jc tvkjfpjcb, vc xpi vymgm fwgdyl Epgxz, di utd adnzi ksxjlm xe uth Udnbbqng ij Xvumodi. Utz gnyz fvlz hn eq icppilnsq, hwvlt pmog cqsc nuwbvdi uw utsi, lm uth ufxvbtafm ffisdpwk ism-dzgtrgc xe utu ujcf--alt jfanooc opv gcvawelo uw rhi “lgcjfrgruxz”--naj lb wimnappld tdqqzgm ftk fmh xcppilnsqauydi sn ncpnaws. Ssv zna xe ffisdpwk zlgfi sf jc o xxlmj de lsi pbjcg hvrp htky yzlnqw lfkpi tx tdxu tvb dyddy nfdp. “OUYDLO--Utm gcppilnsqtp, txlms whptu jd kkgyzgws nyvlhjrqd acalmgcg ijfztlndii sy dbvk cjce, idyk zyooipv utg cpirdwsg ia rytci tk tvlg lmvlmjwczgf jmv kkfgxwzgn xxznln, ea gzlk s nbtxztb tfmdce idyy rh U

In order to decrypt this text I tried the following method:

1) Removed all spaces, dashes, commas, etc.
2) Divided the text into digrams.
3) Found the most common digrams (the frequency table image from the original post is not reproduced here).
4) Knowing that the most common digrams in English are "th" and "he", I tried mapping different combinations of ciphertext digrams to "th" and "he" (for example, "th" -> "ut" & "he" -> "lm"), which gave me a 2x2 plaintext matrix and a 2x2 ciphertext matrix.
5) I solved the equation $$K = C \cdot P^{-1} \pmod{26}$$

I repeated these steps for many different mappings to "th" and "he". Sometimes I could not invert $$P$$ and sometimes the $$K$$ was invalid. Looking at the ciphertext I believe it is very likely that "th" -> "ut", since it appears at the beginning of so many 3-letter words. Could anyone please help me with this? Is there something I am doing wrong? Is the way to solve it to just keep mindlessly trying new combinations?

• Does the context (problem statement) make you confident this is the Hill cipher with the alphabet A-Z mapped to 0-25 for both plaintext and ciphertext, and a 2x2 matrix? Check your counts, I get 15 UT, 14 DI, 14 LM, 13 GC... – fgrieu Feb 11 '19 at 7:02
• I am 100% certain that this is the Hill cipher by elimination, since I only have one ciphertext file left and the Hill cipher is the only cipher I have left on the problem list. However, I am unsure about the alphabet. I used Cryptool for the counts and I might have messed up some settings there, which gave me the wrong counts. But the order of most common to least common was pretty much the same as you got, so I should still be able to solve it, right? Have you been able to find a solution using a different alphabet? – Ali Mustafa Feb 11 '19 at 10:56
• Indeed this is Hill Cipher with 2x2 matrix and a standard alphabet. Just be systematic in your approach. Hints: You have correctly identified that UT in the ciphertext maps to TH in the plaintext.
Examining the ciphertext, my hypothesis was that LT in the ciphertext maps to HE in the plaintext (based on what there is before occurrences of LT ), and that turned out to be correct. – fgrieu Feb 11 '19 at 13:22 • Thinking outside the box... given that you know (roughly) where the plaintext comes from, and that the punctuation is preserved in plain, it shouldn't be too hard to just match it to the source material without decrypting anything. – Ilmari Karonen Feb 11 '19 at 15:05 • @fgrieu Thank you very much for the help, the text has now been decrypted! – Ali Mustafa Feb 11 '19 at 16:10
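Given those two digram correspondences, step 5 can be carried out mechanically. A minimal Python sketch (my addition; it assumes A-Z mapped to 0-25, digrams as column vectors, and ciphertext = K * plaintext mod 26 — any candidate key must still be validated by decrypting the text):

    # Recover a 2x2 Hill key from two digram correspondences.
    # The digram guesses below are hypotheses from the discussion above.
    def to_nums(s):
        return [ord(ch) - ord('a') for ch in s.lower()]

    def key_from_cribs(p1, p2, c1, c2, m=26):
        (a, c), (b, d) = to_nums(p1), to_nums(p2)   # P = [[a, b], [c, d]]
        (e, g), (f, h) = to_nums(c1), to_nums(c2)   # C = [[e, f], [g, h]]
        C = [[e, f], [g, h]]
        det = (a * d - b * c) % m
        try:
            det_inv = pow(det, -1, m)   # Python 3.8+; fails if gcd(det, m) != 1
        except ValueError:
            return None                 # P is not invertible mod 26
        # P^{-1} = det^{-1} * adj(P) (mod m)
        Pi = [[d * det_inv % m, -b * det_inv % m],
              [-c * det_inv % m, a * det_inv % m]]
        # K = C * P^{-1} (mod m)
        return [[sum(C[i][k] * Pi[k][j] for k in range(2)) % m
                 for j in range(2)] for i in range(2)]

    print(key_from_cribs("th", "he", "ut", "lt"))   # candidate key matrix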
2020-07-13 12:51:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43817171454429626, "perplexity": 13979.240261100824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00591.warc.gz"}
http://www.chineseoptics.net.cn/article/2021/5
## 2021, Vol. 14, No. 5

2021, 14(5): 1039-1055. doi: 10.37188/CO.2021-0003
2021, 14(5): 1056-1068. doi: 10.37188/CO.2021-0071
2021, 14(5): 1069-1088. doi: 10.37188/CO.2021-0044
2021, 14(5): 1089-1103. doi: 10.37188/CO.2021-0022
2021, 14(5): 1104-1119. doi: 10.37188/CO.2021-0033
2021, 14(5): 1120-1132. doi: 10.37188/CO.2021-0125
2021, 14(5): 1133-1145. doi: 10.37188/CO.2020-0216
2021, 14(5): 1146-1161. doi: 10.37188/CO.2021-0032
2021, 14(5): 1162-1168. doi: 10.37188/CO.2021-0001
2021, 14(5): 1169-1176. doi: 10.37188/CO.2021-0005
2021, 14(5): 1177-1183. doi: 10.37188/CO.2021-0020
2021, 14(5): 1184-1193. doi: 10.37188/CO.2020-0218
2021, 14(5): 1194-1201. doi: 10.37188/CO.2020-0220
2021, 14(5): 1202-1211. doi: 10.37188/CO.2020-0214
2021, 14(5): 1212-1223. doi: 10.37188/CO.2020-0219
2021, 14(5): 1224-1230. doi: 10.37188/CO.2021-0008
2021, 14(5): 1231-1242. doi: 10.37188/CO.2020-0129
2021, 14(5): 1243-1250. doi: 10.37188/CO.2021-0018
2021, 14(5): 1251-1258. doi: 10.37188/CO.2020-0068

Optical properties of periodic double-well potentials are one of the frontier research fields in laser physics and quantum optics. In this work, we have employed a time-periodic double-well potential for the investigation of Fano-type resonant tunneling of photon-assisted Dirac electrons in a graphene system. Using a double quantum well structure, it is found that the resonant tunneling of electrons in a thin barrier between the two quantum wells splits the bound state energy levels, and the Fano-type resonance spectrum splits into two asymmetric resonance peaks. The shape of the Fano peak is regulated by changing the phase, frequency, and amplitude, which directly modulates the electronic transport properties of Dirac electrons in graphene. Our numerical analysis shows that the relative phase of the two oscillating fields can adjust the shape of the asymmetric Fano-type resonance peak. When the relative phase increases from 0 to $\pi$, the resonance peak valley moves from one side of the peak to the other. In addition, the asymmetric resonance peak becomes symmetric at the critical phase $3\pi/11$. Furthermore, the distribution of Fano peaks can be modulated by varying the frequency and amplitude of the oscillating field and the structure of the static potential well. Finally, we suggest that these interesting physical properties can be used for the modulation of Dirac electron transport properties in graphene.

2021, 14(5): 1259-1272. doi: 10.37188/CO.2020-0204

In order to realize the demodulation of the cavity length of the fiber-optic FP sensor, a new optical wedge-type non-scanning correlation demodulation system is proposed, and the characteristics and structure of the devices used in the system are analyzed and studied. First, by simulating light sources with different spectral distributions and optical wedges with different surface reflectivities, the correlation interference signals are analyzed and the optimal structure parameters of the system components are given. Then, by comparing the light intensity distribution characteristics of the Powell prism and cylindrical lens on the linear array CCD, a more uniform spectral distribution is achieved. Finally, the specific implementation scheme and data processing method of the demodulation system are given.
The experimental results show that when the light source spectrum has a Gaussian distribution and large spectral width and the reflectivity of the wedge surface is $R = 0.5$, the characteristics of the correlation interference signal are obvious and convenient for demodulation. Finally, the demodulation system achieves the demodulation effect with an error of less than 0.025% within the cavity length range of 60 μm-100 μm. This optical wedge-type non-scanning correlation demodulation method can realize the sensing demodulation of the fiber-optic FP cavity and improve the power adaptability of different types of fiber-optic FP sensors. 2021, 14(5): 1273-1287. doi: 10.37188/CO.2021-0015 In order to realize the separation and release of nucleated red blood cells from peripheral blood and develop a safe and effective non-invasive technique to separate nucleated red blood cells for prenatal diagnosis of fetal diseases, an automatic cell smear preparation system based on hydrogel material was established, and a laser focusing and microscopic imaging system for recognizing and releasing nucleated red blood cells was constructed. Firstly, the mechanical structure of the cell smear preparation machine was designed, the upper computer control software was designed based on a single chip microcomputer, and a hydrogel membrane substrate smear was prepared by optimizing the slide-pushing angle and speed. MXene, a two-dimensional material, was introduced into the temperature-sensitive hydrogel gelatin, and a near-infrared light response was realized on the surface of the hydrogel membrane by using the near-infrared photothermal conversion characteristics of MXene. Then, the whole cell smear experiment was carried out on the surface of the hydrogel substrate membrane. A monolayer cell smear was prepared by optimizing the parameters of the blood slide. Finally, the optical path of laser focusing and microscopic imaging was established. After the nucleated red blood cells were recognized and located, the light from an 808 nm laser source passed through a collimator lens and a convergent lens and was focused on the surface of the cell smear, which released cells under the photothermal effect. A monolayer cell smear was processed and prepared, and then a photothermal effect was produced under the near-infrared light of 808 nm. After the control of the laser focusing system, a fixed cell-releasing area with a spot diameter of 300 μm was finally obtained. In this paper, the automatic slide-pushing technology was applied to the preparation of a monolayer cell smear based on a hydrogel membrane, and the optical path of laser focusing and microscopic imaging was established. By using the near-infrared response and thermal response of the hydrogel membrane, the recognition and fixed-point release of nucleated red blood cells were realized, and the efficiency of separation and enrichment of nucleated red blood cells was improved. This technology has a broad application prospect in the field of prenatal screening and diagnosis. 2021, 14(5): 1288-1304. doi: 10.37188/CO.2021-0004 Compared with the commonly used simulation algorithms such as the Finite Element Method (FEM) and Finite-Difference Time-Domain (FDTD) method, the Boundary Element Method (BEM) has the advantages of high accuracy, small memory consumption, and the ability to deal with complex structures.
In this paper, the basic principle of three-dimensional BEM is given, the corresponding program based on C++ language is written, and the Surface Plasmon Resonance (SPR) characteristics of a graphene nano-disk structure are studied. The Scattering Cross-Section (SCS) spectral lines of a graphene nano-disk under different chemical potentials, as well as the distributions of electromagnetic fields at the resonance wavelengths are calculated. The electromagnetic response of the graphene nano-disk in the infrared band is analyzed. In addition, considering the common corrugations of graphene materials caused by defects during processing, we study the influence of the geometric parameters of a convex structure in the center of the graphene nano-disk on the resonance intensity, wavelength and field distributions. A spring oscillator model of charge movement is used to explain the simulation results.
2022-01-23 15:24:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35564756393432617, "perplexity": 1879.6512422444175}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304287.0/warc/CC-MAIN-20220123141754-20220123171754-00636.warc.gz"}
http://mathoverflow.net/revisions/89159/list
# What is the geometry of the intersection of some cones defined by generalized inequalities?

Hello, considering that for real numbers, the intersection of intervals defined by simple inequalities has a quite simple form, namely $$\bigcap_i\{x|x\leq a_i\}=\{x|x\leq\min_i{a_i}\},$$ what is the case if the variables are chosen as Hermitian matrices, and the interval defined by an inequality is replaced with the convex cone defined by the generalized inequality? All variables in the following are assumed to be Hermitian matrices. To be specific, define the generalized inequality $X\preceq A_i$ to denote that $X-A_i$ is negative semi-definite; then $\{X|X\preceq A_i\}$ defines a convex cone in the Hermitian matrix space. Is there any result about the intersection of these cones? That is, can the following set be simplified? $$\bigcap_i\{X|X\preceq A_i\}$$ When does there exist an $A$ satisfying $\{X|X\preceq A\}=\bigcap_i\{X|X\preceq A_i\}$? Or how can one describe the geometry of the intersection of such cones? Any suggestion or comment on this question will be appreciated and thanks very much for your help!

==================================================================================

Acknowledgement and more questions about @Suvrit's comment: Take an example for illustration. Denote $\mathcal{C}(A)=\{X|X\preceq A\}$. If I want to solve \begin{eqnarray} \min_X&&f(X)\\ \mathrm{s.t.}&&X\in\mathcal{C}(A_1)\cap\mathcal{C}(A_2)\cap\mathcal{C}(A_3) \end{eqnarray} by first solving $\min_{X_1\in\mathcal{C}(A_1)\cap\mathcal{C}(A_2)} f(X_1)$ and then $\min_{X_2\in\mathcal{C}(X_1)\cap\mathcal{C}(A_3)}f(X_2)$, the solution indeed satisfies the constraints, due to $$\mathcal{C}(X_1)\cap\mathcal{C}(A_3)\subseteq\mathcal{C}(A_1)\cap\mathcal{C}(A_2)\cap\mathcal{C}(A_3).$$ However, these two sets are not identical, and thus the optimal solution in $\mathcal{C}(X_1)\cap\mathcal{C}(A_3)$ is not guaranteed to be also optimal in $\mathcal{C}(A_1)\cap\mathcal{C}(A_2)\cap\mathcal{C}(A_3)$. I think the difficulty of this problem results from the complex structure of the intersection of cones $\bigcap_i\mathcal{C}(A_i)$. Do you have some more suggestions about this problem? Thank you very much for your help!
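One can probe the second question numerically. A small sketch (my addition, using real symmetric matrices as a stand-in for Hermitian ones): take $A_1=\mathrm{diag}(1,0)$ and $A_2=\mathrm{diag}(0,1)$. If some $A$ satisfied $\mathcal{C}(A)=\mathcal{C}(A_1)\cap\mathcal{C}(A_2)$, then $0$ lying in the intersection forces $0\preceq A$, and $A\preceq A_1$, $A\preceq A_2$ then force $A=0$; yet the matrix $B$ below lies in the intersection without satisfying $B\preceq 0$, so no such $A$ exists for this pair (the Loewner order is not a lattice):

    import numpy as np

    def in_cone(X, A, tol=1e-9):
        """True iff X <= A in the Loewner order, i.e. A - X is PSD."""
        return np.linalg.eigvalsh(A - X).min() >= -tol

    A1 = np.diag([1.0, 0.0])
    A2 = np.diag([0.0, 1.0])
    B = np.array([[-0.1, 0.3], [0.3, -0.1]])   # a common lower bound

    print(in_cone(B, A1), in_cone(B, A2))   # True True: B is in the intersection
    print(in_cone(B, np.zeros((2, 2))))     # False: B is not <= 0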
2013-06-19 21:28:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573602080345154, "perplexity": 183.6978410195688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709224828/warc/CC-MAIN-20130516130024-00025-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.physicsoverflow.org/30233/indices-pauli-matrix-transformed-lorentz-representation
# Indices of a Pauli matrix transformed in the Lorentz representation

When Peskin and Schroeder want to prove a Fierz identity on page 51, they make use of the identity $$(\sigma^{\mu})_{\alpha \beta} (\sigma_{\mu})_{\gamma\delta} = 2 \epsilon_{\alpha \gamma} \epsilon_{\beta \delta},$$ where $\sigma^{\mu} \equiv (1,\mathbf{\sigma})$. They state "One can understand this relation by noting that the indices $\alpha, \gamma$ transform in the Lorentz representation of $\psi_L$, while $\beta,\delta$ transform in the separate representation of $\psi_R$, and the whole quantity must be a Lorentz invariant." What do they want to say?

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user L. Su

• @Jia Yiyang You had the answer. The question may not be "high-level" enough to go to Overflow. – L. Su
• Hi @L.Su, it is ok to ask such graduate-level technical questions on PO, they are welcome. – Dilaton
• @Dilaton Haha. Thank you. I certainly have no idea how to define "graduate-level." Let's not water PO down. – L. Su

Ok, here's what I figured out after you asked me the question: Recall how (the spinor representation of) Lorentz transformations act on gamma matrices: $S^{-1}(\Lambda)\gamma^{\mu}S(\Lambda)=\Lambda^\mu_{\ \ \nu}\gamma^\nu\cdots(1),$ where according to Peskin and Schroeder, $S(\Lambda)=\begin{bmatrix} S_L(\Lambda) & 0\\0& S_R(\Lambda) \end{bmatrix}\cdots(2),$ where $S_L$ and $S_R$ are transformations that act on the left-handed spinor $\psi_L$ and right-handed spinor $\psi_R$ (see P&S's equation (3.37)). And $\gamma^\mu=\begin{bmatrix} 0 & \sigma^\mu\\ \bar{\sigma}^\mu& 0 \end{bmatrix}\cdots(3).$ Plug (2) and (3) into (1) and you immediately see $S_L^{-1}\sigma^\mu S_R=\Lambda^\mu_{\ \ \nu}\sigma^\nu\cdots(4).$ Note $S_L$ acts on the row index while $S_R$ acts on the column index, and this is the meaning of "...the indices $\alpha, \gamma$ transform in the Lorentz representation of $\psi_L$, while $\beta, \delta$ transform in the separate representation of $\psi_R$..." Clearly this implies that $\sigma^\mu\otimes\sigma_\mu$ is invariant under the transformation of the LHS of (4). In terms of matrix entries, if we define $I_{\alpha\gamma\beta\delta}:=(\sigma^\mu)_{\alpha\beta}(\sigma_\mu)_{\gamma\delta}\cdots(5),$ then $(S^{-1}_L)_{\alpha'\alpha}(S^{-1}_L)_{\gamma'\gamma}I_{\alpha\gamma\beta\delta}(S_R)_{\beta\beta'}(S_R)_{\delta\delta'}=I_{\alpha'\gamma'\beta'\delta'}\cdots(6).$ We need to solve for $I_{\alpha\gamma\beta\delta}$. Now clearly $\epsilon_{\alpha \gamma} \epsilon_{\beta \delta}$ is a solution, because of the identity $\epsilon_{ij}A_{li}A_{kj}=\det(A)\epsilon_{lk}$, and our $S_L, S_R$ both have determinant 1. The proportionality constant 2 can be obtained by comparing appropriate entries on both sides of the equation $(\sigma^\mu)_{\alpha\beta}(\sigma_\mu)_{\gamma\delta}=\text{const}\times\epsilon_{\alpha \gamma} \epsilon_{\beta \delta}$. Now the only gap remaining is the uniqueness of the solution. To prove uniqueness it is convenient to re-write (6) as a matrix equation: $I(S_R\otimes S_R)=(S_L \otimes S_L)I \cdots(7),$ where $I$ and $S\otimes S$ are $4\times 4$ matrices; in particular, the row index for $I$ is the pair $\alpha\gamma$ and the column index is the pair $\beta\delta$. We are going to apply Schur's lemma to (7).
Recall that in the standard representation theory analysis of the Lorentz group, $S_L$ is in the $(\frac{1}{2}, 0)$ representation and $S_R$ is in the $(0,\frac{1}{2})$ representation, hence $S_L \otimes S_L\approx (1,0)\oplus (0,0)$ and $S_R \otimes S_R\approx (0,1)\oplus (0,0)$. Note they only share the 1-dimensional representation $(0, 0)$. Then Schur's lemma basically says that in a suitable basis, the matrix $I$ is block diagonal with a 3 by 3 block and a 1 by 1 block; the 3 by 3 block is a zero matrix, and the 1 by 1 block of course is unique up to scaling. Then, returning to the original basis you started with, we conclude the matrix $I$ must be unique up to scaling. Q.E.D.

A small caveat: the choice of basis (to have block diagonalization) is only unique up to an arbitrary linear combination within each invariant subspace; this freedom is implemented by multiplying a 3+1 block diagonal matrix to your original similarity transformation, and you can easily show this does not affect the uniqueness.

answered Apr 17, 2015; edited Apr 17, 2015

• Very detailed!

They mean that - apart from being able to verify the equation by brute force evaluation of both sides - one can see that it must be true based on symmetry considerations. One has a Lorentz scalar if one multiplies the left hand side by spinors $u_L^\alpha$, $v_R^\beta$, $w_L^\gamma$, and $z_R^\delta$ (whose chirality is given by the index $L$ or $R$) and sums over repeated indices. Thus the result must be a scalar formed out of these spinors, and linear in each of them. This gives a linear combination of the possibilities $(u_L\epsilon w_L)(v_R\epsilon z_R)$, $(u_Lv_R)(w_L z_R)$, and $(u_Lz_R)(w_L v_R)$. Only the first one has the right tensor product structure to work. Thus the formula holds up to a constant factor, which is obtained by evaluating the left hand side for the particular choice $\alpha=\beta=1,\gamma=\delta=2$, say.

answered Apr 17, 2015; edited Apr 17, 2015

1. The Pauli matrices anticommute, so the product of two of them has to be antisymmetric in its indices. 2. The only antisymmetric 2-tensor in two dimensions is the Levi-Civita symbol $\epsilon$. Hence, you can "guess" the structure of the product in question up to a constant without calculating anything explicitly.

This post imported from StackExchange Physics at 2015-04-17 07:39 (UTC), posted by SE-user ACuriousMind, answered Apr 16, 2015

• How do you attach the indices? I would like to know your understanding of the statement as well. – L. Su
• -1. This is a tensor product, you cannot apply the "anticommute" argument this way; besides, even if you could, a Pauli matrix does not anticommute with itself.
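The constant can also be checked numerically rather than by comparing entries by hand. A small numpy sketch (my addition), with $\sigma^\mu = (1,\boldsymbol{\sigma})$ and indices lowered by the metric $\mathrm{diag}(+,-,-,-)$:

    import numpy as np

    # Numerical check of (sigma^mu)_{ab} (sigma_mu)_{cd} = 2 eps_{ac} eps_{bd}.
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    sigma_up = np.stack([I2, sx, sy, sz])               # sigma^mu
    eta = np.diag([1.0, -1.0, -1.0, -1.0])              # Minkowski metric
    sigma_dn = np.einsum('mn,nab->mab', eta, sigma_up)  # sigma_mu

    lhs = np.einsum('mab,mcd->abcd', sigma_up, sigma_dn)
    eps = np.array([[0, 1], [-1, 0]])                   # eps_{12} = +1
    rhs = 2 * np.einsum('ac,bd->abcd', eps, eps)
    print(np.allclose(lhs, rhs))                        # True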
2018-12-18 15:03:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622258305549622, "perplexity": 637.0553383681105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829429.94/warc/CC-MAIN-20181218143757-20181218165757-00583.warc.gz"}
https://zbmath.org/?q=an:0496.35058
# zbMATH — the first resource for mathematics

Convergence of solutions to nonlinear dispersive equations. (English) Zbl 0496.35058

##### MSC:

35L60 First-order nonlinear hyperbolic equations
35L65 Hyperbolic conservation laws
35A35 Theoretical approximation in context of PDEs
35B20 Perturbations in context of PDEs

Full Text:

##### References:

[1] Bona, J., and Schonbek, M., Some results on the travelling wave solutions of the Korteweg-de Vries-Burgers equation. To appear · Zbl 0594.76015
[2] Dacorogna, B., A generic result for nonconvex problems in the calculus of variations, to appear in J. Func. Anal. · Zbl 0547.49003
[3] Lax, P., Proc. Nat. Acad. Sci. U.S.A. 76 (8) pp 3602– (1979) · Zbl 0411.35081 · doi:10.1073/pnas.76.8.3602
[4] Murat, F., Ann. Scuola Norm. Sup. Pisa Sci. Fis. Math. 5 pp 489– (1978)
[5] Murat, F., Ann. Scuola Norm. Sup. Pisa 8 pp 69– (1981)
[6] Murat, F., L'injection du cône positif de $H^1$ dans $W^{1,q}$ est compacte pour tout $q < 2$. Preprint
[7] Tartar, L., Research Notes in Mathematics (1979)
[8] Lecture Notes in Math. 665 pp 228– (1977)
[9] Whitham, G. B., Linear and Nonlinear Waves (1974) · Zbl 0373.76001

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-03-08 10:06:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5463003516197205, "perplexity": 3876.559324265034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178383355.93/warc/CC-MAIN-20210308082315-20210308112315-00255.warc.gz"}
https://byjus.com/question-answer/for-a-given-material-the-young-s-modulus-is-2-4-times-that-of-rigidity-3/
Question # For a given material, the Young's modulus is $$2.4$$ times the rigidity modulus. Its Poisson's ratio is:

A 2.4
B 1.2
C 0.4
D 0.2

Solution ## The correct option is D, $$0.2$$:

$$Y=2\eta(1+\sigma)$$
$$\Rightarrow 2.4\eta =2\eta(1+\sigma)$$
$$\Rightarrow 1.2=1+\sigma$$
$$\Rightarrow \sigma=0.2$$
2022-01-28 17:12:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6550741195678711, "perplexity": 6016.029927225582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320306301.52/warc/CC-MAIN-20220128152530-20220128182530-00074.warc.gz"}
https://hackage.haskell.org/package/hgis-0.1.3.8/docs/GIS-Math-Spherical.html
hgis-0.1.3.8: Package and command-line for GIS with Haskell

GIS.Math.Spherical

Description

Utilities to compute area, perimeter, etc. on the surface of a sphere.

Synopsis

# Documentation

shittyCentroid :: Polygon -> Point Source # Averages the coördinates of a polygon, returning a point.

avg :: (RealFrac a, Foldable t) => t a -> a Source # Average over a foldable container

areaTriangle :: Point -> Point -> Point -> Double Source # Compute the area of a triangle using L'Huilier's formula

relativeCompactness :: Polygon -> Double Source # Relative compactness, i.e. compactness divided by the compactness of a Euclidean circle

compactness1 :: Polygon -> Double Source # Take the area of the polygon and divide by the perimeter squared. This is a dimensionless measurement.

areaConvex :: Polygon -> Double Source # Compute the area of a convex polygon on the surface of a sphere.

areaPolygon :: Polygon -> Double Source # Uses areal projection; then finds area of the polygon. Result is in km^2

totalPerimeter :: [Polygon] -> Double Source # Given a list of polygons, return the total perimeter.

areaPolyRectangular :: Polygon -> Double Source # Find the area of a polygon with rectangular coördinates given.

distance :: (Double, Double) -> (Double, Double) -> Double Source # Distance in kilometers between two points given in degrees.

centralAngle :: (Double, Double) -> (Double, Double) -> Double Source # Compute central angle from points given in radians
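The documentation does not show the implementation, but a great-circle distance between points given in degrees is typically computed with the haversine formula. A Python sketch of what a function like distance may be doing under the hood (my assumption; the actual hgis source could use a different formula or Earth radius):

    from math import radians, sin, cos, asin, sqrt

    def distance_km(p1, p2, r=6371.0):
        """Great-circle distance in km between (lon, lat) pairs in degrees."""
        lon1, lat1, lon2, lat2 = map(radians, (*p1, *p2))
        # Haversine of the central angle between the two points
        h = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * r * asin(sqrt(h))

    print(distance_km((0.0, 0.0), (0.0, 90.0)))  # ~10007.5 km, equator to pole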
2019-11-21 02:58:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22834809124469757, "perplexity": 4217.984394988976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00404.warc.gz"}
https://xojoc.pw/project-euler/6
## Sum square difference

First read the problem description. We know that the formula for the sum of squares is $$\sum_{k=1}^{n} k^2 = \frac{2n^3 + 3n^2 + n}{6}$$ and that $$\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$ (see Arithmetic progression). So we simply need to compute $$\left(\sum_{k=1}^{n}k\right)^2 - \sum_{k=1}^{n}k^2 = \frac{3n^4 + 2n^3 - 3n^2 - 2n}{12}$$ def equation(n): return (3*n**4 + 2*n**3 - 3*n**2 - 2*n) // 12 equation(100) 25164150
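As a quick sanity check (my addition), the closed form can be compared with the direct summation:

    def brute_force(n):
        # Square of the sum minus the sum of squares, computed directly
        return sum(range(1, n + 1)) ** 2 - sum(k * k for k in range(1, n + 1))

    assert brute_force(100) == equation(100) == 25164150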
2020-03-29 08:50:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7712900042533875, "perplexity": 3846.441803394206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00107.warc.gz"}
https://chemistry.stackexchange.com/questions/14170/why-isnt-neptunium-used-in-nuclear-reactors-in-nuclear-power-plants
Why isn't neptunium used in nuclear reactors in nuclear power plants? Why isn't neptunium used in nuclear reactors in nuclear power plants? Uranium is, and plutonium is. But neptunium isn't, and it is in the middle of them. Is it that it is too hard to make it undergo fission? Can someone please kindly explain it to me? • Fission is not an issue. It's hard to make neptunium, period. – Ivan Neretin Mar 20 '18 at 11:37 • Actually, neptunium (Np-237) is regularly used in nuclear reactors; however, not as nuclear fuel but as a so-called fission dosimeter in irradiation capsules. (I have just received a delivery of six such capsules that will later be installed in a pressurized water reactor.) – Loong Mar 20 '18 at 11:47 There are several reasons that neptunium may not be used. The first is the abundance of neptunium. From Wikipedia, the best source is spent fuel rods. Basically, someone would need to handle very radioactive material to obtain a small amount of neptunium. Next would be the likelihood of fission when struck with a neutron. If it is the same or less than uranium's, I would go with uranium. It would have to be substantially more than uranium's to offset the cost of refinement. Also, there is the energy output per gram. I don't think it would be much greater than uranium's, probably less. The isotopes of neptunium that have long half-lives are $\ce{{}^{236}Np}$ and $\ce{{}^{237}Np}$. These are not produced in great abundance from $\ce{{}^{235}U}$ or $\ce{{}^{238}U}$. To make plutonium from uranium, the uranium must absorb neutrons without fission. This is much more likely to happen than to make $\ce{{}^{236}Np}$ or $\ce{{}^{237}Np}$. • Technically, neptunium is an intermediate when uranium takes up neutrons. Starting with U-238, we go to U-239, which decays by electron beta emission to Np-239 and then Pu-239. The plutonium isotope, with a longer half-life than the neptunium intermediate, is primarily what we see. – Oscar Lanzi Apr 25 '18 at 22:54 The production rate is low for reactors using low-enrichment uranium. You get about one atom of 237Np per thousand atoms of 239Pu, so it's easier to get the Pu out and use it rather than running and running your production reactor to get the 237Np. Neptunium is not a good fuel. First, there is the activation problem: neptunium-237 can absorb neutrons and either undergo fission or capture a neutron to form neptunium-238. This will beta decay to form plutonium-238, which is a horrible isotope of plutonium to work with in bulk. Plutonium-238 has a half-life slightly shorter than 100 years; it has a very high specific activity and it does lots of horrible things that plutonium-239 does not do. A small trace of plutonium-238 will greatly increase the alpha activity of the plutonium produced by the reactor. This will make reprocessing and spent fuel handling harder. Then there is the problem of production. The normal route to neptunium-237 is that a uranium-235 nucleus absorbs a neutron and forms an excited state of uranium-236. Normally we hope that the excited state of uranium-236 undergoes nuclear fission, but sometimes it manages to relax into its ground state. The uranium-236 can absorb another neutron to form uranium-237, which then beta decays to neptunium-237. An alternative route is to subject uranium-238 to fast neutrons such as fusion neutrons; this causes the uranium-238 to undergo the (n,2n) reaction. Such very fast neutrons are not common in a normal nuclear reactor.
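To put the plutonium-238 handling problem in numbers: specific activity scales as $1/(T_{1/2}\cdot A)$. A rough Python sketch (my addition; half-lives of 87.7 years for Pu-238 and 24,110 years for Pu-239 are textbook values quoted from memory and worth double-checking):

    from math import log

    N_A = 6.022e23   # Avogadro's number, atoms/mol
    YEAR = 3.156e7   # seconds per year

    def specific_activity(half_life_years, mass_number):
        """Decays per second per gram (Bq/g), for a pure isotope."""
        lam = log(2) / (half_life_years * YEAR)   # decay constant, 1/s
        return lam * N_A / mass_number

    pu238 = specific_activity(87.7, 238)      # ~6.3e11 Bq/g
    pu239 = specific_activity(24110.0, 239)   # ~2.3e9  Bq/g
    print(pu238 / pu239)                      # Pu-238 is roughly 275x more active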
2019-08-20 23:35:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6645629405975342, "perplexity": 1477.100778283805}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315681.63/warc/CC-MAIN-20190820221802-20190821003802-00373.warc.gz"}
https://plainmath.net/10934/empty-tank-storing-water-from-borehole-volume-filled-pumps-water-hours
Question # An empty tank for storing water from a borehole has a volume of 480 m³. If it is filled by a pump that pumps water at a rate of 16 l/s, how many hours will it take the pump to fill this tank? Equations, expressions, and inequalities First, we convert the rate to $$\displaystyle\frac{{m}^{{3}}}{{h}{r}}$$: $$16\ \text{l/s} = 0.016\ \text{m}^3/\text{s} \times 3600\ \text{s/hr} = 57.6\ \text{m}^3/\text{hr}.$$ Then the filling time is $$\displaystyle\frac{480\ \text{m}^3}{57.6\ \text{m}^3/\text{hr}} = 8\tfrac{1}{3}\ \text{hours} \approx 8\ \text{hours}\ 20\ \text{minutes}.$$
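The same arithmetic as a one-off check (my addition):

    volume_m3 = 480.0
    rate_m3_per_hr = 16 / 1000 * 3600   # 16 l/s -> 57.6 m^3/hr
    print(volume_m3 / rate_m3_per_hr)   # 8.333... hours, i.e. 8 h 20 min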
2021-08-02 05:39:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29969891905784607, "perplexity": 751.7535057144446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00543.warc.gz"}
http://libros.duhnnae.com/2017/aug3/150180289747-Asymmetric-Two-dimensional-Magnetic-Lattices-for-Ultracold-Atoms-Trapping-and-Confinement-Quantum-Physics.php
# Asymmetric Two-dimensional Magnetic Lattices for Ultracold Atoms Trapping and Confinement - Quantum Physics Asymmetric Two-dimensional Magnetic Lattices for Ultracold Atoms Trapping and Confinement - Quantum Physics - Descarga este documento en PDF. Documentación en PDF para descargar gratis. Disponible también para leer online. Abstract: A new method to implement an asymmetrical two-dimensional magnetic lattice isproposed. The asymmetrical two-dimensional magnetic lattice can be created byperiodically distributing magnetic minima across the surface of magnetic thinfilm where the periodicity can be achieved by milling $n\times n$ square holeson the surface of the film. The quantum device is proposed for trapping andconfining ultracold atoms and quantum degenerate gases prepared in the lowmagnetic field seeking-state at low temperature, such as the Bose-EinsteinCondensate BEC and ultracold fermions. We present detailed analysis of theanalytical expressions and the numerical simulation procedure used to calculatethe external magnetic field. We also, describe the magnetic band gap structureexhibited by the asymmetric effect of the magnetic minima and show some of thepossible application. We analyze the effect of changing the characteristicparameters of the magnetic lattice, such as the separating periodicity lengthand the hole size along with the applications of the external magnetic biasfields to maintain and allocate a suitable non-zero magnetic local minima ateffective $z$-distance above the thin film surface. Suitable values are shownwhich keep the trapped ultracold atoms away from the thermal Majorana spin-flipand the surface Casimir-Polder effect. Autor: A. Abdelrahman, P. Hannaford M. Vasiliev, K. Alameh Fuente: https://arxiv.org/
2018-11-14 17:20:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5619441866874695, "perplexity": 5744.683783160012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742253.21/warc/CC-MAIN-20181114170648-20181114192648-00471.warc.gz"}
http://ebook2.worldlibrary.net/articles/eng/ABV
### Abv

"ABV" redirects here. For other uses, see ABV (disambiguation).

Alcohol by volume (abbreviated as ABV, abv, or alc/vol) is a standard measure of how much alcohol (ethanol) is contained in an alcoholic beverage (expressed as a percentage of total volume).[1][2][3] It is defined as the number of millilitres of pure ethanol present in 100 millilitres of solution at 20 °C.[4] The number of millilitres of pure ethanol is the mass of the ethanol divided by its density at 20 °C, which is 0.78924 g/ml. The ABV standard is used worldwide. In some countries, alcohol by volume is referred to as degrees Gay-Lussac (after the French chemist Joseph Louis Gay-Lussac),[5] although there is a slight difference since Gay-Lussac used 15 °C.

Mixing two solutions of alcohol of different strengths usually causes a decrease in volume (although if both are low strength it may be an increase). More information on calculations pertaining to mixing ethanol solutions is available in the French World Heritage Encyclopedia article fr:Calcul des titres et des volumes d'alcools.

## Typical levels

Details about typical amounts of alcohol contained in various beverages can be found in the articles about individual drinks.

| Drink | Typical ABV |
|---|---|
| Fruit juice (naturally occurring) | less than 0.1% |
| Low-alcohol beer | 0.05%–1.2% |
| Kvass | 0.05%–1.5% |
| Kombucha | 0.5%–1.5% |
| Kefir | 0.5%–2.0% |
| Boza | 1% |
| Chicha | 1%–11% (usually 1%–6%) |
| Cider | 2%–8.5% |
| Beer | 2%–12% (usually 4%–6%) |
| Alcopops | 4%–17.5% |
| Malt liquor | 5%+ |
| Makgeolli | 6.5%–7% |
| Barley wine (strong ale) | 8%–15% |
| Wine | 9%–16% (most often 12.5%–14.5%)[6] |
| Dessert wine | 14%–25% |
| Sake (rice wine) | 15% (or 18%–20% if not diluted prior to bottling) |
| Liqueurs | 15%–55% |
| Fortified wine | 15.5%–20%[7] (in the European Union, 18%–22%) |
| Soju | 17%–45% (usually 19%) |
| Shochu | 25%–45% (usually 25%) |
| Bitters | 28%–45% |
| Mezcal, Tequila | 32%–60% (usually 40%) |
| Vodka | 35%–50% (usually 40%) |
| Brandy | 35%–60% (usually 40%) |
| Rum | 37.5%–80% |
| Ouzo | 37.5%+ |
| Cachaça | 38%–54% |
| Sotol | 38%–60% |
| Stroh | 38%–80% |
| Nalewka | 40%–45% |
| Gin | 40%–50% |
| Whisky | 40%–55% (usually 40% or 43%) |
| Baijiu | 40%–60% |
| Chacha | 40%–70% |
| Pálinka | 42%–86% (legally in Hungary 48%–51%) |
| Rakia | 42%–86% |
| Absinthe | 45%–89.9% |
| Ţuica | 45%–60% (usually 52%) |
| Arak | 60%–65% |
| Poitín | 60%–95% |
| Neutral grain spirit | 85%–95% |
| Cocoroco | 93%–96% |
| Rectified spirit | 95%–96% |

## Alcohol proof

Another way of specifying the amount of alcohol is alcohol proof, which in the United States is twice the alcohol-by-volume number, while in the United Kingdom it is 1.75 times the number (expressed as a percentage).[8][9] For example, 40% abv is 80 proof in the US and 70 proof in the UK. However, since 1980, alcoholic proof in the UK has been replaced by abv as a measure of alcohol content.

## Proof and alcohol by weight

In the United States, a few states regulate and tax alcoholic beverages according to alcohol by weight (abw), expressed as a percentage of total mass. Some brewers print the abw (rather than the abv) on beer containers, particularly on low-point versions of popular domestic beer brands. At relatively low abv, the alcohol percentage by weight is about 4/5 of the abv (e.g., 3.2% abw is equivalent to 4.0% abv).[10] However, because of the miscibility of alcohol and water, the conversion factor is not constant but rather depends upon the concentration of alcohol.
100% abw, of course, is equivalent to 100% abv.

## Calculation of alcohol content

During the production of wine and beer, yeast is added to a sugary solution. During fermentation, the yeast organisms consume the sugars and produce alcohol. The density of sugar in water is greater than the density of alcohol in water. A hydrometer is used to measure the change in specific gravity (SG) of the solution before and after fermentation. The volume of alcohol in the solution can then be calculated.

### Wine

The simplest method for wine has been described by English author C.J.J. Berry:[11]

• $ABV = (\mathrm{Starting\ SG} - \mathrm{Final\ SG})/0.736$

### Beer

The calculation for beer, where 1.05 is the number of grams of ethanol produced for every gram of CO2 produced and 0.79 is the density of ethanol (g/ml), is:

• $ABV = \frac{1.05}{0.79} \left( \frac{\mathrm{Starting\ SG} - \mathrm{Final\ SG}}{\mathrm{Final\ SG}} \right) \times 100$ [12]

However, many brewers use the following formula:

• $ABV = 131 \left( \mathrm{Starting\ SG} - \mathrm{Final\ SG} \right)$
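A small Python sketch of the two beer formulas above (my addition; the function names are illustrative):

    # SG values are hydrometer readings such as 1.050 (starting), 1.010 (final).
    def abv_beer_simple(starting_sg, final_sg):
        """Brewer's rule of thumb: % alcohol by volume."""
        return 131 * (starting_sg - final_sg)

    def abv_beer(starting_sg, final_sg):
        """Fuller formula with the 1.05/0.79 mass-to-volume factors."""
        return 1.05 / 0.79 * (starting_sg - final_sg) / final_sg * 100

    print(abv_beer_simple(1.050, 1.010))  # 5.24
    print(abv_beer(1.050, 1.010))         # ~5.26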
2020-02-20 14:40:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23735523223876953, "perplexity": 10977.545636887347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00256.warc.gz"}
https://www.datacamp.com/community/tutorials/python-string-contains
# Python String Contains

If you are looking to find or replace items in a string, Python has several built-in methods that can help you search a target string for a specified substring.

## .find() Method

### Syntax

string.find(substring, start, end)

Note: start and end are optional arguments.

From the above syntax, you can observe that the .find() method takes the desired substring as the mandatory argument. You can specify the other two arguments: an inclusive starting position and an exclusive ending position.

In the example code, you search for Waldo in the string Where's Waldo?. The .find() method returns the lowest index in the string where it can find the substring, in this case, eight.

my_string = "Where's Waldo?"
my_string.find("Waldo")
8

If you search for Wenda, it returns -1 since the substring is not found.

my_string.find("Wenda")
-1

Let's see if you can find Waldo between characters zero and five. In the code, you specify the starting position zero and the ending position as six, since this position is not inclusive.

my_string = "Where's Waldo?"
my_string.find("Waldo", 0, 6)
-1

The .find() method does not find the substring and returns -1, as shown above.

## .index() Method

### Syntax

string.index(substring, start, end)

Note: start and end are optional arguments.

From the above syntax, you can observe that the .index() method takes the desired substring as a mandatory argument. It can take optional starting and ending positions as well.

In the example, we search again for Waldo using .index().

my_string = "Where's Waldo?"
my_string.index("Waldo")
8

We get eight again. When we look for a substring that is not there, we have a difference.

my_string.index("Wenda")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: substring not found

The .index() method raises an exception, as we can see in the output. We can handle this using the try except block.

my_string = "Where's Waldo?"
try:
    my_string.index("Wenda")
except ValueError:
    print("Not found")

Above, you can observe the syntax. The try part will test the given code. If any error appears, the except part will be executed, obtaining the following output as a result.

Not found

## .count() Method

The .count() method searches for a specified substring in the target string. It returns the number of non-overlapping occurrences. In simple words, how many times the substring is present in the string.

### Syntax

The syntax of .count() is very similar to the other methods, as we can observe.

string.count(substring, start, end)

Note: start and end are optional arguments.

### Substring Count

In the example, we use the .count() method to get how many times fruit appears.

my_string = "How many fruits do you have in your fruit basket?"
my_string.count("fruit")
2

In the output, we see that it is two. We can then limit the occurrences of fruit between character zero and fifteen of the string, as we can observe in the code below.

my_string.count("fruit", 0, 16)
1

The method will return 1. Remember that the starting position is inclusive, but the ending is not.

## .replace() Method

Sometimes you will want to replace occurrences of a substring with a new substring. In this case, Python provides us with the .replace() method.

### Syntax

string.replace(old, new, count)

Note: count is an optional argument.

As we see in the syntax above, it takes three arguments: the substring being replaced, the new string to replace it, and an optional number that indicates how many occurrences to replace.

### Replacing a Substring

In the example code, we replace the substring house with car.
my_string = "The red house is between the blue house and the old house" print(my_string.replace("house", "car")) The red car is between the blue car and the old car The method will return a copy with all house substrings replaced. ### Replacing a Specific Number of Occurrences In this example, we specified that we only want 2 of the occurrences to be replaced. print(my_string.replace("house", "car", 2)) The red car is between the blue car and the old house In the output, we see that the method returns a copy of the string with the first two occurrences of house replaced by car. ## Interactive Example In the below example, you will: • Find if the substring actor occurs between the characters with index 37 and 41 inclusive. If it is not detected, print the statement Word not found. • Replace actor actor with the substring actor if actor occurs only two repeated times. • Replace actor actor actor with the substring actor if actor appears three repeated times. for movie in movies: # Find if actor occurrs between 37 and 41 inclusive if movie.find("actor", 37, 42) == -1: # Count occurrences and replace two by one elif movie.count("actor") == 2: print(movie.replace("actor actor", "actor")) else: # Replace three occurrences by one print(movie.replace("actor actor actor", "actor")) When we run the above code, it produces the following result: Word not found I believe you I always said that the actor is amazing in every movie he has played it's astonishing how frightening the actor norton looks with a shaved head and a swastika on his chest. To learn more about finding and replacing strings, please see this video from our course, Regular Expressions in Python. This content is taken from DataCamp’s Regular Expressions in Python course by Maria Eugenia Inzaugarat.
2021-10-26 21:03:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29467278718948364, "perplexity": 2974.631516675914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00420.warc.gz"}
https://mathoverflow.net/questions/314793/random-walk-and-comparing-sums-of-exponential-random-variables
# Random walk and comparing sums of Exponential random variables Let $$\sigma$$ be the first time at which a nearest-neighbor random walk started at 1, with probability $$p>1/2$$ of moving left, reaches $$0$$. Let $$\sigma'$$ be an independent copy of $$\sigma$$. Let $$(X_k)_1^\infty$$ be iid unit Exponential random variables, and let $$(Y_k)_1^\infty$$ be iid Exponentials with mean $$v$$ (i.e. $$P(Y_1 \geq t) = e^{-t/v}$$). We are interested in whether there is a closed form in terms of $$p$$ and $$v$$ for the probability $$P \left( \sum_1^\sigma X_k < \sum_1^{\sigma '} Y_k \right).$$ Conditioning on the value of either sum gives a messy expression that is not obviously simplifiable. A reformulation of the problem is to think of this as a race to reach 0 by two continuous time random walks with rates $$1$$ and $$1/v$$. Using the memoryless property, the probability the rate-1 walk advances at a jump time is $$q=v/(1+v)$$. Otherwise the rate-$$1/v$$ walk advances. Let $$Z(r,q)$$ be the number of successes before $$r$$ failures occur in iid trials with success probability $$q$$ (i.e. negative binomial). If we think of the rate-$$1$$ walk advancing as a success, we can rewrite the above probability as $$P( Z(\sigma,q) > \sigma').$$ Condition on the values of $$\sigma$$ and $$\sigma'$$ and use the distribution for a negative binomial to write this as $$\sum_{i,j \geq 0} C_i C_j p^2(p(1-p))^{i+j} \sum_{k\geq 2i+1} \binom{2j +k}{k} q^k (1-q)^{2j+1} .$$ Here $$C_i$$ is the $$i$$th Catalan number. It does not look easy to evaluate exactly. We are happier with this though because it is easier to numerically approximate (though we would prefer a closed form). • Is there any reason you didn’t write the right term in the inequality as $\nu \sum Z_i$, where $Z_i$ are unit mean exponentials? – Anthony Quas Nov 9 '18 at 19:10 • That may be better, but I am viewing this like a race to return to 0 by two continuous time biased random walks with rates 1 and 1/v. So this formulation is natural from that perspective. – Matthew Junge Nov 9 '18 at 23:14 • It seems that this could also be reformulated as a Markov chain on the first quadrant of the integer lattice, where the two walks correspond to the horizontal and vertical directions, and you are asking about the probability to exit through either the $x$-axis or the $y$-axis. Possibly some martingale methods are available...? – Nate Eldredge Nov 10 '18 at 0:18
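A Monte Carlo sketch (my addition) for numerically checking any candidate closed form; it relies on a sum of $n$ unit exponentials being Gamma($n$, 1), so each race can be sampled without drawing individual exponentials:

    import numpy as np

    rng = np.random.default_rng(0)

    def hitting_time(p):
        """Steps for a walk started at 1 (left with prob. p > 1/2) to hit 0."""
        pos, t = 1, 0
        while pos > 0:
            pos += -1 if rng.random() < p else 1
            t += 1
        return t

    def estimate(p, v, trials=100_000):
        """Monte Carlo estimate of P(sum_1^sigma X_k < sum_1^sigma' Y_k)."""
        hits = 0
        for _ in range(trials):
            s, s_prime = hitting_time(p), hitting_time(p)
            # Gamma(n, 1) is the law of a sum of n unit exponentials;
            # scaling by v gives mean-v exponentials.
            hits += rng.gamma(s) < v * rng.gamma(s_prime)
        return hits / trials

    print(estimate(p=0.7, v=2.0))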
2020-10-21 07:39:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.93184894323349, "perplexity": 163.06649366760084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876136.24/warc/CC-MAIN-20201021064154-20201021094154-00365.warc.gz"}
https://www.talknativ.com/no2o1/e3f3b2-find-the-subgame-perfect-equilibrium-of-the-game
# find the subgame perfect equilibrium of the game

A subgame perfect Nash equilibrium is an equilibrium such that players' strategies constitute a Nash equilibrium in every subgame of the original game. In an extensive form game with perfect information, let x be a node of the tree that is not an end node; the part of the game tree consisting of all nodes that can be reached from x is called a subgame. A subgame must have a unique starting point, it must contain all the nodes that follow the starting node, and if a node is in a subgame, the entire information set that contains the node must be in the subgame. Each game is a subgame of itself; a subgame on a strictly smaller set of nodes is called a proper subgame. Equivalently, at any history the "remaining game" can be regarded as an extensive game on its own, called the subgame after that history, and if you model the game as a tree where each link is a possible move, every subtree corresponds to a subgame.

Subgame perfect equilibrium requires that players play a Nash equilibrium in every subgame of the game, so it is a refinement of Nash equilibrium: every subgame perfect equilibrium is a Nash equilibrium, but not the other way around. It rules out equilibria that rely on incredible (empty) threats in a dynamic environment. In games with perfect information, the Nash equilibrium obtained through backward induction is subgame perfect, and in practice you may use an algorithm similar to backward induction: (1) find the Nash equilibria of the "smallest" subgames, (2) fix one for each subgame and attach its payoffs to that subgame's initial node, and (3) repeat with the reduced game. To characterize a subgame perfect equilibrium, one must find the optimal strategy for a player even if the player is never called upon to use it. (Every finite extensive-form game with perfect recall has a sequential equilibrium; with perfect information a subgame perfect equilibrium is a sequential equilibrium, and every sequential equilibrium is a Nash equilibrium.)

Examples. Consider a game in which player 1 has to decide between going up or down (U/D), while player 2 has to decide between going left or right (L/R); such a game can have three Nash equilibria of which only one is consistent with backward induction. In one worked example there are two Nash equilibria, {U, u} and {D, d}, and both are also subgame perfect; in another, the only subgame perfect equilibrium of the entire game is {AD, X}; in yet another there are 4 subgames, 3 of them proper. In the entry game, of the two pure-strategy Nash equilibria (In, Accommodate) and (Out, Fight), only the first is sequentially rational, so there is only one subgame perfect equilibrium: (In, Accommodate). In the chain store game there is a unique subgame perfect equilibrium in which each competitor chooses In and the chain store always chooses C; for K = 1 subgame perfection eliminates the bad Nash equilibrium, though for large K one may ask whether that prediction is reasonable. In the centipede game a player can continue the game, thereby sacrificing one dollar so that the other player can receive more than one dollar; there is a unique subgame perfect equilibrium, in which each player stops the game after every history, and while there are several Nash equilibria, all of them involve both players stopping the game. Note also that an equilibrium can be supported by behavior off the path of play: although player B never has to select between "t" and "b", the fact that the player would select "t" is what makes playing "S" an equilibrium for player A.

Repeated games. Subgame perfection implies that you have to play a Nash equilibrium of the stage game in the final period, so start by asking what the Nash equilibria of each stage game are. Every path of the game in which the outcome in any period is either Out or (In, C) is a Nash equilibrium outcome, but the game does not have such subgame perfect equilibria, for the same reason that a pair of grim strategies is never subgame perfect; we can, however, modify the limited punishment strategy in the same way that we modified the grim strategy to obtain a subgame perfect equilibrium for δ sufficiently high. A twice-repeated game can have more than one SPE, and a Nash equilibrium of an infinitely repeated game need not be subgame perfect.

Typical exercises. Find all subgame perfect equilibria of the following games; find a subgame perfect Nash equilibrium of a game featuring one player using a mixed strategy; find all the pure-strategy subgame-perfect equilibria with extreme discounting (δ = 0), then repeat with δ = 1; if a stage game is repeated two times (t = 1, 2), find (1) a subgame perfect equilibrium and (2) one Nash equilibrium that is not subgame perfect; find a subgame-perfect equilibrium for the two-stage game in which the players choose (P, p) in the first stage game, being precise in defining history-contingent strategies for both players; or consider an incumbent (I) and a potential competitor (C), where the competitor first decides whether to enter the market (E) or not (N) and the incumbent then decides whether to produce a high quantity (H) or a low quantity (L), and find the subgame perfect equilibrium by backward induction.

Beyond finite games, Harris (1985) has shown that subgame-perfect equilibria exist in deterministic continuous games with perfect information (see Luttmer and Mariotti's comment on the existence of subgame-perfect equilibrium in continuous games with almost perfect information); if one takes increasingly fine approximations of such a game and a subgame-perfect equilibrium of each, it is natural to expect that any limit point of the sequence of equilibrium paths so obtained will be an equilibrium path of the original game. For stopping games one can define a variant, a δ-approximate subgame perfect ε-equilibrium, appropriate to that setting.
2021-05-06 21:37:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5703096389770508, "perplexity": 2747.439434332367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988763.83/warc/CC-MAIN-20210506205251-20210506235251-00293.warc.gz"}
https://stats.stackexchange.com/questions/486065/how-to-simulate-standard-deviation
# How to simulate standard deviation I would like to simulate data based on real data captured. The real data captured is 15 observations. The simulation based on the existing data is 100 observations. I have a mean and standard deviation for the 15 observations; however, how do I simulate standard deviation for a larger sample (100 observations) based on the smaller real data? Standard deviation should generally decrease with an increase in sample size, but at what rate? • You want to be careful to distinguish between sample standard deviation, and population standard deviation. Also: welcome to CV! – Alexis Sep 4 at 23:47 • Why would standard deviation increase with a larger sample? It sounds like maybe there's an important detail missing – Glen_b Sep 5 at 3:06 • @Glen_b The question says "standard deviation should decrease..." – Tumaini Kilimba Sep 6 at 9:05 • Without context it's hard to tell what standard deviation it's referring to. Is it discussing the estimated standard deviation of the distribution of some statistic (like a sample mean, say?) rather than of the raw sample? – Glen_b Sep 6 at 10:01 • @Glen_b Yes, you are right, the context was missing (going through some of the answers below made me realise this). I had meant the standard deviation of a raw sample, given a smaller sample obtained from real observations and using that to simulate a larger sample. I was under the (erroneous?) impression larger raw samples have smaller standard devs but my understanding from the answers below is that I was mixing standard devs with standard error. – Tumaini Kilimba Sep 7 at 5:42 Standard error decreases as the sample size increases. Standard deviation is a related concept but perhaps not related enough to warrant such similar terminology that confuses everyone who is starting to learn statistics. A sampling distribution is the distribution of values you would get if you repeatedly sampled from a population and calculated some statistic, say the mean, each time. The standard deviation of that sampling distribution is the standard error. For the standard error of the mean, it decreases like $$1/\sqrt{n}$$, so $$s/\sqrt{n}$$ is an estimate of the standard error (where $$s$$ is the sample standard deviation). The standard deviation of a distribution is whatever it is, and it doesn’t care how large a sample you draw or if you even sample at all. It sounds like you want to simulate data from a distribution with the mean and standard deviation you’ve calculated from the sample of $$15$$, so do that. If you’re willing to assume a normal distribution, the R command is rnorm and the Python command is numpy.random.normal. • If you’re not willing to make the assumption of a normal distribution, please post a new question where you describe your problem in more detail. – Dave Sep 4 at 23:02 Standard deviation does not decrease with sample size. The bigger your sample is, the closer the sample standard deviation should be to the standard deviation of the population. It follows that the spread of the standard deviations estimated from larger samples is smaller than the spread from smaller samples, because estimates based on larger samples are more precise. Below you can see a numerical example in R for this, where we simulate draws from the standard normal distribution (with sd=1) for samples of size 15 and 100, and then estimate standard deviations for them. > summary(replicate(100000, sd(rnorm(15)))) Min. 1st Qu. Median Mean 3rd Qu. Max.
0.3039 0.8515 0.9762 0.9824 1.1061 1.8886 > summary(replicate(100000, sd(rnorm(100)))) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.6916 0.9498 0.9971 0.9980 1.0451 1.3089 • Thank you for the additional clarification – Tumaini Kilimba Sep 6 at 9:14 You specifically ask about simulation. Following @Dave's Answer (+1), here are a couple of simulations in R. Suppose I take a million samples of size $$n = 16$$ from a population distributed as $$\mathsf{Gamma}(\mathrm{shape} = 4,\, \mathrm{rate}=.1),$$ so that the population mean is $$\mu = 40,$$ the population variance is $$\sigma^2 = 400,$$ and $$\sigma = 20.$$ Then the sample means (averages) $$A =\bar X_{16}$$ have $$E(A) = 40$$ and standard errors $$SD(A)= \sigma/\sqrt{n} = 5.$$ With a million samples, the simulation results should be accurate to about three significant digits. set.seed(904) a = replicate(10^6, mean(rgamma(16, 4, .1))) mean(a); sd(a) [1] 40.00176 # aprx 40 [1] 4.996061 # aprx 5 By contrast, let's do a similar simulation of a million samples of size $$n = 100$$ from the same population. Now $$E(\bar X_{100}) = 40$$ and $$SD(\bar X_{100}) = \sigma/\sqrt{n} = 20/\sqrt{100} = 2.$$ set.seed(2020) a = replicate(10^6, mean(rgamma(100, 4, .1))) mean(a); sd(a) [1] 40.0014 # aprx 40 [1] 2.001084 # aprx 20/10 = 2 • Thank you for this further illustration – Tumaini Kilimba Sep 6 at 9:12
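To make the rnorm suggestion concrete for the question as asked — estimate the mean and standard deviation from the 15 captured observations, then simulate 100 new ones — here is a minimal sketch in R (the mean = 50 and sd = 10 are placeholders standing in for the real captured data, which we do not have):

```r
real <- rnorm(15, mean = 50, sd = 10)  # stand-in for the 15 real observations
m <- mean(real)                        # sample mean of the captured data
s <- sd(real)                          # sample standard deviation of the captured data
sim <- rnorm(100, mean = m, sd = s)    # 100 simulated observations, assuming normality
c(mean(sim), sd(sim))                  # close to m and s; sd does not shrink with n
```

The last line illustrates the point of the answers above: the simulated sample's standard deviation stays near s no matter how many observations you draw; only the standard error of its mean shrinks.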
2020-10-28 15:29:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038235902786255, "perplexity": 541.2680033476621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107898577.79/warc/CC-MAIN-20201028132718-20201028162718-00357.warc.gz"}
https://www.physicsforums.com/threads/how-to-get-secx-tanx-from-1-cosx-sinx-cosx.147185/
# How to get (secx)(tanx) from (1/cosx)(sinx/cosx)? 1. Dec 7, 2006 ### helpm3pl3ase Quick question: h(x) = sin(x)/cos^2(x) = (1/cos(x))(sin(x)/cos(x)) Then you get (secx)(tanx).. I do not get how they get sec(x) tan(x)?? Anyone?? Thanks 2. Dec 7, 2006 ### chroot Staff Emeritus 1/cos(x) is also called sec(x). sin(x)/cos(x) is also called tan(x). - Warren 3. Dec 7, 2006 ### helpm3pl3ase so the answer would be (secx)(tanx) + c. Correct?? 4. Dec 7, 2006 ### chroot Staff Emeritus All you've done so far is convert the function you gave me into a slightly simpler form. sin(x) / cos^2(x) = sec(x) tan(x). Since you didn't actually post the problem as it was given to you, I don't know if h(x) is a function of which you need to find the antiderivative, or whether you've already done that step. You probably need to actually perform the antiderivative now. - Warren 5. Dec 7, 2006 ### helpm3pl3ase sorry, how would I go about doing this.. I am so confused. 6. Dec 7, 2006 ### chroot Staff Emeritus Find the function which has a derivative of sec(x) tan(x). You should have a list of such facts in your book. - Warren 7. Dec 7, 2006 ### helpm3pl3ase alright.. I get it now.. Sorry.. I don't know why this problem was causing me problems.. Thanks for clearing it up. 8. Dec 8, 2006 ### dextercioby Do you know the method of substitution to find antiderivatives? If so, just plug in $$\cos x = t$$ and see what you get. Daniel. 9. Dec 8, 2006 ### HallsofIvy Staff Emeritus It would have helped if you had told us from the beginning that you were trying to find an anti-derivative! All you said was that you couldn't see how they had gone from h(x) = sin(x)/cos^2(x) to h(x) = (secx)(tanx).
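For the record, carrying out the substitution dextercioby suggests gives the antiderivative explicitly. With $$t = \cos x$$, so that $$dt = -\sin x \, dx$$,

$$\int \sec x \tan x \, dx = \int \frac{\sin x}{\cos^2 x}\, dx = \int \frac{-\,dt}{t^{2}} = \frac{1}{t} + C = \sec x + C,$$

which is why the function whose derivative is sec(x) tan(x) is sec(x), plus a constant.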
2016-12-04 22:52:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5608434677124023, "perplexity": 3937.387351712535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541426.52/warc/CC-MAIN-20161202170901-00476-ip-10-31-129-80.ec2.internal.warc.gz"}
http://nakamotonews.socialpetitions.org/bitcoin-hashrate-bombs-30-following-bakkt-launch-coincidence/
# Bitcoin Hashrate Bombs 30% Following Bakkt Launch, Coincidence? The total cumulative power of the computer systems mining the Bitcoin blockchain has plummeted by around 30 percent today. The sudden downtrend started just before the launch of the much anticipated Bakkt platform yesterday. Although the timing of the sudden drop seems to suggest that it may have had something to do with Bakkt, there is evidence, however, that major Bitcoin mining operations based in Canada and Kyrgyzstan have suffered power outages. However, the hashrate now appears to have returned to almost its pre-flash-crash levels. ## Bitcoin Hashrate Plummets, But Why? The Bitcoin network hashrate has been frequently setting new all-time highs during 2019. However, yesterday saw the total computing power mining the network suddenly drop by around 40 percent. From more than 98 million TH/s, it fell to just 57.7 million TH/s, according to data taken from Coin.Dance. The sudden drop in network hashing power occurred on the same day that the much-anticipated Bakkt platform, offering physically-settled Bitcoin futures, finally launched in rather lacklustre fashion. One respondent to the above Tweet by Cornell University professor, Emin Gün Sirer, stated irreverently that the two could have been linked. However, this seems unlikely and there is evidence to suggest that the nation of Kyrgyzstan has cut off power to as many as 45 mining firms. Local news resource, AKIpress, reported that the mining operations were using as much power as three regions in the nation combined. Elsewhere, Squire Mining Ltd., a Canada-based technology company, has also gone offline as a result of blackouts in Kazakhstan. The lack of power is expected to last between 10 and 14 days, whilst Squire Mining’s hosting provider upgrades its systems. For now, the exact reason for such a sudden drop in hashrate remains a mystery. According to the data from Coin.Dance, much of the lost hashing power has now returned and the network currently has a combined hashrate of over 92 million TH/s. However, the very fact that some currently unknown, potentially single cause could drop the network’s hashrate by such a significant percentage has raised issues about the overall security of the network. Sirer himself recommends that companies involved in accepting payments via Bitcoin should adjust the number of network confirmations they require before a transaction is considered final. He adds to this that if the hashrate were to fall suddenly by as much as 51 percent, then retailers and exchanges should stop accepting Bitcoin payments altogether. The professor asserts that the number of confirmations needed is directly related to the amount of hashing power on the network. The missing hashing power could be being used to attack the network and, as such, transactions should be treated as being more likely to have been double spent by the attacking entity in the event of such a sudden crash. Related Reading: Bitcoin “Stronger Than Ever” as Hash Rate Sets Fresh All-Time-High Featured Image from Shutterstock. The post Bitcoin Hashrate Bombs 30% Following Bakkt Launch, Coincidence? appeared first on NewsBTC.
2020-02-26 21:52:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22751961648464203, "perplexity": 3436.002760624651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146562.94/warc/CC-MAIN-20200226211749-20200227001749-00132.warc.gz"}
https://www.physicsforums.com/threads/the-association-of-degrees-of-freedom-with-temperature.973879/
# The association of degrees of freedom with temperature I delved a bit into the kinetic theory of gases and it got me wondering how it was discovered that the temperature, and thus heat capacity, depends on the number of degrees of freedom of a molecule or atom. I know that from the piston experiment a certain constant value can be found for the number of joules per kelvin for a gas, and from the derivation of the Maxwell-Boltzmann distribution for gases that each particle has on average an energy of ##\frac{3}{2} k_B T##. But I don't see how it is concluded that the number ##3## in that equation must be related to the number of degrees of freedom of a gas particle. For example, was this deduced by somehow deriving a one-dimensional Maxwell-Boltzmann distribution for energy, which would yield an average energy of ##\frac{1}{2}k_B T##? Or was it some kind of empirical conclusion?

hutchphd Homework Helper I believe it incorrect to say the temperature is dependent upon the degrees of freedom. At a given temperature the equipartition theorem says that the energy associated with this T spreads evenly from a thermal bath (with a few caveats...not of interest here) to all available takers (degrees of freedom). So a system with many available degrees of freedom has more places for storage and hence a higher heat capacity. We define Boltzmann so that each degree of temperature gives ##k_B/2##. In a free gas in 3D there are 3 independent numbers needed to define velocity, and each velocity "stores" energy...the exact statement of equipartition requires that the energy be quadratic in each quantity...

I think it's indeed better to state that the heat capacity is dependent on the degrees of freedom instead of temperature. But what I'm wondering here is how it was deduced that each dimensional velocity stores energy, making the heat capacity depend on it in that exact way. I mean, were researchers able to predict the number of degrees of freedom based on the structure of molecules/atoms and then somehow hypothesize that heat capacity is proportional to it, which was then tested empirically?

hutchphd Homework Helper This is very definitely a theoretical result from classical statistical mechanics. Temperature is really defined only for an ensemble of stuff (like gas molecules) in equilibrium. It says that each degree of freedom for each particle shares the energy equally, and from our definition of temperature and Boltzmann we get the ##k_B T/2##. Take a look at the heat capacities of gases...it is a remarkable result. Yes you can predict it from the shapes of gases. For solids and liquids it can be less simple because the places energy can go are more complicated. I'm certain you can find good derivations. Check it out.

I have indeed read the derivations. But I'm not entirely sure if the association is purely made based on empirical research. It seems somewhat too random to simply hypothesize that heat capacity is dependent on the number of degrees of freedom; there must be some mathematical derivation behind it. Is there a way to prove through the MB distribution that the value of the heat capacity changes with the number of dimensions n?

hutchphd Homework Helper The MB distribution immediately separates into 3 independent 1D problems. Write down the answers.

Orodruin Staff Emeritus Homework Helper Gold Member (quoting "We define Boltzmann so that each degree of temperature gives ##k_B/2##"): Since the introduction of the new SI units, this is a definition of temperature units rather than the Boltzmann constant. The Boltzmann constant in the new SI units is an exact defined quantity.

So I tried doing this, but then I wondered about something. The 3-dimensional MB distribution formula for energy is: [image: attachment 245686, the 3D Maxwell-Boltzmann energy distribution] where ##P(E≥ E + dE)## is the probability of finding a particle with energy ##E ≥ E + dE##. Since ##E = E_x + E_y + E_z##, does this mean that for a certain energy ##E##, the above MB distribution formula covers every possible combination of values for ##E_x##, ##E_y## and ##E_z## whose sum equals that specific ##E##?

hutchphd Homework Helper If I understand your notation then yes. (Well, within the infinitesimal spherical shell dE to be precise.)

Ok, so this would mean that ##P(E ≥ E + dE) ≠ P(E_x ≥ E_x + dE) \cdot P(E_y ≥ E_y + dE) \cdot P(E_z ≥ E_z + dE)##, right? (Notice the unequal sign.)

hutchphd Homework Helper The probability function is for total energy here. Where are you trying to go?

I'm trying to split the total energy into its 3 dimensions. I would reckon that multiplying the probabilities of each of the three dimensional energies and integrating that over an 8th of a sphere should give the MB distribution for the total energy.

hutchphd Homework Helper If you wish to talk about "Ex" you do not want to be in spherical coordinates. Go a few steps back in any derivation and consider ##p_x^2## in cartesian coordinates. The same math occurs in the 3D random walk for large N: the probability for total displacement x, y, or z is (the same) Gaussian centered at zero for each. The probability of any value of r (distance from the origin) is the chi-square distribution, which should look familiar here. But ##\langle r^2 \rangle = \langle x^2 \rangle + \langle y^2 \rangle + \langle z^2 \rangle##

Isn't it possible to derive the average energy for 1 dimension using a dimensional energy variable such as ##E_x##? For example, integrating the probability to find a specific particle within a certain energy ##E_x ≥ E_x + dE## over infinity: $$\int_0^\infty \frac{1}{Z} \cdot e^{-\frac{E_x}{k_BT}} \, dE_x = 1 \;\rightarrow\; Z = k_BT$$ Multiplying the expression inside the integral by the energy ##E_x## and integrating over infinity would give the average energy in 1 dimension, but this yields ##k_BT## instead of ##\frac{1}{2}k_BT##. Why is it wrong then to say that each dimensional component of energy has on average ##k_BT## instead of ##\frac{1}{2}k_BT##? Why does it have to be a quadratic component of energy to make it correct?

hutchphd Homework Helper I'm sorry but I don't know what you are doing here. You would need to be more explicit. This is covered in every undergraduate statistical mechanics book. Reif did a nice rigorous job as I recall. I will not reproduce it here and suggest a look.

I'm just trying to understand why it is needed to write the one-dimensional energy in terms of its quadratic components (##0.5mv_x^2##) to get the correct average energy in 1 dimension, and why one may not just use the energy component (##E_x##) instead. (I can't find any source that explicitly explains this.)

hutchphd
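A worked version of the standard answer to that last question (this is textbook material along the lines of the Reif reference above, not a statement made explicitly in the thread): in one dimension the Boltzmann factor weights velocities, not energies, so the average of the quadratic form is

$$\langle E_x \rangle = \frac{\int_{-\infty}^{\infty} \tfrac{1}{2} m v_x^2 \, e^{-m v_x^2/(2 k_B T)} \, dv_x}{\int_{-\infty}^{\infty} e^{-m v_x^2/(2 k_B T)} \, dv_x} = \frac{1}{2} k_B T.$$

Integrating directly over ##E_x## with the weight ##e^{-E_x/(k_B T)}## alone silently drops the density-of-states factor coming from the change of variables ##dv_x \propto E_x^{-1/2}\, dE_x##; restoring that ##E_x^{-1/2}## factor in both integrals turns the naive answer ##k_B T## back into ##\tfrac{1}{2} k_B T##.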
2021-09-25 09:41:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884854912757874, "perplexity": 660.853676431201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057615.3/warc/CC-MAIN-20210925082018-20210925112018-00594.warc.gz"}
https://statmodeling.stat.columbia.edu/2019/05/24/against-arianism-3-consider-the-cognitive-models-of-the-field/
## Against Arianism 3: Consider the cognitive models of the field

"You took my sadness out of context at the Mariners Apartment Complex" – Lana Del Rey

It's sunny, I'm in England, and I'm having a very tasty beer, and Lauren, Andrew, and I just finished a paper called The experiment is just as important as the likelihood in understanding the prior: A cautionary note on robust cognitive modelling. So I guess it's time to resurrect a blog series.

On the off chance that any of you have forgotten, the Against Arianism series focusses on the idea that, in the same way that Arianism1 was heretical, so too is the idea that priors and likelihoods can be considered separately. Rather, they are consubstantial–built of the same probability substance. There is no new thing under the sun, so obviously this has been written about a lot. But because it's my damn blog post, I'm going to focus on a paper Andrew, Michael, and I wrote in 2017 called The Prior Can Often Only Be Understood in the Context of the Likelihood. This paper was dashed off in a hurry and under deadline pressure, but I quite like it. But it's also maybe not the best place to stop the story.

An opportunity to comment

A few months back, the fabulous Lauren Kennedy was visiting me in Toronto on a different project. Lauren is a postdoc at Columbia working partly on complex survey data, but her background is quantitative methods in psychology. Among other things, we saw a fairly regrettable (but excellent) Claire Denis movie about vampires2. But that's not relevant to the story.

What is relevant was that Lauren had seen an open invitation to write a comment on a paper in Computational Brain & Behaviour about Robust3 Modelling in Cognitive Science written by a team of cognitive scientists and researchers in scientific theory, philosophy, and practice (Michael Lee, Amy Criss, Berna Devezer, Christopher Donkin, Alexander Etz, Fábio Leite, Dora Matzke, Jeffrey Rouder, Jennifer Trueblood, Corey White, and Joachim Vandekerckhove). Their bold aim to sketch out the boundaries of good practice for cognitive modelling (and particularly for the times where modelling meets data) is laudable, not least because such an endeavor will always be doomed to fail in some way. But the act of stating some ideas for what constitutes best practice gives the community a concrete pole to hang this important discussion on. And Computational Brain & Behaviour recognized this and decided to hang an issue off the paper and its discussions.

The paper itself is really thoughtful and well done. And obviously I do not agree with everything in it, but that doesn't stop me from the feeling that wide-spread adoption of their suggestions would definitely make quantitative research better. But Lauren noticed one tool that we have found extremely useful that wasn't mentioned in the paper: prior predictive checks. She asked if I'd be interested in joining her on a paper, and I quickly said yes!

It turns out there is another BART

The best thing about working with Lauren on this was that she is a legit psychology researcher, so she isn't just playing in someone's back yard, she owns a patch of sand. It was immediately clear that it would be super-quick to write a comment that just said "you should use prior predictive checks". But that would miss a real opportunity.
Because cognitive modelling isn't quite the same as standard statistical modelling (although in the case where multilevel models are appropriate, Daniel Schad, Michael Betancourt, and Shravan Vasishth just wrote an excellent paper on importing general ideas of good statistical workflows into cognitive applications). Rather than using our standard data analysis models, a lot of the time cognitive models are generative models for the cognitive process, coupled (sometimes awkwardly) with models for the data that is generated from a certain experiment. So we wanted an example model that is more in line with this practice than our standard multilevel regression examples.

Lauren found the Balloon Analogue Risk Task (BART) in Lee and Wagenmakers' book Bayesian Cognitive Modeling: A Practical Course, which conveniently has Stan code online4. We decided to focus on this example because it's fairly easy to understand and has all the features we needed. But hopefully we will eventually write a longer paper that covers more common types of models.

BART is an experiment that makes participants simulate pumping balloons with some fixed probability of popping after every pump. Every pump gets them more money, but they get nothing if the balloon pops. The model contains a parameter ($\gamma^+$) for risk taking behaviour, and the experiment is designed to see if the risk taking behaviour changes as a person gets more drunk. The model is described in a DAG in the original post (figure not reproduced here).

Exploring the prior predictive distribution

Those of you who have been paying attention will notice the Uniform(0,10) priors on the logit scale and think that these priors are a little bit terrible. And they are! Direct simulation from the model leads to absolutely silly predictive distributions for the number of pumps in a single trial. Worse still, the pumps are extremely uniform across trials. Which means that the model thinks, a priori, that it is quite likely for a tipsy undergraduate to pump a balloon 90 times in each of the 20 trials. The mean number of pumps is a much more reasonable 10.

Choosing tighter upper bounds on the uniform priors leads to more sensible prior predictive distributions, but then Lauren went to test out what changes this made to inference (in particular looking at how it affects the Bayes factor against the null that the $\gamma^+$ parameters were the same across different levels of drunkenness). It made very little difference. This seemed odd, so she started looking closer.

Where is the p? Or, the Likelihood Principle gets in the way

So what is going on here? Well, the model described in Lee and Wagenmakers' book is not a generative model for the experimental data. Why not? Because the balloon sometimes pops! But because in this modelling setup the probability of explosion is independent of the number of pumps, this explosive possibility only appears as a constant in the likelihood. The much lauded Likelihood Principle tells us that we do not need to worry about these constants when we are doing inference. But when we are trying to generate data from the prior predictive distribution, we really need to care about these aspects of the model. Once the context of the experiment is taken into account, the prior predictive distributions change a lot.
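Here is a minimal simulation sketch of that prior predictive check. It is an editorial reconstruction, not the paper's actual code: it assumes the Lee and Wagenmakers style parameterization in which a participant's target number of pumps is omega = -gamma/log(1 - p) and the probability of taking pump k is logistic in beta*(k - omega); the popping probability p = 0.1, the simulation counts, and the 90-pump cap are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
p_pop, n_sims, max_pumps = 0.1, 2000, 90   # stand-in experiment constants

def pumps_one_trial(gamma, beta, include_popping):
    omega = -gamma / np.log(1.0 - p_pop)   # implied target number of pumps
    for k in range(1, max_pumps + 1):
        z = np.clip(beta * (k - omega), -50.0, 50.0)  # clip for numerical safety
        theta = 1.0 / (1.0 + np.exp(z))    # P(take pump k)
        if rng.random() > theta:
            return k - 1                   # participant stops and banks the money
        if include_popping and rng.random() < p_pop:
            return k                       # balloon pops: the observed data end here too
    return max_pumps

for include_popping in (False, True):
    draws = [pumps_one_trial(rng.uniform(0, 10), rng.uniform(0, 10), include_popping)
             for _ in range(n_sims)]
    print(f"popping={include_popping}: mean pumps={np.mean(draws):.1f}, "
          f"99th percentile={np.percentile(draws, 99):.0f}")
```

Without the popping branch, the Uniform(0,10) priors happily produce trials that run all the way to the 90-pump cap; switching the branch on pulls the prior predictive pump counts down sharply, which is exactly the point being made above.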
Context is important when taking statistical methods into new domains

Prior predictive checks are really powerful tools. They give us a way to set priors, they give us a way to understand what our model does, and they give us a way to generate data that we can use to assess the behaviour of different model comparison tools under the experimental design at hand. (Neyman-Pearson acolytes would talk about power here, but the general question lives on beyond that framework.) Modifications of prior predictive checks should also be used to assess how predictions, inference, and model comparison methods behave under different but realistic deviations from the assumed generative model. (One of the points where I disagree with Lee et al.'s paper is the suggestion that it's enough to just pre-register model comparison methods. We also need some sort of simulation study to know how they work for the problem at hand!)

But prior predictive checks require understanding of the substantive field as well as understanding of how the experiment was performed. And it is not always as simple as just predict y! Balloons pop. Substantive knowledge may only be about contrasts or combinations of predictions. We need to always be aware that it's a lot of work to translate a tool to a new scientific context. Even when that tool appears to be as straightforward to use and as easy to explain as prior predictive checks.

And maybe we should've called that paper The Prior Can Often Only Be Understood in the Context of the Experiment.

Endnotes:

1 The fourth century Christian heresy that posited that Jesus was created by God and hence was not of the same substance. The council of Nicaea ended up writing a creed to stamp that one out.

2 Really never let me choose the movie. Never.

3 I hate the word "robust" here. Robust against what?! The answer appears to be "robust against un-earned certainty", but I'm not sure. Maybe they want to Winsorize cognitive science?

4 Lauren had to twiddle it a bit, particularly using a non-centered parameterization to eliminate divergences.

1. Joachim says: Robust against vagaries, of course. But seriously, thanks for contributing to this special issue! We were bound to overlook things, so it's very gratifying to see so many excellent people helping to build a compendium of good practices.

2. Those who advocate choosing the prior in the context of the likelihood, sampling model or experiment really need to think long and hard about whether they are actually using the Bayesian paradigm, or whether in fact they are just integrating over the likelihood, or a weighted version of it, simply because it feels good to do so. The Bayesian paradigm is more than just an abstract mathematical formula. Sorry to be a bit harsh. I do not mean to cause offence, but that's my personal viewpoint. P.S. Nice to hear you are enjoying a beer (hopefully a real ale) in the best country in the world.

• The fundamental question is "how to express what you know in terms of probability". One of the most useful and fundamental ways to express knowledge is in terms of the predictive distribution. We very often know more about what kinds of data could be seen than we do about how abstract mathematical quantities interact. For example: "voltage observed at temperature T is a smoothly increasing function of temperature that takes on values between 0 and 1 V, and the derivative is at most dv/dT = C for values of C somewhere in the range of 1 to 2 volts per 100 degrees C". What does that mean for the coefficients of a 6-term Fourier series? Hard to express that directly, but easy to adjust the priors on the coefficients until the functions that are drawn have the appropriate properties.
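That adjust-until-it-looks-right loop is easy to mechanize. The sketch below is an editorial illustration, not the commenter's code: the temperature grid, the N(0, scale/k) coefficient priors, and the exact constraint thresholds are all made-up stand-ins. The printed fraction of acceptable prior draws is the signal you would use to tune the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(0.0, 100.0, 200)              # hypothetical temperature grid, deg C

def draw_curve(scale):
    # 6-term Fourier-style series; the N(0, scale/k) priors are a first guess.
    ks = np.arange(1, 7)
    a = rng.normal(0.0, scale / ks)           # sine coefficients
    b = rng.normal(0.0, scale / ks)           # cosine coefficients
    return sum(a[i] * np.sin(k * np.pi * T / 100.0) +
               b[i] * np.cos(k * np.pi * T / 100.0)
               for i, k in enumerate(ks))

def acceptable(v):
    dv = np.diff(v) / np.diff(T)              # derivative in volts per deg C
    return (v.min() >= 0.0 and v.max() <= 1.0 and
            np.all(dv >= 0.0) and dv.max() <= 0.02)   # at most 2 V per 100 deg C

for scale in (1.0, 0.3, 0.1):
    frac = np.mean([acceptable(draw_curve(scale)) for _ in range(2000)])
    print(f"scale={scale}: fraction of acceptable prior draws = {frac:.3f}")
```

In practice you would also change the prior's form (add an intercept, constrain signs, and so on) rather than only its scale, until a decent fraction of draws look like plausible voltage curves.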
• ojm says: Similarly re 'the idea that priors and likelihoods can[not] be considered separately. Rather, they are consubstantial–built of the same probability substance'

I would be tempted to say a model like p(y | theta)p(theta ; alpha) is kinda just a 'frequentist'/standard probability model of y, theta with unknown parameter alpha. Ie alpha maps to p(y,theta ; alpha). The twist being the Bayesian analysis treats alpha as fixed and known whereas the 'frequentist' treats it as unknown and considers each alpha in turn in order to estimate it.

Sometimes the Bayesian analysis does a 'prior robustness' study, which seems to amount to mimicking the standard approach of considering unknown parameters (alpha) for each value in turn.

Which is to say, this is the sort of Bayes that seems like a good idea, it just doesn't really seem like 'Bayes' in the sense of Bayesian inference…

• Andrew says: Naked: You write,

Those who advocate choosing the prior in the context of the likelihood, sampling model or experiment really need to think long and hard about whether they are actually using the Bayesian paradigm, or whether in fact they are just integrating over the likelihood, or a weighted version of it, simply because it feels good to do so.

Let me generalize this for you. Those who advocate method X which is based on some (necessarily) approximate theory Y really need to think long and hard about whether they are actually using Y, or whether they are just using X simply because it feels good to do so.

Sure, why not? I guess we should interrogate all our decisions. Nothing to do with Bayesian statistics in particular, though.

• Andrew: No matter how approximate things get, it is definitely desirable to have a good conceptual paradigm underlying what we do. I know very well that you would not disagree with me on this point. In your analogy, (approximate) theory Y is Bayesian but method X is questionably Bayesian. Therefore what paradigm truly underlies method X? I am not saying that one definitely does not exist. The point I was making is that it would appear not to be the Bayesian one.

3. Andrew says: Dan: That's so Australian of you to identify a backyard with "a patch of sand"!

4. Shravan says: This prior predictive check idea is very nicely exportable into cognitive modeling. Roberts and Pashler 2000 (How persuasive is a good fit?) made a closely related point; I am smitten by their paper a little bit. Many modelers will be shocked if they actually sit down and generate the full range of plausible predictions—essentially the prior predictive distribution—from their models.

• Chris Wilson says: +1. One issue with this is that it highlights how sub-optimal all the models populating popular textbooks and papers are in this regard (including my own here too)! All those models with priors aimed at being "vague", "non-informative", gamma(epsilon,epsilon) (which appears to be a relic of the conjugate-prior-for-Gibbs-sampling era), yadayada… We can do (much) better now – but it comes at the cost of an expanded workflow (all the way, I am assuming, through peer review).
5. An insightful and important post. I was recently asked to present something to a local group of data scientists who are meeting together to work through John K. Kruschke's book, and I chose the last session to do something on Bayesian Work Flow (so I'll be stealing material from other authors on this blog). Without adequate Bayesian Work Flow, Bayesian analyses are likely to do more harm than good. As I commented a few posts ago, it converts a black box model into an interpretable model.

Now, to try to say something potentially helpful. Bayes' theorem is deductive, so as with all deductive reasoning, the real challenge is to discern as clearly as possible how the conclusions are in fact in the premises. The premises are the joint probability model of the parameters and data, and the observed data. It is clear that the posterior is in these (intuitively a slice through the joint distribution of parameters and potential data at the actual data). But Bayesian analysis is inductive, so we need more than clarity in just the deductive step. We need clarity on how empirical reality has been represented in the deductive step and whether or not that is sensible. So Bayesian Work Flow just makes very clear how this Bayesian analysis represents empirical reality, so we can see if it makes sense. Now, making sense is much harder than discerning how; and by interpretable I just mean clear to see how.

• I like this breakdown, the Bayesian math is just formal, the way in which it connects to the world can never be formalized, so we need many informal ways of checking the quality of this connection.

• ojm says: Keith – I largely agree. I just think being truly honest about what this perspective means kind of raises a lot of more radical/interesting questions than perhaps some think/hope/etc. E.g. if we stick to Bayesian principles 'within' a model but abandon them when checking the empirical adequacy of a model, why not just go straight to empirical adequacy for everything? What *principles* guide checking empirical adequacy, if not Bayes theorem? This is important to me as I kind of think the main job of statistics is checking empirical adequacy of theoretical models, which might come from a very different place to where regression-style models come from. I kind of want a theory of statistics that focuses on empirical adequacy, not one that is *conditional* on empirical adequacy. It's not that I don't think things like a Bayesian workflow that includes checking etc aren't a good idea, it's that I think it somewhat ironically amounts to re-discovering that many 'frequentist' ideas are not so bad after all! For better or worse, this question of empirical adequacy seems to be what a lot of 'frequentist' statistics is about (I'm also often tempted to just throw both Bayes and Freq out and go with data analysis + mechanistic/causal modelling).

Also, what is something like a prior predictive check *really* checking? E.g. if no individual p(y|theta) is adequate but a weighted average int p(y|theta)p(theta ; alpha) dtheta is, does this show up? What does the prior predictive distribution = average of the p(y|theta) actually represent? Where does my 'mechanism' live? In p(y|theta) or the average over these? Is the average of a mechanistic model still a mechanistic model? Or is my model really just p(y; alpha) since I average out theta anyway? In which case I am back to doing what seems a lot like frequentist inference for a model p(y; alpha) of observable data y given parameter alpha: for *each* alpha I check if it could have generated y in some sense (e.g. look at the sampling distribution of T(y) under that alpha).
• "Where does my 'mechanism' live? In p(y|theta) or the average over these?"

The mechanism for the particular y should live in p(y|theta), but the mechanism for how theta varies from experiment to experiment should live in the p(theta | Background) or whatever. For example, if I do industrial filtration, and I buy a filter and test it on my process, p(contaminant | filter_coefficient) is a model for how the particular filter I have works. But what makes brand X a good filter company to work with is that from one order to another, filter_coefficient has a small variability. If I have a time series of filtration results as the filter has been changed over and over, then I should be allowing my filter_coefficient to change with each new installed filter… However, I think often people don't do this; they mix together lots of stuff and try to do inference on "the average theta". This is particularly true in stuff like economics, where for example people often talk about "the average effect of policy X on GDP", whereas this isn't a good way to think about the problem.

• ojm says: Daniel – this sounds like a so-called 'random effects' or a 'repeated measurements' model with two levels/scales of 'frequentist' variability, i.e. each is pretty strongly connected to something *observable*: noisy measurements of contaminant for a fixed filter, and variations in filter properties for a fixed company/manufacturing process. To me the essence of a Bayesian model is that probability is also used for e.g. *in principle* unobservable things, or for things that are actually fixed but for which purely epistemic information about them is represented using probability.

• "Filtration efficiency of filter x" is in principle unobservable, like "viscosity of fluid y" or "mood of person z". Only the consequences for how the concentration of contaminants changes, fluid flows through a tube, or a person answers the question are observable.

• ojm says: I mean, it sounds like from your original description you're imagining the variability arises from real physical processes that a random effects model can capture. But regardless, why do I care specifically about E[p(y|theta)] where the expectation is over the prior? What does the result represent precisely? Do I care if a mixture can capture the 'true' process but no individual p(y|theta) can? How would I tell?

• In the example filtration problem a given filter canister has an unknown efficiency that is a constant for that canister… but when we decide which filter company we buy from, it matters not just the canister they send in the first order, but also the reliability that their future orders will remain high quality. In other words, physical variability of the mfg process. So we have both epistemic uncertainty about the given canister and also epistemic uncertainty, caused by mfg variability, about the predictive distribution in the future. As for why you care about E[p(y|theta)]: I'm not sure you would care as much as you'd care about E[y | theta] where the expectation is over the posterior for theta, because this is an estimate of tomorrow's pollution level.

• Another way to say this is that the posterior (which is basically your E[p(y|theta)] normalized) is a device for making decisions… it's the fact that this can be used to make many, many different decisions, which could depend in many ways on different aspects of the problem, that gives it its usefulness.
• ojm says: Again you can obviously model this explicit two level variability – eg within and between canisters, mfg processes etc – fine with any old probability model (eg random effects). But what I'm getting at is, how do you decide when your (eg two level) model adequately represents the observable real world variability? Presumably you take a series of models like p1(y,theta), p2(y,theta) etc etc and compare the observable consequences with real or imagined data. Maybe you average out theta, maybe you don't (eg in your case you might want to examine the random effects part too). But then, rather than eg put a prior over p1, p2 etc, you compute something like a p-value for each model (well, for the usual Bayesian approach you typically just take a single model p1, *maybe* consider some others) to assess empirical adequacy. Why not just emphasise empirical adequacy of a model of variability in the beginning, middle and end of analysis? Genuine question I ask myself all the time when using Bayes.

• Frequentist models don't accept probability over the parameter theta… But assuming you accept the idea of probability over theta and are doing a Bayesian analysis, yes, you still have the model selection issue: p1 vs p2 vs p3 etc. There are a number of ways you could handle this. These days I think the most useful way to handle it is using Bayesian decision theory. Decide on a utility describing how the model will be used and choose the model that gives the highest average utility. Note that utility and accuracy aren't necessarily the same thing. For example, the "only thing that matters" to you might be, say, the behavior of the filtration system in the long run average, and you're willing to put up with occasional spectacular failures… they just cost you the cost of throwing the filter away and replacing it from one off the shelf… Or maybe when the thing fails it's a major problem, so you accept lower typical efficiency for a design that is extremely reliable… In the second case, maybe accuracy in your model's prediction of the typical efficiency is less important, but it should represent the tail of the distribution of outcomes very well… In the first case, maybe you really don't care about the tail but you should predict the typical efficiency very well…
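ojm's two-level point is easy to make concrete. The sketch below is an editorial addition with made-up numbers: it simulates between-filter (manufacturing) spread plus within-filter measurement noise, then recovers the two variance components with a plain method-of-moments split, no priors anywhere.

```python
import numpy as np

rng = np.random.default_rng(7)
n_filters, n_meas = 12, 30
sigma_between, sigma_within = 0.010, 0.005     # hypothetical true components

# Contaminant passing each filter: filter-specific mean + measurement noise.
filter_mean = rng.normal(0.05, sigma_between, n_filters)
y = filter_mean[:, None] + rng.normal(0.0, sigma_within, (n_filters, n_meas))

within_var = y.var(axis=1, ddof=1).mean()                       # within-filter
between_var = y.mean(axis=1).var(ddof=1) - within_var / n_meas  # between-filter
print(np.sqrt(within_var), np.sqrt(between_var))                # ~0.005, ~0.010
```

The subtraction in the last step uses the standard identity that the variance of the group means equals the between-group variance plus the within-group variance divided by the number of measurements per group.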
Re: doing Bayesian decision theory on the p1, p2 etc – this means you *do* need a prior over these. In the case of p(y,theta ; alpha) you need a prior over alpha. So you wouldn’t be doing prior or posterior predictive checks, but more standard Bayes over models indexed by alpha. (Though again you would have a p(alpha | gamma) and fixed gamma etc to contend with). • sure you can model data variation, but you can’t model parameter uncertainty. suppose your model is concentration follows an ODE with dc/dt =f(e,c,t,q)+err(t,c) e is your unknown efficiency, c is your concentration, t is time and q is additional factors related to system operation. the error has a time and concentration related variability due to measurement instrument. Suppose you change the filter every month. during a given month, the only Frequentist random component here is the observation or measurement error. you could put a Frequentist random component over the whole series of e values that you will get from future purchases, if such a thing exists (perhaps next month you plan to change the equipment?), but it has nothing to do with your knowledge of the true value of the particular e for this particular time period with this particular filter. in other words, the Frequentist will admit the random effect over multiple e but not a probability over the particular e for this experiment which is different from the frequency with which the supplier provides filters that have a given e. suppose you change filters every month, it’s mid month, you want to predict the concentration tomorrow. the Frequentist model says e is what it is, there’s no probability associated. only the measurement errors have frequencies. the Frequentist model will happily admit probability over next months e because unwrapping a new filter is like pulling a handle on a slot machine… it’s a bit perverse really which is why all models of this form are actually Bayesian in practice and the “Frequentist random effects models” are just Bayes with a flat prior and a MAP estimate. in a Frequentist setting the fact that a thing varies in time or with new equipment etc means you can put a probability distribution over it with shape equal to the shape you will get for repeated samples of the thing… if you give it a different shape you are objectively wrong according to the theory. • As for a prior over the models. I figure if I could think them up to code them in the first place, I must have some idea that they are worth considering. The next step where I give them some numerical quantity that tells me how much they’re each worth considering doesn’t bother me much, especially because I might often just put 1/N for each if I don’t have much other reason to prefer one to another…. The bigger problem which you’ve pointed out multiple times is when there is some “other unknown unconsidered model”. I still think working hard on the utility function and then choosing the model that does the best job is a couple orders of magnitude better than the usual alternatives. For example, suppose whenever the concentration of the pollutant exceeds some level, I have to shut down my machine and do a filter change and flush the existing fluids… and it costs me some amount of money. Suppose that if I can predict that this is going to happen in the next day, that I can schedule this shutdown in a way that doesn’t cost much money. Obviously the performance of my model in the right tail of the concentration distribution is critical here. 
It could do a terrible job predicting concentrations down below half the cutoff, but as long as it does a good job predicting when concentrations will exceed the cutoff ahead of time, I am going to want that model. p-value-based analysis of this model might easily reject it as failing to match the observed frequency of concentration, by a LOT, in the range of low concentrations, but *in the portion of phase space where it matters* it could still be by far the right model. You just won't capture that without the utility-based analysis.

• Whoops, I didn't mean to put the error term in the diffeq; imagine c = f(t) + err(c,t) with f defined by the diffeq… Teach me to do math first thing in the morning on my phone before getting out of bed… 🙀

• ojm says: Re: "it's a bit perverse really which is why all models of this form are actually Bayesian in practice and the 'frequentist random effects models' are just Bayes with a flat prior and a MAP estimate."

Sigh :-(

Re: priors over models and decision theory. It's OK if that's what you want to do because that's the Bayesian way or whatever, but back to the general topic of the post (!), it means the role of prior or predictive *checks* is unclear in your approach. You don't seem to need them (which is OK I guess). But for those who do want them, what is the rationale? And of course it's OK to not have one clear philosophy or whatever, it just opens up a lot of other questions, like why not do something else entirely? Eg focus on the underlying rationale – empirical adequacy? – at the beginning, middle and end of analysis?

• I mean, I'd be interested to see someone do a frequentist analysis of a diffeq model not involving maximum likelihood, you know, with sampling distributions of the estimator of the nonlinear parameters and no priors and p-values, and even someone should do some kind of severity analysis… But whatever. Predictive checks are exactly what I'm talking about with utilities. Write the utility in terms of your accuracy at posterior prediction, in terms of the frequency of costing you various amounts of money or whatever, and choose your model on that basis.

• Maybe I'm not being clear, so I'll try to explain more carefully:

Approach 1) Use posterior predictive checks, calculating p-values for data vs prediction; try to make decisions based on some kind of frequency matching of probability of error vs frequency of error…

Approach 2) Use posterior predictive checks to maximize the utility of the model in doing its main actual job, which is not to have error e with frequency p(e) but to make you make good decisions… So perhaps Cost(actual, prediction) is a function that only has appreciable magnitude when actual is large (lots of pollutants) and prediction is small (model thought things were fine)… and using your data you estimate mean(Cost(actual, prediction)) averaged over the posterior for each model, and choose the model that gives you the best performance…

The model that gives the best performance is the one which correctly predicts the large deviations in pollution, regardless of what happens when actual pollution is small… it could be terribly "calibrated" but always know when something was about to break and leak pollutants everywhere… and you'd pick it.

• Actually, I don't think you really need priors over the models, right?
You have p1, p2, p3; you fit each one; you specify a utility in terms of something important to you: U(actual, predicted, etc…). Using the posterior for each of p1, p2, p3 you calculate E(U(…)), where the expectation is taken over the example cases / experiments you performed as well as the posterior parameters in each model…. and even the expected future cases you'll use the model for… then choose the model that produces the least cost or most utility or whatever.

• ojm says: (People *do* do that…)

But back to the decision theory case: it sounds like you're doing e.g. maximin expected utility or something over the space of your models. Or even just max likelihood for your hyperparameter… Why not weight each model by a probability?

• Chris Wilson says: ojm, to answer your last question. If you are in the M-closed setting, and are interested in converging on ("learning") the true model, that can be a good approach. Not so much in M-open. In that case, things like stacking are better. Now, what Daniel is talking about doing is using Bayesian decision theory, or expected utility. Expectations collapse distributions into scalars, enabling neat things like ordering. Whether this is wise is context dependent. My $0.02

• > (People *do* do that…)

A few maybe; I doubt the bit about severity, as that seems to be a newish idea that hasn't exactly been broadly developed. The vast majority of cases I've seen where people fit things like diffeq models have been simple least-squares on the observations, or maybe maximum likelihood on non-normal errors. I don't count those applications as frequentist, as they don't seem to rely on concepts strongly related to testing the frequency properties of models, and they have a very simple Bayesian interpretation. I note that Mayo, a major champion of frequency-based modeling, routinely dismisses the "likelihood principle", so I don't think I'm entirely off base in thinking that max-likelihood isn't quite the same thing. Frequentism isn't just "Bayes without a prior" but rather a procedure where only objects with long run frequency can have probability and where sampling distributions are the primary object of interest.

Back to the case of decision theory. Suppose you have 3 tools that can be used to fix your ancient tube TV set. Each one has some chance of working based on what might be wrong with your TV. You could have A, B, or C going on with your TV, and each tool has p1(Fixed | A), p1(Fixed | B) … p3(Fixed | C). Your utility is U(Fixed) = 1, U(Broken) = 0. You can calculate the expected utility for each tool… and choose to use the tool that has the highest expected utility under your model for what's wrong. You need a prior and/or posterior over what is likely to be wrong, but you don't need a prior over "which is the *true* tool".

If you view models as just "tools to make decisions" then the situation is totally symmetric. You have a model for your future usage of filtration (could be a prior, or a fitted posterior), and you have a utility over filtration outcomes in that future usage; you can choose which model to use to make decisions about the filtration maintenance without a prior over "which is the *true* model". You might argue: how to choose that model of the future usage of filtration? But in general this is something about your own intention: "I'll be running a filter for the next 18 months while we refine corn syrup" or whatever.
Could be "I'll be running experiments on mice to see how they respond to dietary changes every week for the remaining time on my grant" or whatnot. These are not exactly M-open.

• It might be hard to see what I'm talking about without some kind of example of how the utility would vary across models. Sticking with filtration, suppose you have three models of the filtration process.

Model 1) Built on a relatively detailed analysis of how filtration paper degrades and starts to pass particles, so it's more "true" to the physics, and it has excellent predictive performance for the first 5 days and very good performance out to 10 days, but has a tendency to underestimate the degradation at long service life for lack of a nonlinear process of agglomeration that you don't have a good model for, so it predicts the filter should stay in service too long.

Model 2) Uses a simple non-physical constant decay and just relies on fitting this decay rate. Doesn't predict filtration efficiency well at any point in time, but tends to decay fast enough that it predicts you should change your filter well before you really need to; at least it doesn't leave you high and dry with a full day of downtime when your filter suddenly breaks.

Model 3) Includes a simple version of the nonlinear agglomeration process missing from 1, but because it's too simplistic, it has nonphysical oscillations throughout the timeframe and does a very poor job of predicting filtration in the early stages due to early onset of decay and oscillations; however, it falls off in efficiency rapidly at long service life in exactly the way that the real filter does, and reliably predicts end of life to within 12 hours at least 48 hours ahead of the falloff.

Obviously, if your main concern is to get the end-of-life behavior correct, you use Model 3 even though it has all kinds of nonphysical oscillations and does a poor job of predicting for the first several days, because what matters to you is that it give you a day or two of heads up right before the filter conks out so you can schedule a replacement during routine down-time. Model accuracy checking would tell you that of the 3 the first one was "most correct", because for the first N days it does an *excellent* job of predicting the filtration in a regime where you really don't need an excellent job. I don't need a prior over "the probability that model X is correct" to make a decision to use Model 3; I just need knowledge of how I'm using the model, and knowledge of how the model works in terms of posterior predictive utility.

• ojm says: Re: m-closed, m-open, stacking etc. IMO these are just fancy ways to admit what everybody else was already saying – e.g. there are many things it doesn't make sense to put probabilities over and other ideas are needed. Cool, but now all the usual Bayesian formalism *technically* goes out the window… If you want a fancy way of phrasing one of my questions – why stick with a formalism designed for an M-closed world when faced with an M-open world? Or, how should we interpret M-closed tools in an M-open world? Etc.

• Andrew says: Ojm: It's turtles all the way down. We use Bayesian inference conditional on a model because it helps us solve lots of problems that involve variation and uncertainty.
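Daniel's three-model story above maps directly onto the E(U) recipe he sketched earlier. Below is a toy editorial version, in which every number and "model" is a fabricated stand-in: score each candidate's predictions with an asymmetric cost that mostly punishes missed end-of-life events, and pick the cheapest model.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(30)                                  # days in service
true_eff = 1.0 - 0.01 * t - 0.5 * (t > 25)         # toy "real" filter: late collapse

# Fabricated stand-ins for three fitted models' predictions:
preds = {
    "detailed physics (misses collapse)": 1.0 - 0.01 * t,
    "constant decay (too pessimistic)":   1.0 - 0.03 * t,
    "crude agglomeration (noisy, catches collapse)":
        1.0 - 0.01 * t - 0.5 * (t > 25) + rng.normal(0, 0.05, t.size),
}

def expected_cost(pred, threshold=0.6, miss_cost=100.0, error_cost=1.0):
    missed = (true_eff < threshold) & (pred >= threshold)   # unpredicted failure day
    return miss_cost * missed.mean() + error_cost * np.abs(pred - true_eff).mean()

for name, pred in preds.items():
    print(f"{name}: expected cost = {expected_cost(pred):.2f}")
```

The "detailed physics" stand-in has the smallest average error over most of the series but pays the large miss cost, so the utility ranking disagrees with a plain accuracy ranking, which is the point of the example.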
• ojm, Andrew: if you want to know the best range of parameters to put into your filter model, you run Bayes on the filter data and model… It gives you a posterior distribution, and the distribution tells you, IF the model were a potentially accurate and self-consistent one, which numerical values would make it accurate and self-consistent. The usefulness of a Bayesian model and its superiority over point estimates remain even if it doesn't solve every problem. In my filter example, I need the Bayesian posterior distribution so I can figure out how well each model does when it's tuned into a self-consistent state. I can do that with maximum likelihood as well, but it's definitely the case that the Bayesian posterior gives a better indication of the utility than just a max-likelihood point estimate. The prior can lead you astray, but it can also easily lead you to a better decision. Computers obviously lead to better calculations of weather predictions even if software bugs are also possible. I'm not a big fan of the m-open vs m-closed concept. Inference about *the one true model* is not as important, or even feasible, as the choice of a sufficiently useful model after tuning it. M-open is about being logically open to the vastness of model space… fine, but here and now, which of the 3 weather forecasts should I use to decide whether to go sailing?

• ojm says: > We use Bayesian inference conditional on a model because it helps us solve lots of problems that involve variation and uncertainty.

That's fair – so do I, often! I ask these questions as someone who has been using the 'Gelman-Box-Rubin' workflow for a while and been telling people to do predictive checks since I learned about them. These questions are as much for me as for anyone else! But when it comes to these more general questions – like how do we properly check models, define robustness, work in M-open worlds, what does a prior/posterior predictive check actually mean etc – I can't help but wonder if it's worth taking a broader view. Eg adapt to the world as it is rather than trying to adapt the world to fixed tools.

• ojm says: Eg I had a few conversations with Laurie Davies about this sort of thing a few years ago. I kept trying to stick with the usual Bayes Bayes Bayes answers to his questions but it eventually just felt like epicycles and rationalisation. Kind of fun to ask what it would look like to throw it all out and start over…

6. jd says: Great post. I really like the last line – "in the context of the experiment." This may be a dumb question – let's say I am using a negative binomial model for count data, where I know that it is physically impossible to exceed a certain count in my data in the real world. So, I can set priors on the shape parameter, intercept, and say a binary treatment effect, where I can run prior predictive checks and predict data that is within the realm of possibility. But let's say I have a bunch of group level effects, and I put something like half-normal(0,1) on the sd, which seems reasonably conservative on the logit scale if I don't know much. But now when I run a bunch of prior predictive checks, I get data that is well outside of what is possible, because of all these group level effects. It seems like in this scenario, the more priors you have to sample from, the smaller they would need to be in order to get data that is still within the realm of possibility… should one leave out group level effects from prior predictive checks?

7.
In the just-finished paper that I did read, there is a distinction being made between the likelihood and the experiment which seems unnatural to me. I thought an adequate likelihood would reflect anything that is informative in the experiment. That is, the "model for how the observations you got to see were generated" part of the joint distribution, if adequate, won't miss anything informative. In fact, it would define what is informative.
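A quick way to see jd's point (comment 6) in numbers. Everything here is an editorial stand-in (a Poisson likelihood instead of the negative binomial, an assumed N(0,1) intercept prior on the log scale), but the mechanism is the same: each extra group-level sd with a half-normal(0,1) prior adds roughly one unit of variance on the log scale, so the prior predictive tail explodes as terms accumulate.

```python
import numpy as np

rng = np.random.default_rng(11)

def prior_predictive_max(n_group_terms, n_sims=5000):
    counts = []
    for _ in range(n_sims):
        eta = rng.normal(0.0, 1.0)                # intercept prior (assumed N(0,1))
        for _ in range(n_group_terms):
            sd = abs(rng.normal(0.0, 1.0))        # half-normal(0,1) prior on each sd
            eta += rng.normal(0.0, sd)            # one group-level deviation
        counts.append(rng.poisson(np.exp(min(eta, 20.0))))  # cap to avoid overflow
    return max(counts)

for k in (0, 2, 5, 10):
    print(f"{k} group-level terms -> max prior predictive count: {prior_predictive_max(k)}")
```

So rather than dropping the group-level terms from the check, the usual move is to let the check inform tighter sd priors, since the implied scale compounds across terms.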
2019-09-22 21:13:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5683318972587585, "perplexity": 962.9459251675322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575674.3/warc/CC-MAIN-20190922201055-20190922223055-00074.warc.gz"}
http://cdm.link/2015/06/8-bit-remake-hasselhoffs-true-survivor-best-thing-weve-heard-week/
Okay, we hit some sort of nerd singularity just now. Start with David Hasselhoff’s cheeky, cheesy “True Survivor.” Remake it on the 8-bit SidTracker 64 app. You’ll swear all of this actually happened in the 80s, even if it didn’t. Retrorgasm. And yes, this gem is included in the app. Musical arrangement: Fredrik Segerfalk
2018-08-22 03:55:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9136123657226562, "perplexity": 13228.101573470607}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219469.90/warc/CC-MAIN-20180822030004-20180822050004-00426.warc.gz"}
https://socratic.org/questions/how-do-you-solve-by-substitution-2x-3y-1-and-y-x-8
# How do you solve by substitution 2x + 3y = 1 and y = x – 8?

Jun 9, 2015

$2x + 3y = 1$ .......equation $(1)$

$y = x - 8$ ...........equation $(2)$

Substitute equation $(2)$ into equation $(1)$ to find $x$.

#### Explanation:

$2x + 3y = 1$

$2x + 3(x - 8) = 1$

$2x + 3x - 24 = 1$

$5x = 25$

$x = 5$

Substituting $x$ into equation $(2)$ to find $y$:

$y = x - 8$, so $y = 5 - 8 = -3$.

The solutions for the system of equations are $x = 5$, $y = -3$.
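A quick mechanical check of the arithmetic (an editorial addition using sympy, which the original answer does not mention):

```python
from sympy import symbols, Eq, solve

x, y = symbols("x y")
# Solve the same system: 2x + 3y = 1 and y = x - 8
print(solve([Eq(2*x + 3*y, 1), Eq(y, x - 8)], [x, y]))  # -> {x: 5, y: -3}
```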
2019-10-23 19:08:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 18, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7093604207038879, "perplexity": 2778.3744886621766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987835748.66/warc/CC-MAIN-20191023173708-20191023201208-00293.warc.gz"}
https://www.kim-borg.com/rvb-new-tcasu/how-to-draw-octane-3bfe79
# how to draw octane

Published: 11 January 2021. Feb 23, 2020.

Hello! Welcome to my channel!!! Octane (Apex Legends), you are a nightmare to draw and I'm never drawing your mask again. If you want to see me draw other stuff, jus… Hello guys, I hope you enjoy this new video. Sorry for the sketchy audio. Only used a grey pencil for this. "Draw For Fun": follow along to learn how to draw Octane from Apex Legends, super easy, step-by-step drawing lesson. Follow along with the step-by-step drawings and instructions below to also draw … Poorly Drawn Apex Legends — Can you draw Octane, Wattson, and Revenant please? Answer: I tried. #Elliott Witt #Apex Mirage #Apex Legends #Apex Bloodhound #Apex Octane #Bloodhound #Octane #Mirage #My art #Sketches #I'm sorry for mess #Very tired from a …

Octane is a Legend introduced in Season 1 that is locked from the base game. This Legend can be unlocked by using digital currency: pay 12,000 or 750; or by buying the Champion Edition. Octane is a high-speed Offensive Legend, as the name implies. His Stim is great for closing in on opponents and covering large distances.

Product: Apex Legends. Platform: Microsoft Xbox One. Please specify your platform model: Microsoft Xbox One X. What is your gamertag/PSN ID/EA Account name? WH1SPAZ. Please provide your squad mates' gamertag/PSN ID/EA Account name if possible. Are you using any software with an overlay? How to reproduce: use Octane's Stim barehanded, then try to swap weapon. When I try to equip a weapon after using Octane's Stim barehanded, sometimes I can't equip or swap weapons until the Stim runs out. Normally it'll let you, but sometimes you can't. Happened for me with a shotgun in slot 2, if that makes any difference.

How to draw the Octane in Rocket League: rocket-powered cars meet soccer in Psyonix's successful title Rocket League. Step-by-step beginner drawing tutorial of the Octane racer in Rocket League.

Kernel Settings: the Octane render engine has 4 different kernel types for processing renders. PMC is the most processor intensive, and would normally only be used when things like caustics are involved, but I prefer the result. I tend to always use PMC, though path tracing is sufficient. With your chosen model loaded, open the Octane Settings window and select the Materials tab. The left part lists all available/used materials; the middle part lists current scene models with their available surfaces.

Drawing realistic eyelashes is really hard to accomplish. You can't just draw curved lines all over the eyes and expect them to look real. I recreated the eyelashes from a real eye and show you how I did it below.

On the bow, draw length adjustment using this inner cam variant is as easy as it gets. Simply adjust the letter knobs to adjust the length by 1/2". The higher the letter, the shorter your draw length will be (to paint a clearer picture, "A" is the longest draw length for the cam, with 1/2" …

Octane is a hydrocarbon and an alkane with the chemical formula C8H18, and the condensed structural formula CH3(CH2)6CH3. Octane has many structural isomers that differ by the amount and location of branching in the carbon chain. Including stereoisomers, there are 24 stereoisomers of C8H18. One of these isomers, 2,2,4-trimethylpentane (commonly called iso-octane), is used as one of the standard values in the octane rating scale.

Draw the structures of octane and isooctane. I need to illustrate 10 distinct isomers of an octane. First, draw the straight-chain octane. Next, draw all the heptanes: they will have a methyl group. 2-Methylheptane, 3-Methylheptane (2 enantiomers), 4-Methylheptane. Next come the hexane isomers: they will have either two methyl groups or an ethyl group. The alkyl group names can be used in common nomenclature. Where there can be a double or triple bond, draw a dotted line (-----) for the bond. The compound drawn above is actually trans-2-octene, because the two hydrogens are attached to different sides of the double bond. Remember that the double bond is placed between carbon 2 and carbon 3. Now for the tricky part.

How to draw a full structure of 2,3-dimethyl octane and 3-hexyne? I need to draw the structural formulas for the following molecules: 2,3,4,5,6,7-hexamethyl-octane. Question: draw the complete structure of isooctane (show all hydrogen atoms). Draw the Lewis structure and resonance for the molecule (using solid lines for bonds); draw only the lone pairs found in all resonance structures, and do not include the lone pairs that are not on all of the resonance structures. Draw condensed and skeletal structures for butane, hexane, octane, and decane. Draw the structure and give the systematic name of a compound with molecular formula C5H12 that has a. only primary and secondary hydrogens; b. only primary hydrogens; c. one tertiary hydrogen; d. two secondary hydrogens. In each case, draw and name two structures that match the description: (a) an isopropylheptane, (b) a diethyldecane, (c) a cis-diethylcyclohexane, (d) a trans-dihalocyclopentane, (e) a (2,3-dimethylpentyl)cycloalkane, (f) a bicyclononane.

Naming the bicyclic compound: start by drawing the parent chain, octane. So let's draw out octane. I have 1, 2, 3, 4, 5, 6, 7, 8 carbons; there were 8 total carbons, so it is octane. I know that whatever I draw next has to have eight carbons. There's only one carbon in our shortest path, so we put that in there. And then finally, the shortest path excluding our bridgehead carbon. So the final IUPAC name for this molecule is 6,8-dimethylbicyclo[3.2.1.]octane. I draw a few other cyclic structures to make sure this pattern holds true: for example, if I used cyclobutane for the parent structure instead of cyclooctane, I again get 16 H no matter how I attach the other 4 carbons to the cyclobutane parent.

4-Methyloctane. Molecular Weight: 128.25 g/mol. Predicted data is generated using the US Environmental Protection Agency's EPISuite™. Log Octanol-Water Partition Coef (SRC): Log Kow (KOWWIN v1.67 estimate) = 3.55. Boiling Pt, Melting Pt, Vapor Pressure Estimations (MPBPWIN v1.42): Boiling Pt (deg C): 136.49 (Adapted Stein & Brown method); Melting Pt (deg C): -10.55 (Mean or Weighted MP); VP (mm Hg, 25 deg C): 11.5 (Mean VP of Antoine & …

The distribution of n-octane in blood, liver, kidney, and brain of mice was studied at different inspired air concentrations and after different exposure times. The air concentrations varied between 10 and 10,000 ppm and the exposure time between 0.5 and 24 hr.
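Since the chemistry fragments above quote molecular formulas and a molecular weight, here is a quick cross-check. RDKit and the SMILES strings are editorial additions (the page names no tools); the printed results, C8H18 / about 114.23 g/mol for octane and C9H20 / about 128.26 g/mol for 4-methyloctane, line up with the figures quoted above.

```python
# Cross-check formulas and weights of the alkanes mentioned above with RDKit.
from rdkit import Chem
from rdkit.Chem.Descriptors import MolWt
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

smiles = {  # SMILES are my own encodings of the named compounds
    "octane": "CCCCCCCC",
    "isooctane (2,2,4-trimethylpentane)": "CC(C)CC(C)(C)C",
    "2,3-dimethyloctane": "CCCCCC(C)C(C)C",
    "4-methyloctane": "CCCC(C)CCCC",
}
for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name}: {CalcMolFormula(mol)}, {MolWt(mol):.2f} g/mol")
```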
The higher the letter, the shorter your draw length will be (to paint a clearer picture, "A" is the longest draw length for the cam, with 1/2" … ]octane. 5.9k votes, 235 comments. Start by drawing the parent chain, octane. It's where your interests connect you with your people. I draw a few other cyclic structures to make sure this pattern holds true. It took me about an hour to make. I am much more familiar with Cinema 4D and Octane. This sounds like a homework question so I don’t feel comfortable giving you a straight up answer. ‌ When I try to equip weapon after using Octane's Stim barehanded sometimes, I can't equip/swap weapons until Stim runs out. c. one tertiary hydrogen. Simply adjust the letter knobs to adjust the length by 1/2". Predicted data is generated using the US Environmental Protection Agency’s EPISuite™. This full solution covers the following key subjects: draw, isooctane, octane, structures. The compound drawn above is actually trans-2-octene, because the two hydrogens are attached to different sides of the double bond. -Thanks *didn't know how to spell >.< 1 Structures Expand this section. The vector stencils library "Conformations" contains 32 symbols of ring conformations, Newman and Fisher projections for chemical and biochemical drawing the molecular models and structural formulas of organic molecules and biochemical metabolites, the conformers spatial structures of organic molecules, the schemes of stereospecific chemical reactions in organic synthesis. They will have a methyl group. I have octane. 2005-03-26. Remember that the double bond is placed between carbon 2 and carbon 3, so you'll have Now for the tricky part. Step by beginner drawing tutorial of the octane racer in rocket league. With your chosen model loaded, open the Octane Settings window, and select the materials tab. 3 Chemical and Physical Properties Expand this section. MichaelPeguero87 Tue, 03/27/2012 - 00:22. How? And what I mean by that is here. You work systematically. 2,3,4,5,6,7-hexamethyl-octane I need to draw the structural formulas for the following molecules. I also need to render reflections, displacements, and info passes, so I really hope there is a solution with Octane. Draw the Lewis Structure & Resonance for the molecule (using solid lines for bonds). 2 Names and Identifiers Expand this section. So the final IUPAC name for this molecule is 6,8-dimethylbicyclo[3.2.1. Product: Apex Legends Platform:Microsoft XBOX One Please specify your platform model. Looks like how to draw octane using ArtStation from great Britain eight carbons recreated the eyelashes from real. To look real you how i did it below triple bond, draw and I’m never drawing your again... To Pounds ( £ ) adjustment using how to draw octane inner cam variant is as easy as it gets by drawing. Name implies double bond is placed between carbon 2 and carbon 3, so you 'll have Now the! Are attached to different sides of the octane rating scale for you that! ) previous question Next question Get more help from Chegg triple bond, draw a other... Following molecules between 10 and 10,000 ppm and the exposure time, between 0.5 and 24.... You how i did it below eight carbons ( 2 enantiomers ) 4-Methylheptane,. A system that works for you discover yourself, discover yourself, and over...!!!!!!!!!!!!!!. A dotted line ( -- -- - ) for the following molecules what the app is perfect.... Each case, draw a few other cyclic structures to make sure this pattern holds true opponents covering... To my chanel!!!!!!!!!!!!!!!!. 
Question Get more help from Chegg used as one of these isomers 2,2,4-trimethylpentane. Path excluding our bridgehead carbon called octane you recommend for this problem find... Chanel!!!!!!!!!!!!!!!. With their available surfaces currency: pay 12,000 or 750 ; or by the! And 2818 solutions provide your squad mates ' gamertag/PSN ID/EA Account name if possible need. Stuff you love for me with a shotgun on slot 2 if that makes any difference as gets... Open the octane Settings window, and Revenant please the octane Settings window, and 2818 solutions so whatever recommend. It 's where your interests connect you with your people info passes so... Are a nightmare to draw the Complete Structure of isooctane ( show all Hydrogen Atoms.! Structures to make sure this pattern holds true currency: pay 12,000 or 750 ; or by buying the Edition! Mask again you recommend for this problem is find out a system that works for.. Window, and that is locked from the base game i need to draw dotted! Ratings 277k ratings See, that’s what the app is perfect for can you draw,... Getting … how to spell >. < it looks like you’re using ArtStation from Britain! 10,000 ppm and the exposure time, between 0.5 and 24 hr so it is octane spell > . < it looks like you’re using ArtStation from Britain! For the following molecules one carbon in our shortest path excluding our bridgehead carbon 's one. Data is generated using the US Environmental Protection Agency’s EPISuite™ molecule ( using solid for! Protection Agency’s EPISuite™ only one carbon in our shortest path excluding our bridgehead carbon C '' _8 '' H _18... 277K ratings See, that’s what the app is perfect for to Pounds £! In Season 1 that is locked from the base game can just draw curved lines over... Is a high-speed Offensive Legend as the name implies did n't know how draw... 4D and octane in the octane rating scale render reflections, displacements, and that is from. It looks like you’re using ArtStation from great Britain eye and show how... Question Next question Get more help from Chegg When i try to swap weapon Answer 100 % 13. Methyl groups or an ethyl group ratings ) previous question Next question Get more help Chegg! 2020 - Helo Welcome to my chanel!!!!!!!! Season 1 that is locked from the base game and then, there were 8 total carbons so... New video i draw a full Structure of isooctane ( show all Hydrogen Atoms ) never your.!!!!!!!!!!!!!!!!!!!!. Equip weapon after using octane 's Stim barehanded sometimes, i ca.. Actually trans-2-octene, because the two hydrogens are attached to different sides of the bond... On slot 2 if that makes any difference discover yourself, discover yourself, and Revenant please how to draw octane ''! Lists all available/used materials, the shortest path system how to draw octane works for you octane Wattson. On slot 2 if that makes any difference time, between 0.5 and 24 hr group..., discover yourself, and decane whatever draw Next has to have eight.... These isomers, 2,2,4-trimethylpentane ( commonly called iso-octane ) is used as one of these,. And 3 hexyne 1.5m ratings 277k ratings See, that’s what the app is perfect for the from! ; or by buying the Champion Edition Including stereoisomers, there were 8 total carbons, decane... For bonds ) Legends — can you draw octane, Wattson, and that is called octane,! You ca n't them to how to draw octane real octane apexlegends you are a nightmare to draw the Complete Structure isooctane..., that’s what the app is perfect for window, and how to draw octane draw. 
( £ ) Atoms ) Environmental Protection Agency’s EPISuite™ the eyelashes from a eye! West Elm Sisal Rug, Is Flame Princess Dating Cinnamon Bun, Bikeroo Oversized Comfort Bike Seat, Dog Trainer Certification Canada, Mediterranean Diet Without Tomatoesbest Saddle For Touring, The International School Bangalore Principal, Morrowind Levitate Ring, Thai Restaurant Wellington, Recent Advances In Impression Materials, Hyundai Generator 3,800, Jute Fibres Are Decomposed By, Logitech Z200 White, Mexican Art Online Store,
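Several of the chemistry fragments above turn on a single fact: every acyclic C8 alkane isomer shares the molecular formula C8H18, while putting the eight carbons into a single ring (whatever the parent) drops it to C8H16. A small sketch checking this — my own illustration, not from the page; it assumes RDKit is installed, and the SMILES strings are ones I chose:

```python
# Verify molecular formulas for a few C8 skeletons (assumes RDKit is available).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

examples = {
    "octane (straight chain)": "CCCCCCCC",
    "2-methylheptane": "CC(C)CCCCC",
    "2,2,4-trimethylpentane (iso-octane)": "CC(C)CC(C)(C)C",
    "cyclooctane": "C1CCCCCCC1",
    "1,2-dimethylcyclohexane": "CC1CCCCC1C",
}
for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name}: {rdMolDescriptors.CalcMolFormula(mol)}")

# The three acyclic isomers all print C8H18; the two single-ring skeletons print C8H16.
```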
2021-04-21 04:44:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3106476068496704, "perplexity": 6812.597931609687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00133.warc.gz"}
http://mathhelpforum.com/differential-geometry/162173-image-increasing-function.html
# Thread: Image of an Increasing Function

1. ## Image of an Increasing Function

Hi,
Could someone help me in proving this statement:
Let I be an interval and assume that f: I ---> R is an increasing function. Prove that if the image f(I) is connected then f must be continuous.
Thanks a lot

2. Originally Posted by AKTilted
Hi, Could someone help me in proving this statement: Let I be an interval and assume that f: I ---> R is an increasing function. Prove that if the image f(I) is connected then f must be continuous. Thanks a lot
What have you tried?

3. So far, I've written down the definition of an interval as follows: for all a, b in S and any c with a < c < b, c is also in S.
Let S1 = S ∩ (-∞, c) and S2 = S ∩ (c, ∞).
Then I've assumed f is not continuous at a point a and started an epsilon-delta argument. I think it has something to do with f(c) + epsilon/2, but I can't complete the proof...

4. Use the notation $f(c+)$ for the limit on the right at $x=c$. Likewise for the limit on the left, $f(c-)$. Monotonic functions are quasi-continuous, so both exist. If $f$ is not continuous at $x=c$ then either $f(c-) < f(c)$ or $f(c) < f(c+)$. If $f(c-) < f(c)$, consider $\left( { - \infty ,f(c - )} \right] \cup \left[ {f(c),\infty } \right)$.

5. Originally Posted by AKTilted
So far, I've written down the definition of an interval as follows: for all a, b in S and any c with a < c < b, c is also in S. Let S1 = S ∩ (-∞, c) and S2 = S ∩ (c, ∞). Then I've assumed f is not continuous at a point a and started an epsilon-delta argument. I think it has something to do with f(c) + epsilon/2, but I can't complete the proof...
Try thinking of it this way. Suppose that $f$ was not continuous. Then you can find some $x_0\in S$ and some $\varepsilon>0$ such that for all $\delta>0$ there exists some $y_\delta$ such that $|x_0-y_\delta|<\delta$ and $|f(x_0)-f(y_\delta)|\geqslant \varepsilon$. Try creating a contradiction with this. Hint: Think about the IV (intermediate value) property.
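For what it's worth, a minimal sketch of how the set in post 4 finishes the proof (my reconstruction of the hinted argument, not from the thread): suppose $f$ is increasing, $f(I)$ is connected, and $f$ fails to be continuous at some $c \in I$ with $f(c-) < f(c)$ (the case $f(c) < f(c+)$ is symmetric). Monotonicity gives
$$f(x) \le f(c-) \ \text{ for } x < c, \qquad f(x) \ge f(c) \ \text{ for } x \ge c,$$
so
$$f(I) \subseteq \left(-\infty, f(c-)\right] \cup \left[f(c), \infty\right).$$
Both pieces meet $f(I)$ (take any $x \in I$ with $x < c$ for the first, and $c$ itself for the second), and they are separated by the gap $\left(f(c-), f(c)\right)$. Hence $f(I)$ is disconnected, contradicting the hypothesis, so $f$ must be continuous.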
2017-05-28 20:40:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148620963096619, "perplexity": 849.1536729484471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611560.41/warc/CC-MAIN-20170528200854-20170528220854-00050.warc.gz"}
https://chat.stackexchange.com/transcript/71/2022/1/20
6:40 AM @fqq Have you tried modelling matters stackexchange? Thats one of my favourite 7:31 AM I recently wrote a Sage / Python program that uses JPL Horizons data to create interactive 3D plots of trajectories of Solar System bodies. You can specify any target & observation centre that Horizons knows about: (1,169,191 asteroids, 3,778 comets, 211 planetary satellites {includes satellites of Earth and dwarf planet Pluto}, 8 planets, the Sun, L1, L2, select spacecraft, and system barycenters). There's a link to my program in space.stackexchange.com/a/57832/38535 Of course, a lot of trajectories don't need 3D. Here's a plot of Io as seen from Ganymede. 7:45 AM @imbAF Because before Einstein, everybody thought the weirdness of stuff moving near lightspeed was something to do with electromagnetism. Einstein had the insight that it was due to a geometric connection between space and time (and his former maths teacher Minkowski made the final step of uniting them into spacetime). See physics.stackexchange.com/a/291346/123208 2 hours later… 9:21 AM "Morally, the second order approximation should be 'halfway between' the two aforementioned flows." @PM2Ring I don't really know what the Horizons data contains - is your program simulating anything or is it just plotting what the data tells it to? 9:41 AM @ACuriousMind Horizons has the position & velocity data, my program just fetches & plots that data. It uses the normalised velocities as tangents to create cubic Bézier control points. The JPL ephemerides are the basis of the USNO & British astronomical almanacs. See en.wikipedia.org/wiki/… for a good summary of how JPL produce their ephemerides. Briefly, they integrate the equations of motion, with relativistic corrections. But that requires good data for the body masses and initial locations & velocities. So the generated ephemerides are verified against ground- & space- based observational data. The Chebyshev coefficients are simply the method used to store the generated ephemeride data so that it can be precisely interpolated as necessary. — PM 2Ring 3 hours ago "The method of special perturbations was applied, using numerical integration to solve the n-body problem, in effect putting the entire Solar System into motion in the computer's memory, accounting for all relevant physical laws [...] As of DE421, perturbations from 343 asteroids, representing about 90% of the mass of the main asteroid belt, have been included in the dynamical model" neat I just learned that Horizons won't let me specify an asteroid as the observation centre. Which is a bit odd, since you can specify a spacecraft, eg STEREO-B space.stackexchange.com/a/56140/38535 Horizons gives you access to planetary system barycentre data from 9999 BC to 9999 AD. Data for the actual bodies covers a smaller time span. 10:24 AM Couldn't find any tool to really visualize special conformal transformations so I made a little one : 2 hours later… 12:45 PM Hello everyone, I'm reading Coleman's "Aspects of symmetry". Around page 70 he discuss scale invariance, and states that under a transformation of coordinates $x \rightarrow e^\alpha x$, in order to have a symmetry we need 1) vanishing masses and 2) that fields transform as $f \rightarrow e^{d\alpha} f$, where $d=1$ for bosonic fields and $d=3/2$ for fermionic fields. Anyone can explain me why? Or give me more references? 
@john the scaling dimension of a field is dictated by its mass dimension, because if you want the action to be invariant then you at least need the kinetic term for your field to scale with the inverse of what the $\mathrm{d}^nx$ integral measure scales with. The two values you cite are for a scalar and a Dirac fermion in 4d; they are different for different numbers of space-time dimensions. Just write down the kinetic part of the action and apply the scale transformation, and you should be able to work out how the specific values come about @ACuriousMind oh ok great, thank you very much 1 hour later… 1:53 PM Is there a connection between the jet of a diffeomorphism and the vector field of an isotopy of diffeomorphisms? 3 hours later… 4:24 PM I have some vague association of ideas in my head to tell me "maybe": the diffeomorphism group is an infinite dimensional Lie group with the vector fields on M as its Lie algebra, an isotopy of diffeomorphisms is a curve in $\mathrm{Diff}(M)$, vectors are defined as jet equivalence classes of curves, there is a relationship between the jet group and the tangent bundle, jet equivalences can be defined by their evaluation on curves I am not sure if that is anything tho also it sounds like it should
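A worked check of the quoted values — my own sketch, not part of the transcript; it assumes 4 spacetime dimensions and writes the transformed field as $\phi_\alpha(x) = e^{d\alpha}\phi(e^{\alpha}x)$:
$$\begin{aligned} \int \mathrm{d}^4x\, \partial_\mu\phi_\alpha\,\partial^\mu\phi_\alpha &= e^{(2d+2-4)\alpha}\int \mathrm{d}^4y\,\partial_\mu\phi\,\partial^\mu\phi &&\Rightarrow\ d = 1,\\ \int \mathrm{d}^4x\, \bar\psi_\alpha\, i\gamma^\mu\partial_\mu \psi_\alpha &= e^{(2d+1-4)\alpha}\int \mathrm{d}^4y\, \bar\psi\, i\gamma^\mu\partial_\mu\psi &&\Rightarrow\ d = \tfrac{3}{2}, \end{aligned}$$
while a mass term $m^2\phi^2$ picks up $e^{(2d-4)\alpha} = e^{-2\alpha} \neq 1$ for $d = 1$, which is why the masses must vanish.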
2022-05-19 16:34:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6734593510627747, "perplexity": 1055.3322830865818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00346.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_27.3/clhtml/d01/d01zlc.html
# NAG CL Interface
d01zlc (opt_get)

## 1 Purpose

d01zlc is used to query the current value associated with an optional parameter for d01esc and d01rac.

## 2 Specification

#include <nag.h>

void d01zlc (const char *optstr, Integer *ivalue, double *rvalue, char *cvalue, Integer lcvalue, Nag_VariableType *optype, const Integer iopts[], const double opts[], NagError *fail)

The function may be called by the names: d01zlc or nag_quad_opt_get.

## 3 Description

d01zlc is used to query the current value associated with optional parameters. It is necessary to initialize the optional parameter arrays, iopts and opts, using d01zkc before any optional parameters are queried.

d01zlc will normally return either an integer, real or character value dependent upon the type associated with the optional parameter being queried. Some real and integer optional parameters also return additional information in cvalue. Whether the optional parameter queried is of integer, real or character type, and whether additional information is returned in cvalue, is indicated by the returned value of optype.

Information on optional parameter names and whether these options are real, integer or character can be found in Section 11 in d01esc and d01rac.

## 4 References

None.

## 5 Arguments

1: optstr – const char * – Input

On entry: a string identifying the option whose current value is required. See Section 11 in d01esc and d01rac for information on valid optional parameters. In addition, the following is a valid option:

Identify: in which case d01zlc returns in cvalue the 6-character function name supplied to d01zkc when the optional parameter arrays iopts and opts were initialized.

2: ivalue – Integer * – Output

On exit: if the optional parameter supplied in optstr is an integer valued parameter, ivalue will hold that value.

3: rvalue – double * – Output

On exit: if the optional parameter supplied in optstr is a real valued parameter, rvalue will hold that value.

4: cvalue – char * – Output

Note: the string returned in cvalue will never exceed min(lcvalue, 41) characters in length (including the null terminator).

On exit: if the optional parameter supplied in optstr is a character valued parameter, cvalue will hold that value. cvalue will also contain additional information for some integer and real valued parameters, as indicated by optype.

5: lcvalue – Integer – Input

On entry: the length of cvalue. At most min(lcvalue − 1, 40) non-null characters will be written into cvalue.

Constraint: lcvalue > 1.

6: optype – Nag_VariableType * – Output

On exit: indicates whether the optional parameter supplied in optstr is an integer, real or character valued parameter and hence which of ivalue, rvalue or cvalue holds the current value.

optype = Nag_Integer: optstr is an integer valued optional parameter; its current value has been returned in ivalue.

optype = Nag_Real: optstr is a real valued optional parameter; its current value has been returned in rvalue.

optype = Nag_Character: optstr is a character valued optional parameter; its current value has been returned in cvalue.

optype = Nag_Integer_Additional: optstr is an integer valued optional parameter; its current value has been returned in ivalue. Additional information has been returned in cvalue.

optype = Nag_Real_Additional: optstr is a real valued optional parameter; its current value has been returned in rvalue. Additional information has been returned in cvalue.

7: iopts[dim] – const Integer – Communication Array

Note: the dimension, dim, of this array is dictated by the requirements of associated functions that must have been previously called. This array MUST be the same array passed as argument iopts in the previous call to d01zkc.

8: opts[dim] – const double – Communication Array

Note: the dimension, dim, of this array is dictated by the requirements of associated functions that must have been previously called. This array MUST be the same array passed as argument opts in the previous call to d01zkc.

9: fail – NagError * – Input/Output

The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed. See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.

NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.

NE_INT
On entry, lcvalue = ⟨value⟩. Constraint: lcvalue > 1.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. See Section 7.5 in the Introduction to the NAG Library CL Interface for further information.

NE_INVALID_OPTION
On entry, the optional parameter in optstr was not recognized: optstr = ⟨value⟩. The arrays iopts and opts have either not been initialized, have become corrupted, or are not compatible with this option setting function.

NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library CL Interface for further information.

NW_TRUNCATED
On entry, optstr indicates a character optional parameter, but cvalue is too short to hold the stored value. The returned value will be truncated.

## 7 Accuracy

Not applicable.

## 8 Parallelism and Performance

d01zlc is not threaded in any implementation.

## 9 Further Comments

None.

## 10 Example

See the example programs associated with the problem solving function you wish to use for a demonstration of how to use d01zlc.
2022-11-26 16:29:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5486066937446594, "perplexity": 3598.00100736104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00562.warc.gz"}
https://git.nowheycreamery.com/anna/cheese/blame/branch/master/dry_jack.tex
Cheesemaking Worksheets

%
% Copyright 2016 (c) Anna Schumaker.
%
\documentclass[letterpaper]{article}
\usepackage{cheese}

\begin{document}
\begin{cheese}{Dry Jack}{3pt}
	\StandardCheeseSetupSection

	\newsection{Acid}{2}
	& Mesophilic Starter & \half teaspoon & & \ftemp{80} & & & \_
	& Ripening & 1 hour & & \ftemp{86} & & & \\

	\StandardCheeseGelDevelSection{\half teaspoon}{\gray}{45 minutes}{3.5}

	\newsection{Curd Processing}{9}
	& Cutting & \quarter{3} inch cubes & & \gray & & & \_
	& Resting 1 & 5 minutes & & \gray & & & \_
	& Cooking & 40 minutes & & \gray & & & Stir continuously. \_
	& Resting 2 & 30 minutes & & \ftemp{102} & & & \_
	& Draining 1 & To curd level & & \gray & & & \_
	& Stirring & 20 minutes & & \gray & & & Or until curds are matted. \_
	& Draining 2 & All Whey & & \gray & & & Catch curds in colander. \_
	& Resting 3 & 5 minutes & & \gray & & & \_
	& Salting & 1 tablespoon & & \gray & & & Mix thoroughly with hands. \\

	\newsection{Press}{2}
	& Forming & \gray & & \gray & & & Roll curds into a ball and tie ends of cheesecloth together. \_
	& 8 pounds & 6 - 8 hours & & \gray & & & Sandwiched between cutting boards with knot on top. \\

	\newsection{Rind}{7}
	& Salting & 1 tablespoon & & \gray & & & \_
	& Air Drying & 8 hours & & \gray & & & \_
	& Brining & 8 hours & & \gray & & & Flip once. \_
	& Air Drying & 24 hours & & \gray & & & Flip once. \_
	& Ripening & 1 week & & \ftemp{50} - \ftemp{55} & & & In ripening box. \_
	& Cocoa Rub\footnote{2 tablespoons cocoa powder, 2 teaspoons instant espresso, 1\half teaspoons ground black pepper, and 4\half teaspoons olive oil} & \gray & & \gray & & & Use \quarter{1} of rub every day for 3 days. Dry and resume ripening. \_
	& Aging & 2 months & & \ftemp{50} - \ftemp{55} & & & \\
\end{cheese}
\end{document}
2022-07-03 15:53:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9092286825180054, "perplexity": 8363.094951905774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00085.warc.gz"}
http://clay6.com/qa/38089/the-base-radius-of-a-right-circular-cone-and-those-of-a-cylinder-are-same-t
# The base radius of a right circular cone and that of a cylinder are the same. Their volumes are in the ratio

$(a)\;1 : 1\qquad(b)\;1 : 2\qquad(c)\;1 : 3\qquad(d)\;3 : 1$

Answer : $\;1 : 3$ — a cone and a cylinder with the same base radius (and, as the intended answer assumes, the same height) have volumes in the ratio 1 : 3.
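The computation behind the answer — it takes the cone and cylinder to share the same height $h$ as well as the same base radius $r$:
$$\frac{V_{\text{cone}}}{V_{\text{cylinder}}} = \frac{\tfrac{1}{3}\pi r^{2}h}{\pi r^{2}h} = \frac{1}{3}, \qquad \text{i.e. } 1 : 3.$$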
2017-02-23 07:38:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7382701635360718, "perplexity": 270.95044090776497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171162.4/warc/CC-MAIN-20170219104611-00027-ip-10-171-10-108.ec2.internal.warc.gz"}
https://cs6505.wordpress.com/schedule/coupon-collectors-quicksort/
# Randomized QuickSort PDF of Eric’s handwritten notes are here. Geometric Random Variables Flip a coin, which will give • heads with probability $p$ • tails with probability $1-p$ Let $X =$ # of flips until the first heads (including the flip with the heads) We denote the above distribution as $X \sim$ Geom$(p)$, i.e. a geometric distribution with parameter $p$. Let $\mu = E(X)$. What is $\mu$? If we consider $p=1/2$, then we would expect $\mu=2$. For general $p$, we would expect $\mu = 1/p$, and we will now prove this fact. Suppose we tried to directly evaluate the expectation: $\mu = \sum_{i=1}^\infty i \cdot \Pr(X=i) = \sum_{i=1}^\infty i (1-p)^{i-1} p$ We could try to evaluate the above sum and eventually arrive at the correct answer, or we could view $\mu$ another way. Consider the first flip. If it’s heads, then we’re done and only flip once. If it’s tails, then we repeat the process, thus flipping $1+\mu$ times in expectation. So $\mu = p \cdot 1 + (1-p)\cdot (\mu + 1)$ and solving the above for $\mu$ gives $\mu = 1/p$. Linearity of Expectation Here we give a proof of linearity of expectation. Let $X,Y$ be random variables and take values in $\{0,1,\ldots,n\}$ (though you can use the same proof for any domain). Note that $X$ and $Y$ may not be independent. First, observe the following fact: $\sum_{j=0}^n \Pr(X=i, Y=j) = \Pr(X=i)$. We can now see that \begin{aligned} E[X+Y] &= \sum_{i=0}^n \sum_{j=0}^n (i+j) \Pr(X=i, Y=j) \qquad \mbox{ by definition of expectation}\\ &=\sum_{i=0}^n \sum_{j=0}^n i \Pr(X=i, Y=j) + \sum_{i=0}^n\sum_{j=0}^n j \Pr(X=i, Y=j) \\ &= \sum_{i=0}^n i \sum_{j=0}^n \Pr(X=i, Y=j) + \sum_{j=0}^n j \sum_{i=0}^n \Pr(X=i, Y=j) \\ &= \sum_{i=0}^n i \Pr(X=i) + \sum_{j=0}^n j \Pr(Y=j) \qquad \mbox{ by above fact}\\ &= E[X] + E[Y]. \end{aligned} Coupon Collector Suppose we have $n$ coupons in an urn. Consider the following process: 1. Choose a coupon from the urn at random and look at it. 2. Put the coupon back in the urn. 3. Repeat. Let $X =$# of steps until we see all $n$ coupons at least once. What is $E(X)$? Consider defining $X_i$ as the number of steps to see the $i$-th different coupon once we have seen $i-1$ different coupons. So $X_1$ is the time to see the first coupon, $X_2$ is the time to see the second coupon after seeing the first coupon, etc. Then, we can see that $X = X_1 + X_2 + \ldots + X_n$. By linearity of expectation, we have $E(X) = E(X_1 + \ldots + X_n) = E(X_1) + \ldots + E(X_n)$. So what is $E(X_i)$? We have seen $i-1$ coupons, and not seen $n - i + 1$ coupons. So, the probability that we see a new coupon is $\frac{n-i+1}{n}$. And $X_i$ is the time until this event of seeing a new coupon occurs.  Thus $X_i$ is a geometric random variable with parameter $\frac{n-i+1}{n}$. Thus, $E(X_i) = \frac{n}{n-i+1}$. Plugging this back into the equation for $E(X)$, we see that \begin{aligned} E(X) &= \sum_{i=1}^n \frac{n}{n-i+1} \\ &= \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} \\ &= n \left(1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n}\right) \end{aligned} Claim $E(X) = n \ln n + O(n)$. Proof We’ll show that $\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \leq \ln{n},$ and thus $n( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} )\leq n(1 + \ln{n}) = n\ln{n} + O(n)$ which will complete the proof. To see the earlier claimed upper bound, consider the following function $g(x) = 1/\lfloor x+1 \rfloor$.  
Note that for $1 \le x \le n$ the function $g(x)$ consists of $n-1$ rectangles of area $1/2, 1/3, \ldots, 1/n$, and thus $\int_1^n g(x)\,dx = \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n}$. Now consider the function $f(x) = 1/x$.  Notice that $g(x) \leq f(x)$ and thus $\int_1^n g(x)\,dx \leq \int_1^n f(x)\,dx$. Therefore, \begin{aligned} \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n} &= \int_1^n g(x)\,dx \\ &\le \int_1^n f(x)\,dx \\ &= \int_1^n \frac{1}{x}\, dx \\ &= \ln x \Big |_{x=1}^n \\ &= \ln n. \end{aligned} Thus we get $\sum_{i=1}^n \frac{1}{i} \le 1 + \ln n$ and $E(X) \le n \ln n + n$. We can also see a lower bound of $E[X] \ge n\ln n$ by a similar argument. We draw $n$ rectangles of area $1, 1/2, \ldots, 1/n$, and we lower bound the total area of the rectangles by $\int_1^n \frac{1}{x} \, dx$. Expected Runtime of QuickSort Input: unsorted array $A = [a_1, \ldots, a_n]$ of $n$ numbers Output: sorted $A$ QuickSort(A): 1. Choose a random pivot $p$. 2. Partition $A$ into $A_{<p}$ and $A_{>p}$ (the elements less than and greater than $p$). 3. Recursively sort $A_{<p}$ and $A_{>p}$. 4. Return $(A_{<p},\, p,\, A_{>p})$. In the worst case, the above algorithm could take $\Omega(n^2)$ time, e.g. if every time we select the smallest element as the pivot. If the pivot $p$ were always the median element, then we would get $T(n) = 2T(n/2) + O(n) = O(n \log n)$. What is the runtime when we select $p$ at random? We will examine the expected number of comparisons of randomized QuickSort. Let $X=$ # of comparisons for QuickSort. Claim $E(X) \le 2n \ln n$. Proof Let $S = \{s_1, \ldots, s_n\}$ be a sorted version of $A$, so $s_1 \le s_2 \le \ldots \le s_n$. If there are multiple sorted versions of $A$, we simply let $S$ be one of them. For $1 \le i < j \le n$, let $X_{ij}$ be the number of comparisons between $s_i$ and $s_j$. And so $X = \sum_{1\le i < j \le n} X_{ij}$ and again by linearity of expectation $E(X) = \sum_{1 \le i < j \le n} E(X_{ij})$. Note that in QuickSort, two elements will be compared at most once. So $0 \le X_{ij} \le 1$ and further $E(X_{ij}) = 0 \cdot \Pr(X_{ij}=0) + 1 \cdot \Pr(X_{ij}=1) = \Pr(X_{ij}=1)$. What is the probability that $s_i, s_j$ compare? Consider $s_i, s_{i+1}, \ldots, s_j$. For $s_i, s_j$ to compare, we need one of them to be a pivot while they are both still in the same subproblem. Therefore, we need one of $s_i, s_j$ to be chosen as a pivot before any element of $s_{i+1}, \ldots, s_{j-1}$. Since each of these $j-i+1$ elements is equally likely to be chosen first, the probability that $s_i$ or $s_j$ is a pivot before all elements between them is $2/(j-i+1)$. Therefore, $E(X_{ij}) = \frac{2}{j-i+1}$ and combining all terms together in the above equation for $E(X)$, we see that \begin{aligned} E(X) &= \sum_{1 \le i < j \le n} \frac{2}{j-i+1} \\ &=\sum_{i=1}^{n-1} \sum_{j=i+1}^n \frac{2}{j-i+1} \\ &=\sum_{i=1}^{n-1} \left( \frac{2}{2} + \frac{2}{3} + \ldots + \frac{2}{n-i+1}\right) \\ &\le 2 \sum_{i=1}^n \left(\frac{1}{2} + \ldots + \frac{1}{n}\right) \\ &\le 2n \ln n. \end{aligned}
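A minimal runnable sketch of the algorithm analyzed above (my own code, not from the notes), with a comparison counter so the $2n \ln n$ bound can be checked empirically:

```python
# Randomized quicksort with a comparison counter.
import math
import random

def quicksort(a, counter):
    if len(a) <= 1:
        return a
    p = random.choice(a)          # uniformly random pivot
    counter[0] += len(a) - 1      # count each element-pivot pair once, as in the analysis
    less    = [x for x in a if x < p]
    equal   = [x for x in a if x == p]
    greater = [x for x in a if x > p]
    return quicksort(less, counter) + equal + quicksort(greater, counter)

n, trials = 10_000, 20
avg = 0.0
for _ in range(trials):
    c = [0]
    quicksort(random.sample(range(10 * n), n), c)
    avg += c[0] / trials
print(f"average comparisons ~ {avg:.0f}; bound 2 n ln n = {2 * n * math.log(n):.0f}")
```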
2018-05-22 19:31:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 101, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000038146972656, "perplexity": 775.082042240707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864872.17/warc/CC-MAIN-20180522190147-20180522210147-00463.warc.gz"}
https://www.techwhiff.com/learn/which-of-the-following-transitions-would-result/238477
# Which of the following transitions would result in the absorption of a photon with the longest wavelength?
2022-11-30 19:58:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.504214346408844, "perplexity": 699.2998732950922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00342.warc.gz"}
http://docs.itascacg.com/flac3d700/common/kernel/doc/manual/program/commands/cmd_program.playback.html
# program playback command

Syntax

program playback s

Play back a record file.

Note
As with all program commands, use of the command word program is optional; program playback and playback are both valid.

When supplying s, if no extension is specified, then ".record" will be assumed; for example, program playback test1 plays back the file "test1.record".

An ASCII record file representing a "playback" of input to the model is read, theoretically exactly duplicating the steps taken to create the model state from which the record file was derived. Record files may be generated in a number of ways.

• They are generated by the "Bundle Pack" command on the "Tools" menu.
• They may be copied from the output of a model list record command.
• They may be copied or saved from the State Record pane.
• They may be stripped from the header of a save file.
2021-10-18 16:43:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3128797709941864, "perplexity": 5220.145469770666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00523.warc.gz"}
https://wikidiff.com/quantity/mess
# Quantity vs Mess - What's the difference?

quantity | mess |

## As nouns the difference between quantity and mess

is that quantity is a fundamental, generic term used when referring to the measurement (count, amount) of a scalar, vector, number of items or to some other way of denominating the value of a collection or group of items, while mess is (obsolete) mass; church service, or a disagreeable mixture or confusion of things; hence, a situation resulting from blundering or from misunderstanding; a disorder.

## As a verb

mess is to take meals with a mess.

# quantity

## English

### Noun (quantities)

* A fundamental, generic term used when referring to the measurement (count, amount) of a scalar, vector, number of items or to some other way of denominating the value of a collection or group of items.
  You have to choose between quantity and quality.
* An indefinite amount of something.
  Some soap making oils are best as base oils, used in a larger quantity in the soap, while other oils are best added in a small quantity. Olive oil can be used practically in any quantity.
* A specific measured amount.
  This bag would normally cost $497.50 for a quantity of 250, at a price of $1.99 per piece. Generally it should not be used in a quantity larger than 15 percent.
* A considerable measure or amount.
  The Boeing P-26A was the first all-metal monoplane fighter produced in quantity for the U.S. Army Air Corps.
* (metrology) Property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference.
* (mathematics) Indicates that the entire preceding expression is henceforth considered a single object.
  x plus y quantity squared equals x squared plus 2xy plus y squared.
  * 2006, Jerome E. Kaufmann and Karen Schwitters, Elementary and Intermediate Algebra: A Combined Approach, p 89: For problems 58-67, translate each word phrase into an algebraic expression. (...) 65. x plus 9, the quantity squared
  * 2005, R. Mark Sirkin, Statistics For The Social Sciences, p 137: The second, $\left(\sum x\right)^2$, read "summation of x, quantity squared," tells us to first add up all the xs to get $\sum x$ and then square $\sum x$ to get $\left(\sum x\right)^2$.
  * 1985, Serge Lang, Math!: Encounters with High School Students, p 54: ANN. $ra$ quantity cubed. SERGE LANG. That's right, $\left(ra\right)^3$.

#### Usage notes
* In mathematics, used to unambiguously orate mathematical equations; it is extremely rare in print, since there is no need for it there.

#### Synonyms
* Qty
* measure
* unit

# mess

## English

### Etymology 1
From (etyl) (m), partly from (etyl). More at (m); see also (m).

#### Noun (es)
* (obsolete) Mass; church service.
* A quantity of food set on a table at one time; provision of food for a person or party for one meal; also, the food given to an animal at one time.
  A mess of pottage.
  * Milton: At their savoury dinner set / Of herbs and other country messes.
* A number of persons who eat together, and for whom food is prepared in common; especially, persons in the military or naval service who eat at the same table.
  the wardroom mess
  * 1610, , IV. iv. 11: But that our feasts / In every mess have folly, and the feeders / Digest it with accustom,
* A set of four (from the old practice of dividing companies into sets of four at dinner). (Latimer)
* (US) The milk given by a cow at one milking.

##### Derived terms
* Eton mess
* lose the number of one's mess
* mess hall
* mess up
* Mills Mess

#### Verb
* To take meals with a mess.
* To belong to a mess.
* To eat (with others).
* To supply with a mess.

### Etymology 2
Perhaps a corruption of (etyl), compare (muss), or derived from Etymology 1 "mixed foods, as for animals".

#### Noun
* A disagreeable mixture or confusion of things; hence, a situation resulting from blundering or from misunderstanding; a disorder.
* A large quantity or number.
* Excrement.

##### Quotations
* (English Citations of "mess")

#### Verb
* To make a mess of.
* To throw into confusion.
* To interfere.

##### Derived terms
* messy
* mess around
* mess up
* mess with
2019-07-24 06:20:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5725021958351135, "perplexity": 8467.448485012683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195531106.93/warc/CC-MAIN-20190724061728-20190724083728-00309.warc.gz"}
http://scillatonetti.it/tgil/http-tf-keras-layers-embedding.html
# Http Tf Keras Layers Embedding

**The Embedding layer.** Keras offers an Embedding layer that can be used in neural network models for processing text data. It requires that the input data is encoded with integers, so that each word is represented by a unique integer; this data preparation step can be performed using the Tokenizer API, also provided with Keras. For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers; when the embedding layer is the first layer, the input must be 2D, and its output is always 3D. In a typical classifier the layers are stacked sequentially, and the first layer is an embedding layer: it takes the integer-encoded vocabulary and looks up the embedding vector for each word index. Under the hood, what tf.gather does is index the weights matrix. Beyond "turns positive integers (indexes) into dense vectors of fixed size", the Keras documentation provides little explanation, which is why the concept can seem unfamiliar; an embedding layer can also be used instead of a Python dictionary to transform words into word embeddings, and one question asks whether the vocab_size+1 argument in the Embedding layer can be changed to vocab_size.

**Embeddings for categorical variables.** Whenever a model contains discrete variables, an embedding operation is generally used. The embedding size defines the dimensionality in which the categorical variables are mapped; Jeremy Howard provides the following rule of thumb: embedding size = min(50, number of categories / 2). For example, a first layer might be an embedding layer sized for the 7 weekdays plus 1 (for the unknowns).

**Pre-trained embeddings and TF Hub.** Pre-trained gensim embeddings can be used in TensorFlow and Keras models, e.g. Embedding(max_words, embed_size, weights=[embedding_matrix], trainable=False)(input); a Keras Embedding layer can equally be used to train an embedding for each word in your vocabulary from scratch. TensorFlow Hub is a library for reusable machine learning modules and provides an easy interface to use existing machine learning models for transfer learning; the easiest way to familiarize yourself with what TF Hub can do is to use a pre-trained model that fits a specific task, e.g. using tf.keras and a pre-trained text embedding from the TF Hub repository to quickly and easily classify the sentiment of movie reviews. A BERT layer begins by instantiating the BERT module from bert_path, which can be a path on disk or an http address; the resulting model can give state-of-the-art performance on named entity recognition. Related reading: [arXiv:1812.10464] "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond" and the LASER natural language processing toolkit (Facebook Engineering).

**Layer configs, weights, and saving.** A layer config is a Python dictionary (serializable) containing the configuration of a layer; get_config() returns it, and the same layer can be reinstantiated later (without its trained weights) from this configuration. The config of a layer does not include connectivity information, nor the layer class name; these are handled by Network (one layer of abstraction above). get_weights() returns the layer weights as a list of NumPy arrays, and count_params() counts the total number of scalars composing the weights, raising a ValueError if the layer isn't yet built (in which case its weights aren't yet defined). Lambda layers are saved by serializing the Python bytecode, whereas subclassed layers can be saved by overriding their get_config method. (The R interface to Keras, for its part, exposes a base R6 class for Keras layers.)

**Custom layers, RNNs, and masking.** TensorFlow 2 recommends building networks with Keras: common layers are all included in tf.keras.layers, which also simplifies customization — to design a custom Keras layer, write a class that inherits from tf.keras.layers.Layer; at a high level, you can combine existing layers to design your own. The class LSTM is a Long Short-Term Memory layer (Hochreiter 1997), a useful type of model for predicting sequences or handling sequences of things as inputs; for ease of customization, you can also define your own RNN cell layer (the inner part of the for loop) with custom behavior and use it with the generic RNN class. When processing sequence data, it is very common for individual samples to have different lengths; keras.preprocessing.sequence.pad_sequences handles this, and if masking is enabled then all subsequent layers in the model need to support masking or an exception will be raised. Getting an RNN's internal state is a common need in natural language processing: TensorFlow functions such as tf.nn.dynamic_rnn return both outputs and state, but doing the same in Keras is less well documented. One forum question (translated): "My data is 30,000 samples of 6 values each, so X_train is (30000, 6). According to the Keras docs the input shape should be (samples, timesteps, input_dim), so I thought input_shape=(30000, 1, 6), but running it gives the error: Input 0 is incompatible with ..."

**Other fragments.** A representation experiment: "I tried the setup embedding layer + shallow fully connected layer vs TF-IDF + fully connected layer but got almost the same result." Keras 2.3.0 is the first release of multi-backend Keras with TensorFlow 2.0 support. keras.layers.Dot(axes, normalize=False) is a layer that computes a dot product between samples in two tensors. One initializer draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(1 / fan_in), where fan_in is the number of input units in the weight tensor. Cross-entropy (e.g. SparseCategoricalCrossentropy) is a common loss function that measures the distance between the actual output probabilities and the expected output probabilities. For MNIST, labels are one-hot encoded with Y_train = tf.keras.utils.to_categorical(Y_train, NB_CLASSES) (likewise Y_test), and the input layer has a neuron associated to each pixel, for a total of 28 × 28 = 784 neurons. Question-answering fragments: question words pass through layers.Embedding(input_voc_size, 256)(question) and tf.keras.layers.LSTM(128)(embedded_words) before predicting an answer word; in one attention example, "you use the last convolutional layer because you are using attention". Neural Machine Translation (NMT) is the task of converting a sequence of words from a source language, like English, to a sequence of words in a target language like Hindi or Spanish, using deep neural networks. GPU/session fragments mention tf.ConfigProto() ("don't pre-allocate memory; allocate as needed") and code to freeze a session; another user building a model in Google Colab reports trouble with tf.keras.Input when concatenating two models with the Keras functional API. A tf-agents question asks for examples that use self-play (connect four, checkers, or the like) rather than environments such as snake, pole-cart, or breakout. From the TensorFlow blog (August 03, 2018, posted by Raymond Yuan, Software Engineering Intern): "In this tutorial, we will learn how to use deep learning to compose images in the style of another image (ever wish you could paint like Picasso or Van Gogh?)". Finally, "Deep Learning with TensorFlow 2 and Keras, Second Edition" teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras, including training models on the cloud.
class InputSpec : Specifies the ndim, dtype and shape of every input to a layer. In this Word2Vec Keras implementation, we’ll be using the Keras functional API. The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). keras import layers print ( tf. Keras provides a simple keras. The config of a layer does not include connectivity information, nor the layer class name. It will take three arguments. It requires that the input data be integer encoded, so that each word is represented by a. Let us learn complete details about layers. One option to handle all this preprocessing is to write your own custom preprocessing layers. It is substantially formed from multiple layers of perceptron. Here I talk about Layers, the basic building blocks of Keras. encoding, or embeddings (as we will see, an embedding is a trainable dense vector that represents a category or token). from tensorflow. The best way to do this at the time of writing is by using Keras. Next, we set up a sequentual model with keras. Nothing against PyTorch but with TF 2 out I think the TF/Keras combo wins out. class InputLayer: Layer to be used as an entry point into a Network (a graph of layers). We recently published Text classification with TensorFlow Hub to demonstrate how you can use tf. Now I will show how you can use pre-trained gensim embedding layers in our TensorFlow and Keras models. Gatys' paper, A Neural Algorithm of Artistic Style, which is a great read, and you. 一文搞懂word embeddding和keras中的embedding 写这篇文章的初衷: 最近带一个本科生做毕设,毕设内容是用lstm做情感分析。文章思路其实就是一个文本三分类的问题(正、中、负)。 首先: 该文章用到了word embedding,可以使用gensim里面的word2vec工具训练word embedding。. Embedding with a subclass that overrides the call method in order to use one-hot-encoding and dot-product to retrieve embeddings instead of tf. In the previous tutorial on Deep Learning, we've built a super simple network with numpy. The following are code examples for showing how to use keras. Python keras. A popular demonstration of the capability of deep learning techniques is object recognition in image data. Contribute to tensorflow/models development by creating an account on GitHub. Input(shape=(2,)), Dense(1024, activation=tf. if it came from a Keras layer with masking support. keras import layers print ( tf. TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. feature_column tf. This sequential layer framework allows the developer to easily bolt together layers, with the tensor outputs from each layer flowing. core import TimeDistributedDense, Activation. 0 Keras implementation of BERT. "Keras tutorial. I think its the +1 that is causing problem. It begins with instantiating the BERT module from bert_path which can be a path on disk or a http address (e. I figured that the best next step is to jump right in and build some deep learning models for text. For more advanced usecases, follow this guide for subclassing tf. models import Sequential from keras. Tensor5D ) Converts a tf. layers import MaxPooling2D from keras. Denseは最後の次元にしか作用しないので、上記結果はtf. Can you give a summary of which TF Keras and which TF Slim layers are supported by the TIDL conversion tool including corresponding TF Version. See this tutorial to learn more about word embeddings. Layers are essentially little functions that are stateful - they generally have weights associated with them and these weights are. 
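A minimal runnable sketch tying these embedding snippets together (the sizes and the random "pre-trained" matrix are illustrative assumptions; it follows the TF2-era tf.keras API quoted above, including the legacy weights= keyword):

```python
# Sketch: integer-index lookup, a frozen pre-trained matrix, and
# round-tripping a layer through its config (get_config / from_config).
import numpy as np
import tensorflow as tf

vocab_size, embed_dim = 1000, 8                 # hypothetical sizes
embedding_layer = tf.keras.layers.Embedding(vocab_size, embed_dim)

# A 2D batch of integer indices maps to a 3D tensor of dense vectors.
result = embedding_layer(tf.constant([[1, 2, 3]]))
print(result.shape)                             # (1, 3, 8)

# Plugging in a frozen pre-trained matrix, as in the snippet quoted above.
pretrained = np.random.rand(vocab_size, embed_dim).astype("float32")
frozen = tf.keras.layers.Embedding(
    vocab_size, embed_dim, weights=[pretrained], trainable=False)

model = tf.keras.Sequential([
    frozen,
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# The same layer can be reinstantiated later (without its trained weights)
# from its configuration.
config = embedding_layer.get_config()
clone = tf.keras.layers.Embedding.from_config(config)
```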
PyTorch is a nice library, but I find it easier to quickly and efficiently develop experiments with TF/Keras; nothing against PyTorch, but with TF 2 out I think the TF/Keras combo wins out. The wait is over: TensorFlow 2.0 offers the full Keras API, better optimization for TF, and better integration with TF-specific features. (A question that came up at the time: will the Keras namespace be removed in future releases of TF 2?) Keras quickly gained traction after its introduction, and in 2017 the Keras API was integrated into core TensorFlow as tf.keras; tf.keras is the recommended API for training and inference in TensorFlow 2. Keras 2.2.5 was the last release of Keras implementing the 2.2.* API, and this is also the last major release of multi-backend Keras. Keras itself is a powerful and easy-to-use free open source Python library for developing and evaluating deep learning models: a deep learning framework that under the hood uses other deep learning frameworks in order to expose a beautiful, simple-to-use and fun-to-work-with high-level API, and which does not itself handle low-level operations such as tensor products, convolutions and so on. Installing Keras involves two main steps; Keras is a code library that provides a relatively easy-to-use Python language interface to the relatively difficult-to-use TensorFlow library. Good software design or coding should require little explanation beyond simple comments; if you're interested in detecting code smell, and getting a gut feeling for when design choices are turning sour and where bugs will start to creep in, the same instinct applies to model code.

TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models; the newly released Tensorflow hub provides an easy interface to use existing machine learning models for transfer learning, and with a few fixes it's easy to integrate a Tensorflow hub model with Keras. "Text classification with TensorFlow Hub" (posted by Stijn Decubber, machine learning engineer at ML6) demonstrates how you can use tf.keras and a pre-trained text embedding from the TF Hub repository to quickly and easily classify the sentiment of a movie review. A BERT wrapper begins with instantiating the BERT module from bert_path, which can be a path on disk or an http address; bert-for-tf2 (pip install bert-for-tf2) is a TensorFlow 2.0 Keras implementation of BERT, and one could even implement a BERT Keras layer for seamless embedding integration; I hope you enjoyed the post and hopefully got a clearer picture around BERT. There is likewise a RoBERTa model with a token classification head on top (a linear layer on top of the hidden-states output); such models are regular TF 2.0 Keras models, so refer to the TF 2.0 documentation for all matters related to general usage and behavior.

A grab-bag of further examples: set up a tf.keras.Sequential model and start with an embedding layer, for instance adding an embedding layer with a vocabulary length of 500 (defined previously); in a model of weekday effects, the first layer is the embedding layer with the size of 7 weekdays plus 1 (for the unknowns); once feature columns are defined, a DenseFeatures layer inputs them to the Keras model; the Embedding layer automatically takes inputs with the category indices (such as [5, 3, 1, 5]) and converts them into dense vectors of some fixed length, and its output is a three-dimensional tensor of shape [batch size, sequence length (170 in one example), embedding dimension (8 in that example)]. A character-level sequence-to-sequence model can be restored from disk to generate predictions: the restore script loads the saved s2s model (trained with, e.g., epochs = 100, the number of epochs to train for, and num_samples = 10000, the number of samples to train on); see the companion training script for more details on the model architecture and how it is trained. A tutorial series on autoencoders first discusses what autoencoders are, including how convolutional autoencoders can be applied to image data, then shows how to use convolutional autoencoders to create a Content-based Image Retrieval system (i.e., an image search engine) using Keras and TensorFlow; a pre-trained autoencoder can also be used for dimensionality reduction and parameter initialization, with a custom-built clustering layer trained against a target distribution to refine the accuracy further. "Using Keras and Deep Q-Network to Play FlappyBird" (July 10, 2016) demonstrates DQN with Keras in about 200 lines of Python code. Distributed deep learning training uses TensorFlow and Keras with HorovodRunner for MNIST, where every worker uses the same Python scripts for training, and models can be saved with the tf.keras save function to the same HDFS path. Other recurring projects: build a POS tagger with an LSTM using Keras; a named entity recognition model that will give you state-of-the-art performance on the NER task; classifying duplicate questions from Quora with Keras (we first preprocess the comments and train word vectors); Tensorflow's PTB LSTM model for Keras; a Word2Vec Keras implementation using the Keras functional API, e.g. embedded_words = layers.Embedding(input_voc_size, 256)(question); how to develop an LSTM and Bidirectional LSTM for sequence classification, and how to compare the performance of the merge mode used in Bidirectional LSTMs (the gating helps the RNN to learn long-range dependencies); transfer learning, where for images of size (160, 160, 3) you might want to use the pre-trained bottom layers of VGG, up to the layer named block2_pool, or a repository that performs object classification with transfer learning; a Natural Language Generation lab experimenting with recurrent neural networks; image captioning, a challenging task at the intersection of vision and language, where you use the last convolutional layer because you are using attention; Neural Machine Translation (NMT), the task of converting a sequence of words from a source language, like English, to a sequence of words in a target language like Hindi or Spanish using deep neural networks, via a sequence-to-sequence model using an attention mechanism; and neural style transfer (posted August 03, 2018 by Raymond Yuan, Software Engineering Intern), which teaches how to use deep learning to compose images in the style of another image, based on Gatys' paper "A Neural Algorithm of Artistic Style", which is a great read; since Keras is just an API on top of TensorFlow, one can also play with the underlying layer and implement image style transfer in raw TF. See also [1812.10464] Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, and the LASER natural language processing toolkit from Facebook Engineering. NOTE: tensorflow-addons contains a CRF Keras layer for TensorFlow 2.0, with vanilla CRF functions supporting START/END transition probability learning. One upgrade gotcha: ImportError: cannot import name Merge (the old Merge layer is gone; functional merge layers such as Concatenate are used instead). Here I talk about Layers, the basic building blocks of Keras: layers are essentially little functions that are stateful, and they generally have weights associated with them.

Housekeeping snippets that recur in these notes: config = tf.ConfigProto(), with the comment "don't pre-allocate memory; allocate as needed"; seeding with seed = 0, np.random.seed(seed), tf.set_random_seed(seed); and from tensorflow.python import debug as tf_debug. Related books and tutorials: "Develop Your First Neural Network in Python with this step-by-step Keras tutorial!"; the Python Deep Learning Cookbook by Indra den Bakker (one Spanish blurb says, roughly, "this book presents deep learning code with Python"); "Deep Learning with TensorFlow 2 and Keras, Second Edition", which teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras, shows how to train your models on the cloud and put TF to work in real environments, and explores how Google tools can automate simple ML workflows without the need for complex modeling; and a fast-paced introduction to TensorFlow 2 about some important new features (such as generators and the @tf.function decorator), with tf.data code samples and lazy operators, plus Keras preprocessing layers and prefetching. (A Japanese note describes itself as "a memo for when I lose my memory of tensorflow", since the author forgets it almost daily; another Japanese experiment rebuilt a LightGBM model on the output of an Embedding layer, hoping to combine the neural network's strength at feature extraction with LightGBM's comparatively stable training, to see whether accuracy improves.) Finally, similar to PCA, the matrix factorization (MF) technique attempts to decompose a (very) large matrix (m × n) into smaller matrices (e.g., m × k and k × n); while PCA requires a matrix with no missing values, MF can overcome that by first filling in the missing values.

It is quite common to use a one-hot representation for categorical data in machine learning, for example textual instances in natural language processing tasks; given that fact, one could achieve flexibility either way by having a Keras layer for one-hot encoding. In one GitHub issue, still hoping that someone would pick up the problem, the author conducted yet more testing, namely replacing tf.keras.layers.Embedding with a subclass that overrides the call method in order to use one-hot encoding and a dot product to retrieve embeddings instead of tf.lookup (see the sketch below). Using keras.layers.Embedding instead of a Python dictionary (Robin Dong, 2019-01-17) keeps the lookup inside the graph, so you only need to send the index of the words through the GPU data transfer bus, reducing data transfer overhead.
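A minimal sketch of that one-hot workaround (my own reconstruction, not the issue author's code; it assumes the parent Embedding's weight attribute self.embeddings, which tf.keras's Embedding creates in build()):

```python
# Embedding-like layer whose call() retrieves embeddings via one-hot
# encoding and a contraction instead of a gather/lookup.
import tensorflow as tf

class OneHotEmbedding(tf.keras.layers.Embedding):
    def call(self, inputs):
        # (batch, seq) integer indices -> (batch, seq, input_dim) one-hot.
        one_hot = tf.one_hot(tf.cast(inputs, tf.int32),
                             depth=self.input_dim,
                             dtype=self.embeddings.dtype)
        # Contract the one-hot axis against the embedding table:
        # (batch, seq, input_dim) x (input_dim, output_dim).
        return tf.tensordot(one_hot, self.embeddings, axes=1)

layer = OneHotEmbedding(input_dim=1000, output_dim=8)
out = layer(tf.constant([[1, 2, 3]]))
print(out.shape)   # (1, 3, 8)
```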
2020-05-27 06:01:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2053297460079193, "perplexity": 4126.091499296704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392141.7/warc/CC-MAIN-20200527044512-20200527074512-00047.warc.gz"}
https://danmackinlay.name/notebook/compressed_sensing.html
# Compressed sensing and sampling

## Fancy ways of counting zero

Higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon.

## Basic Compressed Sensing

I’ll follow the intro of , which tries to unify many variants. We attempt to recover a signal $$x\in \mathbb{R}^d$$ from $$m\ll d$$ measurements $$y_k$$ of the form $y_k =\langle a_k, x\rangle + z_k,\, 1\leq k \leq m,$ or, as a matrix equation, $y = Ax + z$ where $$A$$ is the $$m \times d$$ stacked measurement matrix, and the $$z$$ terms denote i.i.d. measurement noise.

Now, if $$x$$ is a sparse vector, and $$A$$ satisfies a restricted isometry property or something, then we can construct an estimate $$\hat{x}$$ with small error by minimising $\hat{x}=\argmin_{\dot{x}} \|\dot{x}\|_1 \text{ subject to } \|A\dot{x}-y\|_2 < \varepsilon,$ where $$\varepsilon> \|z\|_2^2.$$

In the lecture notes on restricted isometry properties, Candès and Tao talk about not vectors $$x\in \mathbb{R}^d$$ but functions $$f:G \mapsto \mathbb{C}$$ on Abelian groups like $$G=\mathbb{Z}/d\mathbb{Z},$$ which is convenient for some phrasing, since then when I say my signal is $$s$$-sparse, I mean that its support $$\operatorname{supp} \tilde{f}=S\subset G$$ has $$|S|=s$$. In the finite-dimensional vector framing, we can talk about best sparse approximations $$x_s$$ to non-sparse vectors $$x$$: $x_s = \argmin_{\|\dot{x}\|_0\leq s} \|x-\dot{x}\|_2$ where all the coefficients apart from the $$s$$ largest are zeroed.

The basic results find attractive convex problems with high probability in a nest of nastier ones. There are also greedy optimisation versions, which are formulated as above, but no longer necessarily a convex optimisation; instead, we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other stuff the details of which I do not yet know, which I think pops up in wavelets and sparse coding. For all of these the results tend to be something like: with data $$y,$$ the difference between my estimate $$\hat{x}$$ and $$\hat{x}_\text{oracle}$$ is bounded by something-or-other, where the oracle estimate is the one where you know ahead of time the set $$S=\operatorname{supp}(x)$$. Candès gives an example result $\|\hat{x}-x\|_2 \leq C_0\frac{\|x-x_s\|_1}{\sqrt{s}} + C_1\varepsilon$ conditional upon $\delta_{2s}(A) < \sqrt{2} -1$ where this $$\delta_s(\cdot)$$ gives the restricted isometry constant of a matrix, defined as the smallest constant such that $$(1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2$$ for all $$s$$-sparse $$x$$. That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when $$\delta_s < 1.$$ This is not the strongest bound out there apparently, but for any of that form, those constants look frustrating. Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven’t tried yet. But generating random matrices that have a certain RIC with high probability is easy; that’s a neat trick in this area.

## Redundant compressed sensing

🏗 For now see Frame theory.

## Introductory texts

- Aside: see the rather good practical python notebook in numerical tours.
- Terry Tao’s exposition is great as usual. See, e.g.

  > […] we can at least give an informal geometric argument as to why $$\ell^1$$ minimisation is more likely to recover a sparse solution than $$\ell^2$$ minimisation. The set of all $$f$$ whose Fourier coefficients match the observed data $$c_\xi$$ forms an affine subspace of the space of all functions. The $$\ell^2$$ minimiser can then be viewed geometrically by taking $$\ell^2$$ balls (i.e. Euclidean balls) centred at the origin, and gradually increasing the radius of the ball until the first point of contact with the affine subspace. In general, there is no reason to expect this point of contact to be sparse (i.e. to lie on a high-codimension coordinate subspace). If however we replace $$\ell^2$$ with $$\ell^1$$, then the Euclidean balls are replaced by octahedra, which are much “pointier” (especially in high dimensions) and whose corners lie on coordinate subspaces. So the point of first contact is now much more likely to be sparse. The idea of using $$\ell^1$$ as a “convex relaxation” of $$\ell^0$$ is a powerful one in applied mathematics; see for instance on the topic.

- Hegde, Baraniuk, Davenport and Duarte have an open source textbook
- Wes McKinney’s intro
- RIP vs JL
- Gabriel Peyre’s Compressed Sensing of Images

## …Using random projections

Classic. Notes under low dimensional projections.

## …Using deterministic projections

Surely this is close to quasi monte carlo?

- Dustin G. Mixon, Achieving the Welch bound with difference sets:

  > I blogged about constructing harmonic frames using difference sets. We proved that such harmonic frames are equiangular tight frames, thereby having minimal coherence between columns. I concluded the entry by conjecturing that incoherent harmonic frames are as good for compressed sensing as harmonic frames whose rows were randomly drawn from the discrete Fourier transform (DFT) matrix

- A variant on the compressed sensing of Yves Meyer; recent work of Yves Meyer might be relevant:
  - A variant on the compressed sensing of Emmanuel Candes, Basarab Matei and Yves Meyer
  - Simple quasicrystals are sets of stable sampling, Basarab Matei and Yves Meyer

  > These papers are interesting because their approach to compressed sensing is very different. Specifically, their sparse vectors are actually functions of compact support with sufficiently small Lebesgue measure. As such, concepts like conditioning are replaced with that of stable sampling, and the results must be interpreted in the context of functional analysis. The papers demonstrate that sampling frequencies according to a (deterministic) simple quasicrystal will uniquely determine sufficiently sparse functions, and furthermore, the sparsest function in the preimage can be recovered by L1-minimization provided it’s nonnegative.

## Bayesian

Sparse Bayes can be tricky. See, perhaps, Bayesian Compressive Sensing.

## Phase transitions

How well can you recover a matrix from a certain number of measurements? In obvious metrics there is a sudden jump in how well you do with increasing measurements for a given rank. This looks a lot like a physical phase transition, which is a known phenomenon in ML. Hmm.

## Weird things to be classified

csgm, compressed sensing using generative models, tries to find a model which is sparse with respect to… some manifold of the latent variables of… a generative model? or something?

## References

Achlioptas, Dimitris. 2003. Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87.

Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. arXiv:1506.00898 [Cs, Math, Stat], June.

Bach, Francis, Rodolphe Jenatton, and Julien Mairal. 2011. Optimization With Sparsity-Inducing Penalties.
Foundations and Trends(r) in Machine Learning 1.0. Now Publishers Inc. Baraniuk, Richard G. 2007. IEEE Signal Processing Magazine 24 (4). ———. 2008. IEEE Signal Processing Magazine 25 (2): 83–91. Baraniuk, Richard G., Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. 2010. IEEE Transactions on Information Theory 56 (4): 1982–2001. Baraniuk, Richard, Mark Davenport, Ronald DeVore, and Michael Wakin. 2008. Constructive Approximation 28 (3): 253–63. Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. IEEE Transactions on Signal Processing 58 (1): 269–80. Bayati, Mohsen, and Andrea Montanari. 2011. IEEE Transactions on Information Theory 57 (2): 764–85. Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. Communications of the ACM 59 (8): 72–80. Bian, W., and X. Chen. 2013. SIAM Journal on Optimization 23 (3): 1718–41. Bingham, Ella, and Heikki Mannila. 2001. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD ’01. New York, NY, USA: ACM. Blanchard, Jeffrey D. 2013. Proceedings of the National Academy of Sciences 110 (4): 1146–47. Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. In International Conference on Machine Learning, 537–46. Borgerding, Mark, and Philip Schniter. 2016. arXiv:1612.01183 [Cs, Math], December. Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008a. In 3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008, 762–67. ———. 2008b. IEEE Transactions on Information Theory 54 (11): 4813–20. Cai, T. Tony, Guangwu Xu, and Jun Zhang. 2008. arXiv:0805.0149 [Cs], May. Cai, T. Tony, and Anru Zhang. 2015. The Annals of Statistics 43 (1): 102–38. Candès, Emmanuel J. 2014. ICM 2014 Proceedings, to Appear. Candès, Emmanuel J., and Mark A. Davenport. 2011. arXiv:1104.5246 [Cs, Math, Stat], April. Candès, Emmanuel J., Yonina C. Eldar, Deanna Needell, and Paige Randall. 2011. Applied and Computational Harmonic Analysis 31 (1): 59–73. Candès, Emmanuel J., and Benjamin Recht. 2009. Foundations of Computational Mathematics 9 (6): 717–72. Candès, Emmanuel J., J. Romberg, and T. Tao. 2006a. IEEE Transactions on Information Theory 52 (2): 489–509. Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006b. Communications on Pure and Applied Mathematics 59 (8): 1207–23. Candès, Emmanuel J., and Terence Tao. 2006. IEEE Transactions on Information Theory 52 (12): 5406–25. ———. 2008. “The Uniform Uncertainty Principle and Compressed Sensing.” Candès, Emmanuel J., and M.B. Wakin. 2008. IEEE Signal Processing Magazine 25 (2): 21–30. Candès, Emmanuel, and Terence Tao. 2005. IEEE Transactions on Information Theory 51 (12): 4203–15. Carmi, Avishy Y. 2013. Digital Signal Processing 23 (3): 751–70. ———. 2014. In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg. Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc. Chartrand, R., and Wotao Yin. 2008. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72. Chen, Xiaojun. 2012. Mathematical Programming 134 (1): 71–99. Chen, Xiaojun, and Weijun Zhou. 2013. Computational Optimization and Applications 59 (1-2): 47–61. Chretien, Stephane. 2008. arXiv:0809.0660 [Stat], September. Dasgupta, Sanjoy. 2000. 
In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Dasgupta, Sanjoy, and Anupam Gupta. 2003. Random Structures & Algorithms 22 (1): 60–65. Dasgupta, Sanjoy, Daniel Hsu, and Nakul Verma. 2006. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, 114–21. UAI’06. Arlington, Virginia, USA: AUAI Press. Daubechies, I., M. Defrise, and C. De Mol. 2004. Communications on Pure and Applied Mathematics 57 (11): 1413–57. Daubechies, Ingrid, Ronald DeVore, Massimo Fornasier, and C. Si̇nan Güntürk. 2010. Communications on Pure and Applied Mathematics 63 (1): 1–38. DeVore, Ronald A. 1998. Acta Numerica 7 (January): 51–150. Diaconis, Persi, and David Freedman. 1984. The Annals of Statistics 12 (3): 793–815. Donoho, D. L., M. Elad, and V. N. Temlyakov. 2006. IEEE Transactions on Information Theory 52 (1): 6–18. Donoho, David L. 2006. IEEE Transactions on Information Theory 52 (4): 1289–1306. Donoho, David L., and Michael Elad. 2003. Proceedings of the National Academy of Sciences 100 (5): 2197–2202. Donoho, David L., A. Maleki, and A. Montanari. 2010. In 2010 IEEE Information Theory Workshop (ITW), 1–5. Donoho, David L., Arian Maleki, and Andrea Montanari. 2009a. Proceedings of the National Academy of Sciences 106 (45): 18914–19. ———. 2009b. In 2010 IEEE Information Theory Workshop (ITW), 1–5. Duarte, Marco F., and Richard G. Baraniuk. 2013. Applied and Computational Harmonic Analysis 35 (1): 111–29. Flammia, Steven T., David Gross, Yi-Kai Liu, and Jens Eisert. 2012. New Journal of Physics 14 (9): 095022. Foygel, Rina, and Nathan Srebro. 2011. arXiv:1108.0373 [Math, Stat], August. Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. In Advances in Neural Information Processing Systems, 473–80. Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. IEEE Transactions on Signal Processing 64 (13): 3444–57. Graff, Christian G., and Emil Y. Sidky. 2015. Applied Optics 54 (8): C23–44. Hall, Peter, and Ker-Chau Li. 1993. The Annals of Statistics 21 (2): 867–89. Harchaoui, Zaid, Anatoli Juditsky, and Arkadi Nemirovski. 2015. Mathematical Programming 152 (1-2): 75–112. Hassanieh, Haitham, Piotr Indyk, Dina Katabi, and Eric Price. 2012. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, 563–78. STOC ’12. New York, NY, USA: ACM. Hassanieh, H., P. Indyk, D. Katabi, and E. Price. 2012. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 1183–94. Proceedings. Kyoto, Japan: Society for Industrial and Applied Mathematics. Hegde, Chinmay, and Richard G. Baraniuk. 2012. IEEE Transactions on Information Theory 58 (12): 7204–14. Hormati, A., O. Roy, Y.M. Lu, and M. Vetterli. 2010. IEEE Transactions on Signal Processing 58 (3): 1095–1109. Hoyer, Patrik O. n.d. Journal of Machine Learning Research 5 (9): 1457–69. Jaggi, Martin. 2013. In Journal of Machine Learning Research, 427–35. Kabán, Ata. 2014. In Journal of Machine Learning Research, 448–56. Kim, Daeun, and Justin P. Haldar. 2016. Signal Processing 125 (August): 274–89. Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. arXiv:1607.04331 [Cs, q-Bio, Stat], July. Launay, Julien, Iacopo Poli, François Boniface, and Florent Krzakala. 2020. In Advances in Neural Information Processing Systems, 33:15. Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. 
In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD ’06. New York, NY, USA: ACM. Li, Yingying, and Stanley Osher. 2009. Inverse Problems and Imaging 3 (3): 487–503. Matei, Basarab, and Yves Meyer. 2010. Complex Variables and Elliptic Equations 55 (8-10): 947–64. Mishali, Moshe, and Yonina C. Eldar. 2010. IEEE Journal of Selected Topics in Signal Processing 4 (2): 375–91. Montanari, Andrea. 2012. Compressed Sensing: Theory and Applications, 394–438. Mousavi, Ali, and Richard G. Baraniuk. 2017. In ICASSP. Needell, D., and J. A. Tropp. 2008. arXiv:0803.2392 [Cs, Math], March. Oka, A, and L. Lampe. 2008. In 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2008. SAM 2008, 257–60. Olshausen, B. A., and D. J. Field. 1996. Network (Bristol, England) 7 (2): 333–39. Olshausen, Bruno A, and David J Field. 2004. Current Opinion in Neurobiology 14 (4): 481–87. Oxvig, Christian Schou, Thomas Arildsen, and Torben Larsen. 2017. Aalborg University. Pawar, Sameer, and Kannan Ramchandran. 2015. arXiv:1501.00320 [Cs, Math], January. Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. IEEE Transactions on Signal Processing 60 (5): 2286–2303. Qiuyun Zou, Haochuan Zhang, Chao-Kai Wen, Shi Jin, and Rong Yu. 2018. IEEE Signal Processing Letters 25 (12): 1835–39. Rangan, Sundeep. 2011. In 2011 IEEE International Symposium on Information Theory Proceedings, 2168–72. St. Petersburg, Russia: IEEE. Ravishankar, Saiprasad, and Yoram Bresler. 2015. arXiv:1501.02923 [Cs, Stat], January. Ravishankar, S., and Y. Bresler. 2015. IEEE Transactions on Signal Processing 63 (9): 2389–2404. Rish, Irina, and Genady Grabarnik. 2014. In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg. Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group. Romberg, J. 2008. IEEE Signal Processing Magazine 25 (2): 14–20. Rosset, Saharon, and Ji Zhu. 2007. The Annals of Statistics 35 (3): 1012–30. Rubinstein, Ron, T. Peleg, and Michael Elad. 2013. IEEE Transactions on Signal Processing 61 (3): 661–77. Sarvotham, Shriram, Dror Baron, and Richard G. Baraniuk. 2006. In In Proc. Allerton Conf. On Comm., Control, and Computing. Schniter, P., and S. Rangan. 2012. In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 815–22. Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. Journal of Machine Learning Research 12 (July): 1865–92. Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. arXiv:1512.04011 [Cs], December. Song, Ruiyang, Yao Xie, and Sebastian Pokutta. 2015. arXiv:1509.00130 [Cs, Math, Stat], August. Tropp, J. A., and S. J. Wright. 2010. Proceedings of the IEEE 98 (6): 948–58. Tropp, J.A. 2006. IEEE Transactions on Information Theory 52 (3): 1030–51. Vetterli, Martin. 1999. In AeroSense’99, 3723:28–31. International Society for Optics and Photonics. Weidmann, Claudio, and Martin Vetterli. 2012. IEEE Transactions on Information Theory 58 (8): 4969–92. Wipf, David, and Srikantan Nagarajan. 2016. Microsoft Research, July. Wu, R., W. Huang, and D. R. Chen. 2013. IEEE Signal Processing Letters 20 (4): 403–6. Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. In International Conference on Machine Learning, 6850–60. 
Yaghoobi, M., Sangnam Nam, R. Gribonval, and M.E. Davies. 2012. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5409–12. Yang, Wenzhuo, and Huan Xu. 2015. In Journal of Machine Learning Research, 494–503. Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD ’17. New York, NY, USA: ACM.
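An appendix-style illustration (mine, not from the cited papers): the $$\ell^1$$ recovery program from the Basic Compressed Sensing section above, solved in its Lagrangian (LASSO) form by iterative soft thresholding (ISTA). All sizes, the noise level and the penalty λ are illustrative assumptions.

```python
# Minimal ISTA sketch for the l1 recovery problem
#   min ||x||_1  s.t.  ||A x - y||_2 <= eps,
# solved here in Lagrangian form: min 0.5*||A x - y||_2^2 + lam*||x||_1.
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 200, 60, 5                      # ambient dim, measurements, sparsity
A = rng.normal(size=(m, d)) / np.sqrt(m)  # Gaussian matrix: RIP w.h.p.
x_true = np.zeros(d)
x_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(500):
    grad = A.T @ (A @ x - y)              # gradient of the quadratic term
    z = x - grad / L                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("recovery error:", np.linalg.norm(x - x_true))
```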
2022-12-04 05:27:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101749420166016, "perplexity": 4797.944625390246}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00000.warc.gz"}
https://indico.desy.de/indico/event/18342/session/35/contribution/203
# Neutrino 2018 - XXVIII International Conference on Neutrino Physics and Astrophysics

4-9 June 2018, Heidelberg, Europe/Berlin timezone

# Contribution

Poster, high energy neutrinos & cosmic rays; Poster (participating in poster prize competition)

# Results from Testing the Neutrino Mass Ordering with Three Years of IceCube DeepCore Data

## Speakers

• Martin LEUERMANN

## Authorship annotation

for the IceCube Collaboration

## Session and Location

Wednesday Session, Poster Wall #183 (Ballroom)

## Abstract content

The measurement of the Neutrino Mass Ordering (NMO), i.e. the ordering of the three neutrino mass eigenstates, is a major goal of many future experiments. One strategy to measure the NMO is observing matter effects in the oscillation pattern of atmospheric neutrinos, as proposed for the Precision IceCube Next Generation Upgrade (PINGU) of the IceCube Neutrino Observatory. This type of measurement can already be explored with the currently running IceCube DeepCore detector. Albeit with lower significance, such a measurement contributes to the current understanding. Moreover, it exercises the measurement principle and thus prototypes future analyses with PINGU. We present results from two independent likelihood analyses measuring the NMO with three years of data from IceCube DeepCore. In the more sensitive one, we observe a slight preference for Normal Ordering in the first octant, close to maximum mixing, with a p-value of $p_\mathrm{IO}=15.3\%$ ($\mathrm{CL}_s=53\%$) for Inverted Ordering.
2019-02-19 19:37:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6171836256980896, "perplexity": 5336.1860894601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247491141.23/warc/CC-MAIN-20190219183054-20190219205054-00582.warc.gz"}
https://en.wikibooks.org/wiki/Statistics/Summary/Averages/Harmonic_Mean
# Statistics/Summary/Averages/Harmonic Mean

### Harmonic Mean

The arithmetic mean cannot be used when we want to average quantities such as speed. Consider the example below:

Example 1: The distance from my house to town is 40 km. I drove to town at a speed of 40 km per hour and returned home at a speed of 80 km per hour. What was my average speed for the whole trip?

Solution: If we just took the arithmetic mean of the two speeds I drove at, we would get 60 km per hour. This isn't the correct average speed, however: it ignores the fact that I drove at 40 km per hour for twice as long as I drove at 80 km per hour. To find the correct average speed, we must instead calculate the harmonic mean.

For two quantities A and B, the harmonic mean is given by: ${\displaystyle {\frac {2}{{\frac {1}{A}}+{\frac {1}{B}}}}}$

This can be simplified by adding the fractions in the denominator and then multiplying by the reciprocal: ${\displaystyle {\frac {2}{{\frac {1}{A}}+{\frac {1}{B}}}}={\frac {2}{\frac {B+A}{AB}}}={\frac {2AB}{A+B}}}$

For N quantities A, B, C, …: Harmonic mean = ${\displaystyle {\frac {N}{{\frac {1}{A}}+{\frac {1}{B}}+{\frac {1}{C}}+\ldots }}}$

Let us try out the formula above on our example: Harmonic mean = ${\displaystyle {\frac {2AB}{A+B}}}$

Our values are A = 40, B = 80. Therefore, harmonic mean ${\displaystyle ={\frac {2\times 40\times 80}{40+80}}={\frac {6400}{120}}\approx 53.333}$

Is this result correct? We can verify it. In the example above, the distance between the two towns is 40 km. So the trip from A to B at a speed of 40 km per hour will take 1 hour. The trip from B to A at a speed of 80 km per hour will take 0.5 hours. The total time taken for the round trip (80 km) will be 1.5 hours. The average speed will then be ${\displaystyle {\frac {80}{1.5}}\approx 53.33}$ km/hour. The harmonic mean also has physical significance.
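A quick numerical check of the worked example (an illustrative sketch; harmonic_mean is from Python's standard library):

```python
# Harmonic mean of 40 and 80 km/h, checked against distance/time.
from statistics import harmonic_mean

speeds = [40, 80]
print(harmonic_mean(speeds))                 # 53.333... = 2*A*B / (A + B)

# Verify against total distance / total time for the 40 km each-way trip.
distance = 40
total_time = distance / 40 + distance / 80   # 1 h + 0.5 h = 1.5 h
print(2 * distance / total_time)             # 53.333...
```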
2016-12-07 10:41:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871132493019104, "perplexity": 323.6254099901581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542060.60/warc/CC-MAIN-20161202170902-00264-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/tidal-forces-fields.86343/
# Tidal forces/fields

1. Aug 25, 2005

### Allday

Hey people, I'm doing some analysis of some N-body simulation data. I'm trying to calculate the tidal forces exerted on the smaller groups of particles by the other mass. I have a model for the distribution of matter causing the tidal field, so I can analytically calculate the gravitational potential and the directional second derivatives, but how do I translate that into the forces? Anybody have a reference for some good reading on the subject?

2. Aug 25, 2005

### pervect

Staff Emeritus

Well, if you have a potential V(x,y,z), then the force in cartesian coordinates (x,y,z) is given by

$$F_x = \frac{\partial{V}}{\partial{x}} \hspace{.25 in} F_y = \frac{\partial{V}}{\partial{y}} \hspace{.25 in} F_z = \frac{\partial{V}}{\partial{z}}$$

and if you have a unit vector U, the tidal force T is another vector, the gradient of the force F in the direction of the vector U, given by

$$T_x = \frac{\partial^2{V}}{\partial x \partial x} U_x + \frac{\partial^2{V}}{\partial x \partial y} U_y + \frac{\partial^2{V}}{\partial x \partial z} U_z$$

$$T_y = \frac{\partial^2{V}}{\partial y \partial x} U_x + \frac{\partial^2{V}}{\partial y \partial y} U_y + \frac{\partial^2{V}}{\partial y \partial z} U_z$$

$$T_z = \frac{\partial^2{V}}{\partial z \partial x} U_x + \frac{\partial^2{V}}{\partial z \partial y} U_y + \frac{\partial^2{V}}{\partial z \partial z} U_z$$

You can write this in tensor notation

$$T^i = K^i{}_j U^j$$

where

$$K^i{}_j = \frac{\partial^2{V}}{\partial x^i \partial x^j}$$

It gets more complicated if you want to use general (non-cartesian) coordinates, but you can always say that the tidal forces at a point are given by a second rank tensor, one that takes in a vector (the displacement) and spits out a vector (the tidal force).

I *think* that the partial derivatives should normally all commute, so

$$\frac{\partial^2 V}{\partial x \partial y} = \frac{\partial^2 V}{\partial y \partial x}$$
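Not from the thread, but a small numerical sketch of pervect's recipe, using a point-mass potential as a stand-in for the poster's matter model; the sign convention and the definition T = K U follow the post above, and the units are illustrative.

```python
# Build the tidal tensor K_ij = d^2 V / dx_i dx_j for V(r) = -G M / |r|
# by central finite differences, then apply it to a displacement U.
import numpy as np

G, M = 1.0, 1.0                      # illustrative units

def V(p):
    return -G * M / np.linalg.norm(p)

def hessian_V(r, h=1e-4):
    """Numerical Hessian of V at position r via central differences."""
    K = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
            K[i, j] = (V(r + ei + ej) - V(r + ei - ej)
                       - V(r - ei + ej) + V(r - ei - ej)) / (4 * h * h)
    return K

r = np.array([1.0, 0.0, 0.0])        # field point
U = np.array([0.0, 0.1, 0.0])        # displacement within the particle group
K = hessian_V(r)
T = K @ U                            # tidal force per unit mass, T^i = K^i_j U^j
print(K)
print(T)
```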
2016-10-25 22:56:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8208732008934021, "perplexity": 492.49002145998827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00174-ip-10-171-6-4.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3200786/singularity-and-laurent-series-of-several-functions
# Singularity and Laurent series of several functions

For each of the following functions classify the isolated singularity at 0 and specify the principal part of the Laurent development there:

a) $$\dfrac{\sin(z)}{z^n},\;n\in\mathbb{N}$$ b) $$\dfrac{z}{(z+1)\sin(z^n)},\;n\in\mathbb{N}$$ c) $$\cos(z^{-1})\sin(z^{-1})$$ d) $$(1-z^{-n})^{-k},\;n,k\in\mathbb{N}\setminus\{0\}$$

I think that in a) $$0$$ is a removable singularity, and in b), c) and d) $$0$$ is an essential singularity, but what does "specify the principal part of the Laurent development" mean? How do I do it?

• It might help you to write out the first two or three terms of the Maclaurin series for sine. This will make it easier to see what happens in the first case when $n > 1$. – Eric Towers Apr 24 at 16:56
• In a) $0$ is a pole for $n\gt 1$. – Thomas Shelby Apr 24 at 16:57
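Following the hint in the comments, a sketch of the expansion for case a) (added here for illustration, not part of the original thread):

$$\frac{\sin z}{z^n}=\frac{1}{z^n}\left(z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots\right)=\frac{1}{z^{n-1}}-\frac{1}{3!\,z^{n-3}}+\frac{1}{5!\,z^{n-5}}-\cdots$$

So for $n=1$ the singularity at $0$ is removable, while for $n\ge 2$ the finitely many terms with negative exponents form the principal part and $0$ is a pole of order $n-1$.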
2019-06-18 16:37:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8284524083137512, "perplexity": 166.07945766244546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998808.17/warc/CC-MAIN-20190618163443-20190618185443-00536.warc.gz"}
http://mathhelpforum.com/geometry/212829-tan-y-44-a-print.html
# tan(y)=44

• Feb 9th 2013, 11:06 AM

jasminebonillaa

tan(y)=44

Helpppp, and please explain it: tan(y)=44, find y.

• Feb 9th 2013, 11:44 AM

Plato

Re: tan(y)=44

Quote:

Originally Posted by jasminebonillaa
tan(y)=44, find y.

Clearly $y=\arctan(44).$

• Feb 9th 2013, 12:29 PM

HallsofIvy

Re: tan(y)=44

Actually, it is impossible to answer that without more information: is y in degrees, radians, grads, or any of several other ways of measuring angles? To do this, take your calculator and make sure it is in "degree", "radian", or "grad" mode. (Every calculator I have seen has "degree" and "radian" mode; some have "grad". I have never seen one with other modes.) Then use whatever key sequence is correct to find the inverse tangent of 44. Using the on-screen calculator that comes with "Windows", assuming y is in radians, click on the little "radians" radio button, enter "44", then click on the "INV" button (for the inverse function). The button that, before, said "tan" now says "$tan^{-1}$". Click on that.
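For a quick sanity check with Python's standard library (added for illustration, not from the thread):

import math

y = math.atan(44)           # inverse tangent, result in radians
print(y)                    # ≈ 1.548 radians
print(math.degrees(y))      # ≈ 88.70 degrees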
2016-10-20 22:10:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5104458928108215, "perplexity": 6805.74548173988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717954.1/warc/CC-MAIN-20161020183837-00080-ip-10-171-6-4.ec2.internal.warc.gz"}
https://socratic.org/questions/what-is-the-antiderivative-of-e-2x
# What is the antiderivative of e^(2x)?

Mar 27, 2015

Antiderivative is another name for the integral (if by some misfortune you didn't know). So,

$\int {e}^{2 x} \mathrm{dx} = \frac{1}{2} \int 2 {e}^{2 x} \mathrm{dx}$

You can see that $2 \mathrm{dx} = d \left(2 x\right)$, that is, $2$ is the derivative of $2 x$.

It follows: $\frac{1}{2} \int {e}^{2 x} d \left(2 x\right)$

NOTE: this is the same as letting $u = 2 x$

$\frac{1}{2} \int {e}^{u} \mathrm{du} = \frac{1}{2} {e}^{u} = \frac{1}{2} {e}^{2 x}$

Generally, $\int {e}^{a x} \mathrm{dx} = \frac{1}{a} {e}^{a x}$

Mar 27, 2015

It is $\frac{1}{2} {e}^{2 x}$.

You can certainly use the technique of integration by substitution (reversing the chain rule) to find this, but you can also reason as follows:

The antiderivative of ${e}^{2 x}$ is a function whose derivative is ${e}^{2 x}$. But we know some things about derivatives at this point of the course. Among other things, we know that the derivative of $e$ to a power is $e$ to the power times the derivative of the power. So we know that the derivative of ${e}^{2 x}$ is ${e}^{2 x} \cdot 2$. That's twice as big as what we want. We also know that constant factors just hang out in front when we take derivatives, so if we stick a $\frac{1}{2}$ out front, it will be there after we differentiate and we can cancel the two.

$f \left(x\right) = \frac{1}{2} {e}^{2 x}$ has $f ' \left(x\right) = {e}^{2 x}$ so it is an antiderivative. The general antiderivative then is $\frac{1}{2} {e}^{2 x} + C$

Note: An important consequence of the Mean Value Theorem is that a function whose derivative is $0$ is a constant function. An immediate consequence of that is that if two functions have the same derivative, then they differ by a constant. Therefore, any function that has derivative ${e}^{2 x}$ can ultimately be written as $\frac{1}{2} {e}^{2 x} + C$ for some constant C.
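A quick symbolic check of both results (a sketch assuming sympy is installed; the assumption a > 0 is added so sympy returns the simple closed form rather than a piecewise answer):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(2*x), x))      # exp(2*x)/2

a = sp.symbols('a', positive=True)       # assumption: a > 0
print(sp.integrate(sp.exp(a*x), x))      # exp(a*x)/a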
2019-03-23 04:16:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 23, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8975182175636292, "perplexity": 171.185689395809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202723.74/warc/CC-MAIN-20190323040640-20190323062640-00473.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=65tis31marljkvv47c8jraqi92&topic=2163.0;wap2
MAT244--2019F > Term Test 1

Problem 1 (morning) (1/2)

> >>

Victor Ivrii:

(a) Find integrating factor and then a general solution of ODE
\begin{equation*} \bigl(-y\sin(x)+y^3\cos(x)\bigr) + \bigl(3\cos(x)+5y^2\sin(x)\bigr) y'=0 \end{equation*}

(b) Also, find a solution satisfying $y(\dfrac{\pi}{4})=\sqrt{2}$.

Ruojing Chen:

(a) Let $$M=-y\sin(x)+y^3\cos(x)$$ $$N=3\cos(x)+5y^2\sin(x)$$

Then $$M_y=-\sin(x)+3y^2\cos(x)$$ $$N_x=-3\sin(x)+5y^2\cos(x)$$

$$R=\frac{M_y-N_x}{M}=\frac{2\sin(x)-2\cos(x)}{-y\sin(x)+y^3\cos(x)}=\frac{2(\sin(x)-y^2\cos(x))}{-y(\sin(x)-y^2\cos(x))}=-\frac{2}{y}$$

$$\mu=e^{-\int R\,dy}=e^{\int \frac{2}{y}\,dy}=e^{2\ln y}=e^{\ln(y^2)}=y^2$$

Multiply both sides by $$\mu=y^2$$:

$$y^2(-y\sin(x)+y^3\cos(x))+y^2(3\cos(x)+5y^2\sin(x))\,y'=0$$

$$M'=-y^3\sin(x)+y^5\cos(x)$$, $$N'=3y^2\cos(x)+5y^4\sin(x)$$

$$\exists\,\phi(x,y)\ \text{such that}\ \phi_x=M',\ \phi_y=N'$$

$$\phi=\int M'\,dx=\int\bigl(-y^3\sin(x)+y^5\cos(x)\bigr)\,dx=y^3\cos(x)+y^5\sin(x)+h(y)$$

$$\phi_y=3y^2\cos(x)+5y^4\sin(x)+h'(y)=N'$$

$$h'(y)=0,\quad h(y)=c$$

$$\therefore\ \phi=y^3\cos(x)+y^5\sin(x)=c$$

(b) When $$y(\frac{\pi}{4})=\sqrt{2}$$:

$$(\sqrt{2})^3\cos(\tfrac{\pi}{4})+(\sqrt{2})^5\sin(\tfrac{\pi}{4})=2\sqrt{2}\cdot\frac{1}{\sqrt{2}}+4\sqrt{2}\cdot\frac{1}{\sqrt{2}}=2+4=6$$

$$\therefore c=6$$

$$\phi=y^3\cos(x)+y^5\sin(x)=6$$

What is your real life name? I can find it by email, but I am too lazy :)

EroSkulled:

Solve: $(-y\sin(x)+y^{3}\cos(x))+(3\cos(x)+5y^{2}\sin(x))y'=0$

$$M=-y\sin(x)+y^{3}\cos(x),\quad N=3\cos(x)+5y^{2}\sin(x)$$

$$M_y=-\sin(x)+3y^{2}\cos(x),\quad N_x=-3\sin(x)+5y^{2}\cos(x)$$

$$R_1=\frac{N_x-M_y}{M}=\frac{-3\sin(x)+5y^{2}\cos(x)+\sin(x)-3y^{2}\cos(x)}{-y\sin(x)+y^{3}\cos(x)}=\frac{-2\sin(x)+2y^{2}\cos(x)}{y(-\sin(x)+y^{2}\cos(x))}=\frac{2}{y}$$

$$\mu=e^{\int R_1\,dy}=e^{\int \frac{2}{y}\,dy}=e^{2\ln y}=y^2$$

We then multiply both sides of the original equation by $y^2$ so that it becomes EXACT, and hence we can continue to find $\phi(x,y)$:

$$(-y^{3}\sin(x)+y^{5}\cos(x))+(3y^{2}\cos(x)+5y^{4}\sin(x))y'=0$$

$$\phi(x,y)=\int\bigl(-y^{3}\sin(x)+y^{5}\cos(x)\bigr)\,dx=y^{3}\cos(x)+y^{5}\sin(x)+h(y)$$

$$\phi(x,y)_y=3y^{2}\cos(x)+5y^{4}\sin(x)+h'(y)\cong 3y^{2}\cos(x)+5y^{4}\sin(x)$$

Hence we know $h'(y)=0$, then $h(y)=C$.

$$\phi(x,y):\ y^{3}\cos(x)+y^{5}\sin(x)=C$$

Initial value: $y(\frac{\pi}{4})=\sqrt{2}$. Plugging into the equation above, we get the following:

$$(\sqrt{2})^{3}\cos(\tfrac{\pi}{4})+(\sqrt{2})^{5}\sin(\tfrac{\pi}{4})=C$$

$$2\sqrt{2}\cdot\frac{1}{\sqrt{2}}+4\sqrt{2}\cdot\frac{1}{\sqrt{2}}=C$$

$$C=6$$

We get the solution: $$y^{3}\cos(x)+y^{5}\sin(x)=6$$

No post after this is needed. V.I.

Instead of a sequence of single equations it would be better to use a multiline environment like gather or gather* to avoid excessive vertical spacing:

--- Code: ---\begin{gather} EQUATION \\ EQUATION \\  ....
\end{gather}
--- End code ---

if there is no text between them, since MathJax does not support the \intertext{ } LaTeX command.

EroSkulled:

--- Quote from: rj127 on October 23, 2019, 06:37:18 AM ---
[Ruojing Chen's solution, quoted in full above]
--- End quote ---

Above solution is not typed well in correct format so I posted mine as well.

Jiuru Gao:

You missed a $y^2$ in $R=\frac{M_y-N_x}{M}$: the numerator should be $2\sin x - 2y^2\cos x$.
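A quick machine check of the implicit solution found above (a sketch added for illustration, assuming sympy is available; not part of the original thread):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Left-hand side of the ODE, with y' kept symbolic
ode = (-y(x)*sp.sin(x) + y(x)**3*sp.cos(x)
       + (3*sp.cos(x) + 5*y(x)**2*sp.sin(x)) * y(x).diff(x))

# Candidate implicit solution phi = y^3 cos(x) + y^5 sin(x)
phi = y(x)**3*sp.cos(x) + y(x)**5*sp.sin(x)

# After multiplying the ODE by the integrating factor y^2,
# its left-hand side should be exactly d(phi)/dx:
print(sp.simplify(phi.diff(x) - y(x)**2 * ode))   # prints 0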
2021-12-04 13:47:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9541299343109131, "perplexity": 7936.067406745956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362992.98/warc/CC-MAIN-20211204124328-20211204154328-00213.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-2-section-2-5-compound-inequalities-exercise-set-page-94/30
## Intermediate Algebra (6th Edition)

$(6,12)$

$-2 \lt \frac{1}{2}x-5 \lt 1$

Using inequality properties, multiply all parts by 2.

$2(-2) \lt 2(\frac{1}{2}x-5) \lt 2(1)$

$-4 \lt 2(\frac{1}{2}x-5) \lt 2$

Using the distributive property,

$-4 \lt 2(\frac{1}{2}x)-2(5) \lt 2$

$-4 \lt x-10 \lt 2$

Add 10 to all parts:

$-4+10 \lt x-10+10 \lt 2+10$

$6 \lt x \lt 12$

Interval Notation: $(6,12)$
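A one-line check of the result (added for illustration; sympy assumed installed):

import sympy as sp

x = sp.symbols('x', real=True)
print(sp.reduce_inequalities([-2 < x/2 - 5, x/2 - 5 < 1], x))
# -> (6 < x) & (x < 12)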
2018-06-25 13:32:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7159439325332642, "perplexity": 2004.4200063738501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867885.75/warc/CC-MAIN-20180625131117-20180625151117-00007.warc.gz"}
http://math.stackexchange.com/questions/317633/is-there-a-field-with-n-elements-for-all-n-in-mathbbn
# Is there a field with $n$ elements for all $n \in \mathbb{N}$? [duplicate]

I don't think this is true, but I'm not sure. I certainly know of finite fields with 2, 4 and 8 elements, and of course $p^n$ elements where $p$ is prime, for all $n \in \mathbb{N}$.

Nope. Hint: in a finite field, consider the subfield generated by $1$. This is called the prime subfield. Any field is a vector space over its prime subfield... – Qiaochu Yuan Mar 1 '13 at 7:06

Let $F$ be a field of order $n$ and $P$ its prime subfield. Then $P\cong \mathbb Z_p$ where $p = \lvert P \rvert$. Thus $p$ is prime, since $\mathbb Z_m$ has zero divisors for composite $m$ and so can be a field only when $m$ is prime. So now $F$ is a finite $P$ vector space, and thus $n = \lvert F \rvert = \lvert P \rvert ^k = p^k$, where $k = \operatorname{dim}_P (F)$.
2015-07-07 09:46:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.88493412733078, "perplexity": 258.1799158022937}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099105.15/warc/CC-MAIN-20150627031819-00126-ip-10-179-60-89.ec2.internal.warc.gz"}
https://brilliant.org/problems/pressure-volume-relation-of-gas/
# Pressure-volume relation of gas

The above diagram shows the pressure-volume relation of $$n$$ moles of an ideal gas in state $$A$$ and state $$B$$. Which of the following statements is true?

$$a.$$ The internal energies of $$A$$ and $$B$$ are the same.

$$b.$$ If we change the state of the gas from $$A$$ to $$B$$ as shown in the above diagram, then the temperature of the gas will increase.

$$c.$$ The amount of the heat emitted from the state change $$A \rightarrow B$$ is $$2PV.$$
2018-06-18 04:10:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7803202271461487, "perplexity": 199.5965663536643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860041.64/warc/CC-MAIN-20180618031628-20180618051628-00367.warc.gz"}
http://icpc.njust.edu.cn/Problem/Hdu/5170/
GTY's math problem

Time Limit: 1000/1000 MS (Java/Others) Memory Limit: 65536/65536 K (Java/Others)

Description

GTY is a GodBull who will get an Au in NOI. To have more time to learn algorithm knowledge, he never does his math homework. His math teacher is very unhappy about that, but she can't do anything because GTY can always get a good mark in math exams. One day, the math teacher asked GTY to answer a question. There are four numbers on the blackboard - $a, b, c, d$. The math teacher wants GTY to compare $a^b$ with $c^d$. Because GTY never does his homework, he can't figure out this problem! If GTY can't answer this question correctly, he will have to do his homework. So help him!

Input

Multi test cases (about 5000). Every case contains four integers a, b, c, d ($1 \leq a,b,c,d \leq 1000$) separated by spaces. Please process to the end of file.

Output

For each case, if $a^b > c^d$, print '>'. If $a^b < c^d$, print '<'. If $a^b = c^d$, print '='.

Sample Input

2 1 1 2
2 4 4 2
10 10 9 11

Sample Output

>
=
<

Author: hujie

Source

BestCoder Round #29
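A minimal Python sketch of one standard approach (added for illustration, not the judge's reference solution): Python's arbitrary-precision integers let us compare a^b and c^d exactly, whereas in a fixed-width language one would instead compare b·ln(a) with d·ln(c) under a small tolerance.

import sys

for line in sys.stdin:
    parts = line.split()
    if len(parts) != 4:
        continue
    a, b, c, d = map(int, parts)
    lhs, rhs = a**b, c**d   # exact big-integer arithmetic, no precision issues
    print('>' if lhs > rhs else '<' if lhs < rhs else '=')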
2020-08-14 23:00:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46167731285095215, "perplexity": 2614.1113861282156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740343.48/warc/CC-MAIN-20200814215931-20200815005931-00396.warc.gz"}
https://www.cuemath.com/learn/mathematics/sequences-harmonic-sequence/
Mathematics

Harmonic Sequence

1 Introduction
2 Applications of Harmonic Progression in real life
3 First term of HP, Common Difference
4 nth Term Formula
5 Harmonic Sequence Formula
6 Sum of Harmonic Sequence
7 Harmonic Graph and Properties
8 Summary

23rd September 2020

Reading Time: 6 Minutes

## Introduction

Before we understand the Harmonic Sequence or Harmonic Series, we must understand what an Arithmetic Sequence / Arithmetic Progression is. I assume you have all already covered Arithmetic Progression under Sequence and Series. Here we will understand every concept of the Harmonic Series by following the Arithmetic Sequence.

What is a harmonic sequence? A sequence whose terms are the reciprocals of the terms of an arithmetic sequence (none of which may be 0) is called a Harmonic Sequence. The sum of such a sequence is known as a Harmonic Series.

Example: If we have the Arithmetic Sequence 4, 6, 8, 10, 12 with a common difference of 2, i.e. d = 2, the Harmonic Sequence corresponding to it is 1/4, 1/6, 1/8, 1/10, 1/12, ….

Let's take another example. We have to determine whether the series below is a Harmonic Sequence or not:

3/7, 1/3, 3/11, 3/13, 3/15, ….

If we prove that the reciprocals of the above sequence form an A.P. with a common difference, then we can establish that the sequence is a Harmonic Sequence, and the sum of this sequence would be a harmonic series. The reciprocals of the above sequence form the A.P. 7/3, 9/3, 11/3, 13/3, 15/3, …. So, the difference between the 1st and 2nd fraction in the sequence is 2/3, and the same is the case with the 2nd and 3rd fraction or the 3rd and the 4th.

Note: The trick here is that whenever you are given a Harmonic Progression, convert it into an A.P.

Example: Find the next four terms of the sequence 1/7, -1/2, -1/11, ….

The reciprocals form the A.P. 7, -2, -11, …, so d = -2 - 7 = -9. The next A.P. terms are -11 + (-9) = -20, then -20 + (-9) = -29, -29 + (-9) = -38 and -38 + (-9) = -47.

Therefore, the next four terms are -1/20, -1/29, -1/38, -1/47, and the sequence is 1/7, -1/2, -1/11, -1/20, -1/29, -1/38, -1/47.

## Applications of Harmonic Progression in real life

Learning about patterns and sequences is very important not just in maths but in real life too. Can you imagine watching a film without a plot or series of related events, just random scenes? The script writer has to ensure a sequence of related scenes in which the film is created so that it makes sense to the audience.

Harmonic formulae can also be used by scientists to draw conclusions from their experiments, for example to establish the degree at which water boils each time the temperature is changed by the same value. Harmonic progressions are also used in the music industry to establish theories on sounds and to study them closely, and the concept of harmonics appears in electrical gadgets, electrical machines and the generation of power.

## First term of HP, Common Difference

First term of a Harmonic Progression: The first term of the Harmonic Progression is fundamental to the number series and is denoted as a. (The partial sums of the harmonic series 1 + 1/2 + 1/3 + … are never integers, except for the first partial sum, which is 1.)

Common Difference: The common difference means that the difference between any two consecutive reciprocals in the series is the same. It is denoted as d.

Example 1: If a, b, c are three quantities in Harmonic Progression, then 1/a, 1/b, 1/c are in A.P., so the common difference is d = 1/b - 1/a = 1/c - 1/b, or (a-b)/ab = (b-c)/bc, or a/c = (a-b)/(b-c).

## nth Term Formula

The nth term of an H.P. = 1/(nth term of the corresponding A.P.)

Now how do we establish this?
Let's say in an A.P. the 1st term is a, the 2nd term is a + d, the 3rd term is a + 2d, and so on; then the recursive formula of the Arithmetic Sequence is a+(1-1)d, a+(2-1)d, a+(3-1)d, …, a+(n-1)d.

Therefore, the generic nth term formula of the Arithmetic Sequence is

\begin{align}{a_n} = a + (n - 1) \times d\end{align}

## Harmonic Sequence Formula

Similarly, in a Harmonic Progression the first term would be 1/a, the 2nd term 1/(a+d), the 3rd term 1/(a+2d), …, and the nth term 1/(a+(n-1)d).

So the recursive formula of the Harmonic Sequence is 1/[a+(1-1)d], 1/[a+(2-1)d], 1/[a+(3-1)d], …, 1/[a+(n-1)d].

Note: Recursive means the pattern is repetitive in nature, so to find the next term we look at the previous term and add the common difference of the series. The first term must be given to us.

Therefore, the nth term or the general term of an H.P. is

\begin{align}{a_n} = \frac{1}{{a + (n - 1)d}}\end{align}

Example 1: Find the 10th term of the Harmonic Progression 2, 2/3, 2/5, ….

The corresponding A.P. here is 1/2, 3/2, 5/2, …, so the first term is a = 1/2, the common difference is d = 1, and n = 10. Placing these values in the generic H.P. term formula a_n = 1/[a+(n-1)d], we get a_10 = 1/[1/2+(10-1)×1] = 1/(19/2). The 10th term of the above H.P. is 2/19.

## Sum of Harmonic Sequence

For an HP, the sum of the harmonic sequence can be computed if the first term and the total number of terms are known:

Sum of first n terms = 1/a + 1/(a + d) + 1/(a + 2d) + … + 1/[a + (n – 1) × d]

Note: n can also be taken to infinity (∞), giving the infinite harmonic series.

Unlike an A.P., a Harmonic Progression has no simple closed-form expression for its sum, and in particular the H.P. sum is NOT the reciprocal of the A.P. sum. The sum of 'n' terms of the corresponding A.P. is S_n = n/2[2a + (n − 1) × d], but the H.P. sum must be computed by adding the reciprocals term by term.

Example 1: Find the sum of the Harmonic Sequence 1/12 + 1/24 + 1/36 + 1/48 + 1/60.

Here the corresponding A.P. is 12, 24, 36, 48, 60 (a = 12, d = 12, n = 5). Writing each fraction over the common denominator 720 gives 60/720 + 30/720 + 20/720 + 15/720 + 12/720 = 137/720. Therefore the sum of the 5 terms of the H.P. is 137/720.

## Harmonic Graph and Properties

Harmonic graphs are mathematical models used to plot harmonic motions or harmonic series. Let's take the example of a pendulum, in which we measure the oscillation by recording different positions of the pendulum and the time it takes to reach those positions.

We pull the pendulum to the right so that it is displaced 15 cm and then release it so that it oscillates, i.e. moves back and forth. The position of the pendulum before we release it is 15 cm at time t = 0. We notice that it takes 1 second to reach the maximum displacement of -15 cm on the left side of the equilibrium point, and another 1 second to come back to the position it started from. Note that any displacement to the right of the original equilibrium position (0 cm) is taken as positive and anything to the left as negative.

To understand this further, let's capture some more positions/displacements of the pendulum at different times during its oscillation.

Time (s) | Displacement (cm)
---|---
0 | 15
0.5 | 0
0.75 | -12
1 | -15
1.25 | -12
1.50 | 0
1.75 | 12
2.00 | 15
2.25 | 12
2.50 | 0

Now, if we plot the above table on a graph, where the X-axis is time (t) and the Y-axis is displacement (d), we get the curve below.
We can see in the above graph that the curve is a sine curve, and this is what we get for any simple harmonic motion. For example, even if we use a spring with a weight to oscillate, we will get the same kind of harmonic graph as above.

## Summary

You may be surprised to know that the study of the harmonic sequence dates back to the 6th century BC, when the Greek mathematician Pythagoras studied the nature of the universe; he first used it for the study of music. The harmonic series is an infinite series with no finite limit: the sum of its successive terms grows towards infinity.
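A small Python sketch of the two formulas above, using exact fractions (added for illustration; the names hp_term and hp_sum are assumptions, not standard-library functions):

from fractions import Fraction

def hp_term(a, d, n):
    """n-th term of the HP whose reciprocals form the AP a, a+d, a+2d, ... (1-indexed)."""
    return 1 / (Fraction(a) + (n - 1) * Fraction(d))

def hp_sum(a, d, n):
    """Partial sum of the first n HP terms (no closed form; add term by term)."""
    return sum(hp_term(a, d, k) for k in range(1, n + 1))

print(hp_term(Fraction(1, 2), 1, 10))   # 2/19, the 10th term of 2, 2/3, 2/5, ...
print(hp_sum(12, 12, 5))                # 137/720 = 1/12 + 1/24 + 1/36 + 1/48 + 1/60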
2022-12-08 03:07:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105973243713379, "perplexity": 1111.4116671289046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711232.54/warc/CC-MAIN-20221208014204-20221208044204-00117.warc.gz"}
http://docs.lightkurve.org/api/lightkurve.periodogram.LombScarglePeriodogram.html
# LombScarglePeriodogram

class lightkurve.periodogram.LombScarglePeriodogram(*args, **kwargs)

Subclass of Periodogram representing a power spectrum generated using the Lomb Scargle method.

Attributes Summary

frequency_at_max_power : Returns the frequency corresponding to the highest peak in the periodogram.
max_power : Returns the power of the highest peak in the periodogram.
period : Returns the array of periods, i.e. 1/frequency.
period_at_max_power : Returns the period corresponding to the highest peak in the periodogram.

Methods Summary

bin([binsize, method]) : Bins the power spectrum.
copy() : Returns a copy of the Periodogram object.
flatten([method, filter_width, return_trend]) : Estimates the Signal-To-Noise (SNR) spectrum by dividing out an estimate of the noise background.
from_lightcurve(lc[, minimum_frequency, …]) : Creates a Periodogram from a LightCurve using the Lomb-Scargle method.
plot([scale, ax, xlabel, ylabel, title, …]) : Plots the Periodogram.
show_properties() : Prints a summary of the non-callable attributes of the Periodogram object.
smooth([method, filter_width]) : Smooths the power spectrum using the ‘boxkernel’ or ‘logmedian’ method.
to_table() : Exports the Periodogram as an Astropy Table.

Attributes Documentation

frequency_at_max_power
    Returns the frequency corresponding to the highest peak in the periodogram.

max_power
    Returns the power of the highest peak in the periodogram.

period
    Returns the array of periods, i.e. 1/frequency.

period_at_max_power
    Returns the period corresponding to the highest peak in the periodogram.

Methods Documentation

bin(binsize=10, method='mean')

Bins the power spectrum.

Parameters:

binsize : int
    The factor by which to bin the power spectrum, in the sense that the power spectrum will be smoothed by taking the mean in bins of size N / binsize, where N is the length of the original frequency array. Defaults to 10.
method : str, one of ‘mean’ or ‘median’
    Method to use for binning. Default is ‘mean’.

Returns:

binned_periodogram : a Periodogram object
    Returns a new Periodogram object which has been binned.

copy()

Returns a copy of the Periodogram object. This method uses the copy.deepcopy function to ensure that all objects stored within the Periodogram are copied.

Returns:

pg_copy : Periodogram
    A new Periodogram object which is a copy of the original.

flatten(method='logmedian', filter_width=0.01, return_trend=False)

Estimates the Signal-To-Noise (SNR) spectrum by dividing out an estimate of the noise background.

This method divides the power spectrum by a background estimated using a moving filter in log10 space by default. For details on the method and filter_width parameters, see Periodogram.smooth().

Dividing the power through by the noise background produces a spectrum with no units of power. Since the signal is divided through by a measure of the noise, we refer to this as a Signal-To-Noise spectrum.

Parameters:

method : str, one of ‘boxkernel’ or ‘logmedian’
    Background estimation method passed on to Periodogram.smooth(). Defaults to ‘logmedian’.
filter_width : float
    If method = ‘boxkernel’, this is the width of the smoothing filter in units of frequency. If method = ‘logmedian’, this is the width of the smoothing filter in log10(frequency) space.
return_trend : bool
    If True, then the background estimate, alongside the SNR spectrum, will be returned.
Returns:

snr_spectrum : Periodogram object
    Returns a periodogram object where the power is an estimate of the signal-to-noise of the spectrum, created by dividing the powers with a simple estimate of the noise background using a smoothing filter.
bkg : Periodogram object
    The estimated power spectrum of the background noise. This is only returned if return_trend = True.

static from_lightcurve(lc, minimum_frequency=None, maximum_frequency=None, minimum_period=None, maximum_period=None, frequency=None, period=None, nterms=1, nyquist_factor=1, oversample_factor=None, freq_unit=None, normalization='amplitude', **kwargs)

Creates a Periodogram from a LightCurve using the Lomb-Scargle method.

By default, the periodogram will be created for a regular grid of frequencies from one frequency separation to the Nyquist frequency, where the frequency separation is determined as 1 / the time baseline. The min frequency and/or max frequency (or max period and/or min period) can be passed to set custom limits for the frequency grid. Alternatively, the user can provide a custom regular grid using the frequency parameter or a custom regular grid of periods using the period parameter.

The sampling of the spectrum can be changed using the oversample_factor parameter. An oversampled spectrum (oversample_factor > 1) is useful for displaying the full details of the spectrum, allowing the frequencies and amplitudes to be measured directly from the plot itself, with no fitting required. This is recommended for most applications, with a value of 5 or 10. On the other hand, an oversample_factor of 1 means the spectrum is critically sampled, where every point in the spectrum is independent of the others. This may be used when Lorentzians are to be fitted to modes in the power spectrum, in cases where the mode lifetimes are shorter than the time-base of the data (which is sometimes the case for solar-like oscillations). An oversample_factor of 1 is suitable for these stars because the modes are usually fully resolved. That is, the power from each mode is spread over a range of frequencies due to damping. Hence, any small error from measuring mode frequencies by taking the maximum of the peak is negligible compared with the intrinsic linewidth of the modes.

The normalization parameter will normalize the spectrum to either power spectral density (“psd”) or amplitude (“amplitude”). Users doing asteroseismology on classical pulsators (e.g. delta Scutis) typically prefer normalization="amplitude" because “amplitude” has higher dynamic range (high and low peaks visible simultaneously), and we often want to read off amplitudes from the plot. If normalization="amplitude", the default value for oversample_factor is set to 5 and freq_unit is 1/day. Alternatively, users doing asteroseismology on solar-like oscillators tend to prefer normalization="psd" because power density has a scaled axis that depends on the length of the observing time, and is used when we are interested in noise levels (e.g. granulation) and are looking at damped oscillations. If normalization="psd", the default value for oversample_factor is set to 1 and freq_unit is set to microHz. Default values of freq_unit and oversample_factor can be overridden. See Appendix A of Kjeldsen & Bedding, 1995 for a full discussion of normalization and measurement of oscillation amplitudes (http://adsabs.harvard.edu/abs/1995A%26A…293…87K).

The parameter nterms controls how many Fourier terms are used in the model.
Setting the nyquist_factor to be greater than 1 will sample the space beyond the Nyquist frequency, which may introduce aliasing.

The freq_unit parameter allows a request for alternative units in frequency space. By default frequency is in (1/day) and power in (amplitude (ppm)). Asteroseismologists for example may want frequency in (microHz), in which case they would pass freq_unit=u.microhertz.

By default this method uses the LombScargle ‘fast’ method, which assumes a regular grid. If a regular grid of periods (i.e. an irregular grid of frequencies) is given, it will use the ‘slow’ method. If nterms > 1 is passed, it will use the ‘fastchi2’ method for regular grids, and ‘chi2’ for irregular grids.

Caution: this method assumes that the LightCurve’s time (lc.time) is given in units of days.

Parameters:

lc : LightCurve object
    The LightCurve from which to compute the Periodogram.
minimum_frequency : float
    If specified, use this minimum frequency rather than one over the time baseline.
maximum_frequency : float
    If specified, use this maximum frequency rather than nyquist_factor times the nyquist frequency.
minimum_period : float
    If specified, use 1./minimum_period as the maximum frequency rather than nyquist_factor times the nyquist frequency.
maximum_period : float
    If specified, use 1./maximum_period as the minimum frequency rather than one over the time baseline.
frequency : array-like
    The regular grid of frequencies to use. If given a unit, it is converted to units of freq_unit. If not, it is assumed to be in units of freq_unit. This overrides any set frequency limits.
period : array-like
    The regular grid of periods to use (as 1/period). If given a unit, it is converted to units of freq_unit. If not, it is assumed to be in units of 1/freq_unit. This overrides any set period limits.
nterms : int
    Default 1. Number of terms to use in the Fourier fit.
nyquist_factor : int
    Default 1. The multiple of the average Nyquist frequency. Is overridden by maximum_frequency (or minimum period).
oversample_factor : int
    Default: None. The frequency spacing, determined by the time baseline of the lightcurve, is divided by this factor, oversampling the frequency space. This parameter is identical to the samples_per_peak parameter in astropy.LombScargle(). If normalization=’amplitude’, oversample_factor will be set to 5. If normalization=’psd’, it will be 1. These defaults can be overridden.
freq_unit : astropy.units.core.CompositeUnit
    Default: None. The desired frequency units for the Lomb Scargle periodogram. This implies that 1/freq_unit is the units for period. With default normalization (‘amplitude’), the freq_unit is set to 1/day, which can be overridden. ‘psd’ normalization will set freq_unit to microhertz.
normalization : ‘psd’ or ‘amplitude’
    Default: 'amplitude'. The desired normalization of the spectrum. Can be either power spectral density ('psd') or amplitude ('amplitude').
kwargs : dict
    Keyword arguments passed to astropy.stats.LombScargle()

Returns:

Periodogram : Periodogram object
    Returns a Periodogram object extracted from the lightcurve.

plot(scale='linear', ax=None, xlabel=None, ylabel=None, title='', style='lightkurve', view=None, unit=None, **kwargs)

Plots the Periodogram.

Parameters:

scale : str
    Set x,y axis to be “linear” or “log”. Default is linear.
ax : matplotlib.axes._subplots.AxesSubplot
    A matplotlib axes object to plot into. If no axes is provided, a new one will be generated.
xlabel : str
    Plot x axis label
ylabel : str
    Plot y axis label
title : str
    Plot title
style : str
    Path or URL to a matplotlib style file, or name of one of matplotlib’s built-in stylesheets (e.g. ‘ggplot’). Lightkurve’s custom stylesheet is used by default.
view : str
    {‘frequency’, ‘period’}. Default ‘frequency’. If ‘frequency’, x-axis units will be frequency. If ‘period’, the x-axis units will be period and ‘log’ scale.
kwargs : dict
    Dictionary of arguments to be passed to matplotlib.pyplot.plot.

Returns:

ax : matplotlib.axes._subplots.AxesSubplot
    The matplotlib axes object.

show_properties()

Prints a summary of the non-callable attributes of the Periodogram object. Prints in order of type (ints, strings, lists, arrays and others). Prints in alphabetical order.

smooth(method='boxkernel', filter_width=0.1)

Smooths the power spectrum using the ‘boxkernel’ or ‘logmedian’ method.

If method is set to ‘boxkernel’, this method will smooth the power spectrum by convolving with a numpy Box1DKernel with a width of filter_width, where filter width is in units of frequency. This is best for filtering out noise while maintaining seismic mode peaks. This method requires the Periodogram to have an evenly spaced grid of frequencies. A ValueError exception will be raised if this is not the case.

If method is set to ‘logmedian’, it smooths the power spectrum using a moving median which moves across the power spectrum in steps of log10(x0) + 0.5 * filter_width, where filter width is in log10(frequency) space. This is best for estimating the noise background, as it filters over the seismic peaks.

Periodograms that are unsmoothed have multiplicative noise that is distributed as chi squared with 2 degrees of freedom. This noise distribution has a well defined mean and median but the two are not equivalent. The mean of a chi squared 2 dof distribution is 2, but the median is 2(8/9)**3. (see https://en.wikipedia.org/wiki/Chi-squared_distribution)

In order to maintain consistency between ‘boxkernel’ and ‘logmedian’, a correction factor of (8/9)**3 is applied to the median values (i.e., the median is divided by the factor). In addition to consistency with the ‘boxkernel’ method, the correction of the median values is useful when applying the periodogram flatten method. The flatten method divides the periodogram by the smoothed periodogram using the ‘logmedian’ method. By applying the correction factor we follow the asteroseismic convention that the signal-to-noise power has a mean value of unity. (Note the signal-to-noise power is really the signal plus noise divided by the noise and hence should be unity in the absence of any signal.)

Parameters:

method : str, one of ‘boxkernel’ or ‘logmedian’
    The smoothing method to use. Defaults to ‘boxkernel’.
filter_width : float
    If method = ‘boxkernel’, this is the width of the smoothing filter in units of frequency. If method = ‘logmedian’, this is the width of the smoothing filter in log10(frequency) space.

Returns:

smoothed_pg : Periodogram object
    Returns a new Periodogram object in which the power spectrum has been smoothed.

to_table()

Exports the Periodogram as an Astropy Table.

Returns:

table : An AstroPy Table with columns ‘frequency’, ‘period’, and ‘power’.
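A minimal usage sketch of the documented from_lightcurve constructor on a synthetic light curve (added for illustration, not an excerpt from the official docs; lightkurve and numpy are assumed installed, and the signal period, noise level and variable names are illustrative assumptions):

import numpy as np
from lightkurve import LightCurve
from lightkurve.periodogram import LombScarglePeriodogram

# Build a synthetic light curve: a 2.5-day sinusoid plus Gaussian noise
time = np.arange(0.0, 90.0, 0.02)                        # days
flux = (1 + 0.01 * np.sin(2 * np.pi * time / 2.5)
          + 0.002 * np.random.randn(time.size))
lc = LightCurve(time=time, flux=flux)

# Periodogram via the static constructor documented above
pg = LombScarglePeriodogram.from_lightcurve(
    lc, normalization="amplitude", oversample_factor=5)

print(pg.period_at_max_power)        # should be close to 2.5 days
snr = pg.flatten()                   # signal-to-noise spectrum (see flatten() above)
pg.plot(view="period", scale="log")  # period on the x-axis, log scale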
2019-04-24 04:05:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5018590092658997, "perplexity": 2663.445608132014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578626296.62/warc/CC-MAIN-20190424034609-20190424060609-00432.warc.gz"}
http://fyneimages.blogspot.com/
## Saturday, February 15, 2014

### On the Edge

available as a limited edition photoblock here

## Wednesday, February 05, 2014

### presence

available as a limited edition photoblock here

## Sunday, February 02, 2014

### seeds rattle

used for the cover of Kokako 10 and the original haiga is here

## profile

Tasmania, Australia

Gina is an award winning artist and photographer with over 60 poems published, of Hungarian extraction, living and working in Tasmania, Australia. e: fynearts@gmail.com
2014-12-20 12:10:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.919031023979187, "perplexity": 172.67901806037457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769844.62/warc/CC-MAIN-20141217075249-00061-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.quizover.com/online/course/3-9-normal-tension-and-other-examples-of-forces-by-openstax?page=6
# 3.9 Normal, tension, and other examples of forces  (Page 7/10)

All the forces discussed in this section are real forces, but there are a number of other real forces, such as lift and thrust, that are not discussed in this section. They are more specialized, and it is not necessary to discuss every type of force. It is natural, however, to ask where the basic simplicity we seek to find in physics is in the long list of forces. Are some more basic than others? Are some different manifestations of the same underlying force? The answer to both questions is yes, as will be seen in the next (extended) section and in the treatment of modern physics later in the text.

## Phet explorations: forces in 1 dimension

Explore the forces at work when you try to push a filing cabinet. Create an applied force and see the resulting friction force and total force acting on the cabinet. Charts show the forces, position, velocity, and acceleration vs. time. View a free-body diagram of all the forces (including gravitational and normal forces).

## Section summary

• When objects rest on a surface, the surface applies a force to the object that supports the weight of the object. This supporting force acts perpendicular to and away from the surface. It is called a normal force, $\mathbf{N}$.
• When objects rest on a non-accelerating horizontal surface, the magnitude of the normal force is equal to the weight of the object: $N=mg.$
• When objects rest on an inclined plane that makes an angle $\theta$ with the horizontal surface, the weight of the object can be resolved into components that act perpendicular ($\mathbf{w}_{\perp}$) and parallel ($\mathbf{w}_{\parallel}$) to the surface of the plane. These components can be calculated using:
$w_{\parallel}=w\,\sin(\theta)=mg\,\sin(\theta)$
$w_{\perp}=w\,\cos(\theta)=mg\,\cos(\theta).$
• The pulling force that acts along a stretched flexible connector, such as a rope or cable, is called tension, $\mathbf{T}$. When a rope supports the weight of an object that is at rest, the tension in the rope is equal to the weight of the object: $T=mg.$
• In any inertial frame of reference (one that is not accelerated or rotated), Newton’s laws have the simple forms given in this chapter and all forces are real forces having a physical origin.

## Conceptual questions

If a leg is suspended by a traction setup as shown in [link], what is the tension in the rope?

In a traction setup for a broken bone, with pulleys and rope available, how might we be able to increase the force along the tibia using the same weight? (See [link].) (Note that the tibia is the shin bone shown in this image.)

## Problem exercises

Two teams of nine members each engage in a tug of war. Each of the first team’s members has an average mass of 68 kg and exerts an average force of 1350 N horizontally. Each of the second team’s members has an average mass of 73 kg and exerts an average force of 1365 N horizontally. (a) What is the magnitude of the acceleration of the two teams? (b) What is the tension in the section of rope between the teams?

1. $0.11\ \text{m/s}^{2}$
2.
$1.2\times 10^{4}\ \text{N}$

What force does a trampoline have to apply to a 45.0-kg gymnast to accelerate her straight up at $7.50\ \text{m/s}^{2}$? Note that the answer is independent of the velocity of the gymnast—she can be moving either up or down, or be stationary.

(a) Calculate the tension in a vertical strand of spider web if a spider of mass $8.00\times 10^{-5}\ \text{kg}$ hangs motionless on it. (b) Calculate the tension in a horizontal strand of spider web if the same spider sits motionless in the middle of it much like the tightrope walker in [link]. The strand sags at an angle of $12^\circ$ below the horizontal. Compare this with the tension in the vertical strand (find their ratio).

(a) $7.84\times 10^{-4}\ \text{N}$

(b) $1.89\times 10^{-3}\ \text{N}$. This is 2.41 times the tension in the vertical strand.

Suppose a 60.0-kg gymnast climbs a rope. (a) What is the tension in the rope if he climbs at a constant speed? (b) What is the tension in the rope if he accelerates upward at a rate of $1.50\ \text{m/s}^{2}$?

Show that, as stated in the text, a force $\mathbf{F}_{\perp}$ exerted on a flexible medium at its center and perpendicular to its length (such as on the tightrope wire in [link]) gives rise to a tension of magnitude $T=\frac{F_{\perp}}{2\,\sin(\theta)}$.

Newton’s second law applied in the vertical direction gives

$F_y = F_{\perp} - 2T\,\sin\theta = 0$

$F_{\perp} = 2T\,\sin\theta$

$T = \frac{F_{\perp}}{2\,\sin\theta}.$

Consider the baby being weighed in [link]. (a) What is the mass of the child and basket if a scale reading of 55 N is observed? (b) What is the tension $T_1$ in the cord attaching the baby to the scale? (c) What is the tension $T_2$ in the cord attaching the scale to the ceiling, if the scale has a mass of 0.500 kg? (d) Draw a sketch of the situation indicating the system of interest used to solve each part. The masses of the cords are negligible.
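A quick numeric check of the tug-of-war answers above (a sketch added for illustration; the masses and forces are taken from the problem statement, the variable names are assumptions):

# Tug-of-war: two teams of n = 9 members each
n = 9
m1, F1 = 68.0, 1350.0   # team 1: average mass (kg) and force (N) per member
m2, F2 = 73.0, 1365.0   # team 2

net_force  = n * (F2 - F1)       # 135 N, in team 2's direction
total_mass = n * (m1 + m2)       # 1269 kg
a = net_force / total_mass       # ≈ 0.11 m/s^2, matching answer (a)

# Tension between the teams: apply Newton's 2nd law to team 1 alone,
# which is accelerated at a while pushing with its own n*F1.
T = n * F1 + n * m1 * a          # ≈ 1.2e4 N, matching answer (b)
print(a, T)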
2018-02-20 15:41:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5884664058685303, "perplexity": 611.4019351801353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812978.31/warc/CC-MAIN-20180220145713-20180220165713-00192.warc.gz"}
https://ask.cvxr.com/t/ln-1-x-y-convex-convex/8844
# Ln(1+x/y) {convex} .* {convex}

As we can see in this formula, Am(n) is constant. rho(n) and s(n) are variables; they are vectors of size (1,N). This is a convex problem, but I can't express it correctly in CVX syntax. In my code, I express it as follows:

R(m,n)=inv_pos(rho(n)*A(m,n))*rel_entr(rho(n)*A(m,n),s(m,n));

But there are some syntax errors:

How can I solve this problem? Thank you!

\ln(1 + x/y) is not concave; at x=1 this is a log-sum-inv which is convex.

Oh, I got it! Maybe I can solve it by converting this expression. It can be converted to log(s(n)+rho(n)*A(m,n))-log(s(n)). The first half of this expression is concave. For the second half of this expression we can use a Taylor expansion to find its upper bound. What do you think of this solution? Thank you very much, Sir!

You can try whatever you want. But rather than ad hoc Taylor series or Successive Convex Approximation / Difference of Convex (Concave), you may be better off using a non-convex solver, for instance, under YALMIP.
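For illustration only, here is a sketch of the poster's linearization idea in cvxpy rather than MATLAB CVX (cvxpy, the dimensions, the constraints and all variable names are assumptions added here, not from the thread): the concave term log(s + rho*A) is kept exactly, while log(s) is replaced by its first-order Taylor upper bound around a current iterate s0, which one would re-solve and update iteratively.

import cvxpy as cp
import numpy as np

N = 4
A = np.abs(np.random.randn(N)) + 0.1   # stands in for the constants A_m(n), assumed positive
s0 = np.ones(N)                        # current iterate for s, linearization point

rho = cp.Variable(N, nonneg=True)
s = cp.Variable(N, nonneg=True)

# R(n) = log(s + rho*A) - log(s): keep the concave first term,
# replace log(s) by its affine upper bound log(s0) + (s - s0)/s0.
rate = cp.sum(cp.log(s + cp.multiply(rho, A))
              - (np.log(s0) + (s - s0) / s0))

# Placeholder constraints purely for a well-posed demo problem
prob = cp.Problem(cp.Maximize(rate), [cp.sum(rho) <= 1, s <= 1])
prob.solve()
print(rho.value, s.value)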
2023-04-01 07:57:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956916928291321, "perplexity": 1456.1398664718117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.56/warc/CC-MAIN-20230401063607-20230401093607-00397.warc.gz"}
https://www.project-tartarus.com/2017/05/using-mathjax-and-markdown-in-wordpress/
# Using Mathjax and Markdown in WordPress

I just started my own WordPress page and came across several different tutorials on using Mathjax and Markdown in my blog. The easiest and most convenient way to do this is probably the following:

# Markdown

To use Markdown you can simply install the Jetpack plugin, which includes not only Markdown support but also security fixes, an overview of your site's traffic, social media link options and much more. It was written by the developers of WordPress and will register your site with the WordPress database. Once you have installed and registered Jetpack you will see a new entry in the sidebar of your admin page.

From here go to Jetpack -> Settings -> Writing and activate the option "Write posts or pages in plain-text Markdown syntax". To activate Markdown support also in your comments and discussions, go to Jetpack -> Settings -> Discussion and activate the option "Enable Markdown use for comments". For a quick reference on all the features, I suggest the WordPress reference on Markdown.

# Mathjax

There are also plugins for Mathjax, but the easiest way to use it is to just add it to your header file in WordPress. To do that go to Appearance -> Theme Editor -> Theme Header (header.php). You then have to edit that file. Search for the tag </head> and insert just before it

<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  tex2jax: {
    inlineMath: [ ['$','$'], ["\\(","\\)"] ],
    processEscapes: true
  }
});
</script>
<script type="text/javascript" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>

The first script environment tells Mathjax to recognize $...$ as an inline math environment. By default this syntax is not enabled. The second script environment actually loads Mathjax with the corresponding config file TeX-AMS-MML_HTMLorMML, which enables Latex, AMS-math and CSS- or MathML-based rendering. For more reference on that, check out the Mathjax documentation. You now have a working Mathjax installation. There are, however, some things to note here:

• Due to the combination of Mathjax and Markdown you should escape backslashes. E.g. when you would write \{ in Latex you should actually write \\{. If you don't, Markdown will sometimes misinterpret escape sequences like \{ as a bare { and will then pass that on to Mathjax, which will throw a syntax error (see the example after this list).
• Because we wanted to use $...$ as an inline math environment, we have to escape every ordinary occurrence of a dollar sign by \$. If you don't want that behaviour just remove the corresponding part in the configuration of Mathjax (see above).

That's it! You now have a fully working Markdown and Mathjax environment.
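To make the escaping rule concrete, here is an illustrative before/after (my example, not from the original post; exact behaviour can vary slightly between Markdown flavours):

```
Typed in the editor:     $\\{ x \\in \\mathbb{R} : x > 0 \\}$
After Markdown escapes:  $\{ x \in \mathbb{R} : x > 0 \}$   <- what MathJax receives
```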
2021-10-22 17:19:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27065199613571167, "perplexity": 5676.568805717976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585516.51/warc/CC-MAIN-20211022145907-20211022175907-00197.warc.gz"}
https://tt.gsusigmanu.org/6538-who-is-seyfert-and-what-is-ldquothe-so-called-seyfer.html
# Who is Seyfert, and what is “the so-called Seyfert flare”?

The BBC's Milky Way's centre exploded 3.5 million years ago says:

A cataclysmic energy flare ripped through our galaxy, the Milky Way, about 3.5 million years ago, a team of astronomers say. They say the so-called Sifter flare started near the super massive black hole in the centre of the galaxy. The impact was felt 200,000 light-years away. […] The flare created two enormous "ionisation cones" that sliced through the Milky Way.

It mentions "The team - led by Professor Joss Bland-Hawthorn from Australia" and "co-author Magda Guglielmo from the University of Sydney" and says "The findings will be published in the Astrophysical Journal." Right now I'm just asking the following:

Question: Who is Sifter, and what is "the so-called Sifter flare"?

Update: @bertieb's comment notes that the BBC has corrected the passage. The new sentence says:

The so-called Seyfert flare started near the supermassive black hole in the centre of the galaxy, they add.

When first published, the article called it a "so-called Sifter flare", and that's what originally inspired this question.

As has now been corrected, the BBC misunderstood the term "Seyfert flare", instead calling it a "Sifter flare". A "Seyfert flare" is not really a common term, but the authors refer to an energetic outburst from the type of active galaxies called Seyfert galaxies (after Carl Seyfert). Like a quasar, a Seyfert galaxy is powered by gas accretion onto a central, supermassive black hole, although they're less luminous by roughly two orders of magnitude. In this case, there is evidence for a highly energetic ($10^{56\text{–}57}\,\mathrm{erg}$) explosion occurring only a few million years ago, resulting in an ionizing, bipolar cone extending outward from the Milky Way's central black hole, Sgr A*. This powerful flare resulted in huge, 10 kpc-scale, X-ray/gamma-ray-emitting bubbles. The line ratios of $\mathrm{C\,IV}/\mathrm{C\,II}$ and $\mathrm{Si\,IV}/\mathrm{Si\,II}$ point toward ionizing radiation energies of at least 50 eV. The orientation of these cones is seen in this figure from the paper, which will be on the arXiv tomorrow. The first author, Joss Bland-Hawthorn, explains in a video here.

Answering the "who is" part of the question: Carl Seyfert (1911–1960) was a US astronomer. He is best known for his 1943 research paper on high-excitation line emission from the centers of some spiral galaxies, which are named Seyfert galaxies after him. Seyfert's Sextet, a group of galaxies, is also named after him.
2022-05-26 10:44:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28092190623283386, "perplexity": 3807.195039082093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00151.warc.gz"}
https://www.atmos-chem-phys.net/20/2031/2020/
Atmos. Chem. Phys., 20, 2031–2056, 2020 https://doi.org/10.5194/acp-20-2031-2020
Research article | 24 Feb 2020

# Merging regional and global aerosol optical depth records from major available satellite products

Larisa Sogacheva1, Thomas Popp2, Andrew M. Sayer3,4, Oleg Dubovik5, Michael J. Garay6, Andreas Heckel7, N. Christina Hsu8, Hiren Jethva3,4, Ralph A. Kahn8, Pekka Kolmonen1, Miriam Kosmale2, Gerrit de Leeuw1, Robert C. Levy8, Pavel Litvinov9, Alexei Lyapustin8, Peter North7, Omar Torres10, and Antti Arola1

• 1Finnish Meteorological Institute, Climate Research Programme, Helsinki, Finland
• 2German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), Oberpfaffenhofen, Germany
• 3Goddard Earth Sciences Technology And Research (GESTAR), Universities Space Research Association, Columbia, MD, USA
• 4NASA Goddard Space Flight Center, Greenbelt, MD, USA
• 5Laboratoire d'Optique Atmosphérique, CNRS–Université Lille, France
• 6Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
• 7Department of Geography, Swansea University, Swansea, UK
• 8Climate and Radiation Laboratory, Earth Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
• 9Generalized Retrieval of Atmosphere and Surface Properties SAS, Lille, France
• 10Atmospheric Chemistry and Dynamics Laboratory, Earth Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA

Correspondence: Larisa Sogacheva (larisa.sogacheva@fmi.fi)

Abstract. Satellite instruments provide a vantage point for studying aerosol loading consistently over different regions of the world. However, the typical lifetime of a single satellite platform is on the order of 5–15 years; thus, for climate studies, the use of multiple satellite sensors should be considered. Discrepancies exist between aerosol optical depth (AOD) products due to differences in their information content, spatial and temporal sampling, calibration, cloud masking, and algorithmic assumptions. Users of satellite-based AOD time series are confronted with the challenge of choosing an appropriate dataset for the intended application. In this study, 16 monthly AOD products obtained from different satellite sensors and with different algorithms were inter-compared and evaluated against Aerosol Robotic Network (AERONET) monthly AOD. Global and regional analyses indicate that products tend to agree qualitatively on the annual, seasonal and monthly timescales but may be offset in magnitude. Several approaches were then investigated to merge the AOD records from different satellites and create an optimised AOD dataset. With few exceptions, all merging approaches lead to similar results, indicating the robustness and stability of the merged AOD products. We introduce a gridded monthly AOD merged product for the period 1995–2017. We show that the quality of the merged product is at least as good as that of the individual products. Optimal agreement of the AOD merged product with AERONET further demonstrates the advantage of merging multiple products. This merged dataset provides a long-term perspective on AOD changes over different regions of the world, and users are encouraged to use this dataset.
1 Introduction

Interactions of atmospheric aerosols with clouds and radiation are the largest source of uncertainty in modelling efforts to quantify current climate and predict climate change (IPCC, 2018). To reduce such uncertainties, we need observations to constrain climate models. However, these observations must be accurately calibrated and validated, have consistent or at least well-characterised uncertainties, and provide adequate temporal and spatial sampling over a long period of time. With their ability to cover the globe systematically, satellites provide this global and temporal perspective. Satellite observations have produced major advances in our understanding of the climate system and its changes, including quantifying the spatio-temporal states of the atmosphere, land and oceans, and aspects of the underlying processes. However, as the typical lifetime of a single satellite platform is on the order of 5–15 years, a single sensor data record may not be long enough to discern a climate signal (WMO, 2017). Moreover, aerosol products from different satellites and algorithms all have limitations regarding their spatial and temporal coverage and vary in their accuracies depending on environmental conditions (aerosol loading and type, surface brightness, and observation geometry), often leading to regional differences (e.g. Li et al., 2014b). Thus, the application of satellite observations for climate change studies requires using products from multiple sources to derive consistent regional conclusions. The key parameter used for aerosol-related studies to date is the aerosol optical depth (AOD), which is the vertical integral of extinction by aerosol particles through the atmospheric column. Over the last several decades, AOD remote sensing has been performed from space using a wide variety of sensors that have different characteristics, including being passive or active, operating in ultraviolet (UV) to thermal infrared (TIR) spectral regions, being single-view to multi-view, being single-pixel to broad swath, having a sub-kilometre to tens-of-kilometres resolution, being intensity-only or polarimetric, and having different orbits and observation time(s). Table 1 lists the datasets used in the current study, together with key references. Aside from the Earth Polychromatic Imaging Camera (EPIC; orbiting at the L1 Lagrange point directly between the Earth and the sun on the Deep Space Climate Observatory (DSCOVR) satellite), all sensors are in polar-orbiting, sun-synchronous low-earth orbits (∼600–800 km). Only a few of these sensors were optimised for accurate aerosol property retrieval, and for many, AOD at one or more visible wavelengths is the only quantitatively reliable aerosol parameter they provide. Table 1 is not an exhaustive list of available AOD products. Other AOD products, such as those from active sensors like the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and imaging radiometers on geostationary satellites, are not considered here, as they have very different sampling characteristics (e.g. CALIOP profiles a curtain swath, with a given area viewed either twice by day and twice by night during a month or not at all; geostationary sensors sample a constant disc, typically at a frequency of 10 min to 1 h); thus their monthly mean products are conceptually very different from those of polar-orbiters. No two datasets provide identical results, whether applying the same algorithm principles to multiple similar sensors (Sayer et al., 2017, 2019; S.
Li et al., 2016; Levy et al., 2013) or even between "identical" sensors, such as the Moderate Resolution Imaging Spectroradiometers (MODISs) on Terra and Aqua (Sayer et al., 2015; Levy et al., 2018), for which calibration and time of day differences remain. Using different retrieval algorithms for products retrieved from the same instruments introduces additional discrepancies, such as those found by de Leeuw et al. (2015) and Popp et al. (2016) for three Along Track Scanning Radiometer (ATSR) datasets. Differences can become larger when comparing products from different sensors and algorithms (Kokhanovsky and de Leeuw, 2009; Kinne, 2009; Li et al., 2014b). One other important factor contributing to differences is related to the approach to cloud masking, which affects the pixels selected for processing by retrieval algorithms and propagates into different levels of clear-sky bias in daily and monthly aggregates (Sogacheva et al., 2017; Zhao et al., 2013; Li et al., 2009). Escribano et al. (2017) estimated the impact of choosing different AOD products for a dust emission inversion scheme and concluded that the large spread in aerosol emission flux over the Sahara and Arabian Peninsula is likely associated with differences between satellite datasets. Similarly, Li et al. (2009) concluded that differences in cloud masking alone could account for most differences among multiple satellite AOD datasets, including several for which different algorithms were applied to data from the same instrument. There is no single "best" AOD satellite product globally. For example, the MODIS Deep Blue (DB) AOD product shows better performance than MODIS Dark Target (DT) in most regions, besides bright surfaces (i.e. deserts and arid/semi-arid areas) (Wei et al., 2019a). However, despite the differences between satellite products and the fact that none is uniformly most accurate (Sayer et al., 2014; de Leeuw et al., 2015, 2018), the application of statistical techniques such as principal component or maximum covariance analysis (Li et al., 2013, 2014a, b) shows that there are key similarities among the AOD products tested. Merging multi-sensor AOD products holds the potential to produce a more spatially and temporally complete and accurate AOD picture. With multiple observational datasets available, it is important to examine their consistency in representing aerosol property variability in these dimensions. This is useful for constraining aerosol parameterisations in climate models (Liu et al., 2006), in the study of aerosol climate effects (Chylek et al., 2003; Bellouin et al., 2005) and for verifying global climate models (e.g. Kinne et al., 2003, 2006; Ban-Weiss et al., 2014) in which satellite-retrieved AOD monthly aggregates are used. However, such an integration into a coherent and consistent climatology is a difficult task (Mishchenko et al., 2007; Li et al., 2009). There are only a few studies where an AOD record was merged from different satellites. Chatterjee et al. (2010) describe a geostatistical data fusion technique that can take advantage of the spatial autocorrelation of AOD distributions retrieved from the Multi-angle Imaging Spectroradiometer (MISR) and MODIS, while making optimal use of all available datasets. Tang et al.
(2016) performed a spatio-temporal fusion of satellite AOD products from MODIS and the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) using a Bayesian maximum entropy method for eastern Asia and showed that, in the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than that of the MODIS and SeaWiFS AODs individually. Han et al. (2017) improved the AOD retrieval accuracy by fusing MODIS and CALIOP data. Sogacheva et al. (2018b) combined ATSR and MODIS AOD to study the trends in AOD over China between 1995 and 2017. Naeger et al. (2016) combined daily AOD products from polar-orbiting and geostationary satellites to generate a near-real-time (NRT) daily AOD composite product for a case study of trans-Pacific transport of Asian pollution and dust aerosols in mid-March 2014. J. Li et al. (2016) constructed a monthly mean AOD ensemble by combining monthly AOD anomaly time series from MODIS, MISR, SeaWiFS, the Ozone Monitoring Instrument (OMI) and POLarization and Directionality of the Earth's Reflectances (POLDER) and applying an ensemble Kalman filter technique to these multi-sensor and ground-based aerosol observations to reduce uncertainties. Penning de Vries et al. (2015) examined relationships between the monthly mean AOD, Ångström exponent (AE) from MODIS, UV Aerosol Index from the Global Ozone Monitoring Experiment–2 (GOME-2) and trace gas column densities and showed the advantage of using multiple datasets with respect to characterising aerosol type. Boys et al. (2014) combined SeaWiFS and MISR AODs with the GEOS-Chem global model to create and study trends in a 15-year time series of surface particulate matter levels. A meaningful merge should account for the strengths and limitations of each constituent record. The spread of satellite AOD records also adds to the value of constraining their uncertainty; whereas a lack of diversity among datasets does not mean that they have converged on the true value, the existence of unexplained diversity does imply that they have not. To assess their consistency, the products should be compared during overlapping periods, because interannual and shorter-term variability in atmospheric aerosols can be significant in some parts of the world (e.g. Lee et al., 2018). In the current study, AOD monthly aggregates from 16 different satellite products were evaluated with ground-based measurements from the Aerosol Robotic Network (AERONET; Holben et al., 1998). Note that, as with all measurements, even the AERONET spectral AOD has limitations as to where it can be informative. For example, AERONET includes ∼450 active stations in 2019, offering far more spatial coverage than in 1993 when the network was founded, yet even now AERONET spatial sampling is particularly limited in remote areas, which are often those where aerosol gradients are large, e.g. near sources (e.g. Shi et al., 2011; J. Li et al., 2016). Based on the comparison with AERONET, we estimate how well the satellite AOD monthly aggregates reproduce the AERONET AOD climatology. We considered areas with different aerosol types, aerosol loading and surface types, which are the dominant factors affecting AOD product quality. This allows users to choose the AOD product of better quality, depending on the area and research objective.
A verification of open-ocean monthly data using the Maritime Aerosol Network (MAN; Smirnov et al., 2009) is not possible in this way, because MAN data are acquired during cruises on ships of opportunity rather than as regular, repeating observations at specific locations. Different approaches for merging the AOD products (median, weighted according to the evaluation results) are introduced in the current paper. AOD evaluation results are used to merge the L3 gridded monthly AOD data and AOD time series for the period 1995–2017, using different methodologies. The resulting AOD merged products are evaluated against AERONET and compared against one another. This study grew out of discussions at annual AeroSat (https://aerosat.org, last access: 9 May 2019) meetings about how to move forward on the difficult topic of combining distinct aerosol data records. AeroSat is a grass-roots group of several dozen algorithm developer teams and data users. Meeting in person around once a year in concert with its sibling AeroCom group of aerosol modellers (https://aerocom.mpimet.mpg.de, last access: 9 May 2019) allows an active discussion between data providers and data users to highlight developments and discuss current issues and open questions in the field of satellite aerosol remote sensing and aerosol modelling. The paper is organised as follows. In Sect. 2, the AOD products and regions of interest are introduced. The main principles and results for the statistical evaluation of individual monthly AOD retrievals are presented in Sect. 3. Alternative methods for merging are discussed in Sect. 4. AOD merged products are introduced, evaluated and inter-compared with individual products in Sect. 5. Annual, seasonal and monthly regional AOD time series are presented and discussed in Sect. 6. A brief summary and conclusion are given in the final section.

2 Regions of interest, instruments and AOD products

## 2.1 Regions of interest

There are huge regional differences in AOD loading, aerosol types (composition and optical properties), seasonality and surface reflectance (Holben et al., 2001; Dubovik et al., 2002; Pinty et al., 2011). Retrieval quality (accuracy, precision and coverage) varies considerably as a function of these conditions, as well as whether a retrieval is over land or ocean. Therefore, this study focuses on surface-specific (land or ocean) and regional evaluation of these diverse aerosol products. In addition to evaluating AOD over land, over ocean and globally (note that not all sensor–algorithm combinations retrieve over both surfaces), we chose 15 regions that seem likely to represent a sufficient variety of aerosol and surface conditions (Fig. 1 and Table S1 in the Supplement). These include 11 land regions, two ocean regions and one heavily mixed region. The land regions represent Europe (denoted by Eur), Boreal (Bor), northern, eastern and western Asia (AsN, AsE and AsW, respectively), Australia (Aus), northern and southern Africa (AfN and AfS), South America (AmS), and eastern and western North America (NAE and NAW). The Atlantic Ocean is represented as two ocean regions, one characterised by Saharan dust outflow over the central Atlantic (AOd) and a second that includes biomass burning outflow over the southern Atlantic (AOb). The mixed region over Indonesia (Ind) includes both land and ocean. Due to documented large changes in AOD during the last 25 years (Sogacheva et al., 2018a, b), we also considered the south-eastern China (ChinaSE) subset of the AsE region.
The main body of the paper focuses on two regions, Europe and ChinaSE, and the big-picture results (global, all land and all ocean). The two regions, Europe and ChinaSE, were chosen because they are often the focus of aerosol studies. Results from the remaining regions are presented in the Supplement.

Figure 1. Fifteen land and ocean regions defined in this study: Europe (Eur), Boreal (Bor), northern Asia (AsN), eastern Asia (AsE), western Asia (AsW), Australia (Aus), northern Africa (AfN), southern Africa (AfS), South America (SA), eastern North America (NAE), western North America (NAW), Indonesia (Ind), Atlantic Ocean dust outflow (AOd) and Atlantic Ocean biomass burning outflow (AOb). In addition, south-eastern China (ChinaSE), which is part of the AsE region, marked with a blue frame, is considered separately. Land, ocean and global AOD were also considered.

## 2.2 Instruments, algorithms and AOD products

An overview of the instruments and AOD products included in this study is presented in Table 1. AOD products from the same instruments retrieved with different algorithms are named in the paper with the instrument and retrieval algorithm, e.g. ATSR dual-view (ADV), ATSR Swansea University (SU), Terra Dark Target (DT) & Deep Blue (DB) and Terra MAIAC (multi-angle implementation of atmospheric correction). When both Terra and Aqua are considered, we refer to them together as MODIS DT&DB or MODIS MAIAC. Note that we used the merged MODIS Dark Target and Deep Blue product (Sayer et al., 2014; denoted "DT&DB"), rather than the results of the individual DB and DT algorithms, as this merged dataset was introduced into the product for purposes similar to the one explored in this work. An ensemble ATSR product (ATSR_ens) was generated from the three ATSR products (ATSR ADV, ATSR SU and ATSR ORAC – ATSR with the optimal retrieval of aerosol and cloud algorithm) in order to combine the strengths of several algorithms and to increase the coverage of the combined product (Kosmale et al., 2020). The ensemble was calculated per pixel as the weighted mean of the individual algorithm values, with weights given by the inverse of the individual pixel-level uncertainty values. The ensemble algorithm required each pixel to have valid results from at least two of the contributing algorithms. The uncertainties in each algorithm were first corrected in their absolute values to agree on average with the mean error. For some products, AOD data are available for wavelengths other than 0.55 µm. Specifically, Total Ozone Mapping Spectrometer (TOMS) and OMI products include AOD at 0.50 µm, Advanced Very-High-Resolution Radiometer (AVHRR) NOAA includes AOD at approximately 0.63 µm (with slight variation between the different AVHRR sensors), and EPIC AOD is available at 0.44 µm (in the dataset used in the current study). If the wavelength is not mentioned specifically, 0.55 µm is implicit. In most cases the official AOD monthly products (typically referred to as Level 3 or L3 data), which correspond to arithmetic means of daily mean data aggregated onto (typically) a 1° × 1° grid, have been used without further processing. The first exceptions are for AVHRR NOAA and POLDER, which provide very high AOD values poleward of ca. 60° and over Hudson Bay (50–70° N, 70–95° W), respectively. The values are unrealistic, likely a consequence of cloud and/or sea ice contamination. To eliminate those unrealistic values, AOD values of >0.7 have been removed over the above-mentioned areas.
Applying that limit decreased the offset between the AVHRR NOAA product and other products but did not eliminate it (see Sect. S2 in the Supplement for details). Additionally, MISR standard (0.5° × 0.5° resolution) and AVHRR NOAA (0.1° × 0.1° resolution) L3 AOD products were aggregated by simple averaging to 1° to match the other datasets. Due to differences in instrument capabilities and swath widths (Table 1), the spatial and temporal data sampling available for calculating monthly averages varies considerably among the satellite products. The ATSR products and MISR have narrow swaths and generally provide only a few days with retrievals per month, whereas most of the rest see the whole planet roughly every day or two, so that their coverage is mostly limited by, e.g. the persistence of cloud cover. As mentioned previously, EPIC is a special case, as it provides moving snapshots of the day-lit portion of the Earth, up to several times per day, as distinct from overpasses at only specific local solar equatorial crossing times for the sensors on polar-orbiting satellites. Further, TOMS and OMI have a notably coarser pixel resolution than the others, so their coverage and quality are more sensitive to cloud masking decisions. Some datasets provide measures of internal diversity (e.g. standard deviation), but none currently provides estimates of the monthly aggregate uncertainty against some standard, which would be a combination of (both systematic and random) retrieval uncertainties and sampling limitations. This is an area currently being investigated by AeroSat due to the wide use of L3 products. For the intercomparison between AOD products, we chose three "reference" years:

• 2000, when the AOD products from TOMS, AVHRR NOAA, SeaWiFS, ATSR-2, MODIS Terra and MISR are available (for the full year, except for MISR and MODIS Terra, which were available from March to December);
• 2008, when the AOD products from Advanced ATSR (AATSR), MODIS Terra and Aqua, MISR, AVHRR NOAA, AVHRR DB/SOAR (Satellite Ocean Aerosol Retrieval), SeaWiFS and POLDER are available; and
• 2017, when the AOD products from MODIS Terra and Aqua, MISR, VIIRS (Visible Infrared Imaging Radiometer Suite) and EPIC are available.

For products with no coverage over ocean (TOMS, OMI and MAIAC products) or land (AVHRR NOAA), global AOD was not considered.

Table 1. Overview of the sensors, data records and AOD algorithms discussed in this paper. For the product availability, see Table 4.

3 AOD products intercomparison and evaluation with AERONET

The AOD deviations of the individual products from the median AOD (Figs. S1 and S2 in the Supplement) are discussed in detail in the Supplement (Sect. S2). These show regional differences, even for products retrieved from the same instruments with similar algorithms. Both negative and positive deviations are observed in regions with high AOD; both aerosol optical model assumptions and surface type are also likely to influence the AOD retrieval. High AOD might, in turn, be wrongly screened as cloud, and the resulting lack of high-AOD retrievals leads to a low bias in monthly AOD. To further reveal differences among the AOD products retrieved with different algorithms and applied to different satellites, the diversity of the satellite annual mean AOD for the years 2000, 2008 and 2017 is discussed in Sect. S3 (Figs. S3 and S4). The diversity is lower in 2017, when only MODIS, MISR, EPIC and VIIRS AOD products are available.
## 3.1 Evaluation of monthly AOD

To evaluate the quality of any AOD product, verification of the product against more accurate reference measurements, where possible, is obligatory. Ground-based measurements such as those from AERONET (cloud-screened and quality-assured Version 3 Level 2.0; Giles et al., 2019) provide highly accurate measures of AOD that are widely used as ground truth for the validation of satellite AOD data. Extensive L2 AOD validation has been performed for the different aerosol products. However, climate model evaluation is often performed on monthly scales. Thus, climate analysis begs for evaluation of satellite AOD monthly aggregates (Nabat et al., 2013; Michou et al., 2015; S. Li et al., 2016). Only a few attempts have been made to evaluate AOD monthly aggregates retrieved from satellites (e.g. Li et al., 2014b; Wei et al., 2019b). This is because verification of the L3 monthly aggregate satellite AOD is not a true validation (note the use of "evaluation" and "verification" here instead of "validation"). AERONET provides AOD at a single point and is not necessarily representative of AOD in a 1° × 1° grid cell. While AERONET samples during all cloud-free daylight hours, a given polar-orbiting sensor will only report once per day and at the same time each day (e.g. 13:30 LT for sensors in the A-Train). The possible spatial representativity issues associated with this latter point are a topic of current investigation (e.g. J. Li et al., 2016; Virtanen et al., 2018; Schutgens, 2019). Nevertheless, AERONET's instantaneous AOD uncertainty (around 0.01 in the mid-visible; Eck et al., 1999) is significantly lower than that of most satellite products, and its temporal sampling is much more complete. As such, it remains a useful source for evaluating these L3 products, and for this purpose we compare AOD monthly aggregates of all available data from both AERONET and each satellite product. Deviations between satellite and AERONET monthly aggregates are expected, e.g. due to differences in satellite spatial and temporal sampling (Sect. 2.2, Table 1), particularly for those satellites with lower coverage. Results from this comparison have limitations. As mentioned previously, AERONET provides data at certain locations within a grid cell, whereas satellites cover a larger fraction of the area of a grid cell (depending on sampling and cloud cover). So, for example, AERONET is likely to miss extreme high values (localised plumes missing an AERONET station), which will result in AERONET showing lower AOD than a satellite. Conversely, if a station happens to be directly under an aerosol plume that the satellite algorithm filters as cloud, the AERONET value would be higher. Neither AERONET nor satellite monthly AOD aggregates are true monthly AOD values. When we refer to an "AOD monthly aggregate" we mean the daytime, cloud-free AOD monthly aggregated from whatever data are available. How the aggregate is calculated is also important; AOD distributions on monthly scales are often closer to lognormal than normal, which suggests that the arithmetic monthly mean may not be the most appropriate summary metric (O'Neill et al., 2000; Sayer and Knobelspiesse, 2019). The discrepancies between different statistics can be exacerbated when a dataset provides poor sampling of the extreme conditions. Nevertheless, as it is the most widely used statistic within the community and is the standard output of current L3 products, monthly means are presented in this analysis.
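As a minimal illustration of why the choice of summary statistic matters (a synthetic sketch, not part of the paper's analysis): for lognormally distributed daily AODs, the arithmetic mean sits well above the median and geometric mean, because a few high-AOD days dominate it.

```python
# Synthetic illustration only: one "month" of daily AODs drawn from a lognormal
# distribution; parameter values are made up, only the qualitative ordering matters.
import numpy as np

rng = np.random.default_rng(0)
aod = rng.lognormal(mean=np.log(0.15), sigma=0.8, size=30)  # median ~0.15, broad spread

print(f"arithmetic mean: {aod.mean():.3f}")                 # pulled up by high-AOD days
print(f"median:          {np.median(aod):.3f}")
print(f"geometric mean:  {np.exp(np.log(aod).mean()):.3f}")
```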
The general framework could be applied to other AOD summary statistics (e.g. the monthly median or geometric mean, advocated by Sayer and Knobelspiesse, 2019) if these L3 outputs become more widely available in the future. In the evaluation exercise, AERONET monthly mean AOD and AE (which describes how AOD depends on wavelength and is sometimes used as a proxy for aerosol type) were calculated from AERONET daily means. AOD verification was performed for all available AERONET monthly data and separately for different aerosol types, which were defined with AOD and AE thresholds. Although these thresholds are subjective, we consider "background aerosol" to be cases where AOD<0.2, "fine-dominated" to be where AOD>0.2 and AE>1, and "coarse-dominated" to be cases where AOD>0.2 and AE<1 (e.g. Eck et al., 1999). This classification has also been used by e.g. Sayer et al. (2018b) and Sogacheva et al. (2018a, b). The annual and seasonal maps of prevailing aerosol type for AERONET locations, calculated from the AERONET data available for the period 1995–2017, are shown in Fig. S5. Such a classification differentiates major aerosol scenarios. The biomass burning seasons over the Amazon and South Africa are clearly identified by a domination of fine aerosol particles in JJA (June, July, August) and SON (September, October, November), and the Asian dust transport season in MAM (March, April, May) is clearly coarse-dominated. The deviation of each satellite product from the median has regional components (Figs. S1 and S2). Even though we tried to choose regions with (somewhat) homogeneous aerosol conditions during a given season, AOD conditions (and thus algorithm performance) might vary among the AERONET stations in a region, which may represent different aerosol/surface conditions within one study region and may have different record lengths. To keep similar weighting for each station in a region, we first calculated statistics for each AERONET station separately and then calculated the regional median validation statistics from all available stations. To reveal how retrieval quality depends on AOD loading, offsets between AERONET AOD and satellite product AOD were estimated for binned AERONET AOD, and the number of observations in each AOD bin is reported. The correlation coefficient (R, Pearson correlation), offset (satellite product − AERONET), root-mean-square error (RMSE) and fraction of points that fulfil the Global Climate Observing System (GCOS) uncertainty goal (GE) of the larger of 0.03 or 10 % of AOD (GCOS, 2011) are also reported. These monthly AOD verification results are used to calculate weights for each satellite dataset in one of the merging approaches later in Sect. 4.2.

### 3.1.1 Binned offset global evaluation

As an example, AOD-binned evaluation results are shown in Fig. 2 for Terra DT&DB and in Fig. S6 for all products. A general tendency towards positive satellite-retrieved AOD offsets is observed for most products under background conditions. On average, 70 %–80 % of monthly AODs fall into the class "background" (AOD≤0.2), so total AOD mean biases are expected to show similar behaviour. TOMS and OMI have the highest positive offsets globally, which is in line with the results from the dataset spatial intercomparison (Sect. S2). Offsets close to 0 for background AOD are observed for the MODIS MAIAC products.
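For concreteness, here is a compact sketch (illustrative, not the paper's code) of the statistics just defined, computed on matched satellite/AERONET monthly means; the array names are placeholders, and aggregating the bias as a median mirrors the paper's use of regional median statistics.

```python
# Sketch of the evaluation statistics described above for co-located monthly means.
import numpy as np

def evaluate(sat: np.ndarray, aeronet: np.ndarray) -> dict:
    diff = sat - aeronet                     # offset convention: satellite - AERONET
    gcos = np.maximum(0.03, 0.10 * aeronet)  # GCOS goal: larger of 0.03 or 10 % of AOD
    return {
        "R": np.corrcoef(sat, aeronet)[0, 1],       # Pearson correlation
        "offset": np.median(diff),                  # median bias (assumption; mean is
                                                    # the other common choice)
        "RMSE": np.sqrt(np.mean(diff ** 2)),
        "GE": np.mean(np.abs(diff) <= gcos),        # fraction within the GCOS envelope
    }

# Placeholder values for demonstration only.
sat = np.array([0.12, 0.25, 0.40, 0.08, 0.55])
aeronet = np.array([0.10, 0.22, 0.47, 0.09, 0.60])
print(evaluate(sat, aeronet))
```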
For most products, except MODIS DT&DB, AOD offsets become negative for AOD>0.2 (fine- and coarse-dominated aerosol types), with increasing amplitude (up to 0.2–0.5) towards the highest AOD values. MODIS DT&DB show the lowest offsets for $0.2 < \mathrm{AOD} < 1$. Offsets for VIIRS are close to 0 for AOD<0.5 and reach ca. 30 % of AOD at AOD≈1. For the current MISR standard product, AOD is systematically underestimated for $\mathrm{AOD} > {\sim}0.5$; this is largely due to the treatment of the surface boundary condition at high AOD (Kahn et al., 2010) and is addressed in the research aerosol retrieval algorithm (Garay et al., 2019; Limbacher and Kahn, 2019). Except for TOMS and Terra MAIAC, offsets are smaller for coarse-dominated AOD.

Figure 2. Difference between Terra DT&DB and AERONET monthly AOD for selected AOD bins: median bias (circles) and bias standard deviation (error bars) for all AOD types (purple), background aerosol (purple; AOD≤0.2), fine-dominated AOD (blue) and coarse-dominated AOD (green). The fraction (F) of points in each bin is represented by orange bars. For all individual products, see Fig. S6.

AOD products retrieved from satellites with better coverage show a better agreement with AERONET monthly aggregates. Thus, sampling differences (swath and pixel selection) are critical in the evaluation of monthly products, as expected, but are not the only factor influencing the evaluation results.

### 3.1.2 AOD evaluation over selected regions

Due to differences in instrument specifications and retrieval approaches, the performance of retrieval algorithms depends largely on aerosol type, aerosol loading and surface properties at certain locations (e.g. Sayer et al., 2014). In this section we show the evaluation results for the AOD products in two selected regions: Europe and ChinaSE (Fig. 3). Results for all regions are shown in Fig. S7. For each region, statistics (R, % of points in GE, offset and RMSE) for all 16 products are combined into one subplot. The merged AOD product M is introduced in Sect. 5.2; evaluation results for that product are summarised in Sect. 5.2.1. Algorithm performance over Europe is similar for most products, with an R of 0.55–0.65, 45 %–55 % of the pixels in the GE, an offset of 0.05–0.1 and an RMSE of ∼0.1. For TOMS and OMI, the performance of each is slightly worse than for the other products in Europe. In ChinaSE, the offset (0.1–0.2) and RMSE (0.2–0.3) are considerably higher than in Europe, and fewer pixels fit within the GE (15 %–30 %). This is likely due to a combination of high AOD loading and accompanying high uncertainty in the products, indicated by high variability in aerosol composition and surface properties. In Indonesia and for the biomass burning outflow over the Atlantic, the MODIS and MISR products show a better agreement with AERONET than the ATSR-family products. Several products which use different surface treatments (ATSR SU, MODIS-family and MISR) show a similarly higher R over AfN, an area of high surface reflectance. However, a high R does not imply that performance is better, only that variations in AOD are captured better. Other statistics (number of pixels within GE, offset and RMSE) in AfN are worse compared with those in Europe. Overall, no single product has the best statistics for all metrics and regions. Retrievals tend to perform well in areas with darker (more vegetated) surfaces and where aerosol type is less variable over time.
In these cases, biases are small and retrieval uncertainties are often better than the GE, tracking temporal AOD variability well but with a tendency to underestimate high-AOD events. In more complex tropical environments, data should be used with greater caution, as there is a greater tendency to underestimate AOD. However, correlation often remains high, suggesting a good ability to identify monthly AOD variations, despite this underestimation.

Figure 3. AERONET evaluation statistics for Europe and ChinaSE: correlation coefficient R (bar) and fraction of pixels satisfying the GCOS requirements, GE (circle); offset (satellite product − AERONET), Δ, and root-mean-square error RMSE, *. Shown for AOD monthly aggregates for each product (1:16; legend for products below the plot) and the L3 merged product (M; approach 2 with RM2 for all aerosol types; for details see Sect. 4.2) with corresponding colours (legend) for the selected regions (as in Fig. 1). N is the number of matches with AERONET. Note that for products that do not provide global coverage (e.g. no retrieval over oceans), the results are missing. For all studied regions, see Fig. S7.

## 3.2 AOD time series

In order to move towards consistency in regional and global AOD records derived from multiple satellites using different sensors and retrieval techniques, this section examines annual regional AOD time series obtained from the different products. Besides the positive offset for TOMS and OMI (Figs. S1, S2, S6 and S7), consistent temporal patterns are observed, and similar interannual AOD variability is tracked by all datasets (Figs. 4 and S8). AOD peaks in Europe in 2002 and in ChinaSE in 2006/2007, 2011 and 2014 (possibly related to changes in anthropogenic emissions; Sogacheva et al., 2018a, b). Relative AOD peaks over the Atlantic dust area in 1998, 2012 and 2015 (Peyridieu et al., 2013), and obvious AOD peaks in Indonesia related to the intensive forest fires in 1997, 2002, 2006 and 2015 (Chang et al., 2015; Shi et al., 2019), are clearly seen.

Figure 4. Annual AOD time series from different products (see legend) for Europe and ChinaSE. For all selected regions, see Fig. S8.

However, significant regional offsets between products exist, which are largest in regions with high aerosol loading. Over ChinaSE, MODIS-family products show higher monthly AOD compared to all others. Over AfN, ATSR SU and ATSR_ens reach higher monthly aggregated AOD than the MODIS-family products, whereas comparisons with AERONET are similar for ATSR and MODIS (with slightly higher RMSE for ATSR, by 0.05); the differences are likely tied to the small number of stations in this region. A large offset between MODIS and ATSR is revealed over Australia (Fig. S8). AOD annual cycles for the individual products for the year 2008 are discussed in Sect. S8. As in the annual time series (Figs. 4 and S8), the annual AOD cycles are similar between the products (Fig. S9), with more pronounced deviations in areas of high AOD.

4 AOD merging approaches

Here, 12 AOD products (all available at 0.55 µm) were used to create a merged AOD product for the period 1995–2017. The temporal availability of the AOD products is shown in Table 2 (counting cases of partial coverage of a dataset during a year as available).

Table 2. Availability and coverage of the AOD products for merging for each year in the period 1995–2017. N: annual number of available products.

We tested two broad approaches for merging, summarised in Fig. 5.
In the first, the median AODs from the available (10 globally and two over land) individual uncorrected and offset-adjusted (shifted to a common value) products were calculated (approach 1; see Sect. 4.1 for details). In the second approach, AOD-weighted means were created, where the weights for individual products were derived from the evaluation with AERONET through two different ranking methods (see approach 2 in Sect. 4.2 for details). The same merging scheme was applied to the L3 uncorrected products (Sect. 2.2) and regional time series (Sect. 3.1), yielding 10 merged AOD products and 10 merged regional time series.

Figure 5. Scheme for the merging approaches, applied for L3 products or regional time series.

To achieve best estimates of the regional AOD by merging multi-sensor monthly AOD data, the systematic and random components of the uncertainties within each product should be considered explicitly. However, this cannot yet be done; only some of the L2 products used to create the L3 monthly products contain pixel-level propagated or estimated uncertainties, and their associated propagations to L3 products (together with other contributions from e.g. sampling limitations) have not yet been quantified robustly. The analysis herein therefore represents an initial effort in the absence of a full uncertainty budget. Uncertainties for the chosen merged L3 product (details are discussed in Sect. 5.2.2) were estimated as the root-mean-squared sum of the deviations between the chosen merged product and either the median from all uncorrected products (approach 1) or each of the other seven merged products (approach 2).

## 4.1 Approach 1: AOD median for uncorrected and offset-adjusted (shifted) AOD products

The mean (arithmetic average) value, although commonly used in climate studies, is not generally equal to the most frequently occurring value (the mode) and may not reflect the central tendency (the median) of strongly asymmetrical distributions such as those that can be found for AOD (O'Neill et al., 2000; Sayer and Knobelspiesse, 2019). Although the central limit theorem implies that this should be less of an effect when making an estimate of the mean AOD from a cluster of AOD datasets (i.e. a merged time series), in practice this is unlikely to be fully the case, because the different datasets are not independent estimates of the underlying AOD field. This is because they are made with sensors and techniques which are not independent (i.e. typically similar spectral/spatial bands and sampling limitations) and may have different bias characteristics. Further, by itself, the mean does not provide any information about how the observations are scattered, whether they are tightly grouped or broadly spread out. Thus, we study the median (which is more robust in the presence of outliers, which might be caused by a poorly performing algorithm in a certain region) and standard deviations (as a metric of diversity) between the products chosen for merging. As shown in Sect. 3, the AOD time series of the different products display highly consistent temporal patterns, albeit with spatio-temporally varying offsets (Figs. 4, S8 and S9). We use the Terra DT&DB product as a reference to estimate the average offsets between products, because its time period overlaps with that of each AOD product considered in the current study. Means and standard deviations of the offsets of all individual products from the Terra DT&DB AOD are shown in Fig. 6 for Europe and ChinaSE and in Fig. S10 for all selected regions.
Offset magnitudes and their variations depend on AOD loading; offsets are typically higher for high AOD. Over land, over ocean and thus globally, the offset is negative relative to Terra DT&DB for most of the products. This includes Europe and ChinaSE. However, over the bright surface area in northern Africa, AVHRR DB/SOAR, VIIRS, ATSR SU and the ATSR ensemble show a high (0.05–0.1) positive bias. Also, all ATSR products are biased high in Australia and South America. Thus, the median for the offset-adjusted product is expected to be positively biased. For details, see Sect. 5.1, where evaluation results for the AOD products merged with the different approaches are discussed.

Figure 6. Regional annual average AOD offset between each dataset and the Terra DT&DB dataset. The GCOS requirement of ±0.03 is shown as a background colour. For all selected regions, see Fig. S10.

With the shifted-median merging approach, each AOD product was shifted on a regional basis, based on its regional offset with respect to Terra DT&DB (Sect. 5.2). The median and standard deviation of the AOD time series were then derived from these 10 shifted records and the Terra DT&DB data record.

## 4.2 Approach 2: weighted AOD

### 4.2.1 Method

As shown in Sect. 3.1, the products differ in the degree to which each represents the AERONET values on the monthly scale. Our second approach is a weighted mean AOD, where the weights are assigned based on the agreement of each dataset with monthly AERONET averages. This represents an initial attempt to adjust the level of confidence assigned to each product on a regional basis; better-comparing products are given more weight in the calculation of a combined product. An AOD-weighted mean was calculated, with a ranking approach based on the statistics from the AERONET comparison for AOD: R, bias, RMSE, GE (Figs. 4 and S8) and the median bias of the binned AOD in the range [0.45, 1] (Figs. 3 and S7). The last criterion was added to specifically consider algorithm performance for higher AOD. Two ranking methods were tested. For the first ranking method (RM1), based on best statistics, the 12 products were ranked from 1 (worst) to 12 (best) for each statistic (R, GE, RMSE, bias and binned bias) separately. The five separate ranks were then summed, so the maximum possible rank is $12 \cdot 5 = 60$. A downside of this method is that, when several products have similar statistics, small variations in the statistics can produce a large spread in ranking. Note that no product received a perfect (60) rating. To overcome this potential downside, the second ranking method (RM2) considers statistics falling into binned ranges (rather than the absolute evaluation statistics). For each statistic, the following windows, [0.5, 1] for R, [0, 0.5] for GE, [0, 0.2] for bias, [0, 0.15] for RMSE and [−0.5, 0] for the binned bias, were divided into 10 bins, and a rank (from 1 to 10) was assigned depending on the bin into which a particular statistic falls for a particular product. As a result, several algorithms can be ranked equally for certain statistics if their statistics fall within the same bin. For example, if R for three products is between 0.8 and 0.85, all three receive a rank score of 8 for that statistic.
The sum of the five ranks (R, GE, RMSE, bias and binned bias), $w_i$, for each product $i$ was calculated and transformed into a weight for each product (as the fraction of the product's rank sum in the total rank sum over all products) to calculate the AOD-weighted mean, $\overline{\mathrm{AOD}}$, as follows:

$$\overline{\mathrm{AOD}} = \frac{\sum_{i=1}^{n} w_i \cdot \mathrm{AOD}_i}{\sum_{i=1}^{n} w_i} \qquad (1)$$

As shown in Sect. 3.1, the performance of the retrieval algorithms often depends on the aerosol conditions (aerosol type and loading; Fig. 2) and surface properties. Accordingly, weights for the different AOD products were calculated for each region, for each of the three aerosol types (background, fine-dominated or coarse-dominated) separately and for "all" aerosol types together, considering the corresponding regional statistics from the AERONET comparison. However, aerosol types often change in time and space within the same region (Fig. S5). Thus, those weights for each aerosol type were applied globally to merge both the L3 monthly products and the time series. As a result, eight merged AOD products were obtained: two ranking approaches (RM1 and RM2) times four sets of statistics (all points and the background, fine-dominated and coarse-dominated subsets).

### 4.2.2 Ranking results (weights) for individual products

The weighting of the contribution of each product to the merged data product is shown in Fig. 7 (Europe and ChinaSE) and Fig. S11 (all selected regions) for the three aerosol types (background, fine-dominated and coarse-dominated) and all aerosol types together (all). With some exceptions (e.g. in AOb, where the RM2 weight of Aqua DT&DB is ca. 15 % higher for the coarse-dominated type, and in Australia, where the RM2 weights of SeaWiFS and Aqua MAIAC are 10 %–15 % higher for the coarse-dominated type; Fig. S11), the differences between the weights obtained with RM1 and RM2, where they exist, do not exceed 5 %–10 %. Thus, the ranking methods RM1 and RM2 introduced in the current study produce similar results. Some products show a better performance for certain aerosol types (Figs. 4 and S4). Thus, the weight of a product depends on which aerosol type is favoured for merging. For example, in Europe VIIRS has a lower weight for fine-dominated aerosols, whereas the corresponding weight for ATSR SU is higher for that aerosol type. In ChinaSE, Terra DT&DB performs worse than Terra MAIAC for background aerosols, so for that aerosol type the weight for Terra MAIAC is higher. As with the results discussed in Sect. 3, none of the algorithms consistently outperforms the others in all regions. There is no clear leader over Europe, a region with low AOD, indicating a similar performance of all algorithms under background conditions. Over land globally, also a region with low AOD, the ranks are similar for the EOS (Earth Observing System) sensors and ATSR, with a somewhat higher rank for VIIRS. Over ocean globally, the ranks are similar for all existing products. One likely reason that the VIIRS and MODIS ranks are often higher is their better coverage, which enables them to better represent AERONET monthly means over land, as they sample the variations more fully. However, MODIS is ranked lower over the Atlantic dust region. The lowest ranks are obtained consistently for TOMS, OMI and POLDER, due to their high biases.
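To make the RM2 ranking and the weighted mean of Eq. (1) concrete, here is a small sketch (illustrative, not the paper's code): the statistic windows follow the text, the product statistics are placeholders, and the exact bin-to-rank orientation and the use of absolute bias are my assumptions.

```python
# Sketch of RM2-style binned ranking plus the weighted mean of Eq. (1).
import numpy as np

# (window, higher_is_better) per statistic, with windows as given in the text.
WINDOWS = {
    "R":     ((0.5, 1.0),  True),
    "GE":    ((0.0, 0.5),  True),
    "bias":  ((0.0, 0.2),  False),   # absolute bias assumed; smaller is better
    "RMSE":  ((0.0, 0.15), False),
    "bbias": ((-0.5, 0.0), True),    # binned bias for AOD in [0.45, 1]; nearer 0 better
}

def rm2_rank(value: float, stat: str, nbins: int = 10) -> int:
    """Map a statistic to a rank 1..nbins by binning its window (indexing assumed)."""
    (lo, hi), higher_better = WINDOWS[stat]
    frac = (np.clip(value, lo, hi) - lo) / (hi - lo)
    if not higher_better:
        frac = 1.0 - frac
    return 1 + min(int(frac * nbins), nbins - 1)

# Placeholder statistics for three hypothetical products.
stats = {
    "prodA": {"R": 0.82, "GE": 0.45, "bias": 0.05, "RMSE": 0.08, "bbias": -0.10},
    "prodB": {"R": 0.78, "GE": 0.30, "bias": 0.12, "RMSE": 0.12, "bbias": -0.25},
    "prodC": {"R": 0.60, "GE": 0.20, "bias": 0.18, "RMSE": 0.14, "bbias": -0.40},
}
w = {p: sum(rm2_rank(v, stat) for stat, v in d.items()) for p, d in stats.items()}

# Eq. (1): weighted mean AOD over the products, per grid cell and month.
aod = {"prodA": 0.21, "prodB": 0.24, "prodC": 0.30}   # placeholder AODs
merged = sum(w[p] * aod[p] for p in aod) / sum(w.values())
print(w, round(merged, 3))
```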
Ranks for the different aerosol classes (all, background, fine-dominated and coarse-dominated) are different, which raises another aspect of using multiple products. Over land, MODIS MAIAC often has a higher rank for background AOD, whereas MODIS DT&DB is better for the other aerosol types.

Figure 7. Weights of each product obtained with RM1 and RM2 for (a) Europe and (b) ChinaSE for different aerosol types (all, background, fine-dominated and coarse-dominated). For all regions, see Fig. S11.

5 Merged L3 AOD products

As a recap, 10 merged products were created: the shifted and unshifted medians from approach 1 and eight (two ranking methods times four aerosol type classes) from approach 2. In this section these products are evaluated against AERONET.

## 5.1 Evaluation of all the merged L3 AOD products with AERONET

Evaluation results (using the same method as in Sect. 3.1) reveal similarities in the accuracy of the products merged with the different approaches. The AOD binned bias of the merged products (Fig. S12) shows a similarly small deviation from AERONET (±0.03) for AOD<0.5 (positive for AOD<0.3 and negative for $0.3 < \mathrm{AOD} < 0.5$). The offset is slightly higher for the median of the shifted AOD product (approach 1) because, as discussed earlier, Terra DT&DB has a positive bias relative to most of the other individual products; this results in slightly elevated AOD compared to the others. For AOD>0.5, where the number of cases is very low, the underestimation increases as AOD increases. As for the individual products, the coarse-dominated merged products have the smallest offset with AERONET. The correlation coefficient, number of pixels in the GE, offset and RMSE for the merged AOD products are shown in Fig. 8 for Europe and ChinaSE and in Fig. S13 for all regions. The merged products have the best temporal coverage, and the number of points used for validation (N) is higher than for any individual product. The correlation coefficients and the number of pixels matching within the GE are as high as for the one or two best-ranked products in the corresponding regions, except for the shifted-median product from approach 1. The offset is close to the average offset, and the RMSE tends to be lowest. Thus, the quality of the merged products, except for the shifted AOD product, is as good as that of the most highly ranked individual AOD products in each region.

Figure 8. AERONET comparison statistics: correlation coefficient R (bar) and fraction of pixels satisfying the GCOS requirements, GE (circle); offset, Δ, and root-mean-square error RMSE, *. Shown for the AOD products merged with the different approaches (median, shifted median, RM1 and RM2 for the different aerosol types) for Europe and ChinaSE. For all regions, see Fig. S13.

## 5.2 Final merged product evaluation and intercomparison with individual products

The agreement of the RM1 and RM2 approaches is encouraging, as we can conclude from the big-picture analysis (Sect. 5.1) that the details of the methodology do not matter much. As there is no significant difference in the evaluation results for the products merged with approaches 1 and 2, we choose the RM2 approach for all aerosol types as the main merged product. We use this for further intercomparison with the individual products to reveal the regional and seasonal differences between the products. If not specifically stated, the merged product mentioned below is the one obtained with RM2 for all aerosol types (RM2 for all).
### 5.2.1 Summarised evaluation results

The difference between the L3 merged product and the median of all individual products used for merging (Table 2) was calculated for the year 2008 (Fig. 9a; as Fig. S1 for the individual products). The difference is within the GCOS requirements over both land and ocean (0.009 and 0.007, respectively) and globally (0.008). High latitudes contribute most to the positive bias over oceans, whereas over land a positive bias is observed mostly over bright surfaces.

The evaluation statistics for the L3 merged product against AERONET, extracted from Figs. S12 and S13, are combined in Fig. 9b, c and d for all 15 regions, as well as for land, ocean and the globe. For most regions, R is between 0.75 and 0.85, 20 %–60 % of pixels fall within the GE, and the RMSE and offset are between 0.05 and 0.1, though somewhat higher for the regions with potentially high AOD loading (Indonesia, AOd, AsW and AsE). Statistics for the merged product (M) are also shown in Figs. 3 and S7 for comparison with the individual products.

Figure 9 (a) L3 merged (approach 2 with RM2 for all) AOD product deviation from the annual median AOD calculated from the individual products used for merging (Table 2) for the year 2008 (as Fig. S1 for individual products). (b) L3 monthly merged AOD product evaluation with AERONET: binned AOD bias for all (purple), background (AOD < 0.2; purple), fine-dominated (blue) and coarse-dominated (green) aerosol types. (c, d) Regional statistics (c: correlation coefficient R, bar, and fraction of pixels that fulfil the GCOS requirements, GE, circle; d: offset, Δ; RMSE, *).

### 5.2.2 Uncertainties

Uncertainties (unc, meaning 1σ of the uncertainty distribution) for the merged L3 products (monthly, seasonal and annual) were estimated as the root-mean-squared sum of the deviations between the chosen merged product M (RM2 for all) and each of the eight alternative merged products: the median from all uncorrected products (approach 1) and the seven other products from approach 2 (RM1 for all aerosol types, and RM1 and RM2 each applied for background, fine-dominated and coarse-dominated particles):

$$\mathrm{unc}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(m_i-M\right)^2},\tag{2}$$

where $m_i$ is the AOD from alternative merged product $i$, $M$ is the AOD from the chosen merged product (RM2 for all), and $N$ is the number of alternative merged products. Note that this is a structural uncertainty (i.e. a sensitivity to diversity and decisions in dataset merging) rather than a total uncertainty for the merged product.
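Equation (2) is straightforward to apply when all merged products are gridded onto a common grid; a minimal sketch (the array layout is an assumption):

```python
import numpy as np

def structural_uncertainty(M, alternatives):
    """Eq. (2): per-grid-cell RMS deviation of the chosen merged product M
    from the N alternative merged products (the approach 1 median plus the
    seven other approach 2 variants)."""
    m = np.stack(alternatives)               # shape (N, nlat, nlon)
    return np.sqrt(np.nanmean((m - M) ** 2, axis=0))
```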
Seasonal and annual uncertainties for the year 2008 are shown in Fig. 10. These uncertainties show artefacts at regional boundaries because the merging was done according to regional statistics.

Figure 10 Seasonal and annual structural uncertainties between the L3 merged product (M; approach 2 with RM2 for all) and the other L3 merged products calculated with approaches 1 and 2 for the year 2008.

The estimated annual and seasonal structural uncertainties are low, 0.005–0.006 globally. They show a seasonal dependence, reaching 0.008 and 0.009 on average over land in MAM and JJA, respectively. The uncertainties are larger (0.01–0.03 on average, up to 0.05) in regions with high AOD (e.g. ChinaSE, India in JJA, AfN in MAM and JJA, AfS in JJA and SON). This means that the uncertainties introduced through the choice of merging strategy often fulfil the requirements calculated by Chylek et al. (2003) for an AOD uncertainty of 0.015 over land and 0.010 over ocean, needed to estimate the direct aerosol radiative effect to within 0.5 W m−2. The fact that this merging uncertainty estimate is smaller than the previously discussed GCOS goal uncertainties implies that reasonable merging-method decisions may be of secondary importance in terms of meeting those goals. It is cautioned, though, that since many of the algorithms are susceptible to the same error sources and subject to similar sampling limitations, the uncertainty estimates calculated here are likely to be a lower bound on the true uncertainty in the merged datasets. It should also be remembered that these uncertainties cover only the choice of merging method, not the entirety of the uncertainties in the merged datasets relative to AERONET.

### 5.2.3 Spatial and temporal intercomparison with other products

The deviation between the individual products and the merged product for the year 2008 is shown in Fig. 11. Among the products used for merging, POLDER has the largest positive offset (0.026) and SeaWiFS the largest negative offset (−0.026) on global average. Over land, POLDER has the highest positive offset (0.031); the offsets for ATSR SU and Terra DT&DB are also high (0.024 and 0.023, respectively). The largest negative offsets relative to the merged product are for MAIAC (−0.046 and −0.041 for Terra and Aqua, respectively). Over ocean, POLDER, Terra DT&DB and ATSR ADV are offset high by 0.022–0.024, whereas ATSR SU and SeaWiFS are offset low (−0.030 and −0.027, respectively) compared to the merged AOD product. Most of the observed global, land and ocean AOD offsets (except for Aqua MAIAC over land) are within the GCOS requirement of ±0.03. VIIRS agrees best with the merged product globally (0.003) and over ocean (−0.003); AVHRR DT/SOAR and Aqua DT&DB agree best with the merged product over land, showing opposite-in-sign offsets of −0.011 and 0.009, respectively.

Regional biases between the individual products and the merged product are similar to the regional biases shown in Fig. 2, where the individual products were compared with the median AOD calculated from all individual products available at 0.55 µm. Regional annual offsets between the individual AOD products and the merged AOD product are shown in Fig. S14 (cf. those for the median AOD product in Figs. 6 and S10). For AsE (which includes ChinaSE) and AfN, the AOD offset is higher than 0.03 (the GCOS requirement in low-AOD conditions) for some products. However, those areas are characterised by high AOD loading (annual AOD is between 0.4 and 0.8) related to e.g. anthropogenic pollution and/or dust events. If the GCOS requirement of 10 % of AOD is applied here instead, then most of the offsets are within the GCOS requirements. The highest regional offsets relative to the merged AOD dataset are associated with products which provide AOD at wavelengths other than 0.55 µm – TOMS (0.50 µm), OMI (0.50 µm) and EPIC (0.44 µm) – and which are thus not used for merging.

In some regions, the AOD offsets between the individual products and the merged product show a seasonal behaviour (Fig. S15). In ChinaSE, the negative offsets for AVHRR NOAA, SeaWiFS and VIIRS are most pronounced in JJA. In AsW, the ATSR ADV positive offset is higher in that season. In AfN, most products have their largest negative offsets in JJA, whereas ATSR SU and ATSR_ens (which includes the ATSR SU product) have their highest positive biases in that season. In SA, offsets are lower in JJA for all products.
In AOb, offsets are lower in MAM, and in AOd, offsets are lower in SON for all products.

Figure 11 AOD deviation of the individual products relative to the merged AOD product for the year 2008. Global, land and ocean AOD mean differences are shown for each product, when available.

Table 3 Mean offset and standard deviation (in parentheses) between time series obtained with different approaches for three time periods, determined based on product availability.

# 6 Merged AOD time series

As with the L3 merged AOD products (Sect. 5), the AOD time series from the individual products (Figs. 4 and S8) were merged using approach 1 (median of the uncorrected AOD) and approach 2 (RM1 and RM2 for different aerosol types). The shifted AOD median (approach 1 for shifted products) has clear limitations when the product chosen as a reference (Terra DT&DB, in our case) deviates considerably from the other products, as it does over most of the regions (except for Aus, AfN and SA; Fig. S8). Thus, the median of the shifted products is not discussed here. However, the median-shifted AOD approach allows an extension of the time series back to 1978–1994, when only the TOMS AOD (over land) and AVHRR NOAA (over ocean) long-term products currently exist and the merging approaches introduced in the current study are not applicable.

Figure 12 Annual AOD time series merged with two different approaches (red and light blue for approaches 1 and 2, respectively) and AOD time series from the L3 merged data (approach 2; olive) for the selected regions. In each panel, ±1σ of the AOD from all uncorrected AOD products is shown as a light blue shadow (often small and thus not visible). TOMS over land and AVHRR NOAA over ocean products shifted to the merged time series are also shown with dashed grey and purple lines, respectively, when available.

Figure 13 (a) Seasonal and (b) monthly AOD median time series (red), merged time series (blue) and time series from the merged L3 product (olive) for Europe and ChinaSE. AOD ±1σ for the merged time series and for the time series from the merged L3 products are shown as light blue and light olive shadows, respectively. Note the different scales. For all selected regions, see Figs. S16 and S17.

The two merging approaches (approach 1 for uncorrected products and approach 2 for weighted AOD) tested here agree well (Fig. 12). The offsets between the time series calculated with the different approaches are again low (0.004–0.011). Spatial consistency is indicated by the high correlation (similar positions of peaks) between AfN and its Atlantic dust outflow region. Interannual variations as well as standard deviations are highest for the regions with the largest AOD, e.g. ChinaSE (anthropogenic emissions) and Indonesia (biomass burning). The time series for ChinaSE follows the known patterns caused by stepwise regional emission reductions over the last 25 years (Sogacheva et al., 2018b).

AOD time series merged with the different approaches show good agreement on all timescales: annual (Fig. 12), and seasonal and monthly (Fig. 13a and b, respectively, for Europe and ChinaSE, and Figs. S16 and S17 for all studied regions). The offsets between the merged time series and the time series calculated from the merged L3 product have a regional component and, as discussed above, depend on the availability of the products (Table 2). The offsets between the time series merged with the different approaches (Table 3) are slightly higher in all regions for the periods 1995–1999 and 2012–2017, when fewer products are available for merging (Table 2).
A deviation of up to 0.05 (approach 1 higher than approach 2) is observed over Indonesia and North America before 2002, when both MODIS satellites became operational. For the other regions, the deviation is considerably lower (below 0.03). With the addition of MISR and the two MODIS products in 2000/2002, the offset between the time series is reduced. ATSR products are no longer available from 2012 onwards, when the VIIRS product became available. In 1995–1999, the mean offset is similar for all three time series. The offsets are higher for regions with high AOD loading (e.g. Asia and northern Africa; Fig. S18). In 2000–2011 and 2012–2017, the offset is lowest (0.004) between the merged and the median time series, as well as between the merged time series and the time series calculated from the merged L3 product.

The agreement between the time series obtained with the different approaches supports the conclusion, made on the basis of the evaluation results, that for big-picture analysis of overall trends the details of the methodology do not matter very much.

Annual, seasonal and monthly time series from the merged L3 monthly AOD show slightly higher deviations of both signs compared to the merged time series discussed above. Interestingly, a seasonality is observed in the deviation. In AfN, the AOD from the monthly merged L3 product is higher in autumn for the period 1995–1999. In Bor and AsN (Figs. S16 and S17), the deviation is higher in spring for the period 1997–1999. A possible explanation might be the sparser coverage in those areas (due to limitations of the retrieval algorithms over bright surfaces, e.g. desert or snow).

Regional offsets between the annual, seasonal and monthly merged AOD time series and the time series from the merged L3 monthly product are summarised for the three timescales in Fig. S19. The offset is lower for annual data and generally increases with finer time resolution. As the previous analysis showed, the offset is larger in high-AOD regions (e.g. Asia, AfN and SA).

Overall, good agreement exists between the time series calculated using different merging approaches and different orders of the processing steps. There is a general consistency, and similar temporal patterns are observed between the time series merged with the two approaches and the time series from the merged L3 AOD product, despite small differences, which are more pronounced at the beginning of the period, when fewer products are available. With only few exceptions, the offsets between the AOD time series calculated with the different approaches are within the GCOS requirement of ±0.03 or 10 % of AOD. A separate study is planned in which regional and global trends in this merged AOD L3 product will be analysed.

Table 4 Instrument, archive, URL and DOI (last access: 17 February 2020, for all), and name and creator of the products used in the current study (if available).

# 7 Conclusions

This study has analysed the consistency of regional time records of monthly AOD from 16 different satellite products. These were obtained from a wide range of different instruments – TOMS, AVHRR, SeaWiFS, ATSR-2, AATSR, MODIS, MISR, POLDER, VIIRS and EPIC – with largely varying information content and sampling, and with different algorithms based on different remote sensing approaches, quality filtering, cloud masking and averaging.
Differences between those 16 data records in a set of regions with different characteristics across the globe were demonstrated and verified against a ground-based AERONET monthly mean dataset, in order to answer the question of how well a satellite dataset can reproduce monthly gridded mean AERONET values in a region. AOD time series (monthly, seasonal and annual) from the products show a good consistency of temporal patterns but significant regional biases due to all those differences. In many cases the more pronounced differences were between different algorithms applied to the same sensor rather than between similar algorithms applied to different sensors. This is encouraging in that it implies that algorithmic uncertainties (either retrieval assumptions or pixel selection criteria) can be similar to or larger than sensor ones (e.g. calibration quality and sampling limitations), and as such, refining individual algorithms can still make meaningful steps towards providing better L3 products.

To build an AOD product merged from 12 individual satellite products, two different approaches were introduced and tested. In approach 1, a simple median was taken of the 12 time records, both uncorrected and shifted to the Terra DT&DB product. In approach 2, the AOD evaluation results (for different aerosol types) against AERONET were used to infer a ranking, which was then used to calculate a weighted AOD mean. Two different ranking methods were tested in approach 2: RM1, a simple ranking based on the overall statistics, and RM2, a ranking based on binned statistics. In addition, the order of the processing steps in approach 2 was interchanged (L3 dataset merging or regional merging) to test the stability of the results.

Ten merged L3 AOD monthly products were created and evaluated with AERONET. The evaluation shows that the quality of the merged products (except for the one created with approach 1 for shifted AOD) is as good as that of the most highly ranked individual AOD products in each region. One of the merged products (approach 2 with RM2 for all) was chosen as the final merged product (http://nsdc.fmi.fi/data/data_aod, last access: 20 January 2020), based on its slightly better evaluation results. Structural uncertainties for the final merged product were estimated.

All merged regional AOD time series show a very high consistency of temporal patterns within and between regions, and the time records with their uncertainties (standard deviations shaded around the median values) clearly illustrate the evolution of regional AOD. With few exceptions, all merging methods lead to very similar results, which is reassuring for the usefulness and stability of the merged products.

There are of course caveats to these rather simple and straightforward merging approaches, which do not consider in much detail the differences in sampling and in sensitivity to different conditions (e.g. surface brightness or number of independent observables) of the different instruments and algorithms. It is well known that monthly, seasonal or annual gridded mean values can carry large uncertainties, whether inferred from a few ground-based stations meant to represent a full grid cell or from satellite images containing large gaps due to limited swath, clouds or failed retrievals. Pixel-level uncertainties are becoming available for a growing number of satellite products, and it would be highly beneficial if these estimated errors could be propagated consistently to those gridded monthly products.
However, this requires deeper insight and new methods to take into account correlation patterns among parts of the uncertainties and to estimate the sampling-based uncertainties in practice, in light of approximated AOD variability.

Altogether, as frequently requested from a user point of view, the stability and consistency of regional, merged AOD time series should be seen as strengthening our confidence in the reliability of satellite-based data records. Recent, ongoing and future work to improve the Level 3 uncertainty budget of the satellite products – as well as assessment of spatio-temporal uncertainties in time-aggregated AERONET data – will benefit the creation and assessment of merged time series. The corresponding time series can be used in regional and global AOD trend analyses and for comparison with (climate and reanalysis) model AOD fields.

Aside from the merged dataset itself, some key outcomes of this research have been a quantification of the diversity between monthly satellite AOD products and of their comparability with monthly averages from AERONET, and the sensitivity of the merged time series to some sensible decisions which must be made in creating it. The merged AOD product will be extended as satellite missions continue and new data versions are released.

Data availability. URL and DOI (if available) of the products used in the current study, as well as of the merged AOD product (FMI_SAT_AOD-MERGED), are summarised in Table 4.

Author contributions. The exercise on AOD merging was initiated and widely discussed by the AeroCom/AeroSat community. The work was performed by LS, who collected the data, developed the methodology, performed the analysis and wrote the extended draft of the paper. The evaluation results were widely discussed with the AOD data providers, who coauthor the paper. TP, AMS and RAK also contributed considerably to the writing. All authors contributed to reviewing the drafts.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. The authors thank attendees of AeroCom/AeroSat workshops over the past several years for lively and informative discussions, which helped provide the impetus for and shape this analysis. AeroCom and AeroSat are unfunded community networks which participants contribute to within the remit and constraints of their other aerosol research.

Financial support. The work presented is partly supported by the Copernicus Climate Change Service (contracts C3S_312a_lot5 and C3S_312b_Lot2), which is funded by the European Union, with support from ESA as part of the Climate Change Initiative (CCI) project Aerosol_cci (ESA-ESRIN projects AO/1-6207/09/I-LG and ESRIN/400010987 4/14/1-NB) and the AirQast 776361 H2020-EO-2017 project.

Review statement. This paper was edited by Stelios Kazadzis and reviewed by three anonymous referees.

References

Ban-Weiss, G. A., Jin, L., Bauer, S. E., Bennartz, R., Liu, X., Zhang, K., Ming, Y., Guo, H., and Jiang, J. H.: Evaluating clouds, aerosols, and their interactions in three global climate models using satellite simulators and observations, J. Geophys. Res.-Atmos., 119, 10876–10901, https://doi.org/10.1002/2014JD021722, 2014. Bellouin, N., Boucher, O., Haywood, J., and Shekar, R. M.: Global estimate of aerosol direct radiative forcing from satellite measurements, Nature, 438, 1138–1141, https://doi.org/10.1038/nature04348, 2005.
Bevan, S., North, P., Los, S., and Grey, W.: A global dataset of atmospheric aerosol optical depth and surface reflectance from AATSR, Remote Sens. Environ., 116, 199–210, 2012. Boys, B. L., Martin, R. V., van Donkelaar, A., MacDonell, R. J., Hsu, N. C., Cooper, M. J., Yantosca, R. M., Lu, Z., Streets, D. G., Zhang, Q., and Wang, S. W.: Fifteen-year global time series of satellite-derived fine particulate matter, Environ. Sci. Technol., 48, 11109–11118, 2014. Chang, C.-H., Hsiao, Y.-L., and Hwang, C.: Evaluating Spatial and Temporal Variations of Aerosol Optical Depth and Biomass Burning over Southeast Asia Based on Satellite Data Products, Aerosol Air Qual. Res., 15, 2625–2640, https://doi.org/10.4209/aaqr.2015.10.0589, 2015. Chatterjee, A., Michalak, A. M., Kahn, R. A., Paradise, S. R., Braverman, A. J., and Miller, C. E.: A geostatistical data fusion technique for merging remote sensing and ground-based observations of aerosol optical thickness, J. Geophys. Res., 115, D20207, https://doi.org/10.1029/2009JD013765, 2010. Chylek, P., Henderson, B., and Mishchenko, M.: Aerosol radiative forcing and the accuracy of satellite aerosol optical depth retrieval, J. Geophys. Res., 108, 4764, https://doi.org/10.1029/2003JD004044, 2003. de Leeuw, G., Holzer-Popp, T., Bevan, S., Davies, W., Descloitres, J., Grainger, R. G., Griesfeller, J., Heckel, A., Kinne, S., Klüser, L., Kolmonen, P., Litvinov, P., Martynenko, D., North, P. J. R., Ovigneur, B., Pascal, N., Poulsen, C., Ramon, D., Schulz, M., Siddans, R., Sogacheva, L., Tanré, D., Thomas, G. E., Virtanen, T. H., von Hoyningen-Huene, W., Vountas, M., and Pinnock, S.: Evaluation of seven European aerosol optical depth retrieval algorithms for climate analysis, Remote Sens. Environ., 162, 295–315, https://doi.org/10.1016/j.rse.2013.04.023, 2015. de Leeuw, G., Sogacheva, L., Rodriguez, E., Kourtidis, K., Georgoulias, A. K., Alexandri, G., Amiridis, V., Proestakis, E., Marinou, E., Xue, Y., and van der A, R.: Two decades of satellite observations of AOD over mainland China using ATSR-2, AATSR and MODIS/Terra: data set evaluation and large-scale patterns, Atmos. Chem. Phys., 18, 1573–1592, https://doi.org/10.5194/acp-18-1573-2018, 2018. Dubovik, O., Holben, B. N., Eck, T. F., Smirnov, A., Kaufman, Y. J., King, M. D., Tanre, D., and Slutsker, I.: Variability of absorption and optical properties of key aerosol types observed in worldwide locations, J. Atmos. Sci., 59, 590–608, 2002. Dubovik, O., Herman, M., Holdak, A., Lapyonok, T., Tanré, D., Deuzé, J. L., Ducos, F., Sinyuk, A., and Lopatin, A.: Statistically optimized inversion algorithm for enhanced retrieval of aerosol properties from spectral multi-angle polarimetric satellite observations, Atmos. Meas. Tech., 4, 975–1018, https://doi.org/10.5194/amt-4-975-2011, 2011. Dubovik, O., Lapyonok, T., Litvinov, P., Herman, M., Fuertes, D., Ducos, F., Lopatin, A., Chaikovsky, A., Torres, B., Derimian, Y., Huang, X., Aspetsberger, M., and Federspiel, C.: GRASP: a versatile algorithm for characterizing the atmosphere, SPIE: Newsroom, https://doi.org/10.1117/2.1201408.005558, 2014. Dubovik, O., Li, Z., Mishchenko, M. I., Tanre, D., Karol, Y., Bojkov, B., Cairns, B., Diner, D. J., Espinosa, R., Goloub, P., Gu, X., Hasekamp, O., Hong, J., Hou, W., Knobelspiesse, K. D., Landgraf, J., Li, L., Litvinov, P., Liu, Y., Lopatin, A., Marbach, T., Maring, H., Martins, V., Meijer, Y., Milinevsky, G., Mukai, S., Parol, F., Qiao, Y., Remer, L., Rietjens, J., Sano, I., Stammes, P., Stamnes, S., Sun, X., Tabary, P., Travis, L.
D., Waquet, F., Xu, F., Yan, C., and Yin, D.: Polarimetric remote sensing of atmospheric aerosols: instruments, methodologies, results, and perspectives, J. Quant. Spectrosc. Ra., 474–511, https://doi.org/10.1016/j.jqsrt.2018.11.024, 2019. Eck, T. F., Holben, B. N., Reid, J. S., Dubovik, O., Smirnov, A., O'Neill, N. T., Slutsker, I., and Kinne, S.: Wavelength dependence of the optical depth of biomass burning, urban, and desert dust aerosol, J. Geophys. Res., 104, 31333–31350, 1999. Escribano, J., Boucher, O., Chevallier, F., and Huneeus, N.: Impact of the choice of the satellite aerosol optical depth product in a sub-regional dust emission inversion, Atmos. Chem. Phys., 17, 7111–7126, https://doi.org/10.5194/acp-17-7111-2017, 2017. Flowerdew, R. J. and Haigh, J. D.: An approximation to improve accuracy in the derivation of surface reflectances from multi-look satellite radiometers, Geophys. Res. Lett., 22, 1693–1696, 1995. Garay, M. J., Kalashnikova, O. V., and Bull, M. A.: Development and assessment of a higher-spatial-resolution (4.4 km) MISR aerosol optical depth product using AERONET-DRAGON data, Atmos. Chem. Phys., 17, 5095–5106, https://doi.org/10.5194/acp-17-5095-2017, 2017. Garay, M. J., Witek, M. L., Kahn, R. A., Seidel, F. C., Limbacher, J. A., Bull, M. A., Diner, D. J., Hansen, E. G., Kalashnikova, O. V., Lee, H., Nastan, A. M., and Yu, Y.: Introducing the 4.4 km Spatial Resolution MISR Aerosol Product, Atmos. Meas. Tech. Discuss., https://doi.org/10.5194/amt-2019-340, in review, 2019. GCOS: Systematic observation requirements for satellite-based data products for climate, 2011 update, World Meteorological Organization (WMO) Global Climate Observing System (GCOS) report GCOS-154, available at: https://library.wmo.int/doc_num.php?explnum_id=3710 (last access: 19 June 2019), 2011. Giles, D. M., Sinyuk, A., Sorokin, M. G., Schafer, J. S., Smirnov, A., Slutsker, I., Eck, T. F., Holben, B. N., Lewis, J. R., Campbell, J. R., Welton, E. J., Korkin, S. V., and Lyapustin, A. I.: Advancements in the Aerosol Robotic Network (AERONET) Version 3 database – automated near-real-time quality control algorithm with improved cloud screening for Sun photometer aerosol optical depth (AOD) measurements, Atmos. Meas. Tech., 12, 169–209, https://doi.org/10.5194/amt-12-169-2019, 2019. Gupta, P., Levy, R. C., Mattoo, S., Remer, L. A., and Munchak, L. A.: A surface reflectance scheme for retrieving aerosol optical depth over urban surfaces in MODIS Dark Target retrieval algorithm, Atmos. Meas. Tech., 9, 3293–3308, https://doi.org/10.5194/amt-9-3293-2016, 2016. Han, B., Ding, H., Ma, Y., and Gong, W.: Improving retrieval accuracy for aerosol optical depth by fusion of MODIS and CALIOP data, Tehnički Vjesnik, 24, 791–800, 2017. Heidinger, A. K., Cao, C., and Sullivan, J.: Using Moderate Resolution Imaging Spectrometer (MODIS) to calibrate Advanced Very High Resolution Radiometer (AVHRR) reflectance channels, J. Geophys. Res., 107, 4702, https://doi.org/10.1029/2001JD002035, 2002. Holben, B. N., Eck, T. F., Slutsker, I., Tanré, D., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET – A federated instrument network and data archive for aerosol characterization, Remote Sens. Environ., 66, 1–16, 1998. Holben, B. N., Tanre, D., Smirnov, A., Eck, T. F., Slutsker, I., Abuhassan, N., Newcomb, W. W., Schafer, J. S., Chatenet, B., Lavenu, F., Kaufman, Y.
J., Vande Castle, J., Setzer, A., Markham, B., Clark, D., Halthore, R., Karnieli, A., O'Neill, N. T., Pietras, C., Pinker, R. T., Voss, K., and Zibordi, G.: An Emerging Ground-Based Aerosol Climatology, J. Geophys. Res., 106, 12067–12097, 2001. Hsu, N. C., Tsay, S. C., King, M. D., and Herman, J. R.: Aerosol properties over bright-reflecting source regions, IEEE T. Geosci. Remote, 42, 557–569, 2004. Hsu, N. C., Jeong, M.-J., Bettenhausen, C., Sayer, A. M., Hansell, R., Seftor, C. S., Huang, J., and Tsay, S.-C.: Enhanced Deep Blue aerosol retrieval algorithm: The second generation, J. Geophys. Res.-Atmos., 118, 9296–9315, https://doi.org/10.1002/jgrd.50712, 2013a. Hsu, N. C., Sayer, A. M., Jeong, M.-J., and Bettenhausen, C.: SeaWiFS Deep Blue Aerosol Optical Depth and Angstrom Exponent Monthly Level 3 Data Gridded at 1.0 Degrees V004, Greenbelt, MD, USA, Goddard Earth Sciences Data and Information Services Center (GES DISC), https://doi.org/10.5067/MEASURES/SWDB/DATA304, 2013b. Hsu, N. C., Lee, J., Sayer, A. M., Carletta, N., Chen, S.-H., Tucker, C. J., and Tsay, S.-C.: Retrieving near-global aerosol loading over land and ocean from AVHRR, J. Geophys. Res., 122, 9968–9989, https://doi.org/10.1002/2017JD026932, 2017. Hsu, N. C., Lee, J., Sayer, A. M., Kim, W., Bettenhausen, C., and Tsay, S.-C.: VIIRS Deep Blue aerosol products over land: Extending the EOS long-term aerosol data records, J. Geophys. Res.-Atmos., 124, 4026–4053, https://doi.org/10.1029/2018JD029688, 2019. Huang, D., Lyapustin, A., Korkin, S., Wang, Y., Blank, K., and Marshak, A.: The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for DSCOVR EPIC and initial analysis of data products, Remote Sens. Environ., in review, 2020. Ignatov, A. and Stowe, L. L.: Aerosol Retrievals from Individual AVHRR Channels. Part I: Retrieval Algorithm and Transition from Dave to 6S Radiative Transfer Model, J. Atmos. Sci., 59, 313–334, 2002. IPCC: Summary for Policymakers of IPCC Special Report on Global Warming of 1.5 °C approved by governments, available at: https://www.ipcc.ch/site/assets/uploads/2018/11/pr_181008_P48_spm_en.pdf (last access: 30 April 2019), 2018. Jethva, H. and Torres, O.: Satellite-based evidence of wavelength-dependent aerosol absorption in biomass burning smoke inferred from Ozone Monitoring Instrument, Atmos. Chem. Phys., 11, 10541–10551, https://doi.org/10.5194/acp-11-10541-2011, 2011. Kahn, R. A., Gaitley, B. J., Garay, M. J., Diner, D. J., Eck, T. F., Smirnov, A., and Holben, B. N.: Multiangle Imaging SpectroRadiometer global aerosol product assessment by comparison with the Aerosol Robotic Network, J. Geophys. Res., 115, D23209, https://doi.org/10.1029/2010JD014601, 2010. Kinne, S.: Remote sensing data combinations: superior global maps for aerosol optical depth, in: Satellite Aerosol Remote Sensing over Land, Springer, Berlin Heidelberg, 361–381, https://doi.org/10.1007/978-3-540-69397-0_12, 2009. Kinne, S., Lohmann, U., Feichter, J., Schulz, M., Timmreck, C., Ghan, S., Easter, R., Chin, M., Ginoux, P., Takemura, T., Tegen, I., Koch, D., Herzog, M., Penner, J., Pitari, G., Holben, B., Eck, T., Smirnov, A., Dubovik, O., Slutsker, I., Tanre, D., Torres, O., Mishchenko, M., Geogdzhayev, I., Chu, D. A., and Kaufman, Y.: Monthly averages of aerosol properties: A global comparison among models, satellite data, and AERONET ground data, J. Geophys. Res., 108, 4634, https://doi.org/10.1029/2001JD001253, 2003. Kinne, S., Schulz, M., Textor, C., Guibert, S., Balkanski, Y., Bauer, S.
E., Berntsen, T., Berglen, T. F., Boucher, O., Chin, M., Collins, W., Dentener, F., Diehl, T., Easter, R., Feichter, J., Fillmore, D., Ghan, S., Ginoux, P., Gong, S., Grini, A., Hendricks, J., Herzog, M., Horowitz, L., Isaksen, I., Iversen, T., Kirkevåg, A., Kloster, S., Koch, D., Kristjansson, J. E., Krol, M., Lauer, A., Lamarque, J. F., Lesins, G., Liu, X., Lohmann, U., Montanaro, V., Myhre, G., Penner, J., Pitari, G., Reddy, S., Seland, O., Stier, P., Takemura, T., and Tie, X.: An AeroCom initial assessment – optical properties in aerosol component modules of global models, Atmos. Chem. Phys., 6, 1815–1834, https://doi.org/10.5194/acp-6-1815-2006, 2006. Kokhanovsky, A. A. and de Leeuw, G. (Eds.): Satellite Aerosol Remote Sensing Over Land, Springer-Praxis, Berlin, 388 pp., 2009. Kolmonen, P., Sogacheva, L., Virtanen, T. H., de Leeuw, G., and Kulmala, M.: The ADV/ASV AATSR aerosol retrieval algorithm: current status and presentation of a full-mission AOD data set, Int. J. Digital Earth, 9, 545–561, https://doi.org/10.1080/17538947.2015.1111450, 2016. Kosmale, M., et al.: in preparation, 2020. Lee, H., Garay, M. J., Kalashnikova, O. V., Yu, Y., and Gibson, P. B.: How Long should the MISR Record Be when Evaluating Aerosol Optical Depth Climatology in Climate Models?, Remote Sens., 10, 1326, 2018. Levy, R. C., Mattoo, S., Munchak, L. A., Remer, L. A., Sayer, A. M., Patadia, F., and Hsu, N. C.: The Collection 6 MODIS aerosol products over land and ocean, Atmos. Meas. Tech., 6, 2989–3034, https://doi.org/10.5194/amt-6-2989-2013, 2013. Levy, R. C., Mattoo, S., Sawyer, V., Shi, Y., Colarco, P. R., Lyapustin, A. I., Wang, Y., and Remer, L. A.: Exploring systematic offsets between aerosol products from the two MODIS sensors, Atmos. Meas. Tech., 11, 4073–4092, https://doi.org/10.5194/amt-11-4073-2018, 2018. Li, J., Carlson, B. E., and Lacis, A. A.: Application of spectral analysis techniques in the inter-comparison of aerosol data. Part I: An EOF approach to analyze the spatial-temporal variability of aerosol optical depth using multiple remote sensing data sets, J. Geophys. Res.-Atmos., 118, 8640–8648, https://doi.org/10.1002/jgrd.50686, 2013. Li, J., Carlson, B. E., and Lacis, A. A.: Application of spectral analysis techniques in the inter-comparison of aerosol data. Part II: Using maximum covariance analysis to effectively compare spatio-temporal variability of satellite and AERONET measured aerosol optical depth, J. Geophys. Res.-Atmos., 119, 153–166, https://doi.org/10.1002/2013JD020537, 2014a. Li, J., Carlson, B. E., and Lacis, A. A.: Application of spectral analysis techniques to the intercomparison of aerosol data – Part 4: Synthesized analysis of multisensor satellite and ground-based AOD measurements using combined maximum covariance analysis, Atmos. Meas. Tech., 7, 2531–2549, https://doi.org/10.5194/amt-7-2531-2014, 2014b. Li, J., Li, X., Carlson, B. E., Kahn, R. A., Lacis, A. A., Dubovik, O., and Nakajima, T.: Reducing multisensor satellite monthly mean aerosol optical depth uncertainty: 1. Objective assessment of current AERONET locations, J. Geophys. Res.-Atmos., 121, 13609–13627, https://doi.org/10.1002/2016JD025469, 2016. Li, S., Yu, C., Chen, L., Tao, J., Letu, H., Ge, W., Si, Y., and Liu, Y.: Inter-comparison of model-simulated and satellite-retrieved componential aerosol optical depths in China, Atmos. Environ., 141, 320–332, https://doi.org/10.1016/j.atmosenv.2016.06.075, 2016.
Li, Z., Zhao, X., Kahn, R., Mishchenko, M., Remer, L., Lee, K.-H., Wang, M., Laszlo, I., Nakajima, T., and Maring, H.: Uncertainties in satellite remote sensing of aerosols and impact on monitoring its long-term trend: a review and perspective, Ann. Geophys., 27, 2755–2770, https://doi.org/10.5194/angeo-27-2755-2009, 2009. Limbacher, J. A. and Kahn, R. A.: Updated MISR over-water research aerosol retrieval algorithm – Part 2: A multi-angle aerosol retrieval algorithm for shallow, turbid, oligotrophic, and eutrophic waters, Atmos. Meas. Tech., 12, 675–689, https://doi.org/10.5194/amt-12-675-2019, 2019. Liu, L., Lacis, A. A., Carlson, B. E., Mishchenko, M. I., and Cairns, B.: Assessing Goddard Institute for Space Studies ModelE aerosol climatology using satellite and ground-based measurements: A comparison study, J. Geophys. Res., 111, D20212, https://doi.org/10.1029/2006JD007334, 2006. Lyapustin, A., Wang, Y., Korkin, S., and Huang, D.: MODIS Collection 6 MAIAC algorithm, Atmos. Meas. Tech., 11, 5741–5765, https://doi.org/10.5194/amt-11-5741-2018, 2018. Martonchik, J. V., Kahn, R. A., and Diner, D. J.: Retrieval of Aerosol Properties over Land Using MISR Observations, in: Satellite Aerosol Remote Sensing Over Land, edited by: Kokhanovsky, A. A. and de Leeuw, G., Springer, Berlin, 267–293, 2009. Michou, M., Nabat, P., and Saint-Martin, D.: Development and basic evaluation of a prognostic aerosol scheme (v1) in the CNRM Climate Model CNRM-CM6, Geosci. Model Dev., 8, 501–531, https://doi.org/10.5194/gmd-8-501-2015, 2015. Mishchenko, M. I., Geogdzhayev, I. V., Cairns, B., Carlson, B. E., Chowdhary, J., Lacis, A. A., Liu, L., Rossow, W. B., and Travis, L. D.: Past, present, and future of global aerosol climatologies derived from satellite observations: A perspective, J. Quant. Spectrosc. Ra., 106, 325–347, https://doi.org/10.1016/j.jqsrt.2007.01.007, 2007. Nabat, P., Somot, S., Mallet, M., Chiapello, I., Morcrette, J. J., Solmon, F., Szopa, S., Dulac, F., Collins, W., Ghan, S., Horowitz, L. W., Lamarque, J. F., Lee, Y. H., Naik, V., Nagashima, T., Shindell, D., and Skeie, R.: A 4-D climatology (1979–2009) of the monthly tropospheric aerosol optical depth distribution over the Mediterranean region from a comparative evaluation and blending of remote sensing and model products, Atmos. Meas. Tech., 6, 1287–1314, https://doi.org/10.5194/amt-6-1287-2013, 2013. Naeger, A. R., Gupta, P., Zavodsky, B. T., and McGrath, K. M.: Monitoring and tracking the trans-Pacific transport of aerosols using multi-satellite aerosol optical depth composites, Atmos. Meas. Tech., 9, 2463–2482, https://doi.org/10.5194/amt-9-2463-2016, 2016. North, P.: Estimation of aerosol opacity and land surface bidirectional reflectance from ATSR-2 dual-angle imagery: Operational method and validation, J. Geophys. Res., 107, AAC 4-1–AAC 4-10, 2002. North, P., Briggs, S., Plummer, S., and Settle, J.: Retrieval of land surface bidirectional reflectance and aerosol opacity from ATSR-2 multiangle imagery, IEEE T. Geosci. Remote, 37, 526–537, 1999. O'Neill, N. T., Ignatov, A., Holben, B. N., and Eck, T. F.: The lognormal distribution as a reference for reporting aerosol optical depth statistics: Empirical tests using multi-year, multi-site AERONET sun-photometer data, Geophys. Res. Lett., 27, 3333–3336, https://doi.org/10.1029/2000GL011581, 2000. Penning de Vries, M. J. M., Beirle, S., Hörmann, C., Kaiser, J. W., Stammes, P., Tilstra, L. G., Tuinder, O. N.
E., and Wagner, T.: A global aerosol classification algorithm incorporating multiple satellite data sets of aerosol and trace gas abundances, Atmos. Chem. Phys., 15, 10597–10618, https://doi.org/10.5194/acp-15-10597-2015, 2015. Peyridieu, S., Chédin, A., Capelle, V., Tsamalis, C., Pierangelo, C., Armante, R., Crevoisier, C., Crépeau, L., Siméon, M., Ducos, F., and Scott, N. A.: Characterisation of dust aerosols in the infrared from IASI and comparison with PARASOL, MODIS, MISR, CALIOP, and AERONET observations, Atmos. Chem. Phys., 13, 6065–6082, https://doi.org/10.5194/acp-13-6065-2013, 2013. Pinty, B., Taberner, M., Haemmerle, V., Paradise, S. R., Vermote, E., Verstraete, M. M., Gobron, N., and Widlowski, J. L.: Global-Scale Comparison of MISR and MODIS Land Surface Albedos, J. Climate, 24, 732–749, 2011. Platnick, S., Hubanks, P., Meyer, K., and King, M. D.: MODIS Atmosphere L3 Monthly Product (08_L3), NASA MODIS Adaptive Processing System, Goddard Space Flight Center, https://doi.org/10.5067/MODIS/MOD08_M3.061, 2015a. Platnick, S., Hubanks, P., Meyer, K., and King, M. D.: MODIS Atmosphere L3 Monthly Product, NASA MODIS Adaptive Processing System, Goddard Space Flight Center, USA, https://doi.org/10.5067/MODIS/MYD08_M3.061, 2015b. Popp, T., de Leeuw, G., Bingen, C., Brühl, C., Capelle, V., Chedin, A., Clarisse, L., Dubovik, O., Grainger, R., Griesfeller, J., Heckel, A., Kinne, S., Klüser, L., Kosmale, M., Kolmonen, P., Lelli, L., Litvinov, P., Mei, L., North, P., Pinnock, S., Povey, A., Robert, C., Schulz, M., Sogacheva, L., Stebel, K., Stein Zweers, D., Thomas, G., Tilstra, L. G., Vandenbussche, S., Veefkind, P., Vountas, M., and Xue, Y.: Development, Production and Evaluation of Aerosol Climate Data Records from European Satellite Observations (Aerosol_cci), Remote Sens., 8, 421, 2016. Sayer, A. M. and Knobelspiesse, K. D.: How should we aggregate data? Methods accounting for the numerical distributions, with an assessment of aerosol optical depth, Atmos. Chem. Phys., 19, 15023–15048, https://doi.org/10.5194/acp-19-15023-2019, 2019. Sayer, A. M., Thomas, G. E., and Grainger, R. G.: A sea surface reflectance model for (A)ATSR, and application to aerosol retrievals, Atmos. Meas. Tech., 3, 813–838, https://doi.org/10.5194/amt-3-813-2010, 2010. Sayer, A. M., Hsu, N. C., Bettenhausen, C., Ahmad, Z., Holben, B. N., Smirnov, A., Thomas, G. E., and Zhang, J.: SeaWiFS Ocean Aerosol Retrieval (SOAR): Algorithm, validation, and comparison with other data sets, J. Geophys. Res., 117, D03206, https://doi.org/10.1029/2011JD016599, 2012a. Sayer, A. M., Hsu, N. C., Bettenhausen, C., Jeong, M.-J., Holben, B. N., and Zhang, J.: Global and regional evaluation of over-land spectral aerosol optical depth retrievals from SeaWiFS, Atmos. Meas. Tech., 5, 1761–1778, https://doi.org/10.5194/amt-5-1761-2012, 2012b. Sayer, A. M., Munchak, L. A., Hsu, N. C., Levy, R. C., Bettenhausen, C., and Jeong, M. J.: MODIS Collection 6 aerosol products: Comparison between Aqua's e-Deep Blue, Dark Target, and "merged" data sets, and usage recommendations, J. Geophys. Res.-Atmos., 119, 13965–13989, https://doi.org/10.1002/2014jd022453, 2014. Sayer, A. M., Hsu, N. C., Bettenhausen, C., Jeong, M.-J., and Meister, G.: Effect of MODIS Terra radiometric calibration improvements on Collection 6 Deep Blue aerosol products: Validation and Terra/Aqua consistency, J. Geophys. Res.-Atmos., 120, 12157–12174, https://doi.org/10.1002/2015JD023878, 2015. Sayer, A. M., Hsu, N.
C., Lee, J., Carletta, N., Chen, S.-H., and Smirnov, A.: Evaluation of NASA Deep Blue/SOAR aerosol retrieval algorithms applied to AVHRR measurements, J. Geophys. Res.-Atmos., 122, 9945–9967, https://doi.org/10.1002/2017JD026934, 2017. Sayer, A. M., Hsu, N. C., Lee, J., Bettenhausen, C., Kim, W. V., and Smirnov, A.: Satellite Ocean Aerosol Retrieval (SOAR) algorithm extension to S-NPP VIIRS as part of the "Deep Blue" aerosol project, J. Geophys. Res.-Atmos., 123, 380–400, https://doi.org/10.1002/2017JD027412, 2018a. Sayer, A. M., Hsu, N. C., Lee, J., Kim, W. V., Dubovik, O., Dutcher, S. T., Huang, D., Litvinov, P., Lyapustin, A., Tackett, J. L., and Winker, D. M.: Validation of SOAR VIIRS over-water aerosol retrievals and context within the global satellite aerosol data record, J. Geophys. Res.-Atmos., 123, 13496–13526, https://doi.org/10.1029/2018JD029465, 2018b. Sayer, A. M., Hsu, N. C., Lee, J., Kim, W., and Dutcher, S.: Validation, stability, and consistency of MODIS Collection 6.1 and VIIRS Version 1 Deep Blue aerosol data over land, J. Geophys. Res.-Atmos., 124, 4658–4688, https://doi.org/10.1029/2018JD029598, 2019. Schutgens, N. A. J.: Site representativity of AERONET and GAW remotely sensed AOT and AAOT observations, Atmos. Chem. Phys. Discuss., https://doi.org/10.5194/acp-2019-767, in review, 2019. Shi, Y., Zhang, J., Reid, J. S., Hyer, E. J., Eck, T. F., Holben, B. N., and Kahn, R. A.: A critical examination of spatial biases between MODIS and MISR aerosol products – application for potential AERONET deployment, Atmos. Meas. Tech., 4, 2823–2836, https://doi.org/10.5194/amt-4-2823-2011, 2011. Shi, Y. R., Levy, R. C., Eck, T. F., Fisher, B., Mattoo, S., Remer, L. A., Slutsker, I., and Zhang, J.: Characterizing the 2015 Indonesia fire event using modified MODIS aerosol retrievals, Atmos. Chem. Phys., 19, 259–274, https://doi.org/10.5194/acp-19-259-2019, 2019. Smirnov, A., Holben, B. N., Slutsker, I., Giles, D. M., McClain, C. R., Eck, T. F., Sakerin, S. M., Macke, A., Croot, P., Zibordi, G., Quinn, P. K., Sciare, J., Kinne, S., Harvey, M., Smyth, T. J., Piketh, S., Zielinski, T., Proshutinsky, A., Goes, J. I., Nelson, N. B., Larouche, P., Radionov, V. F., Goloub, P., Krishna Moorthy, K., Matarrese, R., Robertson, E. J., and Jourdin, F.: Maritime Aerosol Network as a component of Aerosol Robotic Network, J. Geophys. Res.-Atmos., 114, D06204, https://doi.org/10.1029/2008JD011257, 2009. Sogacheva, L., Kolmonen, P., Virtanen, T. H., Rodriguez, E., Saponaro, G., and de Leeuw, G.: Post-processing to remove residual clouds from aerosol optical depth retrieved using the Advanced Along Track Scanning Radiometer, Atmos. Meas. Tech., 10, 491–505, https://doi.org/10.5194/amt-10-491-2017, 2017. Sogacheva, L., de Leeuw, G., Rodriguez, E., Kolmonen, P., Georgoulias, A. K., Alexandri, G., Kourtidis, K., Proestakis, E., Marinou, E., Amiridis, V., Xue, Y., and van der A, R. J.: Spatial and seasonal variations of aerosols over China from two decades of multi-satellite observations – Part 1: ATSR (1995–2011) and MODIS C6.1 (2000–2017), Atmos. Chem. Phys., 18, 11389–11407, https://doi.org/10.5194/acp-18-11389-2018, 2018a. Sogacheva, L., Rodriguez, E., Kolmonen, P., Virtanen, T. H., Saponaro, G., de Leeuw, G., Georgoulias, A. K., Alexandri, G., Kourtidis, K., and van der A, R. J.: Spatial and seasonal variations of aerosols over China from two decades of multi-satellite observations – Part 2: AOD time series for 1995–2017 combined from ATSR ADV and MODIS C6.1 and AOD tendency estimations, Atmos. Chem.
Phys., 18, 16631–16652, https://doi.org/10.5194/acp-18-16631-2018, 2018b. Tang, Q., Bo, Y., and Zhu, Y.: Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method, J. Geophys. Res.-Atmos., 121, 4034–4048, https://doi.org/10.1002/2015JD024571, 2016. Thomas, G. E., Carboni, E., Sayer, A. M., Poulsen, C. A., Siddans, R., and Grainger, R. G.: Oxford-RAL Aerosol and Cloud (ORAC): aerosol retrievals from satellite radiometers, in: Satellite Aerosol Remote Sensing over Land, edited by: Kokhanovsky, A. and de Leeuw, G., Springer Praxis Books, Springer, Berlin, Heidelberg, 193–225, https://doi.org/10.1007/978-3-540-69397-0_7, 2009. Torres, O.: OMI/Aura Near UV Aerosol Optical Depth and Single Scattering Albedo 1-orbit L2 Swath 13×24 km V003, Greenbelt, MD, USA, Goddard Earth Sciences Data and Information Services Center (GES DISC), https://doi.org/10.5067/Aura/OMI/DATA2004, 2006. Torres, O., Bhartia, P. K., Herman, J. R., Ahmad, Z., and Gleason, J.: Derivation of aerosol properties from satellite measurements of backscattered ultraviolet radiation: Theoretical basis, J. Geophys. Res., 103, 17099–17110, https://doi.org/10.1029/98JD00900, 1998. Torres, O., Bhartia, P. K., Sinyuk, A., Welton, E. J., and Holben, B.: Total Ozone Mapping Spectrometer measurements of aerosol absorption from space: Comparison to SAFARI 2000 ground-based observations, J. Geophys. Res., 110, D10S18, https://doi.org/10.1029/2004JD004611, 2005. Torres, O., Tanskanen, A., Veihelmann, B., Ahn, C., Braak, R., Bhartia, P. K., Veefkind, P., and Levelt, P.: Aerosols and surface UV products from Ozone Monitoring Instrument observations: An overview, J. Geophys. Res., 112, D24S47, https://doi.org/10.1029/2007JD008809, 2007. Torres, O., Ahn, C., and Chen, Z.: Improvements to the OMI near-UV aerosol algorithm using A-train CALIOP and AIRS observations, Atmos. Meas. Tech., 6, 3257–3270, https://doi.org/10.5194/amt-6-3257-2013, 2013. Torres, O., Bhartia, P. K., Jethva, H., and Ahn, C.: Impact of the ozone monitoring instrument row anomaly on the long-term record of aerosol products, Atmos. Meas. Tech., 11, 2701–2715, https://doi.org/10.5194/amt-11-2701-2018, 2018. Veefkind, J. P., de Leeuw, G., and Durkee, P. A.: Retrieval of aerosol optical depth over land using two-angle view satellite radiometry during TARFOX, Geophys. Res. Lett., 25, 3135–3138, 1998. Virtanen, T. H., Kolmonen, P., Sogacheva, L., Rodríguez, E., Saponaro, G., and de Leeuw, G.: Collocation mismatch uncertainties in satellite aerosol retrieval validation, Atmos. Meas. Tech., 11, 925–938, https://doi.org/10.5194/amt-11-925-2018, 2018. Wei, J., Li, Z., Peng, Y., and Sun, L.: MODIS Collection 6.1 aerosol optical depth products over land and ocean: validation and comparison, Atmos. Environ., 201, 428–440, 2019a. Wei, J., Peng, Y., Mahmood, R., Sun, L., and Guo, J.: Intercomparison in spatial distributions and temporal trends derived from multi-source satellite aerosol products, Atmos. Chem. Phys., 19, 7183–7207, https://doi.org/10.5194/acp-19-7183-2019, 2019b. Witek, M. L., Garay, M. J., Diner, D. J., Bull, M. A., and Seidel, F. C.: New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water, Atmos. Meas. Tech., 11, 429–439, https://doi.org/10.5194/amt-11-429-2018, 2018. WMO: Guidelines on the Calculation of Climate Normals, WMO-No. 1203, 2017. Zhao, T. X. P., Chan, P. K., and Heidinger, A.
K.: A global survey of the effect of cloud contamination on the aerosol optical thickness and its long-term trend derived from operational AVHRR satellite observations, J. Geophys. Res.-Atmos., 118, 2849–2857, https://doi.org/10.1002/jgrd.50278, 2013. Zhao, X. and NOAA CDR Program: NOAA Climate Data Record (CDR) of AVHRR Daily and Monthly Aerosol Optical Thickness (AOT) over Global Oceans, Version 3.0, NOAA National Centers for Environmental Information, https://doi.org/10.7289/V5BZ642P, 2017. Zhao, X. P., Laszlo, I., Guo, W., Heidinger, A., Cao, C., Jelenak, A., Tarpley, D., and Sullivan, J.: Study of long-term trend in aerosol optical thickness observed from operational AVHRR satellite instrument, J. Geophys. Res., 113, D07201, https://doi.org/10.1029/2007JD009061, 2008.
# Estimating Aggregate Wild-Animal Suffering from Reproductive Age and Births per Female

http://reducing-suffering.org/estimating-aggregate-wild-animal-suffering-from-reproductive-age-and-births-per-female/
by Brian Tomasik First written: 28 Nov. 2015; last update: 20 May 2016 ## Summary This page presents a calculator for the aggregate suffering of a population of an animal species based on years until the animals reproduce and how many eggs are hatched from each reproducing mother. The main selling point of this approach is that it can consider the implications of high early infant mortality rates in a more accurate way than a simple calculation could. As expected, I find that suffering is greater when organisms have shorter lifespans because there are more deaths per unit time. Suffering also increases as organisms lay more eggs, but depending on how the population size is measured, the increase in suffering with greater egg-laying can be much less than linear. The model used in this piece is very simplified; for instance, it assumes that mothers lay eggs only once at some constant age and that all mothers surviving to that age lay eggs. It would be good to repeat this analysis with other models to see how robust the conclusions are. ## Epigraphs Williams points out that, in all the mammalian species that have so far been carefully studied, the rate at which their members engage in the killing of conspecifics is several thousand times greater than the highest homicide rate measured in any American city. This dark message about our furry friends is often resisted, and popular presentations of nature (in television documentaries, magazine articles, and popular books) often engage in self-censorship to avoid shocking the squeamish. Hobbes was right: life in the state of nature is nasty, brutish, and short, for virtually all nonhuman species. --Daniel Dennett, Darwin's Dangerous Idea: Evolution and the Meaning of Life, p. 478 Many [bird] chicks don’t survive their first year: Some starve to death, their carcasses decaying [...]. Some are preyed upon by hawks or crows or cats. Some are slain by their nestmates. We've always said [predators] kill the old and the weak. But the fact is they kill the old and the weak, and very large number[s] of the young. --Bob Jamieson, wildlife ecologist ## Motivation When we think about the lives of wild animals, we often picture those members of a species who live to adulthood. This makes sense given that most humans live to adulthood. But for many species, most individuals die before maturity. Hence, when we're thinking about the extent of suffering in nature, in addition to counting the suffering of longer-lived individuals, we should also include the suffering of those who die young. Ideally we'd like a mortality distribution that would show what fraction of individuals live to what ages. Then we could calculate a typical amount of suffering experienced by an individual living to a given age, and take a weighted average of those values based on what fraction of individuals live to each age. When we get lucky, we can find life tables for a species, such as this one for a corn earworm. However, most of the time we only know more vague information, like the adult lifespan of the species and the average number of eggs laid by one mother. The piece provides a calculator that fits a mortality distribution based on lifespan and eggs-per-mother numbers in order to calculate a (very rough) estimate of aggregate suffering, counting both short-lived and long-lived individuals in a species. These numbers are obviously highly noisy, but they may at least provide rough relative comparisons among species. 
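To make the weighted-average idea concrete before building the model, here is a toy Python example with a made-up life table; every number in it is hypothetical and purely illustrative:

```python
# Hypothetical life table: fraction of individuals dying at each age,
# with an assumed amount of suffering for a life of that length.
ages = [0.1, 0.5, 1.0, 3.0]             # ages at death, in years (made up)
frac_dying = [0.70, 0.20, 0.08, 0.02]   # fractions summing to 1 (made up)
suffering = [5.0, 3.0, 2.0, 1.5]        # suffering per individual (made-up units)

avg_suffering = sum(f * s for f, s in zip(frac_dying, suffering))
print(avg_suffering)  # weighted average over the mortality distribution
```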
## Choosing a distribution

The first step is to decide on a parametric distribution for mortality, i.e., the probability density function for the ages at which individuals die. The simplest choice might be a uniform distribution, but this is pretty inaccurate for most species, where most individuals tend to die quite young. Another option is an exponential distribution. This would probably be a good fit for many species, but it doesn't work as well for some K-selected animals like humans, where mortality probability is actually higher at older ages than in youth. For this analysis, I decided to use the Gompertz distribution, because it's standard for modeling human mortality, and under different choices of parameters, it has a similar shape as an exponential distribution. (You can see this from the top figure on the Wikipedia article for the Gompertz distribution.)

The general cumulative distribution function for the Gompertz distribution is, according to Wikipedia:

F(a; η, b) = 1 - exp(-η (e^(b a) - 1))

for a ≥ 0, where I've renamed Wikipedia's "x" variable as "a" to stand for "age". This function gives the fraction of starting individuals that are dead by age a. The survival function S(a; η, b) is just

S(a; η, b) := 1 - F(a; η, b) = exp(-η (e^(b a) - 1)).

To get intuition for how the survival function S(a; η, b) behaves, the original page includes an interactive graph with sliders for η and b. By varying η and keeping b fixed, we can get a broad range of shapes. (This is why η is called the "shape parameter".) For example:

• with values of η around 0.01, you can get a survival curve looking something like human mortality
• with values of η around 1 or 10 or 100, you can get an increasingly sharp exponential-type decay.

The "scale parameter" of the distribution is b; it controls how far out the distribution goes. Smaller values of b translate to longer average lifespans.
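The two formulas translate directly into code; here's a small Python sketch (function names are mine):

```python
import numpy as np

def gompertz_cdf(a, eta, b):
    """F(a; eta, b): fraction of starting individuals dead by age a."""
    return 1.0 - np.exp(-eta * (np.exp(b * a) - 1.0))

def gompertz_survival(a, eta, b):
    """S(a; eta, b): fraction of starting individuals alive at age a."""
    return np.exp(-eta * (np.exp(b * a) - 1.0))

ages = np.linspace(0.0, 5.0, 6)
# eta = 0.01 stays near 1 before dropping (human-like shape);
# eta = 10 decays steeply from age 0 (sharp exponential-type decay).
print(gompertz_survival(ages, eta=0.01, b=1.0))
print(gompertz_survival(ages, eta=10.0, b=1.0))
```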
## Simplifying assumptions about the population's dynamics

For this calculator, I assume a population where births can happen at any time throughout the year, as is the case with humans and perhaps animals in tropical climates. This is definitely not true for animals in cooler climates with frozen winters, but you might be able to modify some parameters to make this calculator roughly work for those cases as well.

I assume that when people speak about the "size of the population", they're counting all individuals within the species that are at least some minimum age A_min -- say, 0.05 years. (Note that all times are measured in years throughout this piece.) This helps exclude, e.g., eggs that are just hatching. If the population is aquatic and is sampled using a net with some mesh size, then the sample will tend to retrieve exactly those individuals that are above some size cutoff, which could be roughly correlated with some age cutoff.

I assume that there's an exact age A_r when females lay eggs, and they lay their whole life supply of E eggs at that time. Females who die before then lay no eggs. This assumption is untrue for many animals, but we can try to shoehorn multiple birth events into the assumption of one birth event. For example, for contemporary humans in wealthy countries, we could pretend that a female has 2.1 children (E = 2.1) at exactly 28 years of age (A_r = 28).^c

I also assume that all females reaching age A_r reproduce. This is again untrue, and one extension of this calculator could be to add a parameter for the fraction of females of age A_r who actually reproduce.

I assume that eggs hatch immediately after being laid. I ignore any potential suffering by non-hatching eggs in this analysis and only count the welfare of individuals whose age after hatching is greater than 0.

## Fitting the distribution

### Choosing η

Species where mothers leave large numbers of hatching eggs E (tens or hundreds) on average tend to be more "r-selected". These organisms typically have Type III survivorship curves. Big η values capture this shape best. η values beyond ~10 or so look fairly similar in shape, so the exact value doesn't matter that much, but higher η values become slightly steeper in the initial drop-off of population after individuals emerge from eggs. η values around ~0.01 are probably more appropriate for highly "K-selected" animals like large mammals. These animals tend to have E values around, say, 2-5.^d

We want a function η(E) that roughly gives us an appropriate value of η based on E. We want η(2) to be around 0.01, η(10) to be around (say) ~1, and η(100) to be some big number. (For η above ~10 or 50 or so, the shape of the survival curve remains basically constant.) One function that fits the bill is η(E) = E^2/100. There's nothing special about this choice, but it's a hacky way to roughly get the kind of curve I want. That said, in the calculator, I allow you to specify any formula for η of the form x E^y + z for constants x, y, and z.

Following is an example of an actual survivorship curve fitted with a Gompertz distribution.

[Figure: an empirical medfly survivorship curve with a fitted Gompertz curve; not preserved in this copy.]

This curve is actually more concave down than I would have expected relative to the number of offspring medflies lay (~300 per female per lifetime). I wonder if this trend is true for other insects as well.

### Choosing b

In a stable population, if a mother creates E hatching eggs, on average exactly two of those hatchlings will survive to maturity and reproduce. In my framework, this means all but a fraction 2/E of the individuals have died by age A_r. That is:

S(A_r; η, b) = 2/E
exp(-η (e^(b A_r) - 1)) = 2/E
-η (e^(b A_r) - 1) = ln(2/E)
η (e^(b A_r) - 1) = ln(E/2)
e^(b A_r) - 1 = ln(E/2) / η
e^(b A_r) = 1 + ln(E/2) / η
b A_r = ln(1 + ln(E/2) / η)
b = ln(1 + ln(E/2) / η) / A_r.
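Here's a small Python sketch of this fitting step, using the default η(E) = E^2/100. Again, this is my own illustration of the formulas above, not the calculator's actual code:

```python
import math

def fit_gompertz(E, A_r):
    """Fit Gompertz parameters from eggs hatched per mother (E)
    and age at reproduction (A_r, in years)."""
    eta = E**2 / 100.0                                 # default shape formula eta(E)
    b = math.log(1.0 + math.log(E / 2.0) / eta) / A_r  # scale, from S(A_r) = 2/E
    return eta, b

eta, b = fit_gompertz(E=100, A_r=1.0)
# Sanity check: survival to the reproducing age should equal 2/E.
S_Ar = math.exp(-eta * (math.exp(b * 1.0) - 1.0))
print(eta, b, S_Ar)  # S_Ar should come out as 0.02 = 2/100
```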
## Factoring in population size

### Population distribution is proportional to survivorship distribution

The survivorship function S(a) tells us, given an age, what fraction of individuals born are still alive by that age. For the following discussion, I want a function N(a) that tells us, given an age, the number density of individuals currently alive who have that age. The distribution of individuals by age is sometimes called the "age structure" of the population, although often it's broken up into separate male and female graphs, whereas I intend N(a) to be the sum of males and females together. Basically, N is a cross-sectional view of the population's age structure (one snapshot in time), while S is what one would see from a cohort study of age structure (following a given set of born individuals over time until they all die).

Let the population size be P. It turns out that in a stationary population, for all a ≥ 0,

N(a) = P S(a) / (∫_0^∞ S(x) dx),

i.e., N is the population size times a normalized version of S.

#### Proof:

Let S(a) be the survivorship curve as a function of age a. Let N_t(a) be the time-t distribution function of ages a in the population. That is, ∫_r^s N_t(a) da is the number of individuals in the population between age r and age s at time t.

Suppose a tiny increment of time da elapses. For all a ≥ 0, we have

N_{t+da}(a + da) = N_t(a) * S(a + da)/S(a),    (call this the "N equation")

because the N_t(a) individuals that used to have age a now have age a + da, except for the fraction that died over the time interval da, and the fraction that didn't die over this time interval is the fraction surviving to age a + da, relative to the fraction that already survived to age a.

Since our population is stationary, we want to find a stationary population distribution, i.e., we want a single, t-independent function N(a) such that N(a) = N_t(a) for all t and for all a. For such a function N, the N equation requires that

N(a + da) = N(a) * S(a + da)/S(a).

Clearly one such function is N(a) = k S(a) for all a and any constant k, so k S(a) is a stationary distribution. Since ∫_r^s N(a) da is the number of individuals in the population alive between ages r and s, ∫_0^∞ N(a) da must equal the population size, P. That means ∫_0^∞ k S(a) da = P, or k = P / (∫_0^∞ S(a) da).

In the literature on discrete Markov chains, there's a theorem about a stationary distribution being unique under certain conditions. My setup here is continuous, but assuming that a similar theorem applies, and assuming the right conditions hold (do they?), we have that k S(a) is the unique stationary distribution. ■

### Counting non-measured population members

Suppose we measure the population size of a species, finding that the number of individuals at least age A_min is P_meas. Since we didn't count individuals younger than A_min years old, the actual population size P_actual is slightly larger. In particular,

P_meas := ∫_{A_min}^∞ N(a) da, while P_actual := ∫_0^∞ N(a) da.

Hence:

P_actual = P_meas * (∫_0^∞ N(a) da) / (∫_{A_min}^∞ N(a) da) = P_meas (∫_0^∞ S(a) da) / (∫_{A_min}^∞ S(a) da).

### Births per unit time

We can picture the population as is shown in the following diagram:

[Diagram: survivorship curve S(a), with the individuals that will reproduce within the next A_r years shaded in blue -- a rectangle of width A_r and height 2/E under the curve.]

Because this is a survivorship graph, the number of individuals that die in a time interval is proportional to the difference in y values over that time interval. Since who will die at what age is deterministic, we can take a God's eye view and sort individuals along the "starting line" of the y axis such that they die at the right times. We can picture the individuals like water drops in a river that run over a curved cliff. If the stream is flowing in a stable way, then any given still-picture snapshot of the river will show the distribution of the population by survival age, which is a visual way to understand the "population distribution is proportional to survivorship distribution" theorem.

The individuals within the blue shaded region of the graph are precisely those that will reproduce within at most A_r years from now. Let the fraction of individuals in this box be R. Then the rate of individuals achieving the age of reproduction per year is R P_actual / A_r. Since each reproduction event produces E/2 hatching eggs per parent (or E hatching eggs per mother), this implies that the rate of eggs hatching per year, EPY, is

EPY = E R P_actual / (2 A_r).

To calculate R, we can divide the area of the blue shaded rectangle by the total area under the survivorship curve. The area of the rectangle is easy: length * width = (2/E) * A_r. The area under the whole curve is

∫_0^∞ S(a) da = ∫_0^∞ exp(-η (e^(b a) - 1)) da.

I don't know if this is possible to integrate analytically, so for this calculator, I'm integrating it numerically.
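A rough Python sketch of these numerical steps follows (scipy's quad handles the improper integrals; the parameter values in the example call are arbitrary placeholders, and this mirrors the formulas rather than the calculator's actual source):

```python
import math
import numpy as np
from scipy.integrate import quad

def population_quantities(E, A_r, A_min, P_meas):
    """Compute P_actual, R, and EPY by numerically integrating the
    fitted Gompertz survivorship curve."""
    eta = E**2 / 100.0
    b = math.log(1.0 + math.log(E / 2.0) / eta) / A_r
    # np.exp avoids overflow errors when quad samples very large ages.
    S = lambda a: np.exp(-eta * (np.exp(b * a) - 1.0))

    area_total, _ = quad(S, 0.0, np.inf)    # integral of S from 0 to infinity
    area_meas, _ = quad(S, A_min, np.inf)   # integral of S from A_min to infinity

    P_actual = P_meas * area_total / area_meas  # scale up for uncounted young
    R = (2.0 / E) * A_r / area_total            # blue rectangle / total area
    EPY = E * R * P_actual / (2.0 * A_r)        # eggs hatched per year
    return P_actual, R, EPY

print(population_quantities(E=100, A_r=1.0, A_min=0.05, P_meas=1e9))
```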
## Suffering per birth

Now that we have an expression for the number of eggs of the species that are hatched per year, all we need to calculate total suffering per year is to estimate the average suffering that results from the individual that will be born from a given egg.

### Suffering given lifespan

Newborn individuals are less complex and have less developed neural wiring than adults. To quantify this effect, define a function c(a) that describes how conscious an individual is at age a, relative to how conscious it would be as a mature adult. For convenience, let's take "mature adult" to mean a = A_r. I assume that prior to maturity, c(a) has the form (a/A_r)^α for some α ≥ 0. After age A_r, I assume that c(a) remains at 1.

I think a good choice of α is around 1/2, which says that animals become more sentient quickly early in development and more slowly later in development. For example, if for a stone-age human, A_r = 25 (say), then a child of age 6.25 years would be considered (6.25/25)^0.5 = 1/2 as sentient as an adult. (Even this seems too uncharitable toward the child, but probably for other species, especially those that develop through many distinct stages like larva to pupa to adult, this sentience curve is more reasonable.) If you want no sentience distinctions among organisms by age, you can set α = 0.

Next, we need to describe the suffering that an individual experiences with a given lifetime of Y years. For simplicity, assume that the suffering (or, if you're a classical utilitarian, you can take this parameter to be happiness minus suffering) of the individual is a constant L units per year over time (relative to its maximal consciousness at each time point). And the pain of dying (such as by predation, disease, or starvation) is a constant amount D when it happens (again, relative to maximal consciousness at that time). So, the total suffering TS of the individual is

TS(Y) = c(Y) D + ∫_0^Y c(a) L da.

If Y ≤ A_r, this equals

TS(Y) = (Y/A_r)^α D + ∫_0^Y (a/A_r)^α L da
      = (Y/A_r)^α D + (L / A_r^α) ∫_0^Y a^α da
      = (Y/A_r)^α D + (L / A_r^α) (1/[1+α]) Y^(α+1).

And if Y > A_r:

TS(Y) = c(Y) D + ∫_0^{A_r} c(a) L da + ∫_{A_r}^Y c(a) L da
      = D + (L / A_r^α) (1/[1+α]) A_r^(α+1) + L (Y - A_r)
      = D + (L A_r/[1+α]) + L (Y - A_r).

### Average suffering

Now that we have a function TS for the total suffering given a lifespan of Y, we need to take an expectation over the probability of having each given lifespan. We can do this using the probability density function f(a) of the Gompertz distribution, which according to Wikipedia is

f(a; η, b) = b η e^(b a) e^η exp(-η e^(b a)).

Average suffering over all eggs is then

AS = ∫_0^∞ TS(a) f(a; η, b) da.

Once again, this is hopeless to evaluate analytically, so my calculator does so numerically.

## Putting the pieces together

Given the above, total, population-wide suffering per year (PWSPY) is just the rate of egg production, EPY, times average suffering per egg, AS. Note that this is actually the quantity of future suffering produced by a year's worth of egg laying (i.e., the suffering those now-hatching individuals will experience over the entirety of their lives), but assuming a stable population, the amount of future suffering produced per year should equal the amount of suffering actually endured during a year. (If this weren't true, there would be either a surplus or deficit of suffering endured relative to suffering created, and since the population is stable, that surplus or deficit would be the same every year indefinitely, leading to an infinite mismatch between suffering created vs. suffering endured.)
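Continuing the sketch from earlier sections, here's how TS, AS, and PWSPY can be computed numerically in Python. The parameter values are placeholders (the document states L's default is -5; the value of D here is my own guess, with negative numbers meaning suffering), and this mirrors the formulas above rather than the calculator's actual source:

```python
import numpy as np
from scipy.integrate import quad

def total_suffering(Y, A_r, L, D, alpha):
    """Lifetime suffering TS(Y) for an individual that lives Y years."""
    if Y <= A_r:
        return (Y / A_r)**alpha * D + (L / A_r**alpha) * Y**(alpha + 1) / (alpha + 1)
    return D + L * A_r / (alpha + 1) + L * (Y - A_r)

def gompertz_pdf(a, eta, b):
    """Gompertz density f(a; eta, b) = b eta e^(b a) e^eta exp(-eta e^(b a)),
    written with a combined exponent for numerical stability."""
    return b * eta * np.exp(b * a + eta * (1.0 - np.exp(b * a)))

def average_suffering(eta, b, A_r, L, D, alpha):
    """AS: expectation of TS over the Gompertz lifespan distribution."""
    integrand = lambda a: total_suffering(a, A_r, L, D, alpha) * gompertz_pdf(a, eta, b)
    AS, _ = quad(integrand, 0.0, np.inf)
    return AS

# Example with placeholder values (eta and b as fitted for E = 100, A_r = 1):
AS = average_suffering(eta=100.0, b=0.0384, A_r=1.0, L=-5.0, D=-10.0, alpha=0.5)
EPY = 1e9  # eggs hatched per year, from the population-size step
print("AS =", AS, " PWSPY =", EPY * AS)
```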
Here's a simple example to make the point concrete. The following figure helps illustrate the text:

[Figure: the six-individual example population described below, and the eggs laid by the one female who reaches reproducing age; not preserved in this copy.]

At the beginning of this year, there's a population with six individuals: four have age 0 years (two male, two female), and two have age 1 year (one male, one female). At the end of this year, two of those currently aged 0 will die, and both individuals currently aged 1 will die. Hence, total suffering this year is 6L + 4D, since all six individuals live one year, and four die. (For simplicity, I'm here assuming that individuals don't become more sentient with age, i.e., that c(a) = 1 for all a.)

A_r = 2, and the female individual who will reach age 2 at the end of the year will lay E = 4 hatching eggs, i.e., EPY = 4. Once laid, two of those eggs will live for 1 year, and two will live for 2 years. TS for an offspring that will live 1 year is L + D, and TS for an offspring that will live 2 years is 2L + D. AS = [(L + D) + (2L + D)]/2 = 1.5L + D. So the total suffering that will eventually be endured by the offspring laid this year is EPY * AS = 4 * (1.5L + D) = 6L + 4D. This is the same as the total suffering over the current year.
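This bookkeeping is easy to verify symbolically in a few lines of Python (my own check of the toy example above, using sympy):

```python
import sympy as sp

L, D = sp.symbols("L D")

# Suffering endured this year: six individuals each live one year; four die.
endured_this_year = 6 * L + 4 * D

# Suffering created this year: EPY = 4 eggs; two offspring will live 1 year,
# and two will live 2 years.
TS_1yr = L + D
TS_2yr = 2 * L + D
AS = (TS_1yr + TS_2yr) / 2
created_this_year = 4 * AS

print(sp.simplify(endured_this_year - created_this_year))  # prints 0
```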
## Calculator for total suffering per year

PWSPY is calculated below for some default parameter values.^e The units of the calculated numbers are arbitrary, but comparisons across species are meaningful. Note that if you want to compare species with different degrees of sentience, you should multiply these numbers by degrees of sentience before comparing them.

[In the original page, the Value column of this table contained interactive input fields for the first several rows and live-computed outputs for the rest; those values are not preserved in this copy.]

| Variable | Symbol | Value |
|---|---|---|
| Average eggs laid per reproducing mother | E_laid | (input) |
| Hatching rate: fraction of laid eggs that hatch | hatch/laid | (input) |
| Average eggs hatched per reproducing mother | E = E_laid * (hatch/laid) | (computed) |
| Age (in years) at which individuals lay eggs | A_r | (input) |
| Age (in years) of individuals at which they begin getting counted in population measurements | A_min | (input) |
| Measured population estimate (enter the order of magnitude) | P_meas | 10^(input) |
| Suffering (or happiness minus suffering) per year during life | L | (input) |
| Suffering while dying | D | (input) |
| Exponent for sentience as a function of age relative to reproducing age | α | (input) |
| Formula for η: η(E) = x E^y + z | x, y, z | (inputs) |
| Shape parameter of the Gompertz distribution | η | (computed) |
| Scale parameter of the Gompertz distribution | b | (computed) |
| Actual population (including those too young to be noticed) | P_actual | (computed) |
| Fraction of currently living individuals who will reproduce in the future | R | (computed) |
| Total eggs hatched per year | EPY | (computed) |
| Average suffering per egg hatched | AS | (computed) |
| Population-wide suffering per year | PWSPY | (computed) |
| Suffering per organism in the measured population per year | PWSPY/P_meas | (computed) |

## Sample suffering numbers by species

Following are some rough comparisons among species using the calculator. The numbers are mostly driven by differences in A_r. These figures don't reflect per-species differences in brain complexity; to get the total amount of brain-complexity-weighted suffering of the species, you should multiply each of these numbers by (P_meas of the species) * (brain complexity of the species).

[The numeric cells of this table were computed live by the calculator on the original page and are not preserved in this copy; the superscript letters are footnote markers for the parameter sources.]

| Species | E_laid | hatch/laid | A_r | A_min | PWSPY/P_meas |
|---|---|---|---|---|---|
| elephant^f | | | | | |
| mallard (Anas platyrhynchos) | ^g | ^h | ^i | ^j | |
| windowpane fish (Scophthalmus aquosus) | ^k | ^l | ^m | ^n | |
| southern green stink bug (Nezara viridula)^o | | | | | |
| zooplankton (crustacean kinds) | ^p | ^q | ^r | ^s | |

Note that if you change L from its default value of -5 to, say, +5, then the above table will show that elephants and mallards have net positive lives, but windowpanes, stink bugs, and zooplankton still have net negative lives. This illustrates the basic argument for the predominance of suffering in nature: Most wild animals have many offspring with short lives, which means that in aggregate, the painfulness of their deaths can't be outweighed even by positive lives up to the point of death.

Also, the above calculations assume that the animals reproduce year-round. If this is not the case, you can make adjustments. For example: "Nezara viridula reproduces throughout the year in tropics. In temperate zones this species presents a reproductive winter diapause [...]". So for temperate climates, if the bugs are active, say, for half the year, you could multiply the calculated PWSPY/P_meas number by 1/2. This correction alone is not sufficient to give fully accurate results, because for species where winter interrupts breeding, the population is not stationary, whereas my model assumes a stationary population. But the numbers are probably not vastly far off.

## How suffering varies with parameters

### Reproducing age

This chart plots suffering per organism in the measured population per year for various values of reproducing age, with all other parameters having their default values as previously set:

[Chart: PWSPY/P_meas as a function of A_r; not preserved in this copy.]

As expected, the amount of suffering decreases (i.e., values become less negative) with increasing lifespan. The curve has a similar (though not identical) shape as a plot of -1/A_r. The main reason for this is the pain of death: Since death entails a given amount of pain, total pain per year from death will be (number of deaths per year) * (pain per death), and the number of deaths per year is roughly related to 1/A_r (since a doubled A_r roughly translates to something like half as many eggs hatched per year, although the relationship isn't exact because the distribution of lifespans is complicated). If you set D to 0, you can see that the above graph becomes much more mild.

### Eggs per reproducing mother

This chart plots suffering per organism in the measured population per year for various values of eggs per reproducing mother, with all other parameters having their default values as previously set:

[Chart: PWSPY/P_meas as a function of E_laid; not preserved in this copy.]

You might have expected that suffering would grow almost linearly with the number of eggs per mother, because more eggs means more offspring dying young. However, recall that EPY = R P_actual E / (2 A_r). P_actual depends on E, but it changes fairly slowly with changing E. So ignoring P_actual, we have EPY ∝ R E. The reason this isn't proportional to just E is that R is a decreasing function of E. The intuition is that if E is bigger, then more of the organisms in a given population are non-reproducing, which means the fraction of individuals who will actually become parents is smaller, and this partly offsets E being bigger. R only partly offsets E because when the population size is measured, I assume the scientists taking the measurements count individuals that both will and will not reproduce. If we were to only count reproducing individuals as part of P_meas, then the graph would look mostly linear. Indeed, if you set A_min = A_r, that's what you see. (A small numerical sketch of this sublinearity follows.)
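Here's a minimal Python check of how R E -- the factor through which eggs-per-mother drives EPY, holding P_actual roughly fixed -- grows with E. It uses the default η(E) = E^2/100 and a placeholder reproducing age:

```python
import math
import numpy as np
from scipy.integrate import quad

def RE_product(E, A_r=1.0):
    """R * E for the fitted Gompertz curve, given eggs per mother E."""
    eta = E**2 / 100.0
    b = math.log(1.0 + math.log(E / 2.0) / eta) / A_r
    S = lambda a: np.exp(-eta * (np.exp(b * a) - 1.0))
    area, _ = quad(S, 0.0, np.inf)   # total area under the survivorship curve
    R = (2.0 / E) * A_r / area       # blue rectangle / total area
    return R * E

for E in [5, 50, 500]:
    print(E, round(RE_product(E), 2))
# R*E grows much more slowly than E itself, because the area under the
# survivorship curve shrinks far more slowly than 1/E as E increases.
```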
Another reason the curve doesn't slope down more dramatically is that average suffering per individual, AS, is a decreasing function of E. That's because when E is bigger, organisms on average die sooner (since the survivorship curve is steeper), so the suffering of any given organism is on average smaller due to its having (1) a shorter lifespan and (2) less sentience when it dies, because it's less developed.

### Gompertz shape parameter (η)

If you vary the formula for η in the above calculator, you can see that the final number is extremely insensitive to it. A value of η closer to 0 (meaning that more organisms live longer lives) increases suffering per individual (since an individual's life is longer) but decreases the fraction of reproducing individuals relative to the total population size, because when a population census is taken, there are more non-reproducing animals counted (i.e., the total area under the survivorship curve is bigger). This would be less true if we didn't count as many young, non-reproducing individuals when making population estimates. Indeed, if you set A_min closer to A_r, the final calculated numbers become more sensitive to the η formula.

## Acknowledgments

One of the inspirations for this piece was some data on insect lifespans sent to me by Carl Shulman.

## Footnotes

1. To see the code that generates it, view the source of this page and look for the draw() function.  (back)

2. For values of age a near 0, if we approximate e^x by 1 + x, then S(a; η, b) = exp(-η (e^(b a) - 1)) = exp(-η (1 + b a - 1)) = exp(-η b a), which is the survivorship curve for an exponential distribution with rate parameter λ = η b.  (back)

3. Sometimes the notation m_x is used to denote fecundity at a given age. However, I want E to represent the sum total of fecundity at all ages. For instance, if an animal lays 5 eggs at 2 years of age (m_2 = 5) and another 4 eggs at 3 years of age (m_3 = 4), then E = 9 total eggs.  (back)

4. Yoshiaki Itō argues that the more K-selected survivorship curves are concave down rather than concave up because parental care helps reduce infant mortality.  (back)

5. To see the calculator code, view the source of this page and look for the calculate() function.  (back)

6. The top answer here suggests a female elephant can birth up to ~13 calves in a lifetime, between ages 20 and 75. Let's set A_r as the average of those ages: (20+75)/2 = 47.5. Elephants are probably counted in population estimates as soon as they're born: A_min = 0. Since elephants don't lay eggs, the "egg survivorship rate" is basically 100%, except for miscarriages. These parameter settings, combined with the other default parameter values, imply η = 1.69 and b = 0.016 (a quick numerical check of these values appears after the footnotes), giving the following survivorship curve relative to my model:

[Figure: Gompertz-fit survivorship curve for elephants with η = 1.69 and b = 0.016; not preserved in this copy.]

In contrast, here's an actual survivorship curve for elephants:

[Figure: empirical elephant survivorship curve, with the x axis in units of x'; not preserved in this copy.]

The x' variable on the x axis means "Percentage deviation from duration of pre-reproductive period" (according to pp. 50-51 of the book from which the figure comes). If I eyeball the figure around x' = 130 (roughly half of the maximum lifespan), mortality looks like it's probably ~80%, and mortality in my Gompertz-fit graph about half-way to a maximum lifespan also looks like ~80%. And if I eyeball around x' = 0 (somewhat less than 1/4 of the maximum lifespan), mortality looks to be around ~65%? And on my Gompertz-fit graph, mortality a bit less than 1/4 of the way to the end looks to be about ~40%. So my Gompertz fit is good but not perfect.  (back)
7. This page reports a clutch size of 11-14 eggs (average = 12.5). Assuming a mother lays only one clutch (how true is that?), this gives E_laid = 12.5.  (back)

8. This book reports (p. 70, Table 2.7) that 49% of eggs hatch for US mallards.  (back)

9. This page says mallards begin breeding at 1 year old and have a typical lifespan of 3 years conditional on having reached breeding age. The average number of breeding years conditional on reaching breeding age is thus 3 - 1 = 2, and the average age at which the birds would lay eggs would be (2 average breeding years) + (1 year to start breeding) = 3.  (back)

10. I'm just making this up.  (back)

11. I didn't find numbers for this specific species, so I'll make guesses based on other fish. This source reports an average of 11,141 eggs per mother salmon. This book reports 4500 eggs laid on average for sockeye salmon, with a 99.97% mortality rate between eggs and adult fish. The book also presents the following information on eggs laid per mother for several fish species:

[Table of eggs laid per mother for several fish species, reproduced from the book; not preserved in this copy.]

This book chapter reports numbers of eggs laid per pound of fish that are mostly in the hundreds, thousands, or tens of thousands. Given that a windowpane fish "Seldom exceeds weights of 350 to 400 g", it weighs a bit less than a pound. My overall point estimate for this parameter is 5000, which seems somewhere in the middle among these numbers. By the way, here are some actual survivorship curves for fish (from p. 55 of this book):

[Figure: empirical survivorship curves for several fish species; not preserved in this copy.]

(back)

12. I didn't find numbers for this specific species, so I'll make guesses based on other fish. This study found that 39.68% of Clarias gariepinus fish hatched in the "control" condition. Another study found that an average of 90.3% of rainbow trout eggs hatched in the "control" condition (Table 2, p. 147). The study notes (p. 148) that this number is comparable "with survivals of 90 to 95% for the hatchery program from which the fish were obtained (M. Albert, personal communication, 1976), as well as with published values for other studies using rainbow trout (Anon. 1973)". I'm not sure if these numbers for percent of eggs hatching are higher than would be the case in the wild. But for now I'll assume they're roughly accurate and take a point estimate of 50%.  (back)

13. This document reports that "Sexual maturity occurs at 3-4 years of age [...] (O’Brien et al. 1993)."  (back)

14. This document reports that "Fish spawned in the spring grow quickly and reach sizes of 11-19 cm TL [total length(?)] by September, about four months after spawning. By the following spring, most fish of this cohort are larger than 16 cm TL. Fish spawned in the autumn are 4-7 cm TL in December and reach 18-21 cm TL by the following October (Morse and Able 1995; Able and Fahay 1998). [...] Windowpane attain a maximum total length of about 46 cm (Scott and Scott 1988)." So let's say it takes ~3 months (just making this up) before the fish become big enough to be counted in population estimates.  (back)

15. Below is a figure from this book (p. 84):

[Figure: survivorship curve for Nezara viridula on a logarithmic y axis, with an "H" mark near the start of the curve; not preserved in this copy.]

It's one of many survivorship curves for insects presented in that book, but I'll focus on Nezara viridula. I think this survivorship curve shows egg mortality as part of the curve. Mortality before reproduction is around, say, ~92% (give or take), i.e., 2/E_laid = 0.08, so that E_laid = 25. We can also read off an approximate hatching rate from the graph, since I think the "H" denotes "hatching".
H is about 1/5 of the way down to 90%, but the y-axis scale is logarithmic, i.e., if p is the cumulative mortality between 0 and 1, then the y distance down from the top is proportional to y(p) = log10(1/[1-p]). (To see this, note that for p = 0.99, y(p) = 2, while for p = 0.9, y(p) = 1.) So we want the p such that y(p) = 1/5:

log10(1/[1-p]) = 0.2
1/(1-p) = 1.58
1-p = 0.63
p = 0.37.

This jibes with estimates of egg mortality for other insect species:

• In this life table for a corn earworm, out of 1000 eggs, 382 made it to the first larval stage, suggesting a hatching rate around 0.38.
• This book reports findings by Zhou et al. (2009) that H. oblita hatched at rates between 2.1% and 75.6% depending on soil moisture.

Time to maturity for Nezara viridula in the above figure looks to be around ~65 days, i.e., A_r = 65/365 = 0.18. I'll just randomly guess that A_min might be, say, around the time of the second instar of that species, which from the graph looks like ~10 days = 0.03 years. These parameters imply η = 0.86 and b = 5.7, which has the following fitted Gompertz curve:

[Figure: fitted Gompertz survivorship curve for Nezara viridula with η = 0.86 and b = 5.7; not preserved in this copy.]

If you eyeball this fitted curve against the empirical Nezara viridula curve, it looks like it doesn't fit that well, but part of the reason is that the empirical curve starts from egg laying as age a = 0 and S(a) = 1, whereas my fitted curve starts from egg hatching as a = 0. Once you account for ~37% egg mortality, the fitted curve corresponds with the empirical curve reasonably well, though far from perfectly.  (back)

16. This page reports that "females [...] produce 10 to 40 eggs per clutch [...] (Balcer et al., 1984)". This book notes: "most crustacean zooplankton produce only a few hundred eggs per female (see Paffenhöfer and Harris, 1979, for review)". I took 100 as a round number to serve as my point estimate.  (back)

17. This study reported "Mortality of hatchlings [of] (68 vs. 69%)" for zooplankton, although the authors think these numbers might have been high: "The observed high mortality rates may be due to the limitations of the small static cultures or to the fact that only one species of algae was used as additional food source." A second study found hatching rates of ~1/2 to ~3/4. A third study found relatively high "hatching success" rates within 15 days (around ~60% to ~100% -- see Fig. 2, p. 1977) but noted that past studies had found pretty high apparent egg mortality rates in the wild -- typically in the range of 30-99% per day (Table 1, p. 1972). The study concludes (p. 1985) that one reason for the higher apparent mortality in the wild may be that, in the wild, eggs sink to bottom sediments before hatching. A fourth study found hatching rates of ~40% to ~100% after 35 hours depending on temperature. Overall, the estimates are variable, but I chose a 50% mortality rate as a round number somewhere in the middle of the above data points.  (back)

18. This page reports that eggs "take from 28 to 35 days to develop (Balcer et al., 1984)". There's additional latency between when eggs are laid vs. when they hatch, but in the majority of sources I looked at, this was small (on the order of 5-10 days), and since my model ignores this latency, I'll ignore it here. So let's say 31.5 days to develop: A_r = 31.5/365 = 0.086.  (back)

19. I'm just making this up.  (back)
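As referenced in footnote 6, here's a quick Python check of the elephant fit parameters (E = 13 calves, A_r = 47.5 years, using the fitting formulas from earlier in the piece):

```python
import math

E, A_r = 13.0, 47.5
eta = E**2 / 100.0                                  # = 1.69, matching footnote 6
b = math.log(1.0 + math.log(E / 2.0) / eta) / A_r   # ≈ 0.016, matching footnote 6
print(eta, round(b, 3))
```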
https://ai.stackexchange.com/questions/12919/how-to-handle-proper-names-or-variable-names-in-word2vec/12920#12920
# How to handle proper names or variable names in word2vec?

The input to word2vec is a set of known words (spellings), each tagged with its ID.

• But if you process real text, there can be not only dictionary words but also proper nouns like human names, trademarks, file names, etc. How do you make an input for that?
• If you consider some input where the items are variables -- the input means "x = something", and after some time you access x's value and define some other stuff with it -- what would be the format for this input, and will this approach work at all?

• I wasn't able to understand the second part completely; can you please explain with some sample code? Jun 18 '19 at 13:34

Word2vec works on the concept of typical word co-occurrences. This means that it will work well only for words that occur frequently in the dataset. So proper nouns will not play any role in training the model. You can keep the proper nouns as they are, or use only the words that occur more frequently than some threshold value based on the size of your dataset.

Once you use the value stored in variable x for something, and then change the stored value, it will not reflect anywhere unless you use the variable x again somewhere in the program.

```python
# Example
x = "something something"
print(x + "...")
# Result: something something...

# Changing x
x = "new value"
# This new value of x will not reflect anywhere in the program
# unless you use the variable x again.
```

• I am thinking about interpreting knowledge from the text. For example, let me quote an article: "We propose a new network architecture for learning on graphs. Unlike the traditional multi-head attention mechanism..." Interpreting its information as a sort of structure (in pseudo-code): x = Subject(first_person_plural); main = Fact(propose_info_action, x, y); y = info_method(of: neural_network, creation_time: new, tool_function = z); z = Learning(info, learning_source: graph_information(plural)) Jun 18 '19 at 16:12
• So the items have a kind of relation among each other, plus a link between the sentences where an object occurs. Word2vec seems completely incapable of that, since the info it extracts is more primitive? Jun 18 '19 at 16:13
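As a concrete illustration of the frequency-threshold advice in the answer above, here's a minimal gensim sketch (the tiny corpus is made up; gensim's min_count parameter drops tokens rarer than the threshold, which is one way to keep rare proper nouns out of the vocabulary; assumes gensim 4.x):

```python
from gensim.models import Word2Vec

# Toy corpus: "alice" is a rare proper noun, appearing only once.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "mat"],
    ["alice", "fed", "the", "cat"],
]

# min_count=2 excludes any token that appears fewer than 2 times.
model = Word2Vec(sentences=corpus, vector_size=16, window=2, min_count=2, epochs=10)

print("alice" in model.wv.key_to_index)  # False: too rare to get a vector
print("cat" in model.wv.key_to_index)    # True
```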