content
stringlengths
86
994k
meta
stringlengths
288
619
[Solved] Three forces with magnitudes of 70pounds, | SolutionInn Three forces with magnitudes of 70pounds, 40 pounds, and 60 pounds act on an object at angles Three forces with magnitudes of 70pounds, 40 pounds, and 60 pounds act on an object at angles of −30°, 45°, and 135°, respectively, with the positive x-axis. Find the direction and magnitude of the resultant of these forces. Fantastic news! We've Found the answer you've been seeking! Step by Step Answer: Answer rating: 87% (8 reviews) u 70 cos 30 o i 70 sin 30 o j 6062i 35j v 4...View the full answer Answered By Utsab mitra I have the expertise to deliver these subjects to college and higher-level students. The services would involve only solving assignments, homework help, and others. I have experience in delivering these subjects for the last 6 years on a freelancing basis in different companies around the globe. I am CMA certified and CGMA UK. I have professional experience of 18 years in the industry involved in the manufacturing company and IT implementation experience of over 12 years. I have delivered this help to students effortlessly, which is essential to give the students a good grade in their 3.50+ 2+ Reviews 10+ Question Solved Students also viewed these Calculus questions Study smarter with the SolutionInn App
{"url":"https://www.solutioninn.com/three-forces-with-magnitudes-of-70pounds-40-pounds-and-60","timestamp":"2024-11-04T08:28:05Z","content_type":"text/html","content_length":"79623","record_id":"<urn:uuid:9937f938-7332-4906-aeac-307ee5052a8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00402.warc.gz"}
Can silicon photonics escape the rack? For Arlon Martin, silicon photonics has been a commercial reality since 2004. That’s when his then employer, Monterey Park, California’s Kotura, launched a silicon-based variable optical attenuator (VOA) into the telecoms market. ‘The VOA market is not huge, but we’ve grown to be the number one supplier and are shipping millions of units,’ said Martin, now senior director, marketing, at Mellanox Technologies following its acquisition of Kotura in 2013. Building on lessons learned in VOAs, Mellanox today uses silicon photonics in much more important cable and transceiver roles. In these products, and those of several other companies around the world, silicon photonics is starting to fulfil its ample potential. While such offerings are largely limited to fibre-optic networking, other applications are now building on their progress. Diagnostics and intra-chip communications, for example, are exploiting an increasingly well-established body of knowledge in making use of silicon’s optical properties, and of the integration of germanium and semiconductor laser components. In doing so, they could take even greater advantage of silicon’s greatest promise: precision manufacturing on truly vast scales. In transceivers, Mellanox is able to flip-chip bond indium phosphide (InP) lasers directly onto silicon chips, Martin explained. ‘We split the light using silicon photonics and that enables us to use a single laser for multiple channels,’ he said. ‘We then perform 25GHz modulation with a silicon on-chip modulator. When we do WDM, the multiplexer that combines multiple wavelengths onto a single lightwave pipe – which is then connected to the outside world – is also silicon.’ This approach makes Mellanox the only silicon photonics networking group that’s not using components like lenses, isolators or beam collimators, Martin said. The level of integration in the chip prevents dust entering and interfering with critical functions, so Mellanox doesn’t need expensive hermetic packaging. Using silicon for modulation also cuts the laser source’s price, Martin added. ‘We don’t need an expensive 25GHz laser that only has a couple of suppliers. We can just use a low-speed InP laser “lightbulb” that’s a 10th of the price.’ In March 2016, the company showed how silicon photonics will support future 200Gb/s networks, demonstrating its first key 50Gb/s modulator and detector building blocks. Key to obtaining the necessary speed is forming germanium waveguides, whose sizes are precisely controlled on silicon photonics chips. ‘We use the waveguide’s width to govern the speed,’ Martin said. ‘Most others use thickness. Thickness is a growth step, and it’s hard to control. But width is controlled by a semiconductor mask, which is extremely accurate.’ Such capabilities are converting communication component companies that weren’t originally interested in silicon photonics to the technology, with Martin citing the California-headquartered firms Finisar and Lumentum as examples. The Mellanox executive emphasises that the devices bring a cost advantage that he expects to become even more evident at higher data rates. Yet he admits that volumes are far from living up to the expectation created by some during silicon photonics’ early days of making ‘billions of these devices, like microprocessors’. ‘If you have that expectation, then silicon photonics is a failure,’ Martin conceded. ‘I would position it far differently in that the advantage of silicon photonics is that we don’t need the latest and greatest fab technologies. In our case we’re using eight-inch fabs.’ This ‘piggybacking’ of silicon photonics on an existing technology base is the key to its success to date, agrees Roel Baets from Ghent University (UGent) in Belgium. ‘The more you do that the more you profit from that advantage,’ he said. Baets has worked closely on silicon photonics with Belgian research institute Imec, including in founding the ePIXfab multiproject wafer service, recently taken over by Europractice. ‘As much as possible, people are trying to use the process set that’s standard in a CMOS fab, perhaps with a bit of tweaking,’ Baets stressed. ‘There are a few things that sometimes need an extra non-standard tool.’ Baets has seen what he describes as ‘a fairly steep growth curve’ in the industrial uptake of silicon photonics in recent years. He refers to Carlsbad, California’s Luxtera, which originally made silicon photonics at what was then Freescale Semiconductor’s Austin, Texas CMOS fab. Needing to move to a more advanced CMOS node, Luxtera is now collaborating with Geneva, Switzerland, headquartered silicon powerhouse STMicroelectronics, which has ‘invested heavily in silicon photonics manufacturing’. ‘That was one of the big triggers for STMicroelectronics, to develop an industrial platform, obviously not exclusively for Luxtera, but many industrial users,’ Baets observed. Testing applications Some of Baets’ research involves moving silicon photonics away from telecommunications, shifting wavelengths towards the visible and applications towards the medical. ‘As an example, we want to develop a very tiny coin-like device, which could easily be implanted under the skin of diabetes patients,’ Baets said. The devices will use light from an InP light source to the silicon to perform absorption spectroscopy on tissue fluids, staying in place for six months. They can be powered by wireless induction, continuously monitoring glucose level variations which Baets says ‘can be wild in diabetes patients’. The devices can also exchange data wirelessly. By contrast, another young silicon photonic medical device company is avoiding the fully integrated approach common in optical networking. Cary Gunn, chief executive officer of Genalyte, based in San Diego, California, said that his company’s approach is the opposite of companies like Luxtera, which he also co-founded. Genalyte produces disposable silicon chips that carry 128 passive ring resonators, and the Maverick instruments that contain the active components to interrogate them remotely. ‘We kept the disposable piece’s cost as low as possible,’ Gunn said. ‘There are no active devices, just silicon and biology.’ The biological component is either an antibody or a protein attached to the chip that interacts with target material in the blood sample. The ring resonators trap a standing wave whose frequency shifts as the target material binds. ‘Our machine is taking a small drop of blood and looking at the frequency shift of all 128 sensors simultaneously, doing the tests in about 10 minutes,’ Gunn explained. To date, Genalyte has sold more than 20 instruments to research groups, principally for monitoring whether clinical trial participants are experiencing an immune system response to antibody drugs. They’re also now developing a larger number of diagnostics tests for use in everyday medical testing, although Gunn stressed that such use has not yet been cleared by regulatory ‘It’s all about finding a customer who’s going to use a lot of tests, as we make money on the chips,’ Gunn explained. ‘It’s a razor/razor blade business model. If you look at the number of diagnostic tests that are performed on a daily basis, it’s hundreds of millions just in the US.’ And if this kind of diagnostic use is taken up widely, it could begin to approach the impact that was originally foreseen for silicon photonics. ‘The silicon that would be consumed would be a major fraction of semiconductor industry volume,’ Gunn enthused. ‘I really think that this is the killer app for silicon Researchers at the University of Southampton, UK, are also developing silicon photonics technologies that could be used in chemical and biological sensing, for example detecting terrorist threats. They’re targeting longer infrared wavelengths, accessing the ‘fingerprint region’ in spectroscopy where characteristic absorption lines appear that identify particular chemical species. ‘Goran Mashanovic in my group is looking at how far we can push silicon-on-insulator (SOI),’ explained Southampton’s Graham Reed. A key problem in going to longer wavelengths is that light begins to move into the silicon dioxide cladding areas around silicon waveguides. To prevent this, Mashanovic controls the oxide region’s refractive index by selectively etching it away. His approach can support silicon photonics at wavelengths up to 3.8µm. However, the Southampton scientists are also working at wavelengths as long as 14µm, again by exploiting the capabilities of germanium. Deployment everywhere? Reed and his colleagues are also working on automated ‘passive alignment’, simplifying coupling both optical fibre and semiconductor lasers onto silicon photonics chips. While the Southampton researcher emphasises that this should lower costs further, he’s cautious not to overstate the importance of integrating lasers into silicon photonics. The classic case where people have argued this is necessary is ‘infra- and intra-chip communications’, which were originally seen as a driving force for silicon photonics, according to Reed. ‘Moore’s law is running out of steam and shrinking transistors is not necessarily the answer,’ Reed said. Part of the problem is that the interconnect that transfers data in and out electrically imposes a physical space limit on bandwidth, and is inefficient. Therefore, microprocessors that communicate optically rather than electrically might help computing sustain its traditional rate of advance. Yet Reed disagrees with the idea of putting lasers on microprocessors. ‘Interconnect isn’t the only limiting factor,’ Reed said. ‘Microprocessors run very hot, and a laser would be one of the most power hungry devices you could put on it. So, there’s a good debate to be had whether that laser should be put there. I don’t think it should. Data centres become a very different argument, because the laser’s not sitting on a microprocessor. You haven’t got the same thermal problem.’ Chen Sun, chief technology officer at Ayar Labs in Berkeley, California, added reliability as an argument against integrating lasers and microprocessors. ‘One of the primary modes of failure in an optical module comes from the short average time to failure of a laser,’ he said. ‘System makers remain scared of bringing optics deeper into the system. If you keep the laser separated, this opens up the possibility for keeping the lasers somewhere that is field-serviceable, while moving all other optics deeper into the system, a proposition that is very attractive to system makers.’ Sun has therefore used external lasers in his otherwise highly integrated silicon photonics work with University of California, Berkeley’s Vladimir Stojanovic. With their colleagues, they were able to produce individual silicon microprocessors combining more than 70 million transistors and 850 photonic components that provide logic, memory, and interconnect functions. And the most important element of their work is that the chip was made with a ‘zero-change process’ – that is, on an existing 45nm SOI manufacturing line, without any tweaks. ‘Our designs feature full photonic links with several key device components: grating couplers, spoked-ring modulators and silicon-germanium photodiodes,’ Stojanovic said. ‘Each of these uses the existing process technology to create a device and bypass its inherent constraints, and sometimes even create record-breaking devices – for example record vertical grating couplers. Zero-change process makes it far easier to combine multiple photonic links onto a single piece of silicon, offsetting laser and packaging costs and driving the overall $/Gb/s price down significantly.’ Ayar Labs is now working on using the approach that produced the microprocessor chips commercially. ‘We are currently putting together demonstrators that showcase the bandwidth density, power, form factor, and architectural advantages of the technology,’ said Sun. ‘We believe that the introduction of “just good enough” photonic components will enhance existing applications and enable a whole new range of cost-effective ones,’ added Stojanovic. ‘Adding photonic devices to CMOS chips opens doors to lots of other communication and sensing applications where tight integration, with lots of connections and high-sensitivity, are needed – such as lidar.’ That would be a step closer to what Reed would consider the ultimate success of silicon photonics: deployment everywhere. ‘If eventually you get massive bandwidth to your house it will come down a fibre and there will be a silicon photonics chip on the end,’ the Southampton researcher predicted. ‘There will be silicon photonics in every microprocessor, in every computer.’ He believes that it will take a large commitment to the technology from an industrial giant for that to happen. And while Baets is less specific, he’s similarly optimistic about the prospects of silicon photonics. ‘Before 2005 it was an exotic research field,’ the UGent academic said. ‘Less than 10 years later, we have many major companies keeping a close eye on the technology or developing products. That’s remarkable. Give it another 10 years and it may become really big.’
{"url":"https://www.electrooptics.com/feature/can-silicon-photonics-escape-rack","timestamp":"2024-11-02T15:22:49Z","content_type":"text/html","content_length":"59390","record_id":"<urn:uuid:4b3a2df7-cd1a-42b5-b8b5-ff6f8c9faa69>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00534.warc.gz"}
Topic model Jump to navigation Jump to search In machine learning and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words. The "topics" produced by topic modeling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework, which allows examining a set of documents and discovering, based on the statistics of the words in each, what the topics might be and what each document's balance of topics is. Topic models are also referred to as probabilistic topic models, which refers to statistical algorithms for discovering the latent semantic structures of an extensive text body. In the age of information, the amount of the written material we encounter each day is simply beyond our processing capacity. Topic models can help to organize and offer insights for us to understand large collections of unstructured text bodies. Originally developed as a text-mining tool, topic models have been used to detect instructive structures in data such as genetic information, images, and networks. They also have applications in other fields such as bioinformatics.^[1] An early topic model was described by Papadimitriou, Raghavan, Tamaki and Vempala in 1998.^[2] Another one, called probabilistic latent semantic analysis (PLSA), was created by Thomas Hofmann in 1999.^[3] Latent Dirichlet allocation (LDA), perhaps the most common topic model currently in use, is a generalization of PLSA. Developed by David Blei, Andrew Ng, and Michael I. Jordan in 2002, LDA introduces sparse Dirichlet prior distributions over document-topic and topic-word distributions, encoding the intuition that documents cover a small number of topics and that topics often use a small number of words.^[4] Other topic models are generally extensions on LDA, such as Pachinko allocation, which improves on LDA by modeling correlations between topics in addition to the word correlations which constitute topics. Hierarchical latent tree analysis (HLTA) is an alternative to LDA, which models word co-occurrence using a tree of latent variables and the states of the latent variables, which correspond to soft clusters of documents, are interpreted as topics. Topic models for context information[edit] Approaches for temporal information include Block and Newman's determination of the temporal dynamics of topics in the Pennsylvania Gazette during 1728–1800. Griffiths & Steyvers use topic modeling on abstracts from the journal PNAS to identify topics that rose or fell in popularity from 1991 to 2001. Nelson has been analyzing change in topics over time in the Richmond Times-Dispatch to understand social and political changes and continuities in Richmond during the American Civil War. Yang, Torget and Mihalcea applied topic modeling methods to newspapers from 1829–2008. Mimno used topic modelling with 24 journals on classical philology and archaeology spanning 150 years to look at how topics in the journals change over time and how the journals become more different or similar over time. Yin et al.^[6] introduced a topic model for geographically distributed documents, where document positions are explained by latent regions which are detected during inference. Chang and Blei^[7] included network information between linked documents in the relational topic model, which allows to model links between websites. The author-topic model by Rosen-Zvi et al.^[8] models the topics associated with authors of documents to improve the topic detection for documents with authorship information. HLTA was applied to a collection of recent research papers published at top AI and Machine Learning venues. The results are shown at aipano.cse.ust.hk to help researchers track research trends and identify papers to read, and help conference organizers and journal editors identify reviewers for submissions. In practice researchers attempt to fit appropriate model parameters to the data corpus using one of several heuristics for maximum likelihood fit. A recent survey by Blei describes this suite of algorithms.^[9] Several groups of researchers starting with Papadimitriou et al.^[2] have attempted to design algorithms with probable guarantees. Assuming that the data were actually generated by the model in question, they try to design algorithms that probably find the model that was used to create the data. Techniques used here include singular value decomposition (SVD) and the method of moments. In 2012 an algorithm based upon non-negative matrix factorization (NMF) was introduced that also generalizes to topic models with correlations among topics.^[10] See also[edit] • BigARTM (https://github.com/bigartm/bigartm) • Stanford Topic Modeling Toolkit (http://nlp.stanford.edu/software/tmt/tmt-0.4/) • Gensim – Topic Modeling for Humans (http://radimrehurek.com/gensim/) • topicmodels R package (https://cran.r-project.org/package=topicmodels) • Lettier's LDA Topic Modeling - a PureScript, browser-based implementation of LDA topic modeling. (Tool: https://lettier.com/projects/lda-topic-modeling/ Source: https://github.com/lettier/ • jLDADMM A Java package for topic modeling on normal or short texts. jLDADMM includes implementations of the LDA topic model and the one-topic-per-document Dirichlet Multinomial Mixture model. jLDADMM also provides an implementation for document clustering evaluation to compare topic models. • TopicModelsVB.jl Julia package (https://github.com/ericproffitt/TopicModelsVB.jl) • STTM A Java package for short text topic modeling (https://github.com/qiang2100/STTM). STTM includes these following algorithms: Dirichlet Multinomial Mixture (DMM) in conference KDD2014, Biterm Topic Model (BTM) in journal TKDE2016, Word Network Topic Model (WNTM ) in journal KAIS2018, Pseudo-Document-Based Topic Model (PTM) in conference KDD2016, Self-Aggregation-Based Topic Model (SATM) in conference IJCAI2015, (ETM) in conference PAKDD2017, Generalized P´olya Urn (GPU) based Dirichlet Multinomial Mixturemodel (GPU-DMM) in conference SIGIR2016, Generalized P´olya Urn (GPU) based Poisson-based Dirichlet Multinomial Mixturemodel (GPU-PDMM) in journal TIS2017 and Latent Feature Model with DMM (LF-DMM) in journal TACL2015. STTM also includes six short text corpus for evaluation. STTM presents three aspects about how to evaluate the performance of the algorithms (i.e., topic coherence, clustering, and classification). • Lettier's LDA Topic Modeling - a PureScript, browser-based implementation of LDA topic modeling. (Tool: https://lettier.com/projects/lda-topic-modeling/ Source: https://github.com/lettier/ • A Java implement of HLTA: https://github.com/kmpoon/hlta Further reading[edit] External links[edit]
{"url":"https://static.hlt.bme.hu/semantics/external/pages/LSA/en.wikipedia.org/wiki/Topic_model.html","timestamp":"2024-11-06T23:40:56Z","content_type":"text/html","content_length":"87373","record_id":"<urn:uuid:bdee7f02-a3dd-492b-a38d-5eb3fb13d3b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00642.warc.gz"}
Dynamics of Polymeric Liquids, Vol. 1 Fluid Mechanics Modified on Wednesday, 12 November 2008 01:07 PM by mcurenton — Categorized as: Book Reviews Dynamics of Polymeric Liquids, Vol. 1 Fluid Mechanics Vol. 22, #1, May 1995 by Bird, Armstrong, and Hassager, Second Edition, Wiley Interscience This is a fundamental text for understanding the physics and the calculation of polymeric flow. Newtonian, non Newtonian, and viscoelastic polymers are covered, and the physics is presented for many models of material behavior. Material constants for many common polymers are given for use with the models. The approach is highly mathematical, requiring the understanding of partial differential equations. Often vector notation is used, but the text is readily useful for those not skilled in this discipline. The text is divided into three sections. The first section describes the physics of the various polymer materials in a variety of flow geometrics. The second section presents various equations for describing the viscous behavior of Newtonian and non-Newtonian melts. The third section (about one half of the text) includes the equations for viscoelastic effects in melt behavior. The section on viscoelastic behavior requires vector calculus, and the mathematical conventions are presented in an appendix. Models for many fundamental flow geometrics (such as flow in a slot) are given for many rheological models. Emphasis is on isothermal conditions but some cases with heat transfer are given for Newtonian materials. Dimensionless results are presented for efficient scaling of the results. This is an advanced comprehensive text on the theory of polymeric flow in terms of mathematics. However, it is very useful to the polymer processing engineer and scientist because it contains basic information about fundamental melt flow. - S. Derezinski
{"url":"http://extrusionwiki.com/wiki/Print.aspx?Page=BR-DynamicsOfPolymericLiquidsVol1FluidMechanics","timestamp":"2024-11-11T10:38:25Z","content_type":"application/xhtml+xml","content_length":"5830","record_id":"<urn:uuid:af4686d0-dace-4524-bc84-4969a8b86bb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00769.warc.gz"}
Michael Singer's webpages The London School of Geometry and Number Theory I am the founding director of the EPSRC Centre for Doctoral Training in Geometry and Number Theory, or London School of Geometry and Number Theory for short. The London Mathematical Society I am a member of Council of the London Mathematical Society and the LMS's Research Policy Committee (my term started in late 2012). Mathematical Surveys and Monographs of the American Mathematical Society I have been on the editorial of this series for a few years. From February 2015 I will be chair of the committee. Proceedings of the London Mathematical Society I am an editor of Proceedings of the LMS. Workshop organization Metric and analytical aspects of moduli spaces, Isaac Newton Institute, Cambridge, July--August 2015.
{"url":"http://www.homepages.ucl.ac.uk/~ucahasi/random.htm","timestamp":"2024-11-06T13:31:07Z","content_type":"text/html","content_length":"2479","record_id":"<urn:uuid:b7153c63-3d2d-48cf-a895-21e4f20d773f>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00115.warc.gz"}
Repeated measures ANOVA The Repeated measures ANOVA procedure analyzes groups of related dependent variables that represent different measurements of the same attribute. Note that the order in which you specify within-subjects factors is important. Each factor constitutes a level within the previous factor. In a doubly multivariate repeated measures design, the dependent variables represent measurements of more than one variable for the different levels of the within-subjects factors. For example, you could have measured both pulse and respiration at three different times on each subject. The Repeated measures ANOVA procedure provides multivariate analyses for the repeated measures data. Both balanced and unbalanced models can be tested. A design is balanced if each cell in the model contains the same number of cases. In a multivariate model, the sums of squares due to the effects in the model and error sums of squares are in matrix form. These matrices are called SSCP (sums-of-squares and cross-products) matrices. In addition to testing hypotheses, Repeated measures ANOVA produces estimates of parameters. After an overall F test has shown significance, you can use post hoc tests to evaluate differences among specific means. Estimated marginal means give estimates of predicted mean values for the cells in the model, and profile plots (interaction plots) of these means allow you to visualize some of the relationships easily. Residuals, predicted values, Cook's distance, and leverage values can be saved as new variables in your data file for checking assumptions. Also available are a residual SSCP matrix, which is a square matrix of sums of squares and cross-products of residuals, a residual covariance matrix, which is the residual SSCP matrix divided by the degrees of freedom of the residuals, and the residual correlation matrix, which is the standardized form of the residual covariance matrix. In a weight-loss study, suppose the weights of several people are measured each week for five weeks. In the data file, each person is a subject or case. The weights for the weeks are recorded in the variables weight1, weight2, and so on. The gender of each person is recorded in another variable. The weights, measured for each subject repeatedly, can be grouped by defining a within-subjects factor. The factor could be called week, defined to have five levels. The variables weight1, ..., weight5 are used to assign the five levels of week. The variable in the data file that groups males and females (gender) can be specified as a between-subjects factor to study the differences between males and females. If subjects were tested on more than one measure at each time, define the measures. For example, the pulse and respiration rate could be measured on each subject every day for a week. These measures do not exist as variables in the data file but are defined here. A model with more than one measure is sometimes called a doubly multivariate repeated measures model. Type I, Type II, Type III, and Type IV sums of squares can be used to evaluate different hypotheses. Type III is the default. Post hoc range tests and multiple comparisons (for between-subjects factors): least significant difference, Bonferroni, Sidak, Scheffé, Ryan-Einot-Gabriel-Welsch multiple F, Ryan-Einot-Gabriel-Welsch multiple range, Student-Newman-Keuls, Tukey's honestly significant difference, Tukey's b, Duncan, Hochberg's GT2, Gabriel, Waller Duncan t test, Dunnett (one-sided and two-sided), Tamhane's T2, Dunnett's T3, Games-Howell, and Dunnett's C. Descriptive statistics: observed means, standard deviations, and counts for all of the dependent variables in all cells; the Levene test for homogeneity of variance; Box's M; and Mauchly's test of sphericity. Spread-versus-level, residual, and profile (interaction). Data considerations The within-subjects factor variables should be quantitative. The data file should contain a set of variables for each group of measurements on the subjects. The set has one variable for each repetition of the measurement within the group. A within-subjects factor is defined for the group with the number of levels equal to the number of repetitions. For example, measurements of weight could be taken on different days. If measurements of the same property were taken on five days, the within-subjects factor could be specified as day with five levels. For multiple within-subjects factors, the number of measurements for each subject is equal to the product of the number of levels of each factor. For example, if measurements were taken at three different times each day for four days, the total number of measurements is 12 for each subject. The within-subjects factors could be specified as day(4) and time(3). The multivariate approach considers the measurements on a subject to be a sample from a multivariate normal distribution. Related procedures Use the Explore procedure to examine the data before doing an analysis of variance. If there are not repeated measurements on each subject, use Mixed between-within ANOVA. If there are only two measurements for each subject (for example, pre-test and post-test) and there are no between-subjects factors, you can use the Paired-samples t Test procedure. Obtaining a Repeated measures ANOVA This feature requires Custom Tables and Advanced Statistics. 1. From the menus choose: 2. Enter a Factor name under the Within-subject factors section. The factor names represent the independent variables that constitute the different time points or conditions at which the dependent variables are measured. You can use the Number of levels control to specify the number of levels for the associated factor name. You can optionally click Add a factor to enter additional factor names. 3. Click Select variables under the Measures section to define dependent variables that were repeatedly measured across the within-subjects factor levels. The Select variables dialog allows to specify a variable for each factor level that is specified in the Within-subject factors section. Click OK when done. You can optionally click Add to create additional measures. 4. Optionally, you can select the following options from the Additional settings menu: □ Click Contrasts to test for differences among the factor variables. □ Click Statistics to select which statistics to include in the procedure. □ Click EM means to select the factors and interactions for which you want estimates of the population marginal means in the cells. □ Click Plots to enable the display of charts in the output and to select the charting settings. □ Click Options to specify null hypothesis settings and control of the treatment of missing data. □ Click Save to dataset to add values predicted by the model, residuals, and related measures to the dataset as new variables. 5. Click Run analysis. This procedure pastes GLM: Repeated Measures command syntax.
{"url":"https://www.ibm.com/docs/da/spss-statistics/beta?topic=statistics-repeated-measures-anova","timestamp":"2024-11-04T11:38:13Z","content_type":"text/html","content_length":"20942","record_id":"<urn:uuid:d9e3c6c1-2e5b-4504-ac37-963ae9f64a01>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00546.warc.gz"}
How to Calculate Crypto Profit And Loss? - CryptoStakerFX The concept of crypto Profit and Loss is an easy one to grasp for those trading the asset. Whenever you sell or dispose of your crypto asset, you will be making either a profit or a loss. Here you will find all you need to know about crypto profit and loss calculation, with some useful tips on how to do that. Like any regular business, cryptocurrency trading is usually motivated by a desire to make a gain or profit. However, based on the risks involved in business, there is also a likelihood of incurring a loss if the right decisions are not made due to misplaced or misinformed planning. The implication of such losses can be really telling, especially when the amount invested is significant. Thus, every person trading crypto necessarily has to know how to calculate profit and loss. Failing to do such calculations would amount to making a blind investment and pose trading difficulties as you will never know when the time is favorable to sell your assets and make money. Crypto Profit and Loss Definition The concept of crypto profit and loss is an easy one to grasp for those trading the asset. Whenever you sell or dispose of your crypto asset, you will be making either a profit or a loss. You make a profit on your crypto asset when you sell your crypto for more than the price at which you bought it, and you make a loss when you sell it for less than what it cost you to buy it. Calculation of Crypto Profit and Loss When calculating your profit in cryptocurrency trading, a good point to start is knowing and taking into account your breakeven price. This is important to help you draw proper contrast to determine whether a profit or loss is to be made. Let’s use a simple illustration. Assuming you bought 1 Bitcoin (BTC) a month ago for $8,000. This becomes your breakeven price. Today, when you checked the market, you noticed the price of a Bitcoin has increased to $8,250. To know your potential profit, you simply subtract the breakeven price from the current market price. Subtracting $8,000 from $8,250 will result in $250. This means that if you sell your Bitcoin today, you will be able to earn a profit of $250. You can then decide if you want to make the sale and earn this profit immediately, use it to buy another crypto asset, or leave it with the expectation of earning more on it. This is only a basic illustration of how you can calculate your profit. The action becomes more complex where you have coins in multiple cryptocurrencies, when you trade regularly, and when you have varying price point targets for each cryptocurrency asset. When calculating your profit, there are other metrics of profit and loss that should be borne in mind. So, besides calculating your total profit, you might also need to use some other metrics, depending on the targets you set. These metrics include: • average buy price and sell price, • realized profit (profit made on coins which you sold already, • unrealized profit (profit from calculations done with the current market price), • total profit (the sum of the realized and unrealized profit). Using unrealized profit Most times, traders in the crypto market become impatient, deciding to take profits and leave the market even when there’s an upward trend. In other cases, traders fail to sell their crypto assets when they should. Considering the volatility of the market and the contrasting possibilities, it is important to keep a constant watch on the market. Say you bought BNB for $160 and it rose to $180. You would have already made a profit of $20. But you’re actually yet to sell your assets so you have not really earned the profit until you sell off the assets at such trending prices. Similarly, the price of the BNB can drop a little or go below the price at which you bought it. For example, if you bought BNB for $160 and the current market price is $130, you lost $30. However, you’re not really at a loss, as long you don’t make sales of your assets. Multiplying to get the percentage profit A larger number of traders in the crypto market prefer using the percentage approach to calculate their crypto profit and loss. To achieve this, you can calculate your crypto profit by multiplying by the percentage increase in the value of your crypto asset. To do so, you have to multiply the price at which you bought the crypto (breakeven price or entry price) by the corresponding percentage expression. For instance, if you bought BNB at $2 entry price, and you only want to make 10% of the trade and sell your assets, you would have to multiply your entry price by the corresponding percentage profit of 10%. Thus, it would be $2 your entry price multiplied by 1.1 to get your exit price. The value of your assets would then be $2.2, less the entry price, to arrive at a profit of $0.2. The rule of thumb is to add the number 1 every time you want to multiply by a hundred. Using a spreadsheet One effective way to calculate your crypto profit and loss is by creating a spreadsheet in Microsoft Excel. With the spreadsheet created, you can enter your entry price, the current market price, the number of coins, and the appropriate formula to determine your profit. The disadvantage of using a spreadsheet is the manual input required for all the data you would use to calculate your profit. Considering the daily change in prices in the market, you need to be apt in updating prices. It will become more difficult and tasking when you have different cryptocurrencies. Hence, the spreadsheet method could be time-consuming and the manual input of data could lead to input errors. If however, you’re only starting out with a few coins, then the spreadsheet method is a good way to calculate your crypto profit and loss. Crypto trading calculators The easiest and more effective way to calculate your Crypto profit and loss when trading is the use of crypto trading calculators which are readily available online. So, if you find the use of a spreadsheet to be too tasking and complicated, you can easily search out a trading calculator that works best for you. A good option has been the use of the Binance profit calculator, also known as the Binance Futures Calculator. You can use this calculator to determine your profits and loss, returns, and margins. It can also be used to set a price point target that you plan to attain before selling your There is also the Easy Bitcoin calculator. You can use it to calculate your profits, but it is limited to Bitcoin. Another good option is the Altrady calculator which not only aids the calculation of your profit but also shows your latest trade position and the estimate of your crypto assets when they are converted to US dollars or other currencies. The great thing about the crypto trading calculators is that they are not just convenient, most of them are actually available with a period of a free trial or for a cost-friendly amount. Importance of Effective Crypto Profit and Loss Calculation You’re taught in the cryptocurrency market to “buy low and sell high.” In other words, you have to keep a regular watch on the market and the price trends of each crypto asset, and when you notice the price is low, you strike. Cryptocurrencies that have an upward trend are eventually going to rise in price, and that would be the best time for you to sell yours and make a good gain or profit. But what is really the main deal of calculating Crypto profit and loss ? Why is it so important? The points have briefly been made at the outset of this piece, but here’s a detailed explanation for better insight and clarity. You spend your personal money anytime you buy a cryptocurrency. When you decide later to dispose of the asset at a price higher than what you paid to buy it, you make a profit. If you fail to do a pre-calculation of the crypto profit and loss to accrue from your intended sale, you might end up selling the asset at a lower price than you actually should and thus incur a loss. In addition, a mistake that is common to both new and old traders in some cases is waiting too long after buying a coin before selling or disposing of it. The truth is, unless you’re the strategy of leaving your crypto asset for a few years to earn profits, it could result in a major loss if you wait for too long. It is actually good and profitable to buy low and sell high if you implement the strategy with a high measure of discipline. Otherwise, you are likely going to become tempted easily by an upward trend of a particular coin and wait for it to keep going higher. The risk involved here is that the price of such coins can suddenly drop and this would lead you to a loss rather than making a Setting a really high point of sales price risks the possibility of a sudden drop in the market price before you get the chance to sell. This forms the essence or importance of calculating your Crypto profit and loss. It’s a much better strategy to stick to a plan and a price point that is realistic than simply adopting the approach of waiting for the price to go very high before moving to sell your crypto asset. Bottom Line Being able to calculate your Crypto profit and Loss in trading cryptocurrency is one of the important skills you need to become successful as a trader. You can make use of a personal spreadsheet or a crypto trading calculator. The important thing is you should be aware of how much profit you stand to make to determine whether to sell or hold back until a more favorable time.
{"url":"https://cryptostakerfx.com/how-to-calculate-crypto-profit-and-loss/","timestamp":"2024-11-04T11:53:29Z","content_type":"text/html","content_length":"153546","record_id":"<urn:uuid:fbd85747-1420-4f13-8077-d5a598eb1f9f>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00362.warc.gz"}
How to Find a Degree of Freedom? How to find a degree of freedom? In mathematics, a degree of freedom is the number of independent coordinates that are required to specify a position. In other words, it is the number of variables that can be varied independently. There are two ways to find the degree of freedom of a system. The first way is to count the number of variables that are required to describe the system. For example, if you have a system with two particles, each with three coordinates (x, y, and z), then the degree of freedom would be six (two particles times three coordinates). The second way to find the degree of freedom is to use calculus. This method is more complicated but it can be used to find the degree of freedom for systems with more than one particle. To find the degree of freedom using calculus, you need to take the derivative of the position vector with respect to time. What are Degrees of Freedom? In statistics, the term “degrees of freedom” refers to the number of values in a data set that are free to vary. For example, if you have a data set consisting of 10 observations, there are 9 degrees of freedom. This is because the first value is constrained by the other 9 values; it can only take on 1 of 10 possible values. The second value is also constrained by the other 9 values; it can only take on 1 of 9 possible values. And so forth. The degrees of freedom for a data set with n observations is therefore n-1. The concept of degrees of freedom can be applied to different types of statistical analyses. In regression analysis, for example, the number of independent variables (IVs) in the model determines the degrees of freedom for error (DFE). Degrees of Freedom: Two Samples When considering the degree of freedom for two samples, it’s important to remember that this is a measure of how much variability there is in the data. The degrees of freedom for two samples is equal to the number of observations in each sample minus 1. This means that if there are 30 observations in one sample and 40 observations in another, the degrees of freedom would be 29 and 39 respectively. The total degrees of freedom for two samples is then simply the sum of the degrees of freedom for each sample. In our example, this would be 29 + 39 = 68. So what does this all mean? Essentially, the degree of freedom tells us how many independent comparisons can be made between two samples. In our example, we have 68 independent comparisons that can be made between the two Effective Degrees of Freedom There are many different types of effective degrees of freedom, but they all have one common goal: to make the most efficient use of available resources. By definition, effective degrees of freedom is the number of independent variables that can be varied without violating any constraints. In other words, it is a measure of how much “wiggle room” you have to work with when making decisions. The concept of effective degrees of freedom can be applied to many different areas of life, including business, personal finance, and even relationships. For example, if you are trying to save money on your monthly expenses, you might look at your budget and see that you have $100 left over after all your bills are paid. You could then use that $100 to either save for a rainy day or go out and enjoy yourself. Degrees of Freedom in ANOVA In statistics, the term “degrees of freedom” (DF) is used to describe the number of independent pieces of information that go into the estimation of a parameter. For example, when estimating the population mean from a sample mean, there are two pieces of information: the sample size and the variability of the data. Therefore, the DF for this estimation is 2. The DF for ANOVA can be thought of in a similar way. When estimating the population means from several samples, there are multiple pieces of information that contribute to the estimation. The number of samples, the variability within each sample, and the variability between the samples all play a role in estimating the population means. The DF for ANOVA can be broken down into two components:the within-group degrees of freedom andthe between-group degrees of freedom. How to Calculate the Effective Degrees of Freedom When calculating the effective degrees of freedom, there are a few things to keep in mind. First, you need to know the population size and the number of groups. Second, you need to calculate the variance for each group. Finally, you need to sum the variances and divide by the degrees of freedom. To calculate the effective degrees of freedom, you first need to know the population size and the number of groups. The population size is the total number of individuals in all the groups combined. The number of groups is the total number of distinct groups that make up the population. To calculate the variance for each group, you first need to calculate the mean for each group. Then, for each group, you subtract the mean from every individual score in that group and square the result. Finally, you add up all of these squared values and divide bythe degrees of freedom.
{"url":"http://higheducations.com/how-to-find-a-degree-of-freedom/","timestamp":"2024-11-04T18:25:41Z","content_type":"text/html","content_length":"92763","record_id":"<urn:uuid:9696c139-a4f9-4617-9be8-df9957c49d15>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00679.warc.gz"}
Write cell workbook is not working while using formula i used this excel formula =“=IFS(I2=1,G2/1,I2=2,G2/2,I2=3,G2/3)” in write cell then after writing it im not getting the result instead im getting like mentioned in pic I think it should be separated by semicolon Hope this helps Hi @Sathish_Kumar5 If possible can you share the excel file, if it doesn’t has confidential data. Hi @Sathish_Kumar5 , Could you please share the snap of write range activity.? yeah sure please find the sample excel without main data Filteredcombo.xlsx (9.0 KB) i want the result in s column please find the excel file itself Filteredcombo.xlsx (9.0 KB) this is the formula im using =IFS(I2=1,G2/1,I2=2,G2/2,I2=3,G2/3) this is the formula im using =IFS(I2=1,G2/1,I2=2,G2/2,I2=3,G2/3) please let me know if u need anything else Hey @Sathish_Kumar5 Please share a snap of the activity in studio in such a way the properties pane is also visible. Hey @Sathish_Kumar5 , Please attached the file where I have given the formula for you, I am using excel Version Office 16 and I couldn’t find the ‘IFS’ in the drop down so went with a slightly different approach. Filteredcombo.xlsx (9.1 KB) To Implement the same in the Workbook range activity, Please use “=IF(I2=1,G2/1,IF(I2=2,G2/2,IF(I2=3,G2/3,”““Value in I Column is not in range 1-3"””)))" I also found the official Microsoft page on ‘IFS’ where they have listed the Excel versions which support the function, Please check if the Excel version you are using is one of them that may be the reason the function is not working as expected too. Happy Automating.! If this response has helped you, please mark it as a solution. 1 Like The problem may be due to incorrect use of the IFS function or incorrect formatting of the formula. Make sure that the conditions in the IFS function match your requirements correctly. Check the correct spelling of the cells. You may have provided incorrect cell references or incorrect column and row addresses. Verify that cells I2 and G2 contain the expected values. The problem may be with the data you are trying to process. Very huge thanks for your efforts and yeah it worked 1 Like My pleasure @Sathish_Kumar5 1 Like Actually i used this formula to write just added double quote ok “=IF(I2=1,G2/1,IF(I2=2,G2/2,IF(I2=3,G2/3,”“Value in I Column is not in range 1-3"”)))". so in this im facing only one issue like im not getting zero so is there anything i can do to get zero values Hi @Sathish_Kumar5 , I am having difficulty understanding your comment. When exactly do you want 0 as result.? Can you give a sample case .? 1 Like Hi actually its my fault it is working fine i didnt extend the code till end so it is working fine… anyway thanks for quick response No Problem, Always happy to help. 1 Like This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.
{"url":"https://forum.uipath.com/t/write-cell-workbook-is-not-working-while-using-formula/731060","timestamp":"2024-11-04T10:56:37Z","content_type":"text/html","content_length":"78882","record_id":"<urn:uuid:4f10eb80-85f1-4a73-8691-27b3ebdcaa45>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00473.warc.gz"}
Two-Factor ANOVA BrainVoyager v23.0 Two-Factor ANOVA The two-way ANOVA model is an extension of the one-way ANOVA to two grouping factors where each item (i.e. scanned person) is grouped in two ways. Like the one factorial ANOVA, the model requires one continuous dependent variable as input that is analyzed with respect to two categorical between-subjects (grouping) factors, each with two or more levels (e.g. factor "disorder" with the levels "healthy participants", "disorder A", "disorder B", and factor "gender" with the two levels "male" and "female"). The model allows to analyze whether the two factors interact or whether they exhibit independent main effects on the dependent variable. In order to provide the dependent variable, it is recommended to create a VMP or SMP file containing one contrast map per subjerct as input. A GLM may also be provided but it is only possible to select one of the condition beta estimates as the dependent variable, i.e. you can (at present) not specify a ontrast when using the GLM as input. The snapshot below shows a VMP file with one entry per subject that has been prepared using the mentioned contrast map per subject tool in the Overlay GLM Options dialog. Unbalanced Data The between-subjects ANOVA model has been originally developed for balanced data, i.e. with an equal number of subjects assigned to each factor level combination, which are also called treatments. When data is unbalanced, there are different ways to calculate the sums of squares for ANOVA effects. There are at least 3 approaches, commonly called Type I, II and III sums of squares (introduced by the SAS statistics software). Which type to use has led to an ongoing controversy in the field of statistics (for an overview, see Heer (1986)). However, it essentially comes down to testing different hypotheses about the data. Note that when data is balanced, the factors are orthogonal, and types I, II and III all give the same results. In a model with two factors A and B, there are two main effects, and an interaction, AB. The full model is usually represented by SS(A, B, AB). Other models are represented similarly: SS(A, B) indicates a model with no interaction, SS(B, AB) indicates a model that does not account for effects from factor A, and so on. The influence of particular factors (including interactions) can be tested by examining the differences between models. For example, to determine the presence of an interaction effect, an F-test of the models SS(A, B, AB) and the no-interaction model SS(A, B) would be carried out. The Type I model tests one effect after other effects, i.e. first it tests the effect of factor A (SS(A)), then factor B (SS(A, B) - SS(A)), and then the interaction (SS(A, B, AB) - SS(A, B)). This sequential sums of squares calculation leads to different results depending on the order of the main effects. It can be shown that this approach is testing the first factor without controlling for the other factor. Because of these undesired properties, the Type I model is not used in BrainVoyager. The Type II model tests for each main effect after the other, i.e. it tests the effect of factor A as SS(A, B) - SS(B), and the effect of factor B as SS(A, B) - SS(B). While this leads to results that are independent of the order of factors, this approach assumes that there are no significant interaction effects. Since this assumption is too limited for massive voxel-level testing, this approach is also not used in BrainVoyager (but it might be added as an option for ROI analyses in the future since it is more powerful than the type III model in case of no significant interaction). The Type III model tests for a main effect after the other factor and interaction, i.e. the effect of factor A is tested using the difference SS(A, B, AB) - SS(B, AB) and factor B is tested as SS(A, B, AB) - SS(A, AB). This approach is, thus, valid in the presence of significant interaction AB. It should be noted, however, that it is often not interesting to interpret a main effect if interactions are present. Because of its least restrictive assumptions, the Type III model is used In BrainVoyager. In the following, the two-factor ANOVA is demonstrated first using a whole-brain analysis followed by a region-of-interest analysis. The data used is from a study by Zutphen et al. (2016). Specification of the Design When providing a prepared VMP (or SMP) with one dependent variabler as input in the Import VMPs field, the dialog initially selects a one-factorial between-subjects design. In order to specify that a two-factorial design is desired, increase the No. of between-subjects factors spin box to 2 (see dialog below). Furthermore, the names of the two factors can be changed to reflect the respective design by double-clicking the default name in the Factor list box; in the example used to illustrate the two-factor ANOVA, the two factors are labelled as "Group" and "Site". The levels for a selected factor are shown in the Level list box and the number and names of the levels can be adjusted by using the No. of levels spin box and by clicking on the default names in the Levels list box, respectively. In the example, the first factor (selected in the screenshot above) has two levels "BPD" and "NPC" representing a patient and control group and the levels of the second factor represent different scanning sites. It is recommended to save the specified design with factor and level neames to disk for later use (e.g. VOI analysis) using the Save Design button. The final step before running the two-factor ANOVA model is to assign the subjects to the respective group cells (factor level combinations). This can be achieved using the entries shown in the Table tab (see screenshot above). While one can enter the grouping in the Full Table mode, for purely between-subjects designs it is more convenient to select the Groups tab, which restricts the table to the grouping assignment column(s) and allows to save/load grouping information in grouping (.GPA) files. The grouping sub-table shows two columns representing the two factors of the specified design. The cells need to be filled with level values starting with value 1 for the first factor level. The entries in a row for the two factors specify the level combination (cell) to which the corresponding participant will be assigned. The table in the snapshot above shows 51-59 of the full (97) participant list; subjects 51-55 are assigned to the cell [1, 3], which corresponds to the level combination "BPD" - "Luebeck" (borderline patient group, measured at site Luebeck) while subjects 56-59 are assigned to the group [2, 1], which corresponds to the level combination "NPC" - "Maastricht" (non-patient control group measured at site Maastricht). While it is recommended to assign the same number of subjects to each cell (balanced design), the subsequent analysis handles also unbalanced designes (see explanation above). After completing the grouping assignment, it is recommended to save the information to disk for later use using the Save GPA button. The whole-brain two-factor ANOVA can now be started by clicking the GO button. Inspecting Two-Factor ANOVA Map Results After running the two-factor ANOVA, a statistical map is shown testing for the effect of the first factor ("Group" in the example data). The F map for the first factor of the example data is shown in the screenshot below. In order to test other effects of interest, the Overlay RFX ANCOVA Tests dialog can be invoked from the Analysis menu (or from the context menu of the VMR document). For two-factor ANOVA designs, the dialog offers testing for the main effect of factor A (resulting F map shown in the screenshot above for the example data), testing for the main effect of factor B and testing for the interaction between the two factos (see snapshot below). These tests can be selected by chedking the F test, factor A, F test factor B and F test, interaction A x B items followed by clicking the OK button. Besides these F tests, specific contrasts can be tested by selecting the Specify contrast option. After selecting this option, the contrast table in the upper part of the dialog will be enabled and the desired contrast can be entered by clicking into the cells repeatedly to cycle through the contrast values +1, -1 and 0. Other contrast values can be entered by double-clicking the cell to bring up a text editing field allowing to enter a number. Note that contrasts need to follow the typical rules of ANOVA models as described in standard statistical text books. In the example below, a contrast is entered testing for the main effect of patient vs control group but leaving out one scanning site. ROI Analysis The data from regions-of-interest (ROIs, i.e. VOIs in volume space and POIs in cortex surface space) are analyzed in the same way as for other designs. The only difference is that a two-factor ANOVA model needs to be specified and data from desired ROIs needs to be available. In order to avoid circularity, the ROIs should be determined from independent data, e.g. from anatomical coordinates or better from additioal functional localizers. It is usually also acceptable to use ROIs from the same data as long as the subsequent contrasts are not biased by the contrast used to select the ROIs (i.e. the subsequent contrasts need to be orthogonal to the selecting contrasts). While the dependent measures can be manually entered (one value per subject), they can be also loaded from specific file types (".ATD" and ".DPV"). Both the VOI Analysis Options and the Patch-Of-Interest Analysis Options dialog contain routines to save data from multi-subject GLMs / VMPs and SMPs in ".ATD" ("ATD = ANCOVA table data) format, which can be read directly into the ANCOVA dialog. To start a ROI ANOVA analysis, the Use ANCOVA with table data option has to be checked in the Input tab of the ANCOVA dialog. While the number of subjects might be determined directly from a .ATD file later, it is recommended to also specify the number of subjects to analyze in the Number of subjects field. For convenience, the ANCOVA dialog is opened and prepared for analysis when exporting ROI values from the VOI Analysis Options dialog that can be invoked by clicking the Options button in the Volume-Of-Interest Analysis dialog. In the screenshot below, a previously defined VOI ("Insular RH") has been loaded that is now available for subject-specific value extraction. Since we want to read the values from the previously prepared multi-subject VMP file, the Extract values from multi-subject VMP option need to be selected in the Create input data for ANCOVA field (see screenshot below). In case that the multi-subject VMP file (usually the same used also for the whole-brain analysis) has been already loaded in the Volume Maps dialog, the VMP file entry will automatically be filled with the respective file name. In case that the VMP file has not been already loaded, it can be done using the Browse button next to the VMP file text field. The Extract Values button can now be clicked in order to extract the subject-specific values from the specified VOI. The extracted data is not only saved to disk but also directly loaded into the ANCOVA dialog (see snapshot below). The presented ANCOVA dialog shows in the second column the extracted values per subject as desired but the default model and group assignment is not yet correct. While the design can be adjusted manually, it is easier to reload the design created and saved for the whole-brain analysis (see above), which will change the default model to a two-factor model and sets the factor and level names. In case that the grouping values for the third (factor A) and fourth column (factor B) were also saved earlier they can be loaded by switching to the Groups tab and using the Load GPA button. The prepared data table for a two-factorial ROI ANOVA looks than similar to the one shown below. After clicking the Compute button, the ANOVA results are presented in a window and also saved to disk as a HTML file. The output first presents a title indicating the name of the ROI from which the data was extracted (see snapshot below). This is followed by two tables indicating the name and levels of factor A and factor B. The next table shows the cell means containing the average value of the dependent variable for each group (factor-level combination) as well as row/column values showing the factor level values averaged across all levels of the other factor respectively. The numbers in brackets next to each entry indicate the number of subjects assigned to each group. As you can see in the example output below, the study has a rather unbalanced design but can be analyzed using the Type III sums-of-squares model as described above. The ANOVA table is shown below the cell means presenting the results of testing for main effects (factor A, factor B) and/or the interaction effect between A and B. In the example data, the insular ROI data shows a highly significant main effect "Group", a nearly significant main effect "Site" and no interaction effect. Below the Two-Factor ANOVA table, critical minimal distances between cell means are shown that maybe helpful in case of more than two levels in order to decide which cells differ significantly from each other. Herr DG (1986). On the History of ANOVA in Unbalanced, Factorial Designs: The First 30 Years. The American Statistician, 40(4), 265-270. Zutphen L et al. (2016). Copyright © 2023 Rainer Goebel. All rights reserved.
{"url":"http://brainvoyager.com/bv/doc/UsersGuide/StatisticalAnalysis/RandomEffectsAnalysis/TwoFactorANOVA.html","timestamp":"2024-11-11T17:18:56Z","content_type":"text/html","content_length":"51083","record_id":"<urn:uuid:becc4222-c61e-4e4f-85b4-9013b8a220de>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00356.warc.gz"}
getPriorParameters {bhmbasket} R Documentation This function provides default prior parameters for the analysis methods that can be used in performAnalyses. n_worth = 1, tau_scale = 1, w_j = 0.5 method_names A vector of strings for the names of the methods to be used. Available methods: c("berry", "exnex", "exnex_adj", "pooled", "stratified") target_rates A vector of numerics in (0, 1) for the target rate of each cohort n_worth An integer for the number of subjects the variability of the prior should reflect response rate scale, Default: 1 tau_scale A numeric for the scale parameter of the Half-normal distribution of \tau in the methods "berry", "exnex", and "exnex_adj", Default: 1 w_j A numeric in (0, 1) for the weight of the Ex component in the methods "exnex" and "exnex_adj", Default: 0.5 Regarding the default prior parameters for "berry", "exnex", and "exnex_adj": • "berry": The mean of \mu is set to 0. Its variance is calculated as proposed in "Robust exchangeability designs for early phase clinical trials with multiple strata" (Neuenschwander et al. (2016)) with regard to n_worth. The scale parameter of \tau is set to tau_scale. • "exnex": The weight of the Ex component is set to w_j. For the Ex component: The target rate that results in the greatest variance is determined. The mean of \mu is set to that target rate. The variance of \mu is calculated as proposed in "Robust exchangeability designs for early phase clinical trials with multiple strata" (Neuenschwander et al. (2016)) with regard to n_worth. The scale parameter of \tau is set to tau_scale. For the Nex components: The means of \mu_j are set to the respective target rates. The variances of \tau_j are calculated as proposed in "Robust exchangeability designs for early phase clinical trials with multiple strata" (Neuenschwander et al. (2016)) with regard to n_worth, see also getMuVar. • "exnex_adj": The weight of the Ex component is set to w_j. For the Ex component: The target rate that results in the greatest variance is determined. The mean of \mu is set to 0. The variance of \mu is calculated as proposed in "Robust exchangeability designs for early phase clinical trials with multiple strata" (Neuenschwander et al. (2016)) with regard to n_worth, see also getMuVar. The scale parameter of \tau is set to tau_scale. For the Nex components: The means of \mu_j are set to the 0. The variances of \tau_j are calculated as proposed in "Robust exchangeability designs for early phase clinical trials with multiple strata" (Neuenschwander et al. (2016)) with regard to n_worth, see also getMuVar. • "pooled": The target rate that results in the greatest variance is determined. The scale parameter \alpha is set to that target rate times n_worth. The scale parameter \beta is set to 1 - that target rate times n_worth. • "stratified": The scale parameters \alpha_j are set to target_rates * n_worth. The scale parameters \beta_j are set to (1 - target_rates) * n_worth. A list with prior parameters of class prior_parameters_list Stephan Wojciekowski Berry, Scott M., et al. "Bayesian hierarchical modeling of patient subpopulations: efficient designs of phase II oncology clinical trials." Clinical Trials 10.5 (2013): 720-734. Neuenschwander, Beat, et al. "Robust exchangeability designs for early phase clinical trials with multiple strata." Pharmaceutical statistics 15.2 (2016): 123-134. See Also performAnalyses setPriorParametersBerry setPriorParametersExNex setPriorParametersExNexAdj setPriorParametersPooled setPriorParametersStratified combinePriorParameters getMuVar prior_parameters_list <- getPriorParameters( method_names = c("berry", "exnex", "exnex_adj", "pooled", "stratified"), target_rates = c(0.1, 0.2, 0.3)) version 0.9.5
{"url":"https://search.r-project.org/CRAN/refmans/bhmbasket/html/getPriorParameters.html","timestamp":"2024-11-02T21:04:08Z","content_type":"text/html","content_length":"7808","record_id":"<urn:uuid:e103dcb2-59d3-4164-835a-6195d527a2ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00718.warc.gz"}
Quantum Computing Moore’s Law states that the number of transistors integrated on a silicon chip doubles every year. By 2020 to 2025, the size of the transistor will be too small and it will generate heat that cannot withstand by the conventional silicon technology. The number of transistors that can integrate on the chip reaches its maximum. The speed of the devices will reach its limit. The current INTEL uses 32nm silicon technology. If scaling progresses, it will reduce the size, then the electrons start tunneling through micro-thin barriers between the wires corrupting the signals. In 1982, physicist Richard Feynman proposed the idea of creating machines based on laws of quantum mechanics instead of laws of classical physics. In 1985, David Deutsh published a paper which describes that quantum circuits are universal. He describes about the universal quantum computer. In 1994, Peter Shor derived the first quantum algorithm, to factor the large numbers in polynomial time. He used entanglement and superposition properties of quantum mechanics to find the prime factors of integers which are used in the quantum encryption technology. In 1997, Lov Grover developed a quantum search algorithm which is very faster. After that, David Cory, A.F.Fahmy, Timothy Havel, Issac Chuang, Neil Gershenfeld published a paper about quantum computers which is based on bulk spin resonance or thermal ensembles. In 2001, researchers used Shor’s algorithm .In 2005, scientists built a semiconductor chip ion trap which lead to scalable quantum computing. In 2007, first commercial quantum computer demo was published by D-wave on Orion system called superconducting adiabatic quantum computer processor. But the scientists said that it did not achieved the quantum speed yet. They are not sure that this system is efficient than conventional computers. Research is still going on. Figure below shows the world’s first commercial quantum computer built by D-Wave systems. Quantum Computing Processor First solid state quantum processor is built in 2009. Researchers created a silicon chip based on quantum optics that runs Shor’s algorithm. Further modifications are made in 2010. In 2011, Scientists made quantum teleportation. Teleportation is the technique used for transfer information in the quantum level without any signal path. In the same year D-wave systems first commercial 128 qubit processor. In 2012 IBM scientists said that they had several findings in quantum computing with superconducting integrated circuits. In 2015 they had further advancements towards the realization of practical quantum computer. Quantum Computing It is based on the theory of quantum mechanics. Quantum mechanics is the study of small objects (nanoscopic scales). It overcomes the inability of classical mechanics.Quantum mechanics provides mathematical description of dual particle like and wave like behavior. Atomic theory and corpuscular theory of light are the earliest versions of quantum mechanics. It produces high accurate output. Quantum mechanics is on the basis of quantum theories of matter and electromagnetic radiation. It explains the nature and behavior of energy and matter on the quantum. It uses the quantum mechanical phenomenon for the operations on data. These operations done at atomic or subatomic level. Classical mechanics deals with the macroscopic system while quantum mechanics deals with the microscopic system which is in atomic and subatomic level. The first generation of computer uses vacuum tubes, the second generation computer is based on transistors, the third generation computers are based on IC (Integrated circuits) and the fourth generation computers are based on the micro processors. The sizes of the computer system (components) are reduced and at last the classical theory will fail to explain. This explains the essentiality of quantum theory. Quantum Computer is the computational system which directly uses quantum mechanical phenomenon to perform operations on the data. The superposition and entanglement are the quantum mechanical phenomenon used by the quantum computers. The computational operation performs on quantum data which is called as Qubits. The quantum computers are entirely different from the classical computer based on transistors. In classical computers, the data are encoded in to binary digits, either 1 or 0. The classical computer process only binary information. That is either ON state or OFF state. But in quantum computation, it uses quantum bits (Qubits) are used for encoding the data. So it can encode infinite amount of information. It uses both ON and OFF at the same time. Consider a situation; trying a calculation with a lot of different numbers to find the correct one. In classical computers, it tried each number in each turn which takes time for computation. In quantum computer, it can try different numbers simultaneously by using the superposition of states which increases the computation speed and accuracy. The classic computer performs operations by using logic gates while quantum computer uses quantum logic gates. The qubit is the unit of quantum information. The physical objects such as electrons, atoms, etc uses the qubits. It can exist as either 0, 1 or superposition of 0, 1 at the same time. A single qubit has any quantum superposition of 2 states. A pair of qubits has any quantum superposition of 4 states. Three qubits has any quantum superposition of 8 states. Generally, a quantum computer with n qubits has 2^n different states at the same time. The quantum computer operates by intitially setting the qubits in a controlled initial state. These qubits are manipulated by using fixed sequence of quantum logic gates. The particular sequence of quantum logic gates is called quantum algorithm. At last they provide the correct solution. The qubit uses two energy levels of an atom. They are excited state and ground state. Properties of quantum mechanics The single qubit forced into superposition of 0 and 1 simultaneously which can denote by the addition of two state vectors. The qubit of superposition can exist in both states at the same time. Two particles overlap each other without interfering each other. Polarization of single photon is an example. It has the propery of exist in multiple states simultaneously. In the quantum system, If the particle have states a and b, then it also have the states α1a+α2b, whereα1 and α2 are complex numbers. It is the most important property in quantum information. It can exhibits correlation between states which are in superposition. That is, in quantum mechanics, the properties of particles can be changed even if there is no direct interaction between the particles. If the particles are linked, then the change in property of one particle will affect the another. The particles can linked and change the properties without having direct interaction. After that, the states of two particles are called entangled. Consider two qubits exist in the superposition of 0, 1 state.The measurement of one qubit is always correlated with the measurement of another qubit if the qubits are entangled. Quantum Computer is a machine which performs calculations based on the quantum mechanics .It is like the classical computer which cannot use transistors and diodes. The single atom transistor lead to the building of quantum computer which is controlled by electrons and qubits. Quantum Computer In 1930, Quantum Turing machine developed by Alan Turing is the theoretical model of such computer which is known as universal quantum computers. Todays computer can only manipulate one of the two bit 0,1. Quantum computers can encode the qubits exist in the superposition. Qubits represents atoms,ions, electron and control devices that working together will act as memory and processor. The quantum computer uses the multiple states at the same time, so the quantum computers are much efficient than traditional computers. The quantum computers composed of given number of qubits is fundamentally different from same number of buts composed by classical computers.The classical computer requires storage of 2^n complex coefficients for representing n qubit system. So it is clear that qubits can hold more information than classical computers. Consider a classical computer that can operates on three bit register. Then the different states are 2^3=8. The different states are 000,001,010,011,100,101,110,111. In case of deterministic computer, the computer is in exactly one of these states with probability 1. In the case of probabilistic computer, the computer is in any one of the possible states. Let the probabilistic states are A,B,C,D,E,F,G,H;where A is the probability of state 000,B is the probability of state 001,and so on. The sum of these probabilities must be 1. Similarly the states of three qubit quantum computer can be considered as a,b,c,d,e,f,g,h. The coefficients of three bit quantum computers have complex values. The sum of squares of coefficients magnitudes must be equal to 1. i.e, IaI^2 + IbI^2+….IhI^2=1. The probability of each state is represented by squared magnitudes. The complex number encodes magnitude and directions in the complex plane. The phase difference between any two states (coefficients) represents a meaningfull parameter. This is the fundamental difference between probabilistic classical computer and quantum computer. The three bit state in classical computer and three qubit state in quantum computers are eight dimensional vectors. But these bits are manipulated to perform operations in different ways. In both cases the systems must be initialized. i.e, In the case of classical computers, the sum of probabilities must be 1. In case of quantum computers, The sum of squares of states must be 1. After execution of algorithm, the results are read off. In classical computers, one three bit state is obtained from the samples of probability distribution of states. In quantum computers, three qubit state are measured ( quantum state equivalent to classical state by squaring the magnitudes of quantum states)which is the sample from distribution. This destroys the original quantum state. By repeating the processes such as initializing, running, measuring of quantum computer result will increases the probability of getting correct answers. The transistor replaces vacuum tubes. Quantum computers will replaces silicon chips. Most of the quantum computing research is still theoretical. Still the most advanced quantum computers cannot manipulate the qubits beyond a limit. Researches are going on to develop the quantum computers. It can solve problems very faster than classical computers by using the best algorithms such as Shor’s algorithm. The Shor’s algorithm solves discreate logarithm problem and factorization of integers in polynomial time. Scientists already built quantum computers to perform certain calculations. But it takes ten more years to build the practical desktop quantum computer. It can use for several applications such as communication, security, and encryption purposes etc. Advantages of Quantum Computing • The computing speed and accuracy increases. • Highly secure communication is possible. • High efficiency compared to classical computer. Disadvantages of Quantum Computing • The quantum systems are too small. • To measure the properties of quantum system without disturbing it is impossible. • It is impossible to predict the properties of a particle in the quantum system. • Qubit possess in different states (many values), but it produces only one result for one execution. • Different execution required to produce the desired output. • It is impossible to copy the qubits. Applications of Quantum Computing • Factorization: - For classical computer, the integer factorization of large integers ( product of prime numbers) is difficult. But quantum computers can solve this problems using shor’s algorithm. So the quantum computers can used in cryptographic applications. It is useful for encoding and decoding of secret information. It is highly secure. The third party cannot read the message. The current encryption methods are simpler. So the information sent through internet are not safe. Quantum computer can easily break the encypted messages used today. • Solving complex mathematical problems. • Searching huge amount of data :- Quantum computes can find something from large amount of data. for example, To find two equal numbers from a large amount of data, the classical computer have to try all numbers which needs lot of steps. The quantum computer can do it with few steps. • Google image search • Teleportation- Technique used for transfer information in the quantum level without any signal path. • Ultra- Secure Communication- By quantum computing, more informations are encoded and transmitted. It is possible to transmit information without any signal path. So there is no interfere path for extracting information. • Error correction and error detection can be improved. • Quantum Networking which uses internet and intranet networking. • Molecular simulations helps pharmacists and chemists to study about the interation of products with each other and with biological processes.eg:- drug interact with disease. • Accurate weather forecasting - Quantum computing can analyze all data and give better idea about when and where the bad weather occurs.It will help for taking preparation to save millions of lives before natural disasters occurs. Also it helps to take precautions to prevent the disasters. Accurate weather forecasting - Application of Quantum computing • Discovery of new drugs.Molecular simulations helps pharmacists and chemists to study about the interation of products with each other and with biological processes.eg:- drug interact with disease. Chemists have to test the several molecular combinations for finding the best one which can prevent disease. It is an expensive process and also needs many years. By using the quantum computing, it try different molecular combinations and find the best one.It will reduce the cost and time for developing new drugs. • Traffic Control- The quantum computing can quickly calculate the optimal route.The quantum computer can compute the length of all routes simultaneously and quickly finds the best one.It can control air based and ground based traffic. Traffic Control- Application of quantum computing • Military and defence purposes- Satellites collecting millions of images and videos. It is difficuilt to analyze by a classical computer. But the quantum computer can sort this images a faster as a classical computer. • Space exploration- Using keplers space telescope, astronomers discovered over 2000 planets outside our solar system. The quantum computer can spot more planets and give more information using the telescopic images. • Artificial Intelligence -Quantum computer can learn from experience.It can self correct and even modify the program code.The machine learning ability of quantum computer help to do things faster and efficiently. It can used for artificial intelligence experiments.NQAIL(New Quantum Artificial Intelligence Lab) at NASA's Ames research centre in silicon valley will be operated by NASA, Google and USRA. Google and NASA use the quantum computer for developing more advancements in Artificial Intelligence.
{"url":"https://www.mepits.com/tutorial/355/trending-technologies/quantum-computing","timestamp":"2024-11-08T08:56:49Z","content_type":"text/html","content_length":"72859","record_id":"<urn:uuid:32c12c6e-9f03-4ae6-9241-50e4d7c6c18e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00709.warc.gz"}
The unsung heroes of Cryptography Cryptography is full of amazing stories. From the construction of amazing Ciphers to some intriguing cryptanalysis of well known ciphers. History has seen it all. However, there has always been people who weren’t credited for their miraculous contributions over the years due to whatever reasons. Today, it’s time to learn about those unsung heroes of cryptography who dedicated all their life for the love of what they were best known for. The best and very first techniques of cryptanalysis known as “the frequency analysis” is due to less known ninth century scientist, the author of 290 books in the fields of medicine, astronomy, mathematics, linguistics and music. Best known as the “Philosopher of the Arabs”, Abū Yūsūf Ya’qūb ibn Is-hāq ibn as-Sabbāh ibn as-Sabbāh ibn’omrān ibn Ismaīl al-Kindī His greatest treatise entitled: A Manuscript on Deciphering Cryptographic Messages was rediscovered in 1987 in the Sulaimaniyyah Ottoman Archive in Istanbul. Below is the first page of al-Kindī’s manuscript “On Deciphering Cryptographic Messages”, containing the oldest known description of cryptanalysis by frequency analysis. Before moving on to the next story. Allow me to quote Sun-Tzu, author of one of the greatest treatise on military strategy “ART OF WAR”. Sun-Tzu stated: Nothing should be as favourably regarded as intelligence; nothing should be as generously rewarded as intelligence; nothing should be as confidential as the work of intelligence. Sun-Tzu was familiar with his notion of intelligence which is why with precise measurements he decided to put it this way. Intelligence can be thought of as an abstract concept and putting it into physical existence is mere perseverance, patience and hardwork. Le Chiffre Indéchiffrable Le Chiffre Indéchiffrable translated into “The Indecipherable Cipher”. The title was accredited to the “Vigenère Cipher” developed into it’s final form by: Blaise de Vigenère The Cipher’s strength lies in using 26 distinct cipher alphabets to encrypt a message. This was achieved by drawing the “Vigenère Square”. Vigenère’s work was actually culminated in his: Traicté des Chiffres (A Treatise on Secret Writing’) which was published in 1586. The decipherment of the Vigenère Cipher was accredited to Friedrich Kasiski. However it was Charles Babbage who broke the Vigenère cipher but it was never revealed during his lifetime, because his work was thought to be of no value to British forces in Crimea. It’ll be unfortunate if i’d have missed to include the German masterpiece by inventor “Arther Scherbius”. Yes! You guessed it right. It’s about the “German Enigma”. The Enigma has standout as the greatest Cryptographic masterpiece ever built. Undoubtedly it is. Scherbius combined a number of ingenious components turning it into a formidable and intricate cipher machine. Though the technicalities of the machine can be found in: “TheCodebook” by Simon Singh , “Enigma:The Battle for the Code” by Hugh Sebag Montefiore, “Alan Turing:The Enigma” by Andrew Hodges and “Seizing the Enigma” by David Kahn Since “Alan Turing” is accredited with the one who broke the German Enigma. Very few people know that it were three Polish cryptographers: Jerzy Rózycki, Marian Rejewski and Henryk Zygalski who had broken Enigma. Infact they had not just broken Enigma, but had been reading the messages for more than five years in 1932. The youngest of these three cryptographers “Jerzy Rózycki” can be seen in the picture below. Jerzy Rózycki, who broke Enigma with the other two Polish cryptographers, drowned in 1942 while on his way from Algeria to France. Talking about the modern cryptography, there won’t be a person in the field of Cryptography or Computer Science who isn’t familiar with the modern day giants: Whitfield Diffie, Ralph Merkle and Martin Hellman Everone is familiar with their contribution as the solution to the “Key Distribution Problem” which is regarded as the greatest Cryptographic achievement since the invention of the monoalphabetic cipher, over two thousand years ago. However, there was a brilliant,, exceptional and prodigious talent who had gained a reputation as a “cryptoguru” named : James Ellis Ellis’s idea was very similar to those of Diffie, Hellman and Merkle with the exception that the trio had reached the milestone in 1975, while he achieved the same in 1969; which meant that he was several years ahead of them. While he realised that there was a need of a “one-way function”, he tried to experiment with few mathematical functions but didn’t progressed any further because he wasn’t a mathematician. While revealing this idea to his bosses, they were impressed by the idea but unaware of as how to take advantage of it. For like three years, the bright minds at Britain’s GCHQ struggled to find a one-way function. Then in 1973, a new member joined the team by the name: Clifford Cocks Cock’s, a recent graduate from the Cambridge with specialization in Nunber theory who knew a little about encryption and other military and diplomatic communications was assigned a mentor to help him through his first weeks at GCHQ. After about six weeks, his mentor “Nick Patterson” told him about the idea for Public-Key Cryptography and the need for a mathematical function that could fit the need. Cocks recall’s: It took me no more than half an hour from start to finish. I was quite pleased with myself. I thought, “Ooh, that’s nice. I’ve been given a problem and i’ve solved it.” These are few among those innumerous of the Unsung hereos in the field of cryptography who have made a huge contribution in one way or the other. Their work can’t be overshadowed by what was the demand of the time. Whether it was due to the need of military secrecy or the lack of vision which lead to such and such. Their contribution is still to be acknowledged by us. While concluding, there’s nothing better then to quote the following: Real courage is doing the right thing when nobody’s looking. Doing the unpopular thing because it’s what you believe, and the heck with everybody.
{"url":"https://aaqibb13.medium.com/the-unsung-heroes-of-cryptography-3565dc26f138?source=user_profile_page---------3-------------4c74ade932c5---------------","timestamp":"2024-11-13T15:27:47Z","content_type":"text/html","content_length":"124006","record_id":"<urn:uuid:f2c70c5a-7f7d-445e-aa74-5314f5ea872e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00082.warc.gz"}
End of Season Statistics, 2024 The source data comes exclusively at this point from Baseball-Reference. I started doing this in 1996, at which time my process involved the Tuesday (AL) and Wednesday (NL) editions of USA Today which printed the full season stats, a hand calculator (not even scientific!) and paper and pencil. So I am very appreciative that the next year I learned about spreadsheets and that now Baseball-Reference exists. The basic philosophy behind these stats is to use the simplest methods that have acceptable accuracy. Of course, "acceptable" is in the eye of the beholder, namely me. I use Pythagenpat not because other run/win converters, like a constant RPW or a fixed exponent are not accurate enough for this purpose, but because it's mine and it would be kind of odd if I didn't use it. If I seem to be a stickler for purity in my critiques of others' methods, I'd contend it is usually in a theoretical sense, not an input sense. So when I exclude hit batters from a particular calculation, I'm not saying that hit batters are worthless or that they *should* be ignored; it's just easier not to mess with them and not that much less accurate (note: hit batters are actually included in the offensive statistics now). I also don't really have a problem with people using sub-standard methods (say, Basic RC) as long as they acknowledge that they are sub-standard. If someone pretends that Basic RC doesn't undervalue walks or cause problems when applied to extreme individuals, I'll call them on it; if they explain its shortcomings but use it regardless, I accept that. Take these last three paragraphs as my acknowledgment that some of the statistics displayed here have shortcomings as well, and I've at least attempted to describe some of them in the discussion below. The League spreadsheet is pretty straightforward--it includes league totals and averages for a number of categories, most or all of which are explained at appropriate junctures throughout this piece. The advent of interleague play has created two different sets of league totals--one for the offense of league teams and one for the defense of league teams. Before interleague play, these two were identical. I do not present both sets of totals (you can figure the defensive ones yourself from the team spreadsheet, if you desire), just those for the offenses. The exception is for the defense-specific statistics, like innings pitched and quality starts. The figures for those categories in the league report are for the defenses of the league's teams. However, I do include each league's breakdown of basic pitching stats between starters and relievers (denoted by "s" or "r" prefixes), and so summing those will yield the totals from the pitching side. While Manfred runners continue to wreck havoc on runs and runs allowed, the end of seven inning doubleheaders has allowed extra inning runs to be cleanly removed from team totals by simply removing runs scored/allowed in extra innings. So I have added columns for “EIR” and “EIRA” (denoting extra inning runs) to the team and league spreadsheets, along with columns for “RegR” and “RegRA” (“regular runs”, for lack of a better term), which are equal to runs less extra inning runs. I have also added “RegIP”, which is innings pitched excluding extra innings, to the league totals and will use the resulting RA from RegRA and RegIP as the league average baseline for player value calculations. I should acknowledge here that there are some issues with just looking at runs in the first nine innings apart from the obvious that you’re ignoring what happens in extra innings, however tainted the runs figures from extra innings may be. If you wanted to do a true per out rate stat using regular runs, you would also have to remove outs/innings from extra innings from the team total. Alternatively, you could do what I have done for years and just use games played as the denominator for team run rates. This is admittedly a shortcut, but removing the extra inning games makes it a much more defensible one, as it eliminates one of the major sources of divergence in game length across teams. There are still rain-shortened games that don’t make it a full nine to contend with, but these are generally rare and not of particular concern. Of course, teams not batting in the bottom of the ninth, and truncated bottom of the ninths, are a problem in general, but one that is always implicitly ignored if you work with a team’s actual runs scored or allowed. My goal here is to reasonably clean up the 2024 statistics so that they can be used in the manner that these statistics have traditionally been used, not to eliminate all of the biases that were inherent to that traditional usage (although I certainly do think it is good to be aware of them). The Team spreadsheet focuses on overall team performance--wins, losses, runs scored, runs allowed. The columns included are: Park Factor (PF), Winning Percentage (W%), Expected W% (EW%), Predicted W% (PW%), wins, losses, runs and runs allowed (along with their aforementioned extra inning and “regular” counterparts), Runs Created (RC), Runs Created Allowed (RCA), Home Winning Percentage (HW%), Road Winning Percentage (RW%) [exactly what they sound like--W% at home and on the road], RegR/G, RegRA/G, Runs Created/G (RC/G), Runs Created Allowed/9 (RCA/G), and RegRuns Per Game (the average number of regular runs scored and allowed per game). Those are all park adjusted except RegRPG. I based EW% on “regular” runs rather than the actual runs and runs allowed totals. This means that what they are estimating what a team’s winning percentage would have been based just on their performance in the first nine innings of games. Since Runs Created are not affected by Manfred runners, PW% is still based on RC/G and RCA/G. The runs and Runs Created figures are unadjusted, but the per-game averages are park-adjusted, except for RPG which is also raw. Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. The formula is: A = H + W - HR - CS B = (2TB - H - 4HR + .05W + 1.5SB)*.76 C = AB - H D = HR Naturally, A*B/(B + C) + D. Park factors are based on five years of data when applicable (so 2020 - 2024), include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of games in total in the sample. Camden Yards is treated as a new park effective 2022. There are factors for both runs and home runs. The initial PF (not shown) is: iPF = (H*T/(R*(T - 1) + H) + 1)/2 where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for AL and 16 for NL). Then the iPF is converted to the PF by taking x*iPF + (1-x), where x = .1364*ln(G/162) + .5866. I will expound upon how this formula was derived in a future post. It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list Fenway Park with a 1.02 PF, that means that it actually increases RPG by 4%. In the calculation of the PFs, I did not take out “home” games that were actually at neutral sites (of which there were a rash in 2020). The Blue Jays multiple homes make things very messy, so I discarded their 2020 and 2021 data. There are also Team Offense and Defense spreadsheets. These include the following categories: Team offense: Plate Appearances, Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Walks and Hit Batters per At Bat (WAB), Isolated Power (SLG - BA), R/G at home (hR/G), and R/G on the road (rR/G) BA, OBA, SLG, WAB, and ISO are park-adjusted by dividing by the square root of park factor (or the equivalent; WAB = (OBA - BA)/(1 - OBA), ISO = SLG - BA, and SEC = WAB + ISO). Team defense: Innings Pitched, BA, OBA, SLG, Innings per Start (IP/S), Starter's eRA (seRA), Reliever's eRA (reRA), Quality Start Percentage (QS%), RA/G at home (hRA/G), RA/G on the road (rRA/G), Battery Mishap Rate (BMR), Modified Fielding Average (mFA), and Defensive Efficiency Record (DER). BA, OBA, and SLG are park-adjusted by dividing by the square root of PF; seRA and reRA are divided by PF. The three fielding metrics I've included are limited it only to metrics that a) I can calculate myself and b) are based on the basic available data, not specialized PBP data. The three metrics are explained in this post, but here are quick descriptions of each: 1) BMR--wild pitches and passed balls per 100 baserunners = (WP + PB)/(H + W - HR)*100 2) mFA--fielding average removing strikeouts and assists = (PO - K)/(PO - K + E) 3) DER--the Bill James classic, using only the PA-based estimate of plays made. Based on a suggestion by Terpsfan101, I've tweaked the error coefficient. Plays Made = PA - K - H - W - HR - HB - .64E and DER = PM/(PM + H - HR + .64E) Next are the individual player reports. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as a reliever. If a pitcher has 40 appearances, then they are included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included as a reliever (this allows some swingmen type pitchers who wouldn’t meet either the minimum start or appearance standards to get in). For all of the player reports, ages are based on simply subtracting their year of birth from 2024. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a lot easier, and I am more interested in comparing the ages of the players to their contemporaries than fitting them into historical studies, and for the former application it makes very little difference. The "R" category records rookie status with a 1 for rookies and a 0 for everyone else. Also, all players are counted as being on the team with whom they played/pitched (IP or PA as appropriate) the most. For relievers, the categories listed are: Games, Innings Pitched, estimated Plate Appearances (PA), Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS Run Average (dRA), Strikeouts per Game (KG), Walks per Game (WG), Guess-Future (G-F), Inherited Runners per Game (IR/G), Batting Average on Balls in Play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR). IR/G is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men being used in low-leverage situations already out of hand. For starting pitchers, the columns are: Wins, Losses, Innings Pitched, Estimated Plate Appearances (PA), RA, RRA, ERA, eRA, dRA, KG, WG, G-F, %H, Pitches/Start (P/S), Quality Start Percentage (QS%), RAA, and RAR. RA and ERA you know--R*9/IP or ER*9/IP, park-adjusted by dividing by PF. The formulas for eRA and dRA are based on the same Base Runs equation and they estimate RA, not ERA. * eRA is based on the actual results allowed by the pitcher (hits, doubles, home runs, walks, strikeouts, etc.). It is park-adjusted by dividing by PF. * dRA is the classic DIPS-style RA, assuming that the pitcher allows a league average %H, and that his hits in play have a league-average S/D/T split. It is park-adjusted by dividing by PF. The formula for eRA is: A = H + W - HR B = (2*TB - H - 4*HR + .05*W)*.78 C = AB - H = K + (3*IP - K)*x (where x is figured as described below for PA estimation and is typically around .94) = PA (from below) - H - W eRA = (A*B/(B + C) + HR)*9/IP To figure dRA, you first need the estimate of PA described below. Then you calculate W, K, and HR per PA (call these %W, %K, and %HR). Percentage of balls in play (BIP%) = 1 - %W - %K - %HR. This is used to calculate the DIPS-friendly estimate of %H (H per PA) as e%H = Lg%H*BIP%. Now everything has a common denominator of PA, so we can plug into Base Runs: A = e%H + %W B = (2*(z*e%H + 4*%HR) - e%H - 5*%HR + .05*%W)*.78 C = 1 - e%H - %W - %HR cRA = (A*B/(B + C) + %HR)/C*a z is the league average of total bases per non-HR hit (TB - 4*HR)/(H - HR), and a is the league average of (AB - H) per game. Also shown are strikeout and walk rate, both expressed as per game. By game I mean not nine innings but rather the league average of PA/G. I have always been a proponent of using PA and not IP as the denominator for non-run pitching rates, and now the use of per PA rates is widespread. Usually these are expressed as K/PA and W/PA, or equivalently, percentage of PA with a strikeout or walk. I don’t believe that any site publishes these as K and W per equivalent game as I am here. This is not better than K%--it’s simply applying a scalar multiplier. I like it because it generally follows the same scale as the familiar K/9. Pitcher plate appearances are estimated by this formula: PA = K + (3*IP - K)*x + H + W Where x = league average of (AB - H - K)/(3*IP - K) Then KG = K*Lg(PA/G) and WG = W*Lg(PA/G). G-F is a junk stat, included here out of habit because I've been including it for years. It was intended to give a quick read of a pitcher's expected performance in the next season, based on eRA and strikeout rate. Although the numbers vaguely resemble RAs, it's actually unitless. As a rule of thumb, anything under four is pretty good for a starter. G-F = 4.46 + .095(eRA) - .113(K*9/IP). It is a junk stat. JUNK STAT JUNK STAT JUNK STAT. Got it? %H is BABIP, more or less--%H = (H - HR)/(PA - HR - K - W), where PA was estimated above. Pitches/Start includes all appearances, so I've counted relief appearances as one-half of a start (P/S = Pitches/(.5*G + .5*GS). QS% is just QS/GS; I don't think it's particularly useful, but Doug's Stats include QS so I include it. I've used a stat called Relief Run Average (RRA) in the past, based on Sky Andrecheck's article in the August 1999 By the Numbers; that one only used inherited runners, but I've revised it to include bequeathed runners as well, making it equally applicable to starters and relievers. One thing that's become more problematic as time goes on for calculating this expanded metric is the sketchy availability of bequeathed runner data for relievers. As a result, only bequeathed runners left by starters (and "relievers" when pitching as starters) are taken into account here. I use RRA as the building block for baselined value estimates for all pitchers. I explained RRA in this article, but the bottom line formulas are: BRSV = BRS - BR*i*sqrt(PF) IRSV = IR*i*sqrt(PF) - IRS RRA = ((R - (BRSV + IRSV))*9/IP)/PF Given the difficulties of looking at the league average of actual runs due to Manfred rules, I decided to use eRA to calculate the baselined metrics for relievers. So they are no longer based on actual runs allowed by the pitcher, but rather on the component statistics. For starters, I will use the actual runs allowed in the form of RRA, but compared to the league average regRA. Starters’ statistics are not influenced by the Manfred runners, but the league average RA is still artificially inflated by them, so the league regRA should be a better measure of what the league average RA would be in lieu of Manfred runners. RAA (relievers) = (.951*LgRegRA) - eRA)*IP/9 RAA (starters) = (1.025*LgRegRA) - RRA)*IP/9 RAR (relievers) = (1.11*LgRegRA) - eRA)*IP/9 RAR (starters) = (1.28*LgRegRA) – RRA)*IP/9 All players with 250 or more plate appearances (official, total plate appearances) are included in the Hitters spreadsheets (along with some players close to the cutoff point who I was interested in). Each is assigned one position, the one at which they appeared in the most games. The statistics presented are: Games played (G), Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Runs Created (RC), Runs Created per Game (RG), Speed Score (SS), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), and Runs Above Replacement (RAR). PA is equal to AB + W+ HB. Outs are AB - H + CS. BA and SLG you know, but remember that without SF, OBA is just (H + W + HB)/(AB + W + HB). Secondary Average = (TB - H + W + HB)/AB = SLG - BA + (OBA - BA)/(1 - OBA). I have not included net steals as many people (and Bill James himself) do, but I have included HB which some do not. BA, OBA, and SLG are park-adjusted by dividing by the square root of PF. This is an approximation, of course, but I'm satisfied that it works well (I plan to post a couple articles on this some time during the offseason). The goal here is to adjust for the win value of offensive events, not to quantify the exact park effect on the given rate. I use the BA/OBA/SLG-based formula to figure SEC, so it is park-adjusted as well. Runs Created is actually Paul Johnson's ERP, more or less. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available. RC = (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB)*.310 RC is park adjusted by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the Runs Created per Game rate, is RC/O*25.5, as 25.5 is a good long term approximation for the number of (AB – H + CS) per game. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad (see “Rate Stat Series” for a thorough record of my thoughts on individual batter rate statistics). Several years ago I switched from using my own "Speed Unit" to a version of Bill James' Speed Score; of course, Speed Unit was inspired by Speed Score. I only use four of James' categories in figuring Speed Score. I actually like the construct of Speed Unit better as it was based on z-scores in the various categories (and amazingly a couple other sabermetricians did as well), but trying to keep the estimates of standard deviation for each of the categories appropriate was more trouble than it was worth. Speed Score is the average of four components, which I'll call a, b, c, and d: a = ((SB + 3)/(SB + CS + 7) - .4)*20 b = sqrt((SB + CS)/(S + W))*14.3 c = ((R - HR)/(H + W - HR) - .1)*25 d = T/(AB - HR – K)*450 James actually uses a sliding scale for the triples component, but it strikes me as needlessly complex and so I've streamlined it. He looks at two years of data, which makes sense for a gauge that is attempting to capture talent and not performance, but using multiple years of data would be contradictory to the guiding principles behind this set of reports (namely, simplicity. Or laziness. You're pick.) I also changed some of his division to mathematically equivalent multiplications. The baselined stats are calculated in the same basic manner the pitcher stats are, using the league average RG: HRAA = (RG – LgRG)*O/25.5 RAA = (RG – LgRG*PADJ)*O/25.5 RAR = (RG – LgRG*PADJ*.73)*O/25.5 PADJ is the position adjustment, based on 2010-2019 offensive data. For catchers it is .92; for 1B/DH, 1.14; for 2B, .99; for 3B, 1.07; for SS, .95; for LF/RF, 1.09; and for CF, 1.05. As positional flexibility takes hold, fielding value is better quantified, and the long-term evolution of the game continues, it's right to question whether offensive positional adjustments are even less reflective of what we are trying to account for than they were in the past. But while I do not claim that the relationship is or should be perfect, at the level of talent filtering that exists to select major leaguers, there should be an inverse relationship between offensive performance by position and the defensive responsibilities of the position. Not a perfect one, but a relationship nonetheless. An offensive positional adjustment than allows for a more objective approach to setting a position adjustment. Each player has his own weighted average position adjustment based on their actual games played by position (ideally, innings would be used, but that gets messy when taking DH into account, and Baseball-Reference has a very handy table of games by position). Again, I have to clarify that I don’t think subjectivity in metric design is a bad thing - any metric, unless it’s simply expressing some fundamental baseball quantity or rate (e.g. “home runs” or “on base average”) is going to involve some subjectivity in design (e.g linear or multiplicative run estimator, any myriad of different ways to design park factors, whether to include a category like sacrifice flies that is more teammate-dependent). The replacement levels I have used here are very much in line with the values used by other sabermetricians. This is based both on my own "research", my interpretation of other's people research, and a desire to not stray from consensus and make the values unhelpful to the majority of people who may encounter them. Replacement level is certainly not settled science. There is always going to be room to disagree on what the baseline should be. Even if you agree it should be "replacement level", any estimate of where it should be set is just that--an estimate. Average is clean and fairly straightforward, even if its utility is questionable; replacement level is inherently messy. So I offer the average baseline as well. For position players, replacement level is set at 73% of the positional average RG (since there's a history of discussing replacement level in terms of winning percentages, this is roughly equivalent to .350). For starting pitchers, it is set at 128% of the league average RA (.380), and for relievers it is set at 111% (.450). The spreadsheets are published as Google Spreadsheets, which you can download in Excel format by changing the extension in the address from "pubhtml" to "pub?output=xlsx", That way you can download them and manipulate things however you see fit.
{"url":"https://walksaber.substack.com/p/end-of-season-statistics-2024?isFreemail=true&post_id=149925973&publication_id=748972&r=v2cg&triedRedirect=true&utm_campaign=unexpected-points-added&utm_medium=web&utm_source=unexpected-points-added162","timestamp":"2024-11-07T16:24:43Z","content_type":"text/html","content_length":"181238","record_id":"<urn:uuid:d0558699-c990-45e4-b677-df2559f0f2b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00697.warc.gz"}
Swift Programs Of Kids Math For 2019 Studying platforms, on-line communities, math instruments and on-line courses for increased math learners and lovers. Math college students in high-stage courses and packages can benefit from sources targeted on difficult see here areas like superior algebra and calculus. The best way to assist prepare is with on-line ACT classes or in-particular person classes with experienced ACT lecturers. Effortless Methods In Prodigy Math Game – The Best Routes A math website is the perfect way to supply extra help for others, create a place for college students to share notes and examine, and permit anybody to study Math Kids new or refresh existing math expertise. Math Playground is a math web site for teenagers that helps assist the training of math ideas by way of both games and puzzles. Treat your self to an immersive studying experience with our ‘Including One by Making a Mannequin’ recreation. Partaking animated learning movies, video games, quizzes, and actions to encourage kids on their unique Free Math Websites For Kids learning path. College students will be taught to unravel issues and explain their considering utilizing mathematician George Polya’s four-step approach. They provide all kinds of enjoyable video games and ready-to-go courses and even actual and simple coding languages, and by utilizing this web site, kids can build up their math and coding skills Maths Websites For Kids. Due to this, many students study a lot from visible displays, like utilizing small objects (coins, marbles) when studying their multiplication and division. An important different to workbooks and flashcards to help youngsters observe math and build lifelong expertise is using enjoyable and interactive math games. Math games offer an pleasing and accessible approach to studying math, offering an alternate method for college students to method and assess their mathematical talents. IXL affords an almost infinite quantity of follow questions to develop and take a look at college students’ Maths abilities for all ages. Teaching English and math ideas through kids’s learning Math For Kids games might be really participating and satisfying for college students. Find out about money with these cool video games for youths. Math Prodigy Advice – Insights Introducing math via games helps kids be taught and luxuriate in math. On-line math games for youths incorporate challenges and math problems, so kids Free Math Websites For Kids be taught number recognition, counting, fundamental operations and equations simply. Log in to save lots of your whole favorite Coolmath video games. For math video games, apply assessments, and other math web sites, try these 77 free math resources for college kids. Using on-line video games to show each math and ELA might be really efficient as they simplify troublesome and challenging concepts and make them appear much less scary to young learners. Whereas not as enjoyable as some other math web sites with games this can be a good in-between for these eager to balance studying and enjoyable. Yow will Learning Math For Kids discover houndreds of free online math video games right here and new enjoyable math games for teenagers are added commonly.
{"url":"https://artemid.pl/swift-programs-of-kids-math-for-2019/","timestamp":"2024-11-02T05:52:43Z","content_type":"text/html","content_length":"17243","record_id":"<urn:uuid:040117d8-5106-4dcf-8ee8-bcf2c9a86315>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00551.warc.gz"}
Utilization of the ExactMPF package for solving a discrete analogue of the nonlinear Schrödinger equation by the inverse scattering transform method We are revisiting the problem of solving a discrete nonlinear Schrödinger equation by the inverse scattering transform method, by use of the recently developed ExactMPF package within MAPLE Software. ExactMPF allows for an exact Wiener-Hopf factorization of matrix polynomials regardless of the partial indices of the matrix. The package can be widely used in various problems, where Wiener-Hopf factorization as one of the effective mathematical tools is required, as its code has already been disclosed. The analysis presented in this paper contains not only numerical examples of its use, but is also supported by appropriate and accurate a priori estimations. The procedure itself guarantees that the ExactMPF package produces all computations arithmetically exactly, and a detailed numerical analysis of various aspects of the computational algorithm and approximation strategies is provided in the case of a finite initial impulse. • discrete analogue of the nonlinear Schrödinger equation • error-free calculation • Padé approximation • the ExactMPF package • the inverse scattering transform • Wiener-Hopf factorization Dive into the research topics of 'Utilization of the ExactMPF package for solving a discrete analogue of the nonlinear Schrödinger equation by the inverse scattering transform method'. Together they form a unique fingerprint. • Mishuris, G. (PI) 01 Sept 2021 → 31 Aug 2025 Project: Externally funded research • Mishuris, G. (PI) 01 Jul 2021 → 30 Jun 2023 Project: Externally funded research
{"url":"https://research.aber.ac.uk/en/publications/utilization-of-the-exactmpf-package-for-solving-a-discrete-analog","timestamp":"2024-11-06T15:07:46Z","content_type":"text/html","content_length":"76502","record_id":"<urn:uuid:55f65446-2b14-41b6-bc67-655c9b861856>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00614.warc.gz"}
Newton’s laws Newton’s laws were invented by Newton and they form the basis of physics and they help in explaining most of the phenomena that are seen around us. Most of our everyday experiences can be explained exclusively by the Newton’s laws. The Newton’s laws were among his findings which were a fundamental factor of the Enlightment age when Europe went from a state of almost unquestioning, church-endorsed belief in the infallibility and correctness of the Aristotelian worldview to a state where people, for the ?rst time in history, let nature speak for itself. The Newton’s laws are mainly three: the Law of Inertia, Law of Dynamics and the Law of Reaction. We shall look at each of the three laws briefly. Law of inertia (Newton’s first law of motion) The newton’s law of inertia states that “an object at rest or in uniform motion in a reference frame, remains so unless it is acted upon by an external force.” This law shows that if an object remains at rest/stationary or in a constant uniform motion (without a change in its velocity) then the net external force acting on the object is zero. Net external force acting on an object is equal to the sum of all the forces acting on the object. If there is a net force acting on the body, the body will start moving if it was at rest or there will be a change in its velocity if it was in motion. Let’s take for example, a box that is placed on the ground and it is at rest (not in motion). If a person exerts a force (pushes or pulls) on the box, the box will begin to slide on the ground. But if two people exerts forces in opposite directions; one pushes and one pulls and the amount of forces exerted by each person is equal to the other, then the box will remain at rest (it will not move). But if one person exerts more force than the other, then the box will slid in the direction of the excess force. On the other hand if an object is in motion and a force is exerted to it, the velocity of the object will either reduce or increase according to the direction of the net applied force. If the net force is in the same direction as the direction of the body’s velocity, then the velocity will increase. But if the net force is in the opposite direct to the velocity of the body, then the velocity of the body will reduce. As mentioned during our Physics tuition classes, mathematically the law of inertia is stated as: Where dv/dt is the derivative of the velocity which yields the change in velocity that is also called acceleration.
{"url":"https://tuitionphysics.com/2015-dec/newtons-laws-2/","timestamp":"2024-11-09T03:34:48Z","content_type":"text/html","content_length":"94083","record_id":"<urn:uuid:1850f482-b798-453b-9f41-c23a67e123d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00805.warc.gz"}
Project Ideas These are queueing research ideas that I’m interested in, but haven’t gotten around to yet. If you’re interested in any of them as a potential collaborator or advisee, let me know! I’m particularly interested in working with either Northwestern students, undergrad or grad, or people who already have a background in queueing theory research. Last updated: November 3rd, 2024. Table of contents I’ve separated my ideas into five categories: • Ideas that I think are quite promising, where I have promising directions for a result. • Ideas that are just starting out, or where I don’t quite know how I’d prove a result. • Ideas that I’m actively pursuing. • Ideas I pursued to some kind of completion, and am not currently pursuing. I’m keeping these around for archival purposes. • Ideas I’m no longer interested in. I’m keeping these around for archival purposes. I’m most interested in collaborating on ideas in the “quite promising” category, but all of them are worth looking at and discussing. This page is for ideas that look promising but I’m not pursuing yet, or were in that stage earlier but I am now pursuing. I typically have several more projects that I’m actively pursuing, that never went through the “look promising but not pursuing” stage, and so aren’t listed here. The order within a category is roughly chronological. Quite promising Scheduling to minimize E[T^2] See Section 8.3.5 of my thesis. Setting: M/G/1 scheduling for tail, e.g. minimize E[T^2]. Scheduling policy: t/s Policy: Priority is t/s, where t is a job’s time in system and s is the job’s size. Higher is better. Without preemption to start, for simplicity of analysis. Note that this is an “Accumulating Priority Queue”, but with infinite continuous classes, not 2 classes. Waiting time distributions in the accumulating priority queue, David A. Stanford, Peter Taylor & Ilze Ziedins First step: Implement this policy. Compare it against FCFS, SRPT. Poisson arrivals, medium variance, medium load. Does it do well empirically for E[T^2]? Future steps: Use APQ methods to characterize steady state. Poisson point process of (size, time in system). Characterize for arbitrary joint (Size, Accumulation rate) distribution, specialize to above setting. Characterize transform of response time, moments of response time. Achievable region method of lower bounds Background: The key idea behind E[T] lower bounds is that if you prioritize jobs by remaining size, you minimize the amount of work below a given remaining size in the system. Using WINE, this bound can be converted into a response time bound, proving that SRPT is optimal. Similar proofs work for Gittins and in various multiserver systems. We can call this the “achievable region” method - for each relevant work threshold, there is an achievable region of how much relevant work there can be in steady state, depending on the policy. Idea: If you prioritize jobs by time in system (FCFS), you minimize the amount of work above each time threshold in the system. “Work with time-in-system above t”. However, this cannot be converted into a bound on tail metrics such as E[T^2], because there is not a conversion from work to number of jobs or response time or anything like that. However, we can get additional bounds by assigning jobs different deadlines based on their sizes, and then prioritizing jobs in a Earliest Deadline First fashion, where jobs reach maximum priority when they are past their deadline. Each such policy minimizes the amount of work of jobs past their deadline. If jobs are classed exponential jobs (Exp(μ_1), Exp(μ_2), …), then we can convert directly from an amount of work to a number of jobs. We’ll get bounds of the form “N_1^t/μ_1 + N_2^t/μ_1 \ge x”, where N_i^t is the number of jobs of class i that have been in the system for over t time. Perhaps we can integrate results like this, incorporating different deadlines for different classes of job, to get a good T^2 lower bound. Optimal Relative Completions in the Multiserver-job system As I showed in my preliminary RESET paper, the mean response time in the First-Come First-Served Multiserver-job system is controlled by the throughput and relative completions of the corresponding saturated system. It is therefore natural to find the optimal scheduling policy, minimizing throughput and relative completions, under some scheduling restriction, such as the restriction that only the k oldest jobs in arrival order can be served. First step: Implement a way to specify a such a policy and compute its throughput and relative completions, perhaps in a 3 server system. Future steps: Search over all possible policies with an MDP solver. Find the Pareto-optimal tradeoff of relative completions vs. throughput, perhaps corresponding to best policies at a variety of loads. Solve symbolically for throughput and relative completions. Mean-variance tradeoff: Follow up with Shubhada. Hybrid ServerFilling and MSJ FCFS to avoid starvation I think I now understand what practitioners mean when they talk about “starvation”. Consider a job that encounters a system where there are relatively few jobs present, but the arrival rate is high, around the critical load. The response time of that job should be relatively low: Proportionate to the number of jobs that were present on arrival, ideally. Practical systems often have feedback mechanisms on the arrival rate, resulting in this pattern of high load but relatively short queue lengths. MSJ FCFS satisfies this “no starvation” goal, as do many backfilling policies. In contrast, ServerFilling does not: A small-server-need job can be delayed until the system empties. To overcome this, consider a policy which serves a 95%/5% mixture of ServerFilling and FCFS, or ServerFilling and a backfilling policy. We could then give a sample-path bound on job’s response time in terms of the number of jobs seen on arrival, and the size of the job. Load doesn’t enter into it. We could define this as “no starvation”. We could analyze this policy with our finite-skip Scheduling with epsilon prediction errors Setting: M/G/1 scheduling for the mean. Predictions are given, and there’s a small (epsilon-sized) error in the predictions. Goal: Schedule to achieve a mean response time performance of the form E[T^SRPT](1+f(epsilon)), for some function f that goes to zero as epsilon goes to zero. This is called “Consistency”. Twist: There are many kinds of epsilon-error to consider. In our SRPT-Bounce paper, we show that our SRPT-Bounce policy can handle the situation where all predictions may be off by a multiplicative error of epsilon. Adam Wierman and Misja Nuyens have a paper, “Scheduling despite inexact job-size information”, looking at predictions being off by an additive error - consistency is not possible I’d like to instead consider the situation where an epsilon fraction of jobs may have seriously poor predictions, but all other predictions are accurate. There are two natural scenarios: 1. Epsilon-fraction of jobs have null predictions. In this case, we know that we’re getting no prediction information. 2. Epsilon-fraction of jobs have predictions that are incorrect by an arbitrary magnitude. In this case, we don’t know that we’re getting no prediction information. We could also look at load-fractions rather than job fractions. Even in scenario 1, which is much simpler, things aren’t trivial by any means. If we did something simple like put the null-prediction jobs at the end of the queue, their response time would be something horrible like 1/(1-rho)^2 in the epsilon->0 limit, which would dominate mean response time and not remotely achieve consistency. Initial question: What’s the mean response time impact of a misprediction from true size x to predicted size x’, under a policy like SRPT-Checkmark or SRPT-Bounce? Longer-term question: Can we achieve consistency against rare large errors? Can we define a useful misprediction-distance metric such that if that metric is small, we have consistent performance? Tails for ServerFilling Setting: MSJ Scheduling, ServerFilling policy (or generally WCFS scheduling). Want to analyze tail of response time. In our WCFS paper, we analyze the ServerFilling policy’s mean response time, and more generally any WCFS policy’s mean response time, showing that it is near that of resource-pooled FCFS. Because these policies are near-FCFS, we would like to analyze their tail of response time, which is likely quite good in the light-tail-sizes/bounded expected remaining size setting of the paper. Our analysis separates response time into two pieces: Time in the “front” and time in the “back” (the back is called the queue in the paper, but we’ve since changed terminology to clarify that some of the jobs in the front are not in service). Time in the back dominates mean response time in heavy traffic, and our analysis could likely be generalized to tightly bound the transform of time in the front to be near that of resource-pooled FCFS. We would just have to change out the W^2 test function in the paper for an exponential test function. For time in the front, things are trickier. In the paper, we used Little’s law, which allows us to bound mean time in front but does not say anything about the distribution of time in the front. Because ServerFilling prioritizes the largest server-need jobs in the front, we have to worry about the smallest server-need jobs and the tail of their time in the front. In the worst-case, a 1-server job can only be guaranteed to run once there are k 1-server jobs in the front. Thus, the time-in-front of 1-server jobs could be about k times the interarrival time of 1-server jobs. The k-times is fine, but the issue is the interarrival time of 1-server jobs. If 1-server jobs are very rare, then their interarrival time will be very large. However, because these jobs are so rare, that won’t have a big impact on metrics like the transform or the tail probability. If 1-server jobs are exceptionally rare, then we can bound their response time by the excess of a busy period, at which point the system will run low on jobs and all jobs remaining in the system will be in service. Initial question: Let’s bound the transform or tail probability of time in front, either for a specific policy or uniformly over all policies. Continuous MSJ Existing theoretical multiserver-job research has overwhelmingly focused on the case of a discrete, finite set of resource requirements (e.g. numbers of servers). Moreover, policies make heavy use of this fact, often selecting a job to serve based on the number of identical jobs that are present in the system (e.g. MaxWeight). To learn more, see my MSR MAMA paper with Ben and Zhongrui. In the real world, it’s common for resource requirements to be real-valued (e.g. 0.35 CPUs), and for no two jobs to be completely identical. Model: A natural model to capture this is: The total capacity of the system is 1. Jobs have a server need which is in the interval (0, 1]. Any set of jobs with total server need at most 1 can be served simultaneously. Jobs arrive according to a Poisson process with fixed rate λ, and have server need and duration sampled i.i.d. from some joint distribution. Goal: A natural goal is throughput optimality – stabilizing the system whenever it is possible to do so. Example setting: A concrete example setting is: Server needs are Uniform(0, 1), and durations are Exp(1), independent of server need. This setting should be stabilizable for any λ<2. Policies: One way to construct a policy which is definitely stable is to discretize the server needs by rounding them up to the nearest multiple of 1/n, for some large integer n chosen as a function of the system load. Then, we apply a standard policy like MaxWeight or the MSR policies on the discretized system. This is fine as far as it goes, but these policies have poor performance as a function of n, the discretization parameter. It would be nice to have a throughput-optimal policy that isn’t One could empirically evaluate the performance of existing non-identical-job-based policies, such as best-fit backfilling (most servers first) and first-fit backfilling. However, a policy worthy of further investigation is the infinite-n limit of the infinite-switching version of the MSR policy. MaxWeight doesn’t have a natural infinite-n limit, it relies on having many jobs of the same class to function. MSR can be adapted to a continuous policy. For instance, with uniform server need, one version of the policy would be: • For each threshold x in (0, 0.5], find the job with largest server need below x, and with largest server need below 1-x. • For each interval of thresholds of width dx, serve that pair of jobs at rate 2dx. Concretely, if the jobs in the system are of sizes {0.21, 0.41, 0.61, 0.81}, this policy would serve: • [0.81] at rate 0.38 (thresholds [0, 0.19]), • [0.61] at rate 0.04 (thresholds (0.19, 0.21)), • [0.61, 0.21] at rate 0.36 (thresholds [0.21, 0.39]), • [0.41, 0.21] at rate 0.22 (thresholds (0.39, 0.5)). Note that this policy is straightforwardly suboptimal. It would be better to serve [0.61, 0.21] rather than ever serving [0.61], for instance. Nonetheless, I believe that this policy would be throughput optimal via a simple discretization proof, along the same lines as the discretized MSR policy, but without requiring actual discretization. Project: Define the limit-MSJ policy concretely, and prove that it is throughput optimal, likely using a discretization proof. Further direction: Define a strictly-better version of the policy which doesn’t do silly things like idle servers, and show the result carries over. Starting out/Not sure how to proceed Scheduling in the low-load limit Setting: The known-size M/G/k, under low load. Intuition: The dominant term comes from jobs arriving alone, then two-job interactions, etc. We find which policy is optimal, for which number of jobs. It’s similar to the no-arrivals setting, for which SRPT-k is optimal, but more stochastic. SRPT-k was proven to be optimal in the no-arrivals setting by Robert McNaughton in “Scheduling with deadlines and loss functions.” Basics: For at most k jobs, there are no nontrivial decisions. For k+1 jobs, just be sure to serve the smallest job. For k+2, it becomes nontrivial. First step: If k=2, and we consider 4-job sequences, I believe we find that we must serve the smallest pair of any 3 jobs. Confirm? Future steps: Is SRPT-2 uniquely optimal at low load? Is SRPT-k? Expand to dispatching, MSJ, unknown sizes? Value function service and dispatching Can define a SRPT value function, which quantifies the total future response time impact of a set of jobs. If we started two systems, one from empty and one from this set of jobs, and then ran both forever, in expectation by how much would the total response time go up? Relatively simple function, e.g. using WINE. Using this value function, in systems with constrained service such as MSJ or the switch, serve the subset of jobs that most rapidly decreases the value function. Or dispatch to minimize the value function impact. First step: Derive the value function. Future steps: Implement the policy. Compare against ServerFilling-SRPT, Guardrails, etc. Multiserver Nudge Nudge was defined for the single-server setting. However, much of the analysis of Nudge relative to FCFS only relied on the arrival process, not the departure process. Does Nudge have better asymptotic tail than FCFS in the M/G/k? Stochastic dominance? First step: Simulate Nudge in the M/G/k. Future step: Port the analysis to the M/G/k. How much transfers? Optimal dispatching to Gittins queues See Section 8.3.3 of my thesis. In my guardrails paper, I studied optimal dispatching with full size information. But what if we just have estimates? Or no info? A good candidate for the scheduling policy is the Gittins index policy, and we are trying to match resource-pooled Gittins, which intuitively requires that we always spread out the jobs of each rank across all of the servers. If estimates are relatively good, a combination that makes sense is estimated-Gittins + PSJF with estimates. If we have no information, we might just use the greedy policy. For each server, calculate how long the arriving job will have to wait behind all other jobs at that server, in expectation. Also calculate how long other jobs will have to wait behind the arriving job, in expectation. Send to the server where the total expected added waiting time is minimized. We can use SOAP to do this First step: Choose a size distribution for which Gittins is simple. Try the above greedy policy. Compare against e.g. Join the Shortest Queue (JSQ). Future steps: Can we prove that unbalancing isn’t worth it, if the dispatcher and the server have the same information? Can we prove any convergence to resource-pooled Gittins, if the distribution is simple enough? Optimal Nonpreemptive MSJ scheduling An important aspect of MSJ scheduling in the really world is that we often want to scheduling nonpreemptively – jobs that need lots of resources simultaneously tend to be expensive to set up and shut The challenge of nonpreemptive scheduling in the MSJ is that it’s expensive to switch between service configurations. Consider an MSJ system, where we have 10 servers and jobs either take 1 or 10 servers. If we feel like serving 10-server jobs, we can keep serving 10-server jobs for as long as we have them available, with no wasted capacity. If we want to switch to serving 1-server jobs, we can wait until a 10-server job finishes, and put 10 1-server jobs into service, again without wasting capacity. But if we now want to switch back to serving 10-server jobs, we’re going to have to wait until all of the 1-server jobs finish, to free up space. Due to stochasticity, we’ll waste a substantial amount of capacity in the process. If the 1-server jobs take Exp(1) seconds, it’ll take an average of 2.93 seconds to empty out the system, and we’ll only complete 10 jobs, compared to the average completion rate of 29.3 jobs in that time. We’ve wasted an amount of capacity equal to 19 job completions in this process. This raises the question: When is it worth it to waste this capacity and perform this switch, towards the goal of reducing response times? The advantage of doing a switch is that we can start on a different class of jobs without waiting for the system to empty. The different class of jobs could have lower mean size (or higher importance), aiding response times. If we waste capacity too often, the system will become unstable. As we approach that boundary, response times will suffer. Question: What’s the optimal threshold (# of small and large jobs in the queue) to switch configurations? Starter question: Let’s simulate a variety of thresholds and see what seems good. Alternate approach: In heavy traffic, how often should we be switching, optimally? We could get into cycles, where we wait until we accumulate x jobs of one class, then serve then all, paying the overhead, then switch back to the other class, which has a very long queue. The response time of the higher-priority class will be determined by the cycle length, while the response time of the lower-priority class will be determined by the amount of wasted capacity pushing the system closer to the capacity boundary. Can we (approximately) determine how long these cycles should optimally be, given the switching overhead? Largest Remaining Size in the M/G/k In our recent MAMA paper, Ziyuan and I introduced the WINE lower-bounding framework, which proves lower bounds on mean response time in the M/G/k under arbitrary scheduling via lower bounds on mean relevant work under arbitrary scheduling. The optimal policy for mean total work under arbitrary scheduling is the most servers first policy, I’m fairly certain. The reasoning is fairly simple: Total work is minimized if as many servers are running as possible at each moment in time. The only thing that can get in the way of that is too much work concentrated into too few jobs. So, we should schedule the job of largest remaining size. This is the opposite of the SRPT policy/ First step: Can we prove rigorously that LRS minimizes total work in the M/G/k? This should be true in a sample-path sense, which should make the proof relatively straightforward. Next step: Can we analyze LRS for the M/D/k, or even just the M/D/2? (D for Deterministic). I think that the ISQ methods described in the same MAMA paper might be useful. In particular, drift functions derived via differential equations to give constant drift might be useful. Future: Can we prove tighter bounds by thinking about bounding LRS, rather than trying to bound an arbitrary policy? Even if we can’t get an exact analysis. Active projects Product form steady-state distributions from graph structure The 2-class MSJ saturated system has a product form steady-steady distribution, as a consequence of the graph structure of the Markov chain. This is in contrast to the single-exponential saturated system, for which the transition rates are also important to the product-form argument. In general, a directed graph has a product form if there is an “directed elimination ordering” to its vertices, defined as follows: • For each vertex i, define its neighborhood to be all vertices j for which there exists an edge j->i, as well as i itself. • Start with a source vertex, and place it in the “eliminated” set. • Repeatedly select vertex neighborhoods that contains exactly one uneliminated vertex. Each time such a neighborhood is selected, eliminate the new vertex. • If this process can be continued until all vertices are eliminated, the directed graph has a “directed elimination ordering”. All Markov chains with such an underlying graph have product form steady-state distributions. Moreover, such chains have summation-form relative completions, a new concept which allows relative completions to be characterized in closed-form. First step: What are some classes of graphs that have elimination orderings? I know undirected trees and the ladder graphs are examples. What others? Future steps: Have these graphs been studied already, likely under a different name? Can we give a closed-form characterization of this family of graphs? Are they closed under any operations, such as taking minors? Update: Elimination ordering seems better for summation-form relative arrivals/relative completions. Instead, for product-form you need something slightly stronger: A sequence of cuts such that on each side of the cut, there’s exactly one vertex with transitions across the cut. Optimal scheduling in the general MSJ model See Section 8.3.4 of my thesis. Outside of the divisible server need setting behind the DivisorFilling-SRPT policy, we can’t guarantee that all of the servers can be filled by an arbitrary set of k jobs. This can cause problems in two ways: 1. The smallest jobs might not pack well. 2. If we prioritize the smallest jobs, the jobs that are left over might not be able to fill the servers. For example, consider a system with k=3 servers and jobs of server need 1 and 2. If the 2-server jobs have smaller size, we can’t fill the servers with just 2-server jobs. If the 1-server jobs have smaller size, and we prioritize them, we’ll run out of 1-server jobs and have just 2-server jobs left, which can’t fill the servers. To fix problem 1, we should just find the set of jobs with smallest sizes that can fill the servers, and serve those jobs. Proving that this is optimal will be challenging. To fix problem 2, we should set a floor on the number of 1-server jobs that we want to keep in the system, in the style of CRAB, and when we reach the floor, use the least 1-server-intensive strategy. Proving this is optimal will also be hard. First step: Find a prospective policy for the k=3 setting that “feels” optimal. Relative arrivals/completions with infinite state spaces Setting: Markovian arrivals/markovian service systems. In my RESET and MARC paper, the MARC technique allows us to characterize the mean response time of systems with markovian service rates, if those service rate process is finite. See also my SNAPP talk, which is a cleaner presentation of the idea and focuses on markovian arrivals. The “finite modulation chain” assumption isn’t really necessary - the actual assumptions needed are much more minor. In particular, we should be able to analyze systems like the N-system or Martin’s system by thinking of the non-heavily-loaded server as a modulation process on the service rate of the main server. A good starting point would an N-system where the recipient server is critically loaded, but the donor server is not. Starting point: Compute relative completions in the aforementioned N-system, compare against simulation. Perhaps pursue with Hayriye? M/G/k response time lower bounds (known size) See our preliminary work on this topic. See Section 8.3.2 of my thesis. There are two straightforward lower bounds on mean response time for the M/G/k: kE[S], the mean service duration, and E[T^SRPT-1], response time in an M/G/1/SRPT. Empirically, as ρ->1, SRPT-k achieves a mean response time around E[T^SRPT-1] + kE[S]. Can we prove a lower bound that’s asymptotically additively larger than E[T^SRPT-1]? Idea: Use WINE (see my thesis), with M/G/1 and M/G/infinity work bounds at different sizes. Mainly only improves things at lower loads. Idea: Look at the “Increasing Speed Queue”, which starts at speed 1/k at the beginning of a busy period, then 2/k, etc., capping at speed 1 until the end of the busy period. Provides a lower bound on work. A higher lower bound than the M/G/1. Incorporate into the WINE bound. First step: Derive the WINE bound. Future step: Quantity expected work in the increasing-speed queue, perhaps with renewal-reward. Update: We can analyze the increasing-speed queue via the constant-drift/affine-drift method, akin to the MARC method from my RESET and MARC paper and my SNAPP talk. See my photo-notes on the subject. For the 2-server setting, the constant-drift test function is: f(w, 1) = w, f(0, 0) = 0, f(w, 1/2) = w + (1-e^(-2lw))/2l The affine-drift test function is: f(w, 1) = w^2, f(0, 0) = 0, f(w, 1/2) = w^2 + w/l + (1-e^(-2lw))/2l^2 These should be sufficient to compute mean work! If we make the state space consist of work, time until next arrival, and speed, we can simplify this considerably. The constant-drift test function is now: f(w, 1) = w, f(0, 0) = 0, f(w, a, 1/2) = w + min(w, a/2) If we plug in a = Exp(l) and take expectations, we get the first expression above. Beating SRPT-k See Section 8.3.1 of my thesis. See our MAMA paper. Setting: SRPT-k (M/G/k/SRPT) is heavy-traffic optimal for mean response time, as I proved in SRPT for Multiserver Systems, but it can be beaten outside of heavy traffic. Idea: Consider a 2-server system with 3 jobs in it: Two are small, one is large. There are two scheduling options: Run both small jobs first (SRPT), or one small and one large first (New concept). Once a small job finishes, start running the third job. If no new jobs arrive before the long job finishes, both options have the same total response time. If new jobs arrive after the small jobs finish but before the large job finishes, starting the large job sooner (New concept) is better. If new jobs arrive before both small jobs are done, SRPT is preferable. Policy: Flip-3. (A variant of this is the SEK policy from my thesis and the MAMA paper). In an M/G/2, if there are at least 4 jobs, just run SRPT. If there are 3 jobs, and 2 have remaining size below a, and the third has size above b, run the smallest and largest jobs. Otherwise, SRPT. Set a at roughly 20% of the mean job size, and b at roughly the mean job size. First step: Implement this policy. Compare it against SRPT-k. Fiddle around with job size distributions, loads, and a and b thresholds to find a relatively large separation (0.1% is normal, 1% is Future steps: Use a Nudge-style argument to prove that if a is small enough and b is large enough, the Flip-3 policy has lower mean response time than SRPT-2. Details: Consider the case where we have jobs of size ε, ε, 1. The SEK policy runs ε, 1 for ε time, then ε, 1 for ε time, at time 2ε having the state 1-2ε. SRPT-k runs ε, ε for ε time, then 1 for ε time, at time 2ε having the state 1-ε. This is worse. To prove that SEK in this specific instance is beneficial, proof sketch: 1. 1-2ε sample-path dominates 1-ε, by at least ε response time. 2. If a job arrives in the first 2ε time to disrupt things, there is a coupling for SRPT-k and SEK such that until the end of the busy period, system states differ by at most ε at all times, resulting in only O(ε) worse total response time in the SEK system. 3. 1-2ε achieves ε+Omega(ε) better total response time that 1-ε, because more jobs could arrive. Archived: Completed Known size dispatching to FCFS queues This paper has been accepted to SIGMETRICS 2024: Heavy-Traffic Optimal Size-and State-Aware Dispatching! Starting point: CRAB by Runhan Xie and Ziv Scully, initial work presented at MAMA 2023. Setup: Imagine web requests are arriving to a server farm. Jobs arrive, are dispatched to servers, and are served. Let’s optimize this. When a job arrives, it must be dispatched to one of several servers. At dispatch time, the size of the job is known (or estimated), and that size is used for the dispatching decision. Once at a server, jobs are served in FCFS order. What’s a good dispatching policy to minimize mean response time? What’s optimal? I’m especially interested in heavy traffic (arrival rate near capacity). Idea: There’s an unavoidable amount of work in the system, M/G/1 lower bound. However, if we concentrate almost all of the work onto one server, and only dispatch large jobs to that server, then almost all of the jobs will avoid that long delay. Of course, we need to keep the other servers busy to avoid wasting capacity, but we’ll keep their queue lengths short. Concrete policy: “Many Short Servers” (MASS). Based on size, divide jobs into classes small, medium, and large. Set these cutoffs so that the small jobs make up (k-1)/k - ε fraction of the load, where k is the number of servers, the large jobs are 1/k - ε of the load, and the medium jobs are the other 2ε of the load. ε is a small constant to be determined. Designate k-1 servers as the short servers (low workload), and one as the long server (high workload). Small jobs go to the short server, large jobs go the long server, and for medium jobs it Designate a target amount of work for the short servers. This should be o(1/(1-ρ)), to be smaller than the long server, and it should be omega(log(1/(1-ρ))), so it doesn’t run empty due to bad luck. sqrt(1/(1-ρ)), for instance. Whenever a small job arrives, send it to the short queue with least work. Whenever a large job arrives, send it to the long server. When a medium job arrives, if the short server with the least work is below the target amount of work, send the medium job there. If all short servers are above the target, send the medium job to the long server. First step: Implement this policy. Start with k=2, for simplicity. Compare it against JSQ, LWL, SITA. Poisson arrivals, high variance sizes, high load. Does it do well empirically, for appropriate settings of ε and the target work? Refinement: Dynamic relabeling. Whenever a job arrives, the long server is whichever has the most work at that moment, not static. Future steps: Prove state space collapse. The system is almost always close to having all short servers at the target, or all servers below the target. Use SSC to bound response time/waiting time. Lower bound waiting time. Argument: All servers must have 1/k of the load going through them. The work has to be somewhere, and there’s theta (1/(1-ρ)) of it in total. Best case scenario is that the largest jobs are the only jobs delayed by the work. This should dominate waiting time. This should match the waiting time of MASS, up to ratio 1, if the distribution is not too crazy. Archived: No longer interested These are projects that I was once interested in, but I’m not interested in any more. Maybe the approaches that I wanted to pursue didn’t pan out, maybe others took it in more interesting directions. Either way, you can see my old ideas here. The Time Index scheduling policy Setting: M/G/1 scheduling for the tail, especially the asymptotic tail, especially in comparison to FCFS. Policy: Time Index. Priority is s - t, where s is a job’s size and t is the job’s time in system.Lower is better. Relatively simple proof that waiting time dominates FCFS waiting time. First step: Implement this policy. Compare against FCFS, Nudge. Future steps: By how much does it dominate FCFS? Characterize leading constant of asymptotic? Optimal Transform My Nudge paper works very hard to do even the most basic analysis of the tail probability P(T>t). But maybe the reason this is hard is because we’re effectively comparing the response time random variable against a constant, and the constant random variable is obnoxious to work with – it has a sharp cutoff. The smoothest random variable is the exponential random variable. If we use that as our cutoff, we get P(T>Exp(s)), which is the Laplace-Stieltjes Transform of response time (Technically, it’s P(T <Exp(s)), not P(T>Exp(s)). This still captures similar information, if we set s=1/t. It is also much easier to analyze: All SOAP policies and Nudge have transform analysis. So let’s try to optimize the transform. Intuition: Effectively, jobs abandon at rate s, and we want to maximize the fraction that we complete before they abandon. If jobs told us when they abandoned, the optimal policy is straightforward: run the small job that hasn’t abandoned yet. But we don’t know which jobs have abandoned. We need to use time in system as a proxy. First step: Compute the transform for some common policies, like FCFS, SRPT, Nudge, via simulation and/or formula. Compare against simulated P(T>t), which we can call the “hard tail”. Future steps: Is the transform a good proxy for the hard tail? Is the inverse transform a good proxy for the inverse hard tail, e.g. a percentile? For a given pair of (time in system, remaining size), what’s the optimal 2-job policy? Is it that index policy I came up with a while back? What’s a semantic understanding of that policy? Can we analyze it? Is it empirically optimal in the full M/G/1? Does it perform well for the hard tail/percentiles? Update: The optimal 2-job strategy is to serve the job that maximizes e^-st e^-sr/(1-e^-sr). Semantically, this is the probability of not abandoning prior to the time of the decision, times the probability of not abandoning while run, divided by the probability of abandoning while being run. Note that this is a “conveyor belt” policy: jobs never interchange priority. This is a class containing SOAP and Nudge. Further steps: Implement this policy in simulation. What’s its empirical transform? Is it empirically optimal? Is the policy the optimal 3-job policy? Optimal without arrivals? Important update: This is a pretty bad tail metric, and hence a pretty bad policy. This metric gives jobs diminishing importance as they age, while a good tail metric should give jobs increasing importance as they age. This issue is reflected in the policy, which rates jobs as less important the larger their time in system. Instead, one should consider the metric E[T^(st)], in contrast to the above discussion of E[e^(-st)]. The optimal 2-job strategy is then to maximize e^st e^sr / (1-e^sr). This is a better metric and a better policy. It’s equivalent to using negative inputs to the transform, so it’s still extractable from the transform. One must be careful to only consider values of s for which the metric is General constrained-service queue The Multiserver-job system and the switch can both be thought of as special cases of the “Constrained service queue”: Jobs have classes, and a certain multisets of classes can be served at once. In the 2x2 switch, the service options are (ad, bc), while in the 2-server MSJ setting, the service options are (aa, b). What policies and analysis make sense in the general constrained-service queue? MaxWeight, used e.g. in “Stochastic models of load balancing and scheduling in cloud computing clusters”, seems to be always throughput-optimal. When does a ServerFilling equivalent exist? My RESET paper seems like it always applies to FCFS-type service. Restless MDPs for tail scheduling In our Gittins-k paper and in Ziv Scully’s Gittins paper, “The Gittins Policy in the M/G/1 Queue”, we relate the Gittins policy to that of the Gittins Game, a corresponding MDP whose optimal solution describes the Gittins scheduling policy, and gives rise to the optimality of the scheduling policy. This relationship is at the heart of Gittins’ original paper, “Bandit Processes and Dynamic Allocation Indices”, which introduces both the MDP policy and the scheduling policy. For the mean response time objective, the corresponding MDP is a restful MDP, giving the optimal solution strong enough properties to carry over to the scheduling setting. In contrast, for a tail response time objective such as T^2, the corresponding MDP is a restless MDP. Recently, there have been advances in the theory of multiarmed restless MDPs, such as the Follow-the-Virtual-Leader (FTVL) policy of Yige Hong and Weina Wang, in their paper “Restless Bandits with Average Reward: Breaking the Uniform Global Attractor Assumption”. Question: Can we formulate a restless Gittins game, solve the single-arm version, and use the FTVL policy or something similar to design a scheduling policy?
{"url":"https://isaacg1.github.io/project-ideas/","timestamp":"2024-11-04T18:10:41Z","content_type":"text/html","content_length":"52028","record_id":"<urn:uuid:01b18340-39d2-4d6c-ba88-37b8de4edbfd>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00760.warc.gz"}
You're reading an old version of this documentation. For the latest released version, please have a look at v0.8.16. Error ModelingΒΆ CADET-Match includes error modeling and parameter estimation. Error modeling in CADET-Match is based on using a fitted simulation and creating an error model by manipulations to the fitted simulation. Error modeling uses MCMC and can be a slow process which requires a lot of computing time. Some simple problems can be solved in a few hours on a powerful desktop and others can take weeks on a powerful server. MCMC search settings. The error model is pretty simple. The best fit simulation is used as a template. Variations are made based on the template using the errors supplied. Pump Delays are implemented using a uniform random distribution. Any time a new section starts in CADET a pump delay may be applied. Setting upper and lower bound to 0 disables this error. Flow rate variations use a normal distribution with a supplied mean and standard deviation. These numbers can usually be found from a pump manufacturer. The flow rate in the simulation is multiplied by the pump flow error. Setting the mean to 0 disables this error. Loading concentration variations use a normal distribution with a supplied mean and standard deviation. These numbers normally have to be determined from experiments. The concentration is multiplied by the concentration error. Setting the mean to 0 disables this error. The UV error is modeled as a scale dependent error and a scale indepdennt error so that the total error applied to the chromatogram = signal * uv_noise_norm + uv_noise. Both of the errors sources are the same length as the chromatogram. UV noise norm almost always has a mean value of 1.0 and UV noise almost always has a mean noise of 0.0 since they are the multiplicative and additive identities Key Values Default Required Description name String None Yes name of the experiment this error model applies to units List of Integers None Yes unit numbers that uv noise should be applied to delay [Float, Float] None Yes min and max value of a uniform random distribution for pump delays flow [Float, Float] None Yes mean and standard deviation for a normal distribution load [Float, Float] None Yes mean and standard deviation for a normal distribution uv_noise_norm [Float, Float] None No mean and standard deviation for a normal distribution uv_noise [Float, Float] None No mean and standard deviation for a normal distribution "errorModel": [ "file_path": "non.h5", "experimental_csv": "non.csv", "name": "main", "units": [2], "delay": [0.0, 2.0], "flow": [1.0, 0.001], "load": [1.0, 0.001], "uv_noise_norm": [1.0, 0.001]
{"url":"https://cadet.github.io/CADET-Match/v0.8.14/configuration/error.html","timestamp":"2024-11-14T05:55:15Z","content_type":"text/html","content_length":"11535","record_id":"<urn:uuid:0b8df969-c35a-48c0-b033-0e45d063476f>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00873.warc.gz"}
Subreddit SearchSubreddit Search Those wanting to learn about formal aspects of reverse engineering should start here and those wishing to study implementations can start here and here --- Courses --- Program Analysis by Rolf Rolles Advanced Tool Development with SMT Solvers by Sean Heelan Advanced 0Day Discovery using SMT Solvers by Edgar Barbosa Computer code is a complex logical artifact and high dimensional transition system and data source whose synthesis, extension, maintenance, execution abstraction, visualization, comprehension, and verification is difficult but can be made easier by creating tools that aid humans in these domains. Statistical Reverse Engineering, Machine Language Processing, and Program Analysis are fields of computer science devoted to creating tools and theories for the understanding of programs with inspiration from the fields of formal methods, reverse engineering, pure mathematics, natural language processing, human–computer interaction, bioinformatics, and machine learning. We are an interdisciplinary community concerned with the discovery and understanding of computational systems beyond what is available on the surface. Join us on Slack here or here Join us on IRC: #r_netsec on freenode Other places of interest:
{"url":"https://r-weld.vercel.app/r/REMath","timestamp":"2024-11-12T22:47:02Z","content_type":"text/html","content_length":"202227","record_id":"<urn:uuid:b5fdae72-1a88-407d-9206-69c536fe64e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00712.warc.gz"}
Nuance Probability of Bankruptcy | Nuance Communications Inc (NUAN) Nuance Probability of Bankruptcy The Probability of Bankruptcy of Nuance Communications Inc (NUAN) is -% . This number represents the probability that Nuance will face financial distress in the next 24 months given its current fundamentals and market conditions. Multiple factors are taken into account when calculating Nuance's probability of bankruptcy : Altman Z-score, Beneish M-score, financial position, macro environments, academic research about distress risk and more. Nuance - ESG ratings ESG ratings are directly linked to the cost of capital and Nuance's ability to raise funding, both of which can significantly affect the probability of Nuance Communications Inc going bankrupt. ESG Score - Environment Score - Social Score - Governance Score -
{"url":"https://valueinvesting.io/NUAN/probability-of-bankruptcy","timestamp":"2024-11-04T20:32:45Z","content_type":"text/html","content_length":"147315","record_id":"<urn:uuid:e02b0762-ca14-4721-adc2-64355e252cba>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00295.warc.gz"}
Electric Field due to charges,12,PMT,Physics,Electric Charges And Fields,Electric Field due to charges: Search page for Notes,Tests and Videos:SureDen Your Education Partner Electric Field due to charges Electric field due to a point charge: As shown in fig. consider a point charge q placed at the origin o. we wish to determine its electric field at a point P at distance r form it. For this, imagine a test charge q[0] placed at point p. according to the coulomb’s law, the force on charge q[0] is this means that at all points on the spherical surface drawn around the point charge, the magnitude is same and does not depend on the direction of r. such a field is called spherically symmetric or radial field. i.e. a field which looks the same in all directions when seen from the point charge. Electric field due to a system of point charges: As shown in fig. consider N point charges q[1], q[2], q[3], ………..q[n], place in vacuum at points whose position n vectors w.r.t. origin o are [0] due to charge q[1] is The electric field at pint p due to charge q[1] is According to the principle of superposition, the electric field at point P to a system of N charges is in terms of position vectors, Continuous Charge Distribution: In practice, we deal with charges much greater in magnitude than the charge on an electron, so we can ignore the quantum nature of charges and imagine that the charge is spread in a region in a continuous manner. Such a charge distribution is known as continuous charge distribution. Force on O point charge qo due to continuous charge distribution As shown in fig. consider a point charge q[0] lying near a region of continuous charge distribution. This continuous charge distribution can be imagined to consist of a large number of small charges dq. According to the coulomb’s law, the force on charge q[0] due to small charge dq is By the principle of superposition , the total force on charge q[0] will be vector sum of the force exerted by all such small charges and is given by Related Keywords
{"url":"http://www.sureden.com/topics/12-pmt-physics-electric-charges-and-fields-electric-field-due-to-charges-45.html","timestamp":"2024-11-14T14:16:43Z","content_type":"text/html","content_length":"69736","record_id":"<urn:uuid:a7412c56-6c97-4498-b48d-529eacdac0aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00411.warc.gz"}
Quick Start – Documentation – Standard Functions – Settings Files Download Calculon Current version 1.0-alpha^1 Some of the key features: • Lean and mean — designed for keyboard use, minimal UI kludge. • Evaluates expressions — makes it easier to fix typos and minimizes the need to retype long calculations. Just type in the full equation and press enter. • Unit, currency and other conversions — you can mix inches, metres, nanoseconds, kelvins, radians and hectares with no need converting measures to a common base. • Extendable — units, functions, conversions and constants can be added and customized. Calculon is a free^2, tiny desktop calculator for Microsoft Windows (98, 2000, XP, Vista) that calculates by evaluating expressions instead of emulating a traditional desk calculator. Calculon is extremely extendable which means you can use it to solve practical problems such as “How long will it take to download this file?” and so on^3. The ideal usage pattern for Calculon goes a bit like this: launch Calculon with a hotkey, type or paste in the calculation, press enter and copy the result. Calculon ships with predefined constants, functions and units but anyone can easily add the functionality they need. If you are a historian, you can add ancient measures or if like to cook, you can make Calculon convert between teaspoons and gallons if that is what you need. Known bugs in 1.0-alpha Random numbers are generated every time Calculon evaluates an expression and also when it looks for the unit of the expression. This means if you are using a ternary operator, it will lead into random behavior. (rnd > 0.5) ? 1 m : 3 h The above will return any of the following: 1 m, 3 h, 3 m, 1 h. This is because the expression (rnd > 0.5) is evaluated first for the number part and for a second time for the unit. 1. Calculon is in a working preview state. Some features are missing or unfinished. ↩ 2. Calculon is freeware. ↩ 3. See the quick start page for more practical examples ↩ You must be logged in to post a comment.
{"url":"http://kometbomb.net/projects/calculon/","timestamp":"2024-11-06T14:09:20Z","content_type":"application/xhtml+xml","content_length":"79203","record_id":"<urn:uuid:072d48f4-7793-4fed-80fb-f3d3add94950>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00182.warc.gz"}
Activity Ratio: Definition, Formula and Types - Akounto An activity ratio measures a company’s ability to convert assets to generate revenue and cash. Asset utilization ratios, also known as activity ratios, are indicators of a company’s operational efficiency in utilizing its total assets and liquidity position. What Is An Activity Ratio? An activity ratio is a financial metric that measures a company’s efficiency in leveraging assets, accounts receivables, and inventories on its balance sheet and generating revenue from them. The activity ratio is the rate at which assets such as machinery and raw materials generate sales and profits. Activity ratio formula is calculated by dividing one financial metric by another and is used to assess a company’s liquidity, solvency, and profitability. They are also known as efficiency ratios, performance ratios, or turnover ratios. The different activity ratios, like working capital ratio, asset turnover ratio, etc., demonstrate how a company manages the different elements on the balance sheet and generates revenue. What Is The Activity Ratio In Lean? Activity ratios in lean are used to measure a company’s efficiency and production processes. These activity ratios measure metrics such as the number of units produced per hour, the number of defective units, and the amount of time spent on non-value-adding activities in the same sector. By measuring these ratios, industry competitors can identify areas of inefficiency and take steps to improve their production processes. How To Calculate Activity Ratio? Activity ratios are called turnover ratios because the efficiency of converting an asset, like inventory, goods, fixed assets, etc., into cash is measured by it. It measures how quickly an asset cycle is churned during a business period. One can calculate the accounts receivable turnover ratio by dividing average accounts receivable by net credit sales. Other activity ratios may be calculated differently, but the basic formula is always the same: one financial metric divided by another. What Are The Different Types Of Activity Ratios? There are several activity ratios, each of which measures a different aspect of a company’s performance. Some common types of activity ratios include: Accounts Receivables Turnover Ratio The accounts receivables turnover ratio measures a company’s ability to collect its accounts receivable on time. A person can calculate it by dividing net credit sales by the average accounts The accounts receivable ratio evaluates a company’s proficiency in converting its credit sales into cash and managing its accounts receivable. Formula: Accounts Receivable Turnover Ratio = Net Credit Sales / Average Accounts Receivable Example: If a company had net credit sales of $1,000,000 and average accounts receivable of $200,000, the accounts receivable turnover ratio would be 5. Working Capital Ratio The working capital ratio measures a company’s ability to manage its short-term assets and liabilities. One can calculate it by dividing current assets by current liabilities. A healthy working capital ratio should ideally be in the range of 1.2 to 2.0. Formula: Working Capital Ratio = Current Assets / Current Liabilities Example: Suppose a company had assets of $500,000 and liabilities of $300,000; the working capital ratio would be 1.67. Total Assets Turnover Ratio The total assets turnover ratio allows a company to measure its efficiency in using its assets to generate sales. They can calculate it by dividing net sales by total assets. A healthy assets turnover ratio differs in every sector; no ideal figure applies to all businesses. Formula: Total Assets Turnover Ratio = Net Sales / Total Assets Example: Let’s suppose a company had net sales of $2,000,000 and total assets of $1,500,000; the total assets turnover ratio would be 1.33 Fixed Asset Turnover Ratio It measures the efficiency of a company by using its fixed assets to generate revenue. And one can calculate it by dividing net sales by fixed assets. Formula: Fixed Asset Turnover Ratio = Net Sales / Fixed Assets Example: Let’s suppose a company had net sales of $1,500,000 and fixed assets of $800,000; the fixed assets turnover ratio would be 1.88. Inventory Turnover Ratio The inventory turnover ratio measures a company’s operating efficiency in managing its inventory. Formula: It is calculated by dividing the cost of goods sold by the company’s average inventory. A slow-moving inventory, indicated by a low inventory turnover ratio, can indicate that capital is being tied up. Conversely, a high inventory turnover ratio can mean that inventory is being sold quickly. So if the ratio is too high, it may result in stock-outs and missed sales opportunities. Formula: Inventory Turnover Ratio = Cost of Goods Sold / Average Inventory Example: If a company had a cost of goods sold of $1,200,000 and an average inventory of $200,000, the turnover ratio would be 6. Benefits Of Activity Ratio Activity ratios can provide several benefits to a company. For example, they can help a company identify areas of inefficiency. • Identifying inefficiencies: Activity ratios can reveal areas where a company may be inefficient in managing its resources, such as a high inventory turnover ratio or low accounts receivable • Benchmarking: Activity ratios can compare a company’s performance to industry averages or competitors. • Identifying trends: By tracking activity ratios over time, a company can identify trends in its financial performance and adjust accordingly. • A quick assessment of liquidity: Ratios such as the current and quick ratios give an idea of a company meeting its short-term obligations. • Assessing the company’s ability to generate cash: Ratios such as the cash conversion cycle show how well the company converts its net annual sales into cash. Activity Ratios Vs. Profitability Ratios • Activity and profitability ratios measure different aspects of a company’s performance; both are necessary to understand a company’s financial health comprehensively. Activity ratios focus on a company’s average fixed assets efficiency and effectiveness in managing its resources, while profitability ratios focus on its ability to generate profits. • For example, the asset turnover ratio is an activity ratio that measures a company’s efficiency in using its assets to generate sales. In contrast, the profitability ratio that measures a company’s ability to generate revenue is called a gross profit margin. Activity ratios are an important tool for evaluating a company’s performance. They provide a clear picture of a company’s gross fixed assets and help determine the payables turnover ratio and fixed assets turnover. By identifying areas of inefficiency, companies can take steps to improve their low inventory turnover ratio. The Akounto blog is a great resource for anyone seeking help with accounting-related concepts or services for their business. Visit the website to sign up to explore expert services like bookkeeping, invoicing, tax filing, etc.
{"url":"https://www.akounto.com/blog/activity-ratio","timestamp":"2024-11-06T20:49:35Z","content_type":"text/html","content_length":"221099","record_id":"<urn:uuid:b80a2ef6-ab3d-4065-a1e2-64a8134acd9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00392.warc.gz"}
Cosmic Inflation Making observable predictions for cosmic inflation requires determining when the wavenumbers of astrophysical interest today exited the Hubble radius during the inflationary epoch. These instants are commonly evaluated using the slow-roll approximation and measured in e-folds time. Pierre, Baptiste and Christophe propose a new analytical method to determine these instants with a precision reaching the tenth of an e-fold. Cosmic Inflation is the leading explanation for the origin of cosmic structures: these are seeded by quantum fluctuations occurring around the event horizon of a exponentially fast accelerating space-time, see this post. By measuring the distribution of galaxies in our universe, the Euclid satellite is expected to provide very soon new exquisite measurements of these fluctuations. Testing cosmic inflation will therefore require to have exquisite predictions as well. Observable predictions currently rely on the slow-roll approximation to determine the so-called e-fold times \(\Delta N=N−N_\mathrm{end}\), in reference to the e-fold \(N_\mathrm{end}\) at which inflation ended. These instants are actually used to map structures in the sky to quantum fluctuations during inflation. The precision at which they are determined is not so good, they are typically known at \(\mathcal{O}(1)\) e-fold precision. The following figure uses a full numerical integration of various inflationary models to compute the error made (vertical axis) using the slow-roll approximation as a function of the exact timing (horizontal axis). In Ref. [1], we propose a new and simple velocity correction, on top of slow-roll, that increases by one order of magnitude the precision on \(\Delta N\). As shown in the following figure, when compared to the exact solution, our new method reaches an precision of about a tenth of e-fold (blue curve compared to the red one). The other curves (green and purple) show other corrections, improving the determination of \(\phi_\mathrm{end}\), the field value at which inflation ends. These ones may, or may not, improve over the velocity correction, depending on the model at hand.
{"url":"https://curl.group/news/2024/10/01/2406.14152.html","timestamp":"2024-11-04T07:34:25Z","content_type":"text/html","content_length":"9204","record_id":"<urn:uuid:60ac630c-e74c-4f8c-837f-b21178635c38>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00749.warc.gz"}
Fluid Mechanics - Chemical Engineering Questions and Answers Chemical Engineering :: Fluid Mechanics 1. The fluid property, due to which, mercury does not wet the glass is surface tension 2. Laminar flow of a Newtonion fluid ceases to exist, when the Reynolds number exceeds 3. When the momentum of one fluid is used for moving another fluid, such a device is called a/an jet pump acid egg none of these 4. The normal stress is the same in all directions at a point in a fluid, when the fluid is both (a) and (b). having no motion of one fluid layer relative to the other. 5. Head developed by a centrifugal pump depends on its impeller diameter both (a) and (b) neither (a) nor (b) 6. Hydraulic mean depth (D[m]) for a circular pipe of diameter 'D' flowing full is 0.25 D. For a circular channel, at D[m] = 0.3 D, gives the condition for the maximum flow rate mean velocity both 'a' & 'b' neither 'a' nor 'b' 7. The head loss in turbulent flow in a pipe varies as velocity as (velocity)^2 inversely as the square of diameter inversely as the velocity 8. Most commonly used joint in the underground pipe lines is the expansion joint 9. Schedule number of a pipe, which is a measure of its wall thickness, is given by 1000 P'/S 100 P'/S 1000 S/P' 10000 P'/S 10. The net positive suction head (NPSH) of a centrifugal pump is defined as the sum of the velocity head and the pressure head at the suction minus vapor pressure of the liquid at suction temperature. discharge minus vapor pressure of the liquid at the discharge temperature.
{"url":"https://freshergate.com/chemical-engineering/fluid-mechanics","timestamp":"2024-11-05T04:00:44Z","content_type":"application/xhtml+xml","content_length":"226998","record_id":"<urn:uuid:5d34782d-147f-4eff-a1d8-cf7b1eb648f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00151.warc.gz"}
Browser-Based Ultrasound Condition Monitoring Image Reporting App: SonaVu InSights™ Browser-Based Reporting Application Creates Reusable Compressed Air and Electrical Scan Reports in Seconds. SonaVu InSights™ is a web application for creating instant reports of compressed air leak and electrical asset surveys performed with the SonaVu Acoustic Imaging Camera from SDT. Document findings, prioritize repairs, estimate cost impact, eliminate energy waste, and save money with this free application. SonaVu™ is a multi-frequency acoustic imaging camera that blends visual and auditory senses to bring compressed air waste and failing or faulty electrical equipment into focus. Images and videos of compressed air leaks and electrical faults discovered using the SonaVu™ Acoustic Imaging Camera are uploaded to SonaVu InSights™ Report Library for seamless reporting and sharing. The SonaVu InSights™ Image and Report Library is a secure place to store, organize, analyze, and prioritize your findings with your SonaVu™ Acoustic Imaging Camera. SonaVu InSights™ Compressed Air Leak Management Energy waste, lost profits, poor product quality, and slower production output… a production managers worst nightmare! Is their reaction the same to compressed air leaks? Used by millions of manufacturers around the world to power their processes, compressed air is often referred to the fourth utility, standing right beside gas, electricity, and water. It also happens to be the most expensive resource of the four utilities to produce. Compressed air is often manufactured in-house so that manufacturers can maintain control over its uses, availability, abundance, and cost… or so they hope. A Few Facts about Compressed Air • Fact #1 Over a compressor’s lifetime, the energy costs associated with operating the compressor and manufacturing the compressed air equal around three times as much as the capital investment of the compressor and all associated costs (repairs, maintenance, installation). • Fact #2 Compressing air is not an energy efficient process – but it’s the only way to obtain the widely utilized resource. • Fact #3 35%-45% of the compressed air produced by a compressor is typically lost to leaks somewhere in the system. • Fact #4 When manufacturing processes are heavily reliant on compressed air, inconsistent or short supplies can result in sub-standard product quality, and longer production times. Half of this expense being lost to compressed air leaks is neither economically nor environmentally responsible, and it is plain to see that compressed air remains the most mis-managed of the four utilities. Either because manufacturers are unaware of their potential waste, or due to a lack of compressed air leak management software and ultrasound devices on-hand. Manage Different Compressor Systems in Different Parts of your Facility Compressor systems vary in size, output, components, demand, and wattage; and many facilities have multiple compressed air systems powering different processes. This information isn’t always readily available to all condition monitoring techs - but is vitally important for them to know for accurate calculation of losses (or savings) accumulated from compressed air leaks and their repairs. How does SonaVu InSights™ Help Cut Down on Compressed Air Waste? How does Acoustic Imaging Technology work for Compressed Air? When compressed air passes from an area of high pressure (inside a compressed air line) through a leak to an area of low pressure (atmosphere on a manufacturing floor), ultrasonic turbulence is created. SonaVu™ uses its 112 digital mems sensors to detect these sources of ultrasonic turbulence. The leaks can then be recorded with the SonaVu™ and later be uploaded to SonaVu InSights™. SonaVu InSights™ is a Complete Compressed Air Leak Management System SonaVu™ can quickly scan sections of a compressed air system, capture images, videos, and measurements of the leaks that can later be uploaded to SonaVu InSights™ to generate fast, editable reports. From the SonaVu InSights™ Application, leaks can be organized and documented in a number of ways (including cost savings impact of a leak). From there, leaks can be scheduled for repair based on priority parameters set out by the condition monitoring technician and maintenance planner. Features for Compressed Air Leak Management • Upload unlimited leak survey data, images, and videos with lightning-fast speeds. • Instantly calculate financial losses and savings impacts caused by leaks. • Organize surveys by location, date & time, technician, smart naming conventions, notes. • Manage compressor systems & compressed air electricity costs. • Continuously integrate team discussion into a single living report. SonaVu InSights ™ For Electrical Systems Condition Monitoring Unreliable electrical equipment and systems can cost manufacturers millions of dollars in unplanned downtime and repairs. These faulty electrical systems even have the potential to maim and kill. The key to reducing the amount of these costly and dangerous malfunctions is early detection. With age, the insulation on electrical equipment breaks down. The undesired result in higher-voltage equipment is a phenomenon known as partial discharge (PD). Once this begins, the electrical equipment and components affected do not get better on their own… only worse, with rapidly increasing speeds. As the insulation of failing electrical equipment deteriorates, the process only happens faster and faster, and eventually maintenance will need to step in. At this stage, performing repairs and replacing damaged and corroded components is the only way to avoid catastrophic failure. Another side effect of Partial discharge is the production of combustible gasses. When partial discharge is present in an enclosed space, like in an electrical cabinet, these gasses build up. Sudden increases in oxygen can cause an ignition resulting in a deadly and damaging phenomenon known as arc flash and arc blast. SonaVu InSights™ Phased Resolved Partial Discharge (PRPD) Capabilities Phase Resolved Partial Discharge (PRPD) is an algorithm for performing analysis of sound patterns produced by partial discharge. It is widely used in the field as a form of electrical systems fault analysis. Using the SonaVu™ Acoustic Imaging Camera, ultrasound inspectors can identify sources of partial discharge using airborne ultrasound signals in the frequency range of 25-40kHz. SonaVu™ can then display these ultrasound signals as Phase Resolved Partial Discharge alongside the acoustic image it produces. From here, inspectors can use the PRPD Pattern to determine the type of partial discharge they’re encountering. The three most common partial discharge patters are corona partial discharge, surface partial discharge, and floating partial discharge. Corona Partial Discharge is electrical discharge caused by the ionization of fluid such as air surrounding a conductor carrying a high voltage. It is present at high 90 degrees. Surface partial discharge is much like Corona Partial Discharge, as it is electrical discharge caused by the ionization of fluid such as air surrounding a conductor carrying a high voltage. However, it is present at 270 degrees in addition to 90 degrees. Floating partial discharge is also known as the floating electrode. It is an internal discharge that occurs within cavities of electrical insulation and increases as the material wears. Floating partial discharge is present at high 90 degrees (floating) as well as lower 270 degrees. How Does SonaVu InSights™ help cut down Electrical Asset Failures SonaVu™ Acoustic Imaging Camera has 112 highly sensitive digital mems sensors that can accurately detect ultrasound signals from over 50 meters (150 feet) away and transmit them into a visible image on screen. How does Acoustic Imaging Technology work for detecting Partial Discharge? The ionization of air molecules caused by partial discharge on failing electrical components creates ultrasonic turbulence in the air. SonaVu™ can detect this ionization from over 50 meters away. SonaVu™ can monitor power transmission and distribution lines just as well as it can monitor high voltage electrical cabinets. Managing your Electrical Assets using SonaVu InSights™ Simply scanning your electrical assets with a SonaVu™ Acoustic Imaging Camera will reveal any defects and deterioration on the insulation that is present. To complete a survey, record findings, then upload them to the Report Library on SonaVu InSights™. Here they can be securely stored. Electrical defects detected with the SonaVu™ can be organized based on notes, location, reporting technician, part number/equipment, date and time, and production process powered. From here they can be further prioritized and scheduled for repair based on the maintenance planners' parameters. Features for Managing Electrical Assets with SonaVu InSights™ • Upload and store unlimited survey data, images, and videos with lightning-fast speeds. • Organize findings based on different parameters chosen by the technicians, managers, or planners. • Determine types of Partial Discharge using Phase Resolved Partial Discharge (PRPD). • Continuously integrate team discussion into a single living report. Store, Organize, Analyze and Prioritize with SonaVu InSights™ Your ultrasound, and acoustic imaging data is just that… your data! It should be securely stored and properly organized, for easy access to the right people. Your data should be presented in a way that’s easy for the technician’s and maintenance planners to understand and navigate – making their job of analysis, and prioritizing repairs as seamless as possible. The SonaVu InSights™ Image & Report Library • Upload and store unlimited ultrasound and acoustic imaging data fast. • Generate in-depth reports in the blink of an eye. • Safely and securely store ultrasound and acoustic imaging data & reports. • Organize, search and filter through your Image & Report library based on title, cost savings impact, notes, location, reporting technician, part number/asset, “repeat offenders”, date & time, repair status, and production process powered. • Analyze data and reports based on chosen parameters (cost savings impact, and production process powered are the two most popular), then prioritize defects for repair. Continuously Integrated Reporting SonaVu InSights™ acts as one living document for a maintenance team, from technician to planner. Continuously integrated reporting builds cohesiveness and collaboration within a maintenance team. Update reports with team discussion, notes, priorities, maintenance plans and more! Contact SDT Ultrasound Solutions Interested in Learning More about SonaVu InSights™? Request a FREE 30 Minute Demo here!
{"url":"https://www.sonavu.com/insights/","timestamp":"2024-11-08T01:17:57Z","content_type":"text/html","content_length":"46162","record_id":"<urn:uuid:658f2a46-1287-4e55-a2a5-b942291112f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00249.warc.gz"}
99 in Words - How to Write 99 in Words? | Brighterly 99 in Words Updated on January 5, 2024 In words, 99 is spelled as “ninety-nine”. This number is one less than one hundred. If you are counting ninety-nine balloons, this means you have a bunch of ninety balloons, and then you add nine more balloons to this bunch, making a total of ninety-nine balloons. How to Write 99 in Words? The number 99 is expressed in words as ‘Ninety-Nine’. It has a ‘9’ in both the tens and the ones places. Like counting ninety-nine coins, you say, “I have ninety-nine coins.” Therefore, 99 is written in words as ‘Ninety-Nine’. 1. Place Value Chart: Tens: 9, Ones: 9 2. Writing it down: 99 = Ninety-Nine This method is fundamental in learning how to convert numbers to words. FAQ on 99 in Words How do you write the number 99 in words? The number 99 is written as ‘Ninety-nine’. What is the word form for the number 99? Ninety-nine’ is the word form for the number 99. If you have ninety-nine toys, how would you spell the number? You spell the number as ‘Ninety-nine’. Other Numbers in the Words: 34500 in Words 2400 in Words 44000 in Words 350 in Words 95000 in Words 1180 in Words 7000 in Words Poor Level Weak math proficiency can lead to academic struggles, limited college, and career options, and diminished self-confidence. Mediocre Level Weak math proficiency can lead to academic struggles, limited college, and career options, and diminished self-confidence. Needs Improvement Start practicing math regularly to avoid your child`s math scores dropping to C or even D. High Potential It's important to continue building math proficiency to make sure your child outperforms peers at school.
{"url":"https://brighterly.com/math/99-in-words/","timestamp":"2024-11-02T11:49:50Z","content_type":"text/html","content_length":"84432","record_id":"<urn:uuid:8a5bda1c-f38d-494f-a2d7-7395acaa0623>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00716.warc.gz"}
149.1... | Filo Question asked by Filo student 149.1. Evaluate Sol: Put 150. Evaluate Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 5 mins Uploaded on: 12/30/2022 Was this solution helpful? Found 6 tutors discussing this question Discuss this question LIVE for FREE 10 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text 149.1. Evaluate Sol: Put 150. Evaluate Updated On Dec 30, 2022 Topic Calculus Subject Mathematics Class Class 12 Answer Type Video solution: 1 Upvotes 73 Avg. Video Duration 5 min
{"url":"https://askfilo.com/user-question-answers-mathematics/begin-array-l-int-left-sin-2-x-sin-2-x-right-d-x-int-33353938343039","timestamp":"2024-11-07T10:08:45Z","content_type":"text/html","content_length":"387280","record_id":"<urn:uuid:a201846c-842d-4632-8ad7-8c2a01ff5bb9>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00432.warc.gz"}
An Introduction to Coding Theory An Introduction to Coding Theory. Instructor: Dr. Adrish Banerjee, Department of Electrical Engineering, IIT Kanpur. Error control coding is an indispensable part of any digital communication system. In this introductory course, we will discuss theory of linear block codes and convolutional codes, their encoding and decoding techniques as well as their applications in real world scenarios. Starting from simple repetition codes, we will discuss among other codes: Hamming codes, Reed Muller codes, low density parity check codes, and turbo codes. We will also study how from simple codes by concatenation we can build more powerful error correcting codes. (from nptel.ac.in) Introduction to Error Control Coding An Introduction to Coding Theory Instructor: Dr. Adrish Banerjee, Department of Electrical Engineering, IIT Kanpur. This course will discuss theory of linear block codes and convolutional codes, their encoding and decoding techniques as well as their applications in real world scenarios.
{"url":"http://www.infocobuild.com/education/audio-video-courses/electronics/introduction-to-coding-theory-iit-kanpur.html","timestamp":"2024-11-08T15:04:24Z","content_type":"text/html","content_length":"12771","record_id":"<urn:uuid:5d059226-ccd3-48b7-8cec-310e85ae4660>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00508.warc.gz"}
5 Best Ways to Find All Strings Formed from Characters Mapped to Digits of a Number in Python π ‘ Problem Formulation: Consider a given mapping where digits are associated with certain characters, similar to the way digits are linked to letters on a telephone keypad. The problem involves finding all possible strings that can be formed from the characters corresponding to a given sequence of digits. For instance, if ‘2’ maps to ‘abc’ and ‘3’ maps to ‘def’, the input ’23’ should yield the output [‘ad’, ‘ae’, ‘af’, ‘bd’, ‘be’, ‘bf’, ‘cd’, ‘ce’, ‘cf’]. Method 1: Recursive Approach This method involves using a simple recursive function that takes the current combination and the remaining digits as parameters. As it progresses, the function appends one character at a time and recursively calls itself until all digits are processed, building up all possible combinations. Here’s an example: def generate_combinations(digits): digit_to_char = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} def backtrack(combination, next_digits): if len(next_digits) == 0: for letter in digit_to_char[next_digits[0]]: backtrack(combination + letter, next_digits[1:]) output = [] if digits: backtrack('', digits) return output ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf'] This code defines a function, generate_combinations, that utilizes a backtracking approach. The inner function, backtrack, is employed to create every combination possible by iterating through the mappings. This recursive method is quite efficient for small to moderate input sizes but can become less efficient with larger inputs due to the recursive calls. Method 2: Iterative Depth-First Search (DFS) The iterative DFS relies on a stack to keep track of the state during the traversal. As it iterates through the digits and their respective characters, it progressively builds up the solution set. This iterative method can be more space-efficient than the recursive one, especially for deeper stacks of function calls. Here’s an example: def generate_combinations(digits): if not digits: return [] digit_to_char = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} result = [""] for digit in digits: temp = [] for r in result: for char in digit_to_char[digit]: temp.append(r + char) result = temp return result ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf'] This example illustrates generate_combinations function that uses an iterative approach, building from one digit to the next by expanding the current list of combinations with the new characters. It avoids recursive function calls, thus it can manage larger input sequences more easily. Method 3: Using itertools.product This method exploits the product function from Python’s itertools module, which computes the Cartesian product of input iterables. It elegantly generates all possible combinations without the explicit use of recursion or iteration through the digits. Here’s an example: from itertools import product def generate_combinations(digits): digit_to_char = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} chars = [digit_to_char[digit] for digit in digits if digit in digit_to_char] return [''.join(comb) for comb in product(*chars)] ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf'] The given code uses itertools.product to create combinations of characters mapped to the input digits. This is a concise and efficient implementation, but it might be less intuitive for beginners unfamiliar with the Cartesian product concept. Method 4: Using recursion with queue A slightly different recursive tactic uses a queue for processing the digits and constructing strings simultaneously. This method is more akin to a breadth-first search (BFS) where all possible strings of equal length are built before moving on to the next level of depth. Here’s an example: from collections import deque def generate_combinations(digits): if not digits: return [] digit_to_char = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} q = deque(digit_to_char[digits[0]]) for digit in digits[1:]: for _ in range(len(q)): s = q.popleft() for char in digit_to_char[digit]: q.append(s + char) return list(q) ['ad', 'bd', 'cd', 'ae', 'be', 'ce', 'af', 'bf', 'cf'] This snippet utilizes a queue to perform BFS-like traversal by building strings of equal lengths and expanding them as it goes through each digit. This is an efficient approach for a level-wise construction of strings, although it might not have as intuitive a logic structure as DFS approaches. Bonus One-Liner Method 5: Using a List Comprehension and itertools.product Combining the power of list comprehensions with itertools.product, this one-liner method offers a succinct solution that is both readable and efficient for those who are comfortable with Python’s functional programming constructs. Here’s an example: from itertools import product generate_combinations = lambda digits: [''.join(p) for p in product(*(map({'2': 'abc', '3': 'def'}.get, digits)))] ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf'] This compact lambda function leverages itertools.product and a list comprehension to assemble all possible strings. The elegance of this approach is counterbalanced by a potentially steeper learning curve, as it condenses several Python concepts into a single line of code. • Method 1: Recursive Approach. Provides a clear and logical algorithm that closely follows the problem’s structure. Strengths: Intuitive for recursive problems, follows natural problem decomposition. Weaknesses: Inefficient for very large datasets due to potential stack overflow. • Method 2: Iterative DFS. Offers a method that scales better than recursion for large inputs. Strengths: More space-efficient iterative solution. Weaknesses: Could be less intuitive than the recursive approach. • Method 3: Using itertools.product. Leverages Python’s standard library for a clean one-liner functional approach. Strengths: Compact and efficient. Weaknesses: Less readability and potential unfamiliarity with itertools. • Method 4: Using recursion with queue. Utilizes BFS to build up the solution. Strengths: Good with scalability and constructing solutions level by level. Weaknesses: A bit more complex to understand and the queue might grow significantly. • Bonus Method 5: One-Liner with List Comprehension and itertools.product. Condensed and proficient for those acquainted with functional programming in Python. Strengths: Very concise and Pythonic. Weaknesses: Might be confusing for beginners or those not well-versed with itertools and lambda functions.
{"url":"https://blog.finxter.com/5-best-ways-to-find-all-strings-formed-from-characters-mapped-to-digits-of-a-number-in-python/","timestamp":"2024-11-12T06:21:02Z","content_type":"text/html","content_length":"75081","record_id":"<urn:uuid:925c3348-0641-4a89-a228-a69119cb03a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00247.warc.gz"}
Nonlinear dynamic characteristics of load time series in rock cutting The characteristics of the cutting load time series were investigated using chaos and fractal theories to study the information and dynamic characteristics of rock cutting. The following observations were made after analyzing the power spectrum, denoising phase reconstruction, correlation dimension and maximum Lyapunov exponent of the time series. A continuous broadband without a significant dominant frequency was found in the power spectrum. The restructured phase space presented a distinct strange attractor after wavelet denoising. The correlation dimension was saturated at an embedding dimension of 7. Lastly, and the maximum Lyapunov exponent exceeded 0 via the small data method. These findings reflected the chaotic dynamic characteristics of the cutting load time series. The box dimensions of the cutting load were further investigated under different conditions, and the difference in cutting depth, cutting velocity and assisted waterjet types were found to be ineffective in changing the fractal characteristic. As cutting depth become small, rock fragment size also decreased, whereas fractal dimension increased. Moreover, a certain range of cutting velocity increased fragment size but decreased fractal dimension. Therefore, fractal dimension could be regarded as an evaluation index to assess the extent of rock fragmentation. The rock-cutting mechanism remained unchanged under different assisted waterjet types. The waterjet front cutter impacts and damages rock, however, the waterjet behind of cutter is mainly used to clean fragments and to lubricate the cutter. 1. Introduction The cutting load time series directly records the dynamic characteristic and other information on rock fragmentation, the irregularity and aperiodicity of which are closely related to rock fragmentation itself [1]. The cutting load time series also reflects the dynamics of the cutting system and the information on rock breakage [2]. Collision complicates the nonlinearity and singularity of the cutting system. Therefore, the nonlinear characteristics of the cutting load must be investigated to understand the dynamics of the cutting system. Chaos and fractals have been used to investigate rock fragmentation through an interrelated time series, such as cutting load and acoustic emission. Duan et al. [3] used chaos theory to investigate the nonlinear characteristics of the cutting load time series by a self-controlled hydro-pick and confirmed that the cutting load typically has chaotic characteristic. The correlative dimension can be used as a sensitivity parameter to estimate the rock fracture mechanism. Liu et al. [4] used the fractal theory to study the fractal characteristics of the load time series of a conical pick. They also developed fractal dimension, fractal length, and other feature parameters to explain the physical phenomena. Nie et al. [5] used the saturated correlation dimension method to calculate the correlation dimension of electromagnetic radiation and acoustic emission signals. They confirmed that these signals have chaotic characteristics. Wu et al. [6] adopted fractal theory to investigate the fractal features of the acoustic emission time series in rock failure under uniaxial compression and found that all time series at different stages have fractal features. Fractal dimension can be used to describe the evolving regularity of microscopic cracks. Wang et al. [7] analyzed the fractal feature of the electromagnetic radiation signal in coal and rock failure process. They found that the variation in correlation dimension is consistent with coal and rock burst processes. To a certain text, the aforementioned studies have shed light on rock fragmentation and cutting system dynamics. Although most of these studies have focused on the characteristics of acoustic emission signals in rock fragmentation, only a few works have explored the nonlinear characteristics of the cutting load. In the presented study, fast Fourier transform, denoising, phase space reconstruction, correlation integral and small data methods were adopted to obtain the power spectrum, phase space diagram, correlation dimension and maximum Lyapunov exponent for verifying the chaotic dynamic characteristics of the cutting load time series. Fractal dimensions under different cutting conditions were also calculated and analyzed to investigate the rock fracture mechanism and cutting performance. 2. Methods 2.1. Phase space reconstruction The irregular contact between the cutting mechanism and rock usually contributes to the complexity, nonlinearity, ambiguity, and dissipativeness of a cutting system and to the nondeterminacy of a cutting load time series. However, the complex dynamic characteristics of a cutting system are difficult to verify in a low-dimensional coordinate systems. To investigate such dynamic systems, the time series must undergo phase reconstruction. However, the cutting load signal can only be collected usually during rock cutting, and thus, obtaining reconstructed phase space along with all the derivatives of the cutting load signal can yield plenty of errors. A time series is determined by the self-relevant components defined in the Takens theorem [8], and the information on its components is hidden within the time series itself. Therefore, one or several equivalent phase spaces of the time series can be built using the time delay method to recover the information on the cutting system. Several methods have been developed recently for phase-space reconstruction, such as uniform and non-uniform embedding procedures [9-10]. The delay-coordinate method for phase-space reconstruction of 1D time series ($x\left(i\right)$, $i=1,2,\dots ,n$) has been widely used in different fields by Packard [11], and the $m$ – dimensional reconstructed vectors: $X\left(t\right)={\left(x\left(t\right),x\left(t+\tau \right),\dots ,x\left(t+\left(m-1\right)\tau \right)\right)}^{T},t=1,2,\dots ,N,$ where $m$ represents the embedded dimension of restructured phase space; $\tau$ stands for the delay time; $N$ is the points number in the phase space, $N=n-\left(m-1\right)\tau$. 2.2. Denoising based on wavelet analysis As a result of the influence of measurement devices and the accuracy of signal acquisition, the desired signal of the cutting load is often disturbed by noise. Consequently, the internal dynamics of the cutting system may be masked by the unpredictability and destructive effect of noise. Therefore, noise must be reduced to reveal the internal characteristics of the cutting load. Numerous methods have been developed recently for signal denoising, such as filtering, shadow theorem, singular spectrum analysis, and wavelet analysis [12-13]. Among these methods, wavelet analysis is considered as the most suitable for signal denoising [14]. Denoising analysis is generally conducted in either the time or frequency domain, and thus, wavelet analysis can be conducted in both domains, which helps in processing random and non-stationary time series to achieve a high signal-to-noise ratio. For a discrete signal $x\left(i\right)$ with a family of shifted and scaled wavelets associated with $\phi \left(t\right)$, the discrete wavelet transform is given as [15]: $DW{T}_{\phi }f\left(g,k\right)=\frac{1}{\sqrt{{a}_{o}^{g}}}\sum _{i}x\left(i\right)\phi \left(\frac{k-i{b}_{o}{a}_{o}^{g}}{{a}_{o}^{g}}\right).$ The mother wavelet in Equation (2) is discretely dilated by the scale parameter ${a}_{o}^{g}$ and is translated by the translation parameter $i{b}_{o}{a}_{o}^{g}$, where ${a}_{o}$ and ${b}_{o}$ are fixed values, with ${a}_{o}>1$ and ${b}_{o}>0$, whereas $g$ and $i$ are positive integer values [16]. 2.3. Delay time and embedded dimension Delay time is an important parameter for phase-space reconstruction. The trajectories in reconstructed phase space gather at the same place if the delay time was too short because of the influence of the acquisition instrument and interference noise on the cutting load time series. Moreover, the shortest time interval for capturing the cutting load information cannot be covered. If the delay time is too long, the information cannot be captured because of the violent fluctuations in the cutting system. Several methods, such as the autocorrelation and mutual information methods can be used to estimate the delay time. The autocorrelation method is preferred for its simplicity and maturity in estimating delay time. For a 1D discrete time series, the autocorrelation function of delay time $j \tau$ can be expressed as [17]: ${R}_{xx}\left(j\tau \right)=\frac{1}{N}\sum _{t=1}^{N-1}x\left(t\right)x\left(t+j\tau \right),\tau =1,2,\dots .$ Given $j$, ratio of the autocorrelation function with delay time $j\tau$ to that of the initial value is solved. $\tau$ can be universally considered as delay time when ${R}_{xx}\left(j\tau \right)/ {R}_{xx}\left(j\right)$ decreases to $1-1/e$ or if it reaches the first minimal value [18]. Embedded dimension is another important parameter for phase-space reconstruction. The trajectories in the reconstructed phase space fold if the embedded dimension is too small. If the embedded dimension is too large, then the restructured phase space increases the calculated amount, and noise may be dominant. Several methods can be used to estimate embedded dimension such as the correlation integral, nearest neighbour and odd value decomposition methods. This study uses the correlation integral method because of the high density of points in the phase space. Given $r>$0 as threshold, the number of points according to $‖{x}_{i}-{x}_{j}‖<r$ is calculated. The ratio of the calculated points to that of the total points is the correlation integral, which can be expressed as ${C}_{N}\left(r\right)=\frac{2}{N\left(N-1\right)}\sum _{i=1}^{N}\sum _{j=i+1}^{N}H\left(r-‖{x}_{i}-{x}_{j}‖\right),$ where $H\left(*\right)$ represents the Heavside function $H\left(>0\right)=1$, $H\left(\le 0\right)=0$. The threshold $r$ is set to an appropriate value, and the correlation dimension $D$ of the time series can be calculated approximately as: 2.4. The Lyapunov exponent The power spectrum aims to reveal the frequency characteristics of the time series in the time domain or frequency domain as well as to distinguish periodic, quasi-periodic, and random signals. Despite its intuitionism and easiness, the power spectrum cannot readily distinguish the large periodic dynamic system or chaos system. The Lyapunov exponent is used to describe the diffusion or convergence rate of the phase-space trajectory of chaotic time series with an exponential law. The chaos system is confirmed by a positive Lyapunov exponent. This exponent is also widely used to identify the chaotic characteristic of time series. The Wolf, Jacobian, small data methods can be used to calculate the maximum Lyapunov exponent at present. The small data method has several advantages, including its high-speed calculation and low sensibility to embedded dimension, delay time and data amount. For phase-space reconstruction, the nearest neighbor of each point can be found in phase space by using the following equation [20, 21]: where $t$ is equal to $1,2,\dots ,N$ and $te \stackrel{^}{t}$; $p$ is the average period of the cutting load time series, that can be set to the reciprocal of the energy spectrum frequency; $X\left(t \right)$ is the state point of the reconstructed phase space; ${d}_{t}\left(0\right)$ is the distance between one point and the nearest point during the initial time. Suppose that an exponent diverging rate $\lambda$ is present between reference the point $X\left(t\right)$ and its nearest point $X\left(\stackrel{^}{t}\right)$ in the basic orbit, then it can be calculated as follows: ${d}_{t}\left(i\right)={d}_{t}\left(0\right){e}^{\lambda \left(i\mathrm{\Delta }t\right)}\to \text{ln}{d}_{t}\left(i\right)=\text{ln}{d}_{t}\left(0\right)+\lambda \left(i\text{Δ}t\right)\to y\left(i\ where $\mathrm{\Delta }t$ is the sample period; ${d}_{t}\left(i\right)$ is the distance of the nearest points of $t$ group in the basic orbit with the elapsed time $i\mathrm{\Delta }t$; $〈\text{ln} {d}_{t}\left(i\right)〉$ is the average value of all about $t$. 2.5. Fractal dimension The strange attractor generally has a self-similar structure in the phase space, and the chaotic time series also has a statistical similarity to the fractal time scales [22]. Fractal dimension is an important parameter in describing the complexity of a system. The load time series can be regarded as an open curve in two dimensions, and its outline has a fractal feature. The fractal dimension can be used to reflect the geometric feature to analyze the complexity and irregularity levels. Topological, capacity, similarity, and box dimensions are considered as fractal dimensions. The box dimension method is a simple and mature method technique for estimating the fractal dimension. The number $N\left(l\right)$ of boxes with side length $l$ covers the curve of the time series $F$, which follows the fractal distribution with a different side length, as follows [23]: $N\left(l\right){l}^{{D}_{b}}=A\left(Constant\right)\to \text{log}2\left(N\left(l\right)\right)=-{D}_{b}\text{log}2\left(l\right)+\text{log}2\left(A\right).$ According to Equation (8), fractal dimension is equal to the slope of $\text{log}2\left(N\left(l\right)\right)$ to $\text{log}2\left(l\right)$. 3. Experiment 3.1. Cutting load signal acquisition The signal acquisition system of rock cutting is shown in Fig. 1. The system mainly included a cutting device, a fixed beam for attaching strain gages, a signal acquisition instrument, a computer to process signals, and other assistive devices. In the cutting device, the cutter would act on the rock, and its cutting speed and depth could be adjusted. The strain gage was attached to the fixed beam and not directly to the cutter to adjust cutting parameters conveniently. The INV306U instrument, which was developed by Beijing Dongfang Vibration And Noise Technology Research Institute, was used to collect the signals to achieve voltage signal acquisition and amplification by using different types of strain gage bridges. The sample frequency was set to 1024 Hz to obtain more comprehensive information. The signal was processed through a personal computer with intelligent data acquisition and software DASP-V10 software. A full-bridge circuit of strain gages was used to collect voltage signal [24]. The changes in voltage signal were eliminated through beam bending when the beam torque was used to calculate the cutting load. Fig. 1The signal acquisition system: (a) a cutting device; (b) a fixed beam and strain gage bridge; (c) a signal acquisition instrument; (d) a computer to process signals 3.2. Experimental phenomena In rock cutting process, the time series of a typical load signal was collected as shown in Fig. 2. The extrusion of the cutter on the rock at the initial stage produced a crushing zone, and the cutting load increased nonlinearly along with penetration displacement. The cutting load continued to increase after the crushing zone was formed, and several small peaks exhibited that a number of local breakings would occur before the peak cutting load could be achieved. The rock cutting load instantaneously decreased to a small value after the peak cutting force was achieved, thus indicating that a large rock fragment was formed because of the extrusion of the cutter on the rock. The aforementioned phenomena are consistent with the rock cutting theory or the experimental results [1, 25-26]. The formation of crushing zone, local breakings, and the avalanche of large fragments alternately occurred, which contributed to the appearance of several smaller pecks between two adjacent large peaks. The aperiodicity of the cutting load amplitude and wavelength confirmed the discreteness and irregularity of rock cutting and breaking. Fig. 2Cutting load time series 4. Results 4.1. Chaos characteristics Power spectrum analysis can be used to understand the characteristics of a time series, such as periodic, quasi-periodic and non-periodic signals. The cutting load time series was translated by fast Fourier transform in the frequency domain [27], and its power spectrum is shown in Fig. 3. The power spectrum of the periodic signal exhibited discreteness; however, that of the cutting load exhibited a continuous broadband, non-dominant frequency, and a wide-band noise. Therefore, the cutting load time series exhibited chaotic characteristics. Given that such features were obtained outwardly, phase-space reconstruction and the maximum Lyapunov exponent were still necessary to identify the characteristics of cutting system. The phase space of the time series was restructured using Equation (1), as shown in Fig. 4. The trajectories were disorganized and unsystematic in the phase space, and exhibited pseudo-randomness. Therefore, the interference noise should be reduced to prevent the time series from being influenced by the unpredictability and destructiveness of noise. Fig. 3Power spectrum of cutting load Fig. 4Reconstructed phase space In the present study, the principle of denoising was adopted based on a multi-resolution signal decomposition analysis. The rock cutting signal was processed through multi-resolution signal decomposition, and was decomposed up to level 8 by using “discrete Meyer” as a mother wavelet. After inspecting the components, the signal was denoised by using detail component 7 and 8. The cutting load signal of denoising is shown in Fig. 5 whereas the reconstructed phase space is shown in Fig. 6. The trajectories of the reconstructed phase space were free from repetition, and enfoldment, and were concentrated at a certain region. Moreover, the reconstructed phase space of denoising exhibited an unambiguous strange attractor, which confirmed the chaotic dynamic characteristic of the cutting load. The denoising of the time series by wavelet analysis was shown to improve the deterministic components of the cutting system. Fig. 5Cutting load signal of denoising The ratio of the autocorrelation function with delay time $j\tau$ to that of the initial value could be obtained using Equation (3), as shown in Fig. 7. The delay time was equal to 10 when ${R}_{xx}\ left(j\tau \right)/{R}_{xx}\left(j\right)$ reached the first minimal value, and to 100 when ${R}_{xx}\left(j\tau \right)/{R}_{xx}\left(j\right)$ decreased to $1-1/e$. In the presented study, 100 was selected as the delay time to avoid the loss of dynamic information of the cutting system. Fig. 6Phase space of denoising signal The correlation dimension saturated to a certain value with increasing embedded dimension if there was a strange attractor in the restricted phase space. The saturation value is often regarded as the correlation dimension of the time series. The variation of the correlation dimension with increasing embedded dimension is shown in Fig. 8. According to Takens theorem, the embedded dimension of the cutting system should be equal to 8. The Lyapunov exponent of the time series can be estimated according to the slopes of the straight lines based on Equation (7). The maximum Lyapunov exponent was equal to the slope of a linear area of $y\left(i\right)~i$ according to least-square method. According to $y\left(i\right)~i$ and $y\left(i\right)-y\left(i-1\right)~i$ (Fig. 9), $y\left(i\right)-y\left(i-1\right)~i$ were equal basically equal when $i$ was within the range of 6 to 12. The slope of 0.065 within this range was regarded as the maximum Lyapunov exponent of the cutting load time series by the linear least-square regression method, which also demonstrated the chaotic dynamic characteristic of the cutting system. Fig. 8Correlation dimension 4.2. Factors that influence fractal dimension 4.2.1. Cutting depth According to Equation (8), the double logarithmic curves with the cutting depths of 15 mm and 20 mm are shown in Fig. 10. The slopes of these curves were 1.57 and 1.46 respectively, via the linear least-square regression method. The correlation coefficients of the fitting straight lines were all higher than 0.99, which reflected the fractal feature of the cutting load time series. Therefore, cutting depth had no influence on the fractal distribution of the load signal. A difference was observed in the fractal dimension when the cutting depth was different. The fractal dimension decreased with increasing cutting depth, thus indicating an increase in the self-similarity of cutting load when the cutting depth was reduced. Based on the size and number of rock fragments, a small fractal dimension produced large size and small number of rock fragments, thus indicating that rock was not seriously broken. By contrast, a big fractal dimension produced large number and small size of rock fragments. Therefore, the fractal dimension of the cutting load time series could be used to evaluate the extent of rock cutting or crushing. Fig. 10Fractal dimension of cutting load with different cutting depth 4.2.2. Cutting velocity The double logarithmic curves with the cutting velocity of 0.5 m/min and 4 m/min are shown in Fig. 11. The correlation coefficients were all higher than 0.99, thus indicating that the cutting velocity did not affect the fractal feature of the time series. The fractal dimension was equal to 1.55 at the cutting velocity of 0.5 m/min, and to 1.48 at the cutting velocity was 4 m/min. The time series with low cutting velocity had high self-similarity, and was capable of filling the phase space of the cutting system. Based on energy dissipation, a bigg fractal dimension with a low cutting velocity highly dissipated the energy of cutting system. With regard to size distribution, a low cutting velocity produced small fragments and big fracture areas, thus indicating that the energy required to generate a new fracture surface is big based on Griffith theory [28]. A low velocity could break down big fragments more than twice, and a high velocity might cause big fragments fly off from the motion path of cutter, thus preventing them from being broken down further. Therefore, the fractal dimension of the time series with a high velocity was smaller than that of the time series with a low velocity. Based on theory of rock crushing, the extent of rock crushing is high when the impact velocity is also high. However, the opposite is true regard to cutting velocity, as mentioned earlier. The present phenomenon could probably be attributed to rock fragments being crushed more than twice by the cutter during rock cutting. Therefore, the influence of cutting velocity on rock crushing and cutting performance should be investigated further. Fig. 11Fractal dimension of cutting load with different cutting velocity 4.2.3. Types of assisted waterjet High-pressure waterjet has been widely used in rock cutting, however, the fracture mechanism of the involved process remains unclear because of the opacity and damage instantaneity of the rock. To determine the rock fracture mechanism with different types of assisted waterjet (Fig. 1), this study investigates the fractal dimension of the cutting load time series. The double logarithmic curves under the waterjet front of the cutter or behind of the cutter are shown in Fig. 12. The correlation coefficients were higher than 0.99, thus indicating that such types of the assisted waterjet had an insignificant on fractal feature. The fractal dimension of the cutting load changed remarkably when the fracture mechanism of the rock was changed [3]. However, the difference in fractal dimension of the two different types was small, thus indicating that the fracture mechanism of rock cutting was unchanged by these types of waterjet. The waterjet in front of the cutter could decrease cutting load because of the the damage on the caused by the impact of the waterjet [29], however, the waterjet is used to clear fragments and to lubricate the cutter when it is behind of the cutter. Fig. 12Fractal dimension of cutting load with different assisted waterjet types (a) Waterjet front of the cutter (b) Waterjet behind of the cutter 5. Conclusions This study investigates the nonlinear dynamic characteristics of the cutting load time series based on chaos and fractal theories. The following conclusions are made based on our investigation: (1) The crushing zone, local breaking, and the avalanche of rock fragments occur alternately, thus producing several small pecks between the two large adjacent peaks. The change in the aperiodicity of the amplitude and the wavelength of the cutting load signal reflects the discreteness and irregularity in rock cutting. (2) The power spectrum of the cutting load time series reflects a continuous, broadband, nondominant frequency, and a wide-band noise. The time series of denoising via the wavelet method can improve the deterministic components of the cutting system. The reconstructed phase space of denoising produces an unambiguous and self-organizing strange attractor. The maximum Lyapunov exponent of the load time series is equal to 0.065 according to the small data method. These results confirm the chaotic dynamic characteristics of the rock cutting time series. (3) The difference in cutting depth, cutting velocity, and types of assisted waterjet all cannot change the fracture mechanism in rock cutting. The fractal dimension has a negatively correlated to the size of rock fragments, and can be used as an evaluation index for the extent of rock cutting or crushing. The waterjet in front of the cutter can decrease the cutting load because of the damage on rock by waterjet, however, the waterjet is used to clear fragments and to lubricate the cutter when it is behind of the cutter. • Rojek J., Onate E., Labra C., Kargl H. Discrete element simulation of rock cutting. International Journal of Rock Mechanics & Mining Sciences, Vol. 48, Issue 6, 2011, p. 996-1010. • Liu C. S., Li D. G. Mathematical model of cutting force based on experimental conditions of single pick cutting. Journal of China Coal Society, Vol. 36, Issue 9, 2011, p. 1565-1569. • Duan X., Yu L., Cheng D. Z. Chaotic dynamical features of rock breaking mechanism with self-controlled hydro-pick. Chinese Journal of Rock Mechanics and Engineering, Vol. 14, 1995, p. 484-491. • Liu C. S. Fractal characteristic study of shearer cutter cutting resistance curves. Journal of China Coal Society, Vol. 29, Issue 1, 2004, p. 115-118. • Nie B. S., He X. Q., Liu F. B., et al. Chaotic characteristics of electromagnetic emission signals during deformation and fracture of coal. Mining Science and Technology, Vol. 19, Issue 2, 2009, p. 189-193. • Wu Z. X., Liu X. X., Liang Z. Z., et al. Experimental study of fractal dimension of AE serials of different rocks under uniaxial compression. Rock and Soil Mechanics, Vol. 33, Issue 12, 2012, p. • Wang C., Xu J. K., Zhao X. X., et al. Fractal characteristics and its application in electromagnetic radiation signals during fracturing of coal or rock. International Journal of Mining Science and Technology, Vol. 22, Issue 2, 2012, p. 255-258. • Packard N. H., Crutchfield J. P., Farmer J. D., et al. Geometry from a time series. Physical Review Letters, Vol. 45, Issue 9, 1980, p. 712-715. • Ragulskis M., Lukoseviciute K. Non-uniform attractor embedding for time series forecasting by fuzzy interface systems. Neurocomputing, Vol. 72, 2010, p. 2618-2626. • Lukoseviciute K., Ragulskis M. Evolutionary algorithms for the selection of time lags for time series forecasting by fuzzy interface systems. Neurocomputing, Vol. 73, 2009, p. 2077-2088. • Takens F. Detecting strange attractors in turbulence in Lecture Notes in Mathematica. Springer-Verlag, Berlin, 1981. • Hammel S. M. A noise reduction method for chaotic system. Physics Letters A., Vol. 148, Issue 8-9, 1990, p. 421-428. • Schreiner T. Extremely simple nonlinear noise-reduction method. Physic Reviewer E., Vol. 47, Issue 4, 1993, p. 2401-2404. • Satish L., Nazneen B. Wavelet-based de-noising of partial discharge signals buried in excessive noise and interference. IEEE Trans Dielectr Electr Insul., Vol. 10, Issue 2, 2003, p. 354-367. • Young R. K. Wavelet theory and its applications. Pennsylvania State University, Kluwer Academic Publishers, 1993. • Elkalashy N. I. Modeling and detection of high impedance arcing fault in medium voltage networks. In: Doctoral dissertation, Helsinki University of Technology (TKK), Finland, 2007. • Zhang W. C., Tan S. C., Gao P. Z. Chaotic forecasting of natural circulation flow instabilities under rolling motion based on Lyapunov exponents. Acta Physica Sinica, Vol. 62, Issue 6, 2013, p. • Liu B. Z., Peng J. H. Nonlinear dynamics. Higher Education Press, Beijing, China, 2004. • Grassberger P. An optimized box-assisted algorithm for fractal dimension. Physical Letter A., Vol. 148, Issue 2, 1990, p. 63-68. • Zhang Y. M., Qi W. G. Chaotic property analysis and predication model study for heating load time series. Acta Physica Sinica, Vol. 60, Issue 10, 2011, p. 10058. • Zhang X. Q., Liang J. Chaotic characteristics analysis and prediction model study on wind power time series. Acta Physica Sinica, Vol. 61, Issue 9, 2012, p. 19050. • Mandelbrot B. B. The fractal geometry of nature. Freeman, San Francisco, USA, 1982. • Shi M. H., Li X. C., Chen Y. P. Determination of effective thermal conductivity for polyurethane foam by use of fractal method. Science in China Series E: Technological Sciences, Vol. 49, Issue 4, 2006, p. 468-475. • Stefanescu D. M. Handbook of force transducers–principles and components. Springer, Berlin and Heidelberg, 2011. • Nishimatsu Y. The mechanics of rock cutting. International Journal of Rock Mechanics and Mining Science, Vol. 9, Issue 2, 1972, p. 261-270. • Ranman K. E. A model describing rock cutting with conical picks. Rock Mechanics and Rock Engineering, Vol. 18, Issue 2, 1985, p. 131-140. • Duhamel P., Vetterli M. Fast Fourier transforms: a tutorial review and a state of the art. Signal Processing, Vol. 19, Issue 4, 1990, p. 259-299. • Jiang H. X., Du C. L., Liu S. Y. The effects of impact velocity on energy and size distribution of rock crushing. Journal of China Coal Society, Vol. 38, Issue 4, 2013, p. 604-607. • Jiang H. X., Du C. L., Liu S. Y., Xu R. Experimental research of influence factors on combined breaking rock with water jet and mechanical tool. China Mechanical Engineering, Vol. 24, Issue 8, 2013, p. 1013-1017. About this article 15 February 2014 rock cutting time series fractal dimension The authors would like to acknowledge the Foundation of National 863 Plan of China (2012AA062104), the National Natural Science Foundation of China (51375478), the project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (SZBF2011-6-B35), and the Graduate Education Innovation Project of Jiangsu Province (CXLX12_0948). Copyright © 2014 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/14721","timestamp":"2024-11-14T04:01:58Z","content_type":"text/html","content_length":"153045","record_id":"<urn:uuid:b5536772-c7f6-4db4-919c-29980e103b07>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00865.warc.gz"}
GARCH Models Consider the series , which follows the GARCH process. The conditional distribution of the series Y for time t is written where denotes all available information at time . The conditional variance is The GARCH model reduces to the ARCH process when . At least one of the ARCH parameters must be nonzero (). The GARCH regression model can be written where . In addition, you can consider the model with disturbances following an autoregressive process and with the GARCH errors. The AR-GARCH regression model is denoted GARCH Estimation with Nelson-Cao Inequality Constraints The GARCH model is written in ARCH() form as where is a backshift operator. Therefore, if and . Assume that the roots of the following polynomial equation are inside the unit circle: where and Z is a complex scalar. and do not share common factors. Under these conditions, , , and these coefficients of the ARCH() process are well defined. Define . The coefficient is written where for and for . Nelson and Cao (1992) proposed the finite inequality constraints for GARCH and GARCH cases. However, it is not straightforward to derive the finite inequality constraints for the general GARCH model. For the GARCH model, the nonlinear inequality constraints are For the GARCH model, the nonlinear inequality constraints are where and are the roots of . For the GARCH model with , only nonlinear inequality constraints ( for to max()) are imposed, together with the in-sample positivity constraints of the conditional variance . Using the HETERO Statement with GARCH Models The HETERO statement can be combined with the GARCH= option in the MODEL statement to include input variables in the GARCH conditional variance model. For example, the GARCH variance model with two dummy input variables D1 and D2 is The following statements estimate this GARCH model: proc autoreg data=one; model y = x z / garch=(p=1,q=1); hetero d1 d2; The parameters for the variables D1 and D2 can be constrained using the COEF= option. For example, the constraints are imposed by the following statements: proc autoreg data=one; model y = x z / garch=(p=1,q=1); hetero d1 d2 / coef=unit; Limitations of GARCH and Heteroscedasticity Specifications When you specify both the GARCH= option and the HETERO statement, the GARCH=(TYPE=EXP) option is not valid. The COVEST= option is not applicable to the EGARCH model. IGARCH and Stationary GARCH Model The condition implies that the GARCH process is weakly stationary since the mean, variance, and autocovariance are finite and constant over time. When the GARCH process is stationary, the unconditional variance of is computed as where and is the GARCH conditional variance. Sometimes the multistep forecasts of the variance do not approach the unconditional variance when the model is integrated in variance; that is, . The unconditional variance for the IGARCH model does not exist. However, it is interesting that the IGARCH model can be strongly stationary even though it is not weakly stationary. Refer to Nelson ( 1990) for details. The EGARCH model was proposed by Nelson (1991). Nelson and Cao (1992) argue that the nonnegativity constraints in the linear GARCH model are too restrictive. The GARCH model imposes the nonnegative constraints on the parameters, and , while there are no restrictions on these parameters in the EGARCH model. In the EGARCH model, the conditional variance, , is an asymmetric function of lagged disturbances : The coefficient of the second term in is set to be 1 (=1) in our formulation. Note that if . The properties of the EGARCH model are summarized as follows: • The function is linear in with slope coefficient if is positive while is linear in with slope coefficient if is negative. • Suppose that . Large innovations increase the conditional variance if and decrease the conditional variance if . • Suppose that . The innovation in variance, , is positive if the innovations are less than . Therefore, the negative innovations in returns, , cause the innovation to the conditional variance to be positive if is much less than 1. QGARCH, TGARCH, and PGARCH Models As shown in many empirical studies, positive and negative innovations have different impacts on future volatility. There is a long list of variations of GARCH models that consider the asymmetricity. Three typical variations are the quadratic GARCH (QGARCH) model (Engle and Ng, 1993), the threshold GARCH (TGARCH) model (Glosten, Jaganathan, and Runkle, 1993; Zakoian, 1994), and the power GARCH (PGARCH) model (Ding, Granger, and Engle, 1993). For more details about the asymmetric GARCH models, see Engle and Ng (1993). In the QGARCH model, the lagged errors’ centers are shifted from zero to some constant values: In the TGARCH model, there is an extra slope coefficient for each lagged squared error, where the indicator function is one if ; otherwise, zero. The PGARCH model not only considers the asymmetric effect, but also provides another way to model the long memory property in the volatility, where and . Note that the implemented TGARCH model is also well known as GJR-GARCH (Glosten, Jaganathan, and Runkle, 1993), which is similar to the threshold GARCH model proposed by Zakoian (1994) but not exactly same. In Zakoian’s model, the conditional standard deviation is a linear function of the past values of the white noise. Zakoian’s version can be regarded as a special case of PGARCH model when . The GARCH-M model has the added regressor that is the conditional standard deviation: where follows the ARCH or GARCH process. Maximum Likelihood Estimation The family of GARCH models are estimated using the maximum likelihood method. The log-likelihood function is computed from the product of all conditional densities of the prediction errors. When is assumed to have a standard normal distribution (), the log-likelihood function is given by where and is the conditional variance. When the GARCH-M model is estimated, . When there are no regressors, the residuals are denoted as or . If has the standardized Student’s t distribution, the log-likelihood function for the conditional t distribution is where is the gamma function and is the degree of freedom (). Under the conditional t distribution, the additional parameter is estimated. The log-likelihood function for the conditional t distribution converges to the log-likelihood function of the conditional normal GARCH model as . The likelihood function is maximized via either the dual quasi-Newton or the trust region algorithm. The default is the dual quasi-Newton algorithm. The starting values for the regression parameters are obtained from the OLS estimates. When there are autoregressive parameters in the model, the initial values are obtained from the Yule-Walker estimates. The starting value is used for the GARCH process parameters. The variance-covariance matrix is computed using the Hessian matrix. The dual quasi-Newton method approximates the Hessian matrix while the quasi-Newton method gets an approximation of the inverse of Hessian. The trust region method uses the Hessian matrix obtained using numerical differentiation. When there are active constraints, that is, , the variance-covariance matrix is given by where and . Therefore, the variance-covariance matrix without active constraints reduces to .
{"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_autoreg_details12.htm","timestamp":"2024-11-05T07:48:35Z","content_type":"application/xhtml+xml","content_length":"76493","record_id":"<urn:uuid:5e0c4e31-0133-413d-a655-d7a0505a388f>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00848.warc.gz"}
AD9689 PN9 and PN23 I wanted to ask what are the equations used to generate the PN9 and PN23 sequence of the ADC test patterns. In the manual, it is said that the first values are: PN9: 0x0092 -- 0x125B, 0x3C9A, 0x2660, 0x0c65, 0x0697 PN23: 0x3AFF -- 0x3FD7, 0x0002, 0x26E0, 0x0A3D, 0x1CA6 But I am unable to generate those using the typical LFSR shifted by one and X-ored. I believe there is a more complex equation behind, right? Ok, I think the issue was that I was assuming that the LFSR was the random number itself. The idea is to get the last bit of fourteen consecutive LFSRegisters, and tie them together into a 14-bit word, right? All right, I have managed to create a C++ code to reproduce these numbers. Here is what I get por PN9 (notice that the seed given in the manual is big endian, so you need to swap it, but it does not make any difference): 125b 3c9a 2660 c65 697 3d16 eb2 33c7 3741 2b6e 305a 3eaa 14a 3cbb 2073 293d 1d44 219c bdb d0e 3c3f 383d 3c5c 3209 13b4 1e7c 362a 11c6 3571 c44 84 184e 1561 2f4d 3228 15a7 which matches quite well the manual, table 39. However, when I measure, I see totally different sequence (decimal): 3492, 9061, 14751, 5018, 6504, 8937, 4429, 11320, 10430, 13457, 12197, 8533, 7861, 9028, 16268, 14018, 699, 15971, 5156, 4849, 9152, 10178, 9123, 11766, 3147, 387, 10709, 3641, 10894, 5051, 8059, 1969, 2718, 12466, 11735, 2648, 9435, 12806, 4787, 12263, 11634, 5125, 5346, 1688, 15377, 16169, 10399, 12930, 2813, 14982, 575, 9773, 325, 15804, 4584, 12901, 6279, 15, 263, 12699, 7000, 1475, 8595, 2780, 15509, 10087, 15870, 7631, 14549, 12065, 14619, 3028, 3081 Why are the values not matching? Shouldn't the manual be updated? Maybe related: For PN23 (notice that the seed given in the manual is big endian, so you need to swap it, and it makes a big change if you do not swap it), my code predicts now for the first values: 3aff 3fd7 2 26e0 a3d 1ca6 which again matches well the manual, table 39, however when I measure I see a different sequence (decimal): 8232, 8189, 14623, 5570, ... How are these related with data words from the manual ?? DougI UmeshJ Any insights on this question? (I noticed that you found the solution for other ADC models). Thanks in advance. Ok, I finally figured out with the help of two colleagues. The issue was that there was a bug in my vendors' firmware which was causing the ADC data to be inverted. On top of that, there are several things that are not mentioned in the manual. Here a reminder of all tricks: For PN9: - pow(2,9) -1 = 511 is the period - Start with 0x92 as seed in a 9-bit register. It does not matter if you assume big endian or little endian, it is a palindrome. - Get the XOR 9 and 5, call it e.g. newBit - Create a new 9-bit register, which is the like the previous register shifting all bits to the left, and then bit0 = newBit. - Calculate again newBit, etc. - Finish the sequence whenever you reach the PN9 period, which is 511. You will have thus created 511 9-bit registers. - Then, self-duplicate this array of registers, namely as many times so that it is aligned with 14. (so that we can then divide the array in groups of 14). newArraySize = least_common_multiple(511, 14) = 1022 ; nCopies = newArraySize/originalArraySize = 2; so that 1022/14 = 73 exact groups or data output words. - Now, form the real 73 output words using groups of fourteen trans-consectuve bits, namely the highest bit (bit 8) of every consecutive 9-bit register that you in your extrapolated array. - For example, you will obtain 0x125b, 0x3c9a by grouping the first 28 highest bits in two output words. - Finally, configure your ADC to not invert the data (register 0x561) and to not use binary mode, but the default two's complement. Otherwise you would have to flip the last bit of your output words. Similar for PN23: - pow(2,23) -1 = 8388607 is the period - Start with 0x7FAE00 as seed, because manual is in big endian, see /cfs-file/__key/communityserver-discussions-components-files/426/AD9680_5F00_PN_5F00_test_5F00_modes.pdf - Get XOR 23 and 18 as newBit - Create a 23-bit register, etc. as before - newArraySize = lcm(8388607, 14) = 8388607*14, nCopies = 14 - etc. Attached is a spreadsheet illustrating the calculation. Great work Ferdymercury. Thanks for sharing your results on EngineerZone. Thank you for the clue to put it back to two's complement mode. This helped a lot. I am not sure I understand how many copiess or repeats of the table you nedd. I made a single 511 point table like this: #define NPTS_PRBS 511 int16_t prbs[NPTS_PRBS]; x = 0x125B; for (i=0; i<NPTS_PRBS; i++) prbs[i] = x; for (k=0; k<14; k++) x = (x<<1 | (x >> 8 & 1 ^ x >> 4 & 1)) & 0x3fff; For the first test word received, I search to find the first macth, then roll around the table using rpos = (rpos+1) % NPTS_PRBS You are welcome. Concerning the 'repeats' of the table, you do not really need it if you use the % operator to roll around. The 'repeat' is only relevant if you are interested in knowing after how many 'rolls' you will get the same 14-bit word you started with. For PN9, you will need to go twice through the table, as shown in my attachment with 511*2 rows, and you get a closed sequence 0x92, 0x125B, ... 0x13F6 that gets repeated on an on. The sequence length is 1022, that's why I copy-paste the table twice. nCopies = least-common-multiple of (2**N-1, 14)
{"url":"https://ez.analog.com/data_converters/high-speed_adcs/f/q-a/545335/ad9689-pn9-and-pn23","timestamp":"2024-11-11T03:32:52Z","content_type":"text/html","content_length":"261942","record_id":"<urn:uuid:5784fe40-3a20-466d-ad7a-2b1cf3d213d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00533.warc.gz"}
LM 4_6 Summary Collection 4.6Summary by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license. weight — the force of gravity on an object, equal to `mg` inertial frame — a frame of reference that is not accelerating, one in which Newton's first law is true noninertial frame — an accelerating frame of reference, in which Newton's first law is violated `F_W` — weight Other Notation net force — another way of saying “total force” Newton's first law of motion states that if all the forces acting on an object cancel each other out, then the object continues in the same state of motion. This is essentially a more refined version of Galileo's principle of inertia, which did not refer to a numerical scale of force. Newton's second law of motion allows the prediction of an object's acceleration given its mass and the total force on it, `a_(cm)=F`(total)/m. This is only the one-dimensional version of the law; the full-three dimensional treatment will come in chapter 8, Vectors. Without the vector techniques, we can still say that the situation remains unchanged by including an additional set of vectors that cancel among themselves, even if they are not in the direction of motion. Newton's laws of motion are only true in frames of reference that are not accelerating, known as inertial frames. Homework Problems `sqrt` A computerized answer check is available online. `int` A problem that requires calculus. `***` A difficult problem. 1. An object is observed to be moving at constant speed in a certain direction. Can you conclude that no forces are acting on it? Explain. [Based on a problem by Serway and Faughn.] 2. At low speeds, every car's acceleration is limited by traction, not by the engine's power. Suppose that at low speeds, a certain car is normally capable of an acceleration of `3 m"/"s^2`. If it is towing a trailer with half as much mass as the car itself, what acceleration can it achieve? [Based on a problem from PSSC Physics.] 3. (a) Let `T` be the maximum tension that an elevator's cable can withstand without breaking, i.e., the maximum force it can exert. If the motor is programmed to give the car an acceleration `a ( a> 0` is upward), what is the maximum mass that the car can have, including passengers, if the cable is not to break? `sqrt` (b) Interpret the equation you derived in the special cases of `a=0` and of a downward acceleration of magnitude `g`. (“Interpret” means to analyze the behavior of the equation, and connect that to reality, as in the self-check on page 139.) 4. A helicopter of mass `m` is taking off vertically. The only forces acting on it are the earth's gravitational force and the force, F[air], of the air pushing up on the propeller blades. (a) If the helicopter lifts off at t=0, what is its vertical speed at time t? (b) Check that the units of your answer to part a make sense. (c) Discuss how your answer to part a depends on all three variables, and show that it makes sense. That is, for each variable, discuss what would happen to the result if you changed it while keeping the other two variables constant. Would a bigger value give a smaller result, or a bigger result? Once you've figured out this mathematical relationship, show that it makes sense physically. (d) Plug numbers into your equation from part a, using m=2300 kg, F[air] = 27000 N, and t = 4.0 s. `sqrt` 5. In the 1964 Olympics in Tokyo, the best men's high jump was 2.18 m. Four years later in Mexico City, the gold medal in the same event was for a jump of 2.24 m. Because of Mexico City's altitude (2400 m), the acceleration of gravity there is lower than that in Tokyo by about `0.01 m"/"s^2`. Suppose a high-jumper has a mass of 72 kg. (a) Compare his mass and weight in the two locations. (b) Assume that he is able to jump with the same initial vertical velocity in both locations, and that all other conditions are the same except for gravity. How much higher should he be able to jump in Mexico City? `sqrt` (Actually, the reason for the big change between '64 and '68 was the introduction of the “Fosbury flop.”) 6. A blimp is initially at rest, hovering, when at `t=0` the pilot turns on the engine driving the propeller. The engine cannot instantly get the propeller going, but the propeller speeds up steadily. The steadily increasing force between the air and the propeller is given by the equation `F=kt`, where `k` is a constant. If the mass of the blimp is `m`, find its position as a function of time. (Assume that during the period of time you're dealing with, the blimp is not yet moving fast enough to cause a significant backward force due to air resistance.) `sqrt` `int` 7. (solution in the pdf version of the book) A car is accelerating forward along a straight road. If the force of the road on the car's wheels, pushing it forward, is a constant 3.0 kN, and the car's mass is 1000 kg, then how long will the car take to go from 20 m/s to 50 m/s? 8. Some garden shears are like a pair of scissors: one sharp blade slices past another. In the “anvil” type, however, a sharp blade presses against a flat one rather than going past it. A gardening book says that for people who are not very physically strong, the anvil type can make it easier to cut tough branches, because it concentrates the force on one side. Evaluate this claim based on Newton's laws. [Hint: Consider the forces acting on the branch, and the motion of the branch.] 9. A uranium atom deep in the earth spits out an alpha particle. An alpha particle is a fragment of an atom. This alpha particle has initial speed `v`, and travels a distance `d` before stopping in the earth. (a) Find the force, `F`, from the dirt that stopped the particle, in terms of `v,d`, and its mass, `m`. Don't plug in any numbers yet. Assume that the force was constant.(answer check available at (b) Show that your answer has the right units. (c) Discuss how your answer to part a depends on all three variables, and show that it makes sense. That is, for each variable, discuss what would happen to the result if you changed it while keeping the other two variables constant. Would a bigger value give a smaller result, or a bigger result? Once you've figured out this mathematical relationship, show that it makes sense physically. (d) Evaluate your result for m = 6.7×10^-27 kg, v = 2.0 × 10^4 km/s, and d=0.71 mm. `sqrt` 10. You are given a large sealed box, and are not allowed to open it. Which of the following experiments measure its mass, and which measure its weight? [Hint: Which experiments would give different results on the moon?] (a) Put it on a frozen lake, throw a rock at it, and see how fast it scoots away after being hit. (b) Drop it from a third-floor balcony, and measure how loud the sound is when it hits the ground. (c) As shown in the figure, connect it with a spring to the wall, and watch it vibrate. (solution in the pdf version of the book) 11. While escaping from the palace of the evil Martian emperor, Sally Spacehound jumps from a tower of height `h` down to the ground. Ordinarily the fall would be fatal, but she fires her blaster rifle straight down, producing an upward force of magnitude `F_B`. This force is insufficient to levitate her, but it does cancel out some of the force of gravity. During the time `t` that she is falling, Sally is unfortunately exposed to fire from the emperor's minions, and can't dodge their shots. Let m be her mass, and g the strength of gravity on Mars. (a) Find the time `t` in terms of the other variables. (b) Check the units of your answer to part a. (c) For sufficiently large values of `F_B`, your answer to part a becomes nonsense --- explain what's going on. `sqrt` 12. When I cook rice, some of the dry grains always stick to the measuring cup. To get them out, I turn the measuring cup upside-down and hit the “roof” with my hand so that the grains come off of the “ceiling.” (a) Explain why static friction is irrelevant here. (b) Explain why gravity is negligible. (c) Explain why hitting the cup works, and why its success depends on hitting the cup hard . At the turn of the 20th century, Samuel Langley engaged in a bitter rivalry with the Wright brothers to develop human flight. Langley's design used a catapult for launching. For safety, the catapult was built on the roof of a houseboat, so that any crash would be into the water. This design required reaching cruising speed within a fixed, short distance, so large accelerations were required, and the forces frequently damaged the craft, causing dangerous and embarrassing accidents. Langley achieved several uncrewed, unguided flights, but never succeeded with a human pilot. If the force of the catapult is fixed by the structural strength of the plane, and the distance for acceleration by the size of the houseboat, by what factor is the launch velocity reduced when the plane's 340 kg is augmented by the 60 kg mass of a small man? `sqrt` 14. The tires used in Formula 1 race cars can generate traction (i.e., force from the road) that is as much as 1.9 times greater than with the tires typically used in a passenger car. Suppose that we're trying to see how fast a car can cover a fixed distance starting from rest, and traction is the limiting factor. By what factor is this time reduced when switching from ordinary tires to Formula 1 tires? `sqrt` . In the figure, the rock climber has finished the climb, and his partner is lowering him back down to the ground at approximately constant speed. The following is a student's analysis of the forces acting on the climber . The arrows give the directions of the forces. ┃ force of the earth’s gravity,`downarrow` ┃ ┃ force from the partner’s hands,`uparrow` ┃ ┃ force from the rope,`uparrow` ┃ The student says that since the climber is moving down, the sum of the two upward forces must be slightly less than the downward force of gravity. Correct all mistakes in the above analysis. (solution in the pdf version of the book) Exercise 4: Force and motion • 1-meter pieces of butcher paper • masses to put on top of the blocks to increase friction • spring scales (preferably calibrated in Newtons) Suppose a person pushes a crate, sliding it across the floor at a certain speed, and then repeats the same thing but at a higher speed. This is essentially the situation you will act out in this exercise. What do you think is different about her force on the crate in the two situations? Discuss this with your group and write down your hypothesis: 1. First you will measure the amount of friction between the wood block and the butcher paper when the wood and paper surfaces are slipping over each other. The idea is to attach a spring scale to the block and then slide the butcher paper under the block while using the scale to keep the block from moving with it. Depending on the amount of force your spring scale was designed to measure, you may need to put an extra mass on top of the block in order to increase the amount of friction. It is a good idea to use long piece of string to attach the block to the spring scale, since otherwise one tends to pull at an angle instead of directly horizontally. First measure the amount of friction force when sliding the butcher paper as slowly as possible: -------------------------- Now measure the amount of friction force at a significantly higher speed, say 1 meter per second. (If you try to go too fast, the motion is jerky, and it is impossible to get an accurate reading.) Discuss your results. Why are we justified in assuming that the string's force on the block (i.e., the scale reading) is the same amount as the paper's frictional force on the block? 2. Now try the same thing but with the block moving and the paper standing still. Try two different speeds. Do your results agree with your original hypothesis? If not, discuss what's going on. How does the block “know” how fast to go? 4.6 Summary by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license.
{"url":"https://www.vcalc.com/wiki/vCollections/LM+4_6+Summary+Collection","timestamp":"2024-11-13T07:54:03Z","content_type":"text/html","content_length":"61790","record_id":"<urn:uuid:1535429a-2019-44a0-b520-bdb5dae256f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00716.warc.gz"}
Shape palette from Cleveland Shape palette from Cleveland "Elements of Graphing Data" (discrete). Shape palettes for overlapping and non-overlapping points. In the Elements of Graphing Data, W.S. Cleveland suggests two shape palettes for scatter plots: one for overlapping data and another for non-overlapping data. The symbols for overlapping data relies on pattern discrimination, while the symbols for non-overlapping data vary the amount of fill. This palette attempts to create these palettes. However, I found that these were hard to replicate. Using the R shapes and unicode fonts: the symbols can vary in size, they are dependent of the fonts used, and there does not exist a unicode symbol for a circle with a vertical line. If someone can improve this palette, please let me know. Following Tremmel (1995), I replace the circle with a vertical line with an encircled plus sign. The palette cleveland_shape_pal() supports up to five values. Cleveland WS. The Elements of Graphing Data. Revised Edition. Hobart Press, Summit, NJ, 1994, pp. 154-164, 234-239. Tremmel, Lothar, (1995) "The Visual Separability of Plotting Symbols in Scatterplots", Journal of Computational and Graphical Statistics, https://www.jstor.org/stable/1390760
{"url":"http://jrnold.github.io/ggthemes/reference/cleveland_shape_pal.html","timestamp":"2024-11-07T03:00:59Z","content_type":"text/html","content_length":"12713","record_id":"<urn:uuid:9c145ccc-debf-46d3-9730-dc1633bd8751>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00632.warc.gz"}
[SOLVED] When I double bit rate I do not see a difference in I_spectrum and Q_spectrum, but why? ~ Signal Processing ~ TransWikia.com Do you know how ADS is plotting the spectrum? Plotting the spectrum without doing some kind of normalization will give you a higher magnitude. Once you determine the size of the FFT, normalizing by the length will give you the same magnitude no mater what sampling rate you choose. For example let's take two rectangular signals, one sampled at 1 MHz and the other at 2 MHz. Below are their spectrums without normalization: Since the bottom one is sampled twice as fast, it eventually produces an FFT size that is twice as long, hence the 6 dB increase in the peak. Now compare this to the same exact signals, but now their magnitudes are normalized by their respective FFT sizes: Now you can see that the peaks are the same magnitude. You can play with normalizing all day long to fit your need. It is the shape of the spectrum that is usually most important. Here is some quick MATLAB code so you can maybe try it yourself and play around a bit. %% Signal generation and FFT % Sampling rates fs1 = 1e6; fs2 = 2e6; % Rectangular pulse signals t1 = 0:1/fs1:1e-5; t2 = 0:1/fs2:1e-5; pulseSignal1 = ones(1, numel(t1)); pulseSignal2 = ones(1, numel(t2)); % FFT setup nfft1 = 100*numel(t1); f1 = fs1.*(-nfft1/2:nfft1/2-1)/nfft1; nfft2 = 100*numel(t2); f2 = fs2.*(-nfft2/2:nfft2/2-1)/nfft2; %% Without Normalization subplot(2, 1, 1); plot(f1./1e6, 20*log10(abs(fftshift(fft(pulseSignal1, nfft1))))); xlabel("Frequency (MHz"); ylabel("Magnituide (dB)"); legend("F_s = 1 MHz"); ylim([-40 50]); subplot(2, 1, 2); plot(f2./1e6, 20*log10(abs(fftshift(fft(pulseSignal2, nfft2))))); xlabel("Frequency (MHz"); ylabel("Magnituide (dB)"); legend("F_s = 2 MHz"); ylim([-40 50]); %% With Normalization subplot(2, 1, 1); plot(f1./1e6, 20*log10(abs(fftshift(fft(pulseSignal1, nfft1)./nfft1)))); xlabel("Frequency (MHz"); ylabel("Magnituide (dB)"); legend("F_s = 1 MHz"); ylim([-80 -10]); subplot(2, 1, 2); plot(f2./1e6, 20*log10(abs(fftshift(fft(pulseSignal2, nfft2)./nfft2)))); xlabel("Frequency (MHz"); ylabel("Magnituide (dB)"); legend("F_s = 2 MHz"); ylim([-80 -10]); Correct answer by Envidia on December 11, 2020
{"url":"https://transwikia.com/signal-processing/when-i-double-bit-rate-i-do-not-see-a-difference-in-i_spectrum-and-q_spectrum-but-why/","timestamp":"2024-11-07T03:14:42Z","content_type":"text/html","content_length":"46668","record_id":"<urn:uuid:ef66aa40-f1e3-4707-8aee-22d9b0faddaf>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00825.warc.gz"}
Banach Spaces - (Convex Geometry) - Vocab, Definition, Explanations | Fiveable Banach Spaces from class: Convex Geometry A Banach space is a complete normed vector space, meaning it is a vector space equipped with a norm that allows for the measurement of vector lengths and distances, and every Cauchy sequence in the space converges to a limit that is also within that space. These spaces play a crucial role in functional analysis and provide a framework for understanding various mathematical concepts, including linear operators and fixed-point theorems. congrats on reading the definition of Banach Spaces. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Every finite-dimensional normed vector space is a Banach space because all norms on finite-dimensional spaces are complete. 2. Common examples of Banach spaces include the space of continuous functions on a closed interval and the space of p-summable sequences for $1 \leq p < \infty$. 3. The Hahn-Banach theorem, an important result in functional analysis, asserts that every bounded linear functional defined on a subspace can be extended to the whole space without increasing its 4. In Banach spaces, the concept of duality plays a vital role; each Banach space has a dual space consisting of all continuous linear functionals defined on it. 5. Applications of Banach spaces extend into various fields such as optimization, differential equations, and economic models, highlighting their importance in both pure and applied mathematics. Review Questions • How does completeness in Banach spaces relate to Cauchy sequences and their convergence? □ Completeness in Banach spaces means that every Cauchy sequence converges to an element within the same space. This relationship is crucial because it ensures that when we work with Cauchy sequences—where distances between terms become arbitrarily small—we can always find a limit point that lies within the Banach space itself. This property allows mathematicians to perform analysis with confidence that limits will remain within the confines of the space. • Discuss the significance of the Hahn-Banach theorem in the context of Banach spaces and its implications for functional analysis. □ The Hahn-Banach theorem is significant because it allows for the extension of bounded linear functionals from subspaces to entire Banach spaces without increasing their norms. This theorem is fundamental in functional analysis as it ensures that every linear functional can reach its maximum under certain conditions, enabling a deeper understanding of dual spaces. It facilitates the development of tools like Lagrange multipliers and optimization techniques in various mathematical fields. • Evaluate how Banach spaces enhance our understanding of linear operators and their applications in real-world problems. □ Banach spaces enhance our understanding of linear operators by providing a structured environment where these operators can be analyzed concerning convergence and boundedness. In practical applications, such as solving differential equations or optimization problems, knowing how these operators behave within Banach spaces allows us to use powerful mathematical tools like fixed-point theorems. This capability leads to effective solutions in engineering, economics, and other scientific disciplines where precise modeling is essential. "Banach Spaces" also found in: © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/convex-geometry/banach-spaces","timestamp":"2024-11-11T00:59:18Z","content_type":"text/html","content_length":"149420","record_id":"<urn:uuid:6401120c-f47b-4912-ac52-194515d3abd7>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00474.warc.gz"}
KCSE Past Papers Maths A 2015 Paper 1 Date Instructions to candidates (a) Write your name and index number in the spaces provided above. (b) Sign and write the date of examination in the spaces provided above, (c) This paper consists of two sections: Section I and Section II. (d) Answer all the questions in Section I and only five questions from Section II. (e) Show all the steps in your calculations, giving your answers at each stage in the spaces provided below each question. (f) Marks may be given for correct working even if the answer is wrong. (g) Non-programmable silent electronic calculators and KNEC Mathematical tables may be used, except where stated otherwise. (h) This paper consists of 20 printed pages. (i) Candidates should check the question paper to ascertain that all the pages are printed as indicated and that no questions are missing. (j) Candidates should answer the questions in English. SECTION I (50 marks) Answer all the questions in this section in the spaces provided. 1. (a) Evaluate 540 396 — 726450/ 3 (l mark) (b) Write the total value of the digit in the thousands place of the results obtained in (a) above. (1 mark) 2. Muya had a 62/3ha piece of land. He donated 7/8ha to a school and 11/2ha to a children’s home. The rest of the land was shared equally between his son and daughter. Find the size of land that each child got. (3 marks) 3. The volume of a cube is l728 cm3. Calculate, correct to 2 decimal places, the length of the diagonal of a face of the cube. (3 marks) 4. Use logarithms, correct to 4 significant figures, to evaluate (4 marks) 5. A piece of Wire is bent into the shape of an isosceles triangle. The base angles are each 48° and the perpendicular height to the base is 6 cm. Calculate, correct to one decimal place, the length of the wire. (3 marks) 6. The density of a substance A is given as 13.6 g/cm’ and that of a substance B as 11.3 g/cm3. Determine, correct to one decimal place, the volume of B that would have the same mass as 50 cm3 of A. (3 marks) 7. Below is part of a sketch of a solid cuboid ABCDEFGH. Complete the sketch. (2 marks) 8. A salesman is paid a salary of Ksh 15 375 per month. He also gets a commission of 4½ on the amount of money he makes from his sales. In a certain month, he earned a total of Ksh 28 875. Calculate the value of his sales that month. (3 marks) 9. The sum of interior angles of a regular polygon is 24 times the size of the exterior angle. (a) Find the number of sides of the polygon. (3 marks) (b) Name the polygon. (1 mark) 10. The marks scored by a group of students in a test were recorded as shown in the table below. On the grid provided, and on the same axes, represent the above data using: (a) a histogram; (3 marks) (b) a frequency polygon. (1 mark) ll. Given that P = 5a — 2b where a = and b = Find: (a) column vector P; (2 marks) (b) P’, the image of P under a translation vector (1 mark) 12. Given that a = 3, b = 5 and c =- ½ evaluate (3 marks) 13. The figure below represents the curve of an equation. Use the mid-ordinate rule with 4 ordinates to estimate the area bounded by the curve, lines y=0,x= -3 and x=5. (3 marks) 14. The cost of 2 jackets and 3 shirts was Ksh 1 800. After the cost of a jacket and that of a shirt were increased by 20%, the cost of 6 jackets and 2 shirts was Ksh 4 800. Calculate the new cost of a jacket and that of a shin. (4 marks) 15. A tailor had a piece of cloth in the shape of a trapezium. The perpendicular distance between the two parallel edges was 30 cm. The lengths of the two parallel edges were 36 cm and 60 cm. The tailor cut off a semi circular piece of the cloth of radius 14cm from the 60 cm edge. Calculate the area of the remaining piece of cloth. (3 marks) 16. Musa cycled from his home to a school 6km away in 20 minutes. He stopped at the school for 5 minutes before taking a motorbike to a town 40km away. The motorbike travelled at 75 km/h. On the grid provided, draw a distance-time graph to represent Musa’s journey. (3 marks) SECTION II(50 marks) Answer any five questions in this section in the spaces provided. 17 Three partners Amina, Bosire and Karuri contributed a total of Ksh 4 800 000 in the ratio 4:5:7 to buy an 8 hectares piece of land. The partners set aside i of the land for social amenities and sub-divided the rest into 15 m by 25 m plots. (a) Find: (i) the amount of money contributed by Karuri; (2 marks) (ii) the number of plots that were obtained. (3 marks) (b) The puma: sokl the plots at Ksh 50000 each and spent 30% of the profit realised to pay for administrative costs. They shared the rest of the profit in the ratio of their contributions. (i) Calculate the net profit realised. (3 marks) (ii) Find the difference in the amount of the profit eamed by Amina and Bosire. (2 marks) 18. Two shopkeepers, Juma and Wanjiku bought some items from a wholesaler. Juma bought 18 loaves of bread, 40 packets of milk and 5 bars of soap while Wanjiku bought 15 loaves of bread, 30 packets of milk and 6 bars of soap. The prices of a loaf of bread, a packet of milk and a bar of soap were Ksh 45, Ksh 50 and Ksh 150 respectively. (a) Represent: (i) the number of items bought by Juma and Wanjiku using a 2 X 3 matrix. (1 mark) (ii) the prices of the items bought using a 3 X 1 matrix. (1 mark) (b)Use the matrices in (a) above to determine the total expenditure incurred by each person and hence the difference in their expenditure. (3 marks) (c) Juma and Wanjiku also bought rice and sugar. Jurna bought 36 kg of rice and 23 kg of sugar and paid Ksh 8 160. Wanjiku bought 50 kg of rice and 32 kg of sugar and paid Ksh 11 340. Use the matrix method todetermine the price of one kilogram of rice and one kilogram of sugar. (5 marks) 19. Line AB drawn below is a side Of a triangle ABC. (a) Using a pair of compasses and ruler only construct: (i) triangle ABC in which BC = 10 cm and CAB = 90°; (2 marks) (ii) a rhombus BCDE such that CBE = 120°; (2 marks) (iii) a perpendicular from F, the point of intersection of the diagonals of the rhombus, to meet BE at G. Measure F G; (2 marks) (iv) a circle to touch all the sides of the rhombus. (1 mark) (b) Determine the area of the region in the rhombus that lies outside the circle. (3 marks) 20. In the figure below, AC = 12 cm, AD = 15 cm and B is point on AC. BAD = ADB = 30°. Calculate, correct to one decimal place: (a) the length of CD; (3 marks) (b) the length of AB; (3 marks) (c) the area of triangle BCD; (2 marks) (d) the size of £BDC. (2 marks) 21. (a) A straight line L1 whose equation is 3y -— 2x = -2 meets the x-axis at R. Determine the co-ordinates of R. (2 marks) (b) A second line L2 is perpendicular to L] at R. Find the equation of L2 in the form y = mx + c, where m and c are constants. (3 marks) (c) A third line L3 passes through (-4, 1) and is parallel to Ll. Find: (i) the equation of L3 in the form y = mx + c, where m and c are constants. (2 marks) (ii) the co-ordinates of point S, at which L3 intersects L2. (3 marks) 22. On the grid below, an object T and its image T’ are drawn. (a) Find the equation of the mirror line that maps T onto T’. (1 mark) (i) T’ is mapped onto T” by positive quarter tum about (0, 0). Draw T”. (2 marks) (ii) Describe a single transformation that maps T onto T”. (2 marks) T” is mapped onto T”’ by an enlargement, centre (2, 0), scale factor -2. Draw T”’. (2 marks) Given that the area of T’” is 12 cm’, calculate the area of T. (3 marks) 23. The figure below represents a conical flask. The flask consists of a cylindrical part and a frustum of a cone. The diameter of the base is 10 cm while that of the neck is 2 cm. The vertical height ofthe flask is 12 cm. Calculate, correct to l decimal place: (a) the slant height of the frustum part; (2 marks) (b) the slant height of the smaller cone that was cut off to make the frustum part. (2 marks) (c) the external surface area of the flask. (Take pie = 3.142) (6 marks) 24. The gradient of the curve (a) Find: (i) the value of p; (3 marks) (ii) the equation of the tangent to the curve at x = 0.5.’ (4 marks) (b) Find the co-ordinates of the turning points of the curve. (3 marks) Paper 2 Instructions to candidates (a) Write your name and index number in the spaces provided above. (b) Sign and write the date of examination in the spaces provided above, (c) This paper consists of two sections: Section I and Section II. (d) Answer all the questions in Section I and only five questions from Section II. (e) Show all the steps in your calculations, giving your answers at each stage in the spaces provided below each question. (f) Marks may be given for correct working even if the answer is wrong. (g) Non-programmable silent electronic calculators and KNEC Mathematical tables may be used, except where stated otherwise. (h) This paper consists of 20 printed pages. (i) Candidates should check the question paper to ascertain that all the pages are printed as indicated and that no questions are missing. (j) Candidates should answer the questions in English. SECTION I (50 marks) Answer all the questions in this section in the spaces provided. 1. The length and width of a rectangular piece of paper were measured as 60 cm and 12 cm respectively. Determine the relative error in the calculation of its area. (4 marks) 2. Simplify (2 marks) 3. An arc ll cm long, subtends an angle of 70° at the centre of a circle. Calculate the length, correct to one decimal place, of a chord that subtends an angle of 90° at the centre of the same circle. (4 marks) 4. In the figure below, O is the centre of the circle. A, B, C and D are points on the circumference of the circle. Line AB is parallel to line DC and angle ADC = 55°. Determine the size of angle ACB. (3 marks) 5. Eleven people can complete of a certain job in 24 hours. Determine the time in hours, correct to 2 decimal places, that 7 people working at the same rate can take to complete the remaining job. (3 6 The length and width of a rectangular signboard are (3x +12) cm and (x — 4) cm respectively. If the diagonal of the signboard is 200 cm, determine its area. (4 marks) 8. Use the expansion of (x — y)5 to evaluate (9.8)5 correct to 4 decimal places. (3 marks) 9. The diameter of a circle, centre O has its end points at M(— 1, 6) and N(5, —2). Find the equation of the circle in the form x2 + yl + ax + by = c where a, b and c are constants. (4 marks) Find the value of x given that log (x — 1) + 2 = log (3x + 2) + log 25. (3 marks) 10. Below is a line AB and a point X. Determine the locus of a point P equidistant from points A and B and 4 cm from X.(3 marks) 11. In a nomination for a committee, two people were to be selected at random fi’0m a group of 3 men and 5 women. Find the probability that a man and a woman were selected. (2 marks) 12. A school decided to buy at least 32 bags of maize and beans. The number of bags of maize were to be more than 20 and the number of bags of beans were to be at least 6. A bag of maize costs Ksh 2500 and a bag of beans costs Ksh 3500. The school had Ksh 100 000 to purchase the maize and beans. Write down all the inequalities that satisfy the above information. (4 marks) 13. Evaluate . (3 marks) 14. The positions of two points P and Q, on the surface of the earth are P(45 °N, 36 °E) and Q(45 °N, 71°E). Calculate the distance, in nautical miles, between P and Q, correct to 1 decimal place. (3 15. Solve the equation sin (½x — 30°) = cos x for 0 < x < 90°. (2 marks) 16. The position vectors of points P, Q and R are , Show that P, Q and R are collinear. (3 marks) SECTION II (50 marks) Answer any five questions from this section in the spaces provided. 17. In a retail shop, the marked price of a cooker Was Ksh 36 000. Wanandi bought the cooker on hire purchase tenns. She paid Ksh 6400 as deposit followed by 20 equal monthly instalments of Ksh 1750. (a) Calculate: (i) the total amount of money she paid for the cooker. (2 marks) (ii) the extra amount of money she paid above the marked price. (l mark) (b) The total amount of money paid on hire purchase terms was calculated at a compound interest rate on the marked price for 20 months. Determine the rate, per annum, of the compound interest correct to 1 decimal place. (4 marks) (c) Kaloki borrowed Ksh 36 000 from a financial institution to purchase a similar cooker. The financial institution charged a compound interest rate equal to the rate in (b) above for 24 months. Calculate the interest Kaloki paid correct to the nearest shilling. (3 marks) 18. Mute cycled to raise funds for a charitable organisation. On the first day, he cycled 40 km. For the first 10 days, he cycled 3 km less on each subsequent day. Thereafter, he cycled 2 km less on each subsequent day. (a) Calculate: (i) the distance cycled on the 10th day; (2 marks) (ii) the distance cycled on the 16th day. (3 marks) (b) If Mute raised Ksh 200 per km, calculate the amount of money collected. (5 marks) 19. The equation of a curve is given by y = 1 + 3 sin x. (a) Complete the table below for y = 1 + 3 sin x correct to 1 decimal place (2 marks) (b) (i) On the grid provided, draw the graph of (3 marks) (ii)State the amplitude of the curve y = 1 + 3 sin x. (1 mark) (d) Use the graphs to solve the equation (l mark) (c) On the same grid draw the graph of y = tan x for 90° 5x 5 270° (3 marks) 20. The figure below represents a cuboid EFGHJKLM in which EF = 40 cm, FG = 9 cm and GM = 30 cm. N is the midpoint of LM. Calculate correct to 4 significant figures: (a) the length of GL; (1 mark) (b) the length of F]; (2 marks) (c) the angle between EM and the plane EFGH; (3 marks) (d) the angle between the planes EFGH and ENH; (2 marks) (e) the angle between the lines EH and GL. (2 marks) 21. A quantity P varies partly as the square of m and partly as n. When P = 3.8, m = 2 and n = -3. When P = -0.2,m = 3 and n = 2. (a) Find: (i) the equation that connects P, m and n; (4 marks) (ii) the value of P when m = 10 and n = 4. (1 mark) (b) Express m in terms of P and n. (2 marks) (c) If P and n are each increased by 10%, find the percentage increase in m correct to 2 decimal places. (3 marks) 22. A particle was moving along a straight line. The acceleration of the particle after t seconds was given by . The initial velocity of the particle was 7 . (a) the velocity (v) of the paiticle at any given time (t); (4 marks) (b) the maximum velocity of the particle; (3 marks) (c) the distance covered by the particle by the time it attained maximum velocity. (3 marks) 23. The marks scored by 40 students in a mathematics test were as shown in the table below. (a) Find the lower class boundary of the modal class. (1 mark) (b) Using an assumed mean of 64, calculate the mean mark. (3 marks) (C) (i) On the grid provided, draw the cumulative frequency curve for the data. (3 marks) (ii) use the graph to estimate the semi—interquartile range (3 marks) 24. A quadrilateral with vertices at K(1, 1), L(4, 1), M(2, 3) and N(1, 3) is transformed by a matrix to quadrilateral K’L’M’N’ (a) Determine the coordinates of the image (3 marks) (b) On the grid provided draw the object and the image. (2 marks) (c) (i) Describe fully the transformation which maps KLMN onto K’ L’ M’N’. (2 marks) (ii) Determine the area of the image; (1 mark) (d) Find a matrix which maps K’ L’M’N’ onto KLMN. (2 marks)
{"url":"https://pastpapers.top/kcse/kcse-2015/kcse-past-papers-maths-a-2015/","timestamp":"2024-11-05T09:16:07Z","content_type":"text/html","content_length":"172882","record_id":"<urn:uuid:1a2277c4-40c4-4d42-8c5e-c788b7e79efd>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00509.warc.gz"}
Taiyuan's 7 best Mathematics universities [2024 Rankings] 7 Best universities for Mathematics in Taiyuan Below is a list of best universities in Taiyuan ranked based on their research performance in Mathematics. A graph of 170K citations received by 22.7K academic papers made by 7 universities in Taiyuan was used to calculate publications' ratings, which then were adjusted for release dates and added to final scores. We don't distinguish between undergraduate and graduate programs nor do we adjust for current majors offered. You can find information about granted degrees on a university page but always double-check with the university website. Universities for Mathematics near Taiyuan Mathematics subfields in Taiyuan
{"url":"https://edurank.org/math/taiyuan/","timestamp":"2024-11-02T12:43:18Z","content_type":"text/html","content_length":"66709","record_id":"<urn:uuid:95ddb84b-cb26-4071-8bff-aeba677a41e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00070.warc.gz"}
Hello, hello! Photo by Ed Robertson on Unsplash Hello, and welcome! Here’s a list of my articles, grouped into broad topics. So grab your favourite drink, get comfortable and dive in. Helpful hints and tips Articles about useful tips and tricks, from data analysis to making presentation-ready charts and exhibits. Data science Articles demonstrating various data science concepts, with real world data and applications. Neural networks A documented journey of a dummy learning about neural networks, culminating in the building of a custom neural network to predict insurance claim frequency. Time series analysis and forecasting Articles relating to all things time series. Happy reading!
{"url":"https://bradley-stephen-shaw.medium.com/hello-hello-cc00f5c3fb3c?source=user_profile_page---------0-------------c5cd0a58b5ae---------------","timestamp":"2024-11-04T21:01:51Z","content_type":"text/html","content_length":"121227","record_id":"<urn:uuid:cc2e1dc1-af17-47d0-8dcd-90916a3f0e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00046.warc.gz"}
Representing Information We have seen that computers (specifically with Python) can store different types of information. Let's look a little a how they do that. Bits and Bytes The fundamental problem: computers themselves only store or transmit bits. That is, values that can either be 0 or 1 (or on/off or true/false or up/down or whatever you want to call them). That's true in the computer's memory (RAM), storage (disks, USB storage, etc), over the network, …. So then how do computers deal with anything else? Bits and Bytes Usually, a single bit is too small a unit of information to work with: multiple bits make more sense. Eight bits is one byte. These are four bytes: There are \(2^8 = 256\) different bytes. Usually a bit is signified by a “b” and byte by “B”. (Not everybody is consistent with that, but we will be.) Bits and Bytes The metric prefixes kilo (k), mega (M), giga (G) are used when describing bits or bytes. But usually, factors of \(2^{10} = 1024\) are used instead of \(1000\): it's often more convenient to work with power of two. Bits and Bytes Usually we would say: Prefix Usual computer-related meaning k \(2^{10} = 1024\) M \(2^{20} = 1048576\) G \(2^{30} = 1073741824\) So, 5MB = 5 × 1048576 × 8 = 41943040bits. If you have a computer with 16GB of memory, that's 137,438,953,472 bits. So your computer can store/manipulate a lot of bits. Great, but we still need to do something useful with them. First thing we can do: represent integers. Specifially (for now), unsigned integers (so no negative values, only 0 and up). At some point, somebody taught you about positional numbers: \[\begin{align*} 345 &= (3\times 100) + (4\times 10) + 5 \\ &= (3\times 10^2) + (4\times 10^1) + (5\times 10^0)\,. \end{align*}\] We usually count in decimal or base 10: there are ten values (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) that can go in each position. Moving left a position increases the value by a factor of 10. e.g. if we add a zero to the right of 345 to get 3450, the value is ten times larger. We can do all the same positional number stuff, but with two instead of ten: \[\begin{align*} 1001_2 &= (1\times 2^3) + (0\times 2^2) + (0\times 2^1) + (1\times 2^0) \\ &= 8 + 1 \\ &= 9\,. \end{align*}\] Then we can represent these things with bits. These are base 2 or binary numbers. Unless it's completely clear, we should label values with their base as a subscript so they aren't ambiguous. \[\begin{align*} 1001_2 &= (1\times 2^3) + (0\times 2^2) + (0\times 2^1) + (1\times 2^0) \,, \\ 1001_{10} &= (1\times 10^3) + (0\times 10^2) + (0\times 10^1) + (1\times 10^0) \,. \end{align*}\] e.g. on the left, “1001” could be any base. On the right, math is pretty much always in base 10. When we're storing these in a computer, we generally have to decide how many bits are allocated to each value. e.g. computers don't generally work with “integers” but rather “8-bit integers” (or some other size). If we wanted to represent \(9_{10}\) as an 8-bit integer, it would be \(00001001_{2}\). So 8-bit integers have these positional values: That means there is a smallest and largest number that can be represented. The smallest will be zero: all 0 bits. The largest will be all 1 bits. For 8-bit values, that's: \[2^7 + 2^6 + \cdots + 2^1 + 2^0 = 2^8-1 = 255\] In general, \(n\) bits store unsigned integers 0 to \(2^n-1\). For real calculations, 64-bit integers are more common, giving a range 0 to \[2^{64} - 1 = 18446744073709551615\,.\] Python works with 64-bit integers, but it hides the limitation from us. >>> 18446744073709551615 + 1 >>> 18446744073709551615 * 2 Adding Integers Addition (and other basic arithmetic operations) work basically the same as base 10. You should be able to add decimal numbers: Adding Integers We can do the same thing in binary if we can only remember… \[\begin{align*} 0_2 + 0_2 &= 0_{10} = 00_2 \\ 0_2 + 1_2 &= 1_{10} = 01_2 \\ 1_2 + 1_2 &= 2_{10} = 10_2 \\ 1_2 + 1_2 + 1_2 &= 3_{10} = 11_2 \,. \end{align*}\] (The 1+1+1 case happens when we add 1+1 and a carry.) Adding Integers When we add the bits in a column and get a sum that's two bits long (\(10_2\) or \(11_2\)), carry the one. Just like in decimal. Adding Integers Let's add these two: \(10100110_{2} = 166_{10}\) \(00110111_{2} = 55_{10}\) Is the result \(221_{10}\)?. Signed Integers That's great, but we need to work with more than positive integers: we at least need negative values. We'll represent signed integers with two's complement. Positive numbers will be the same as unsigned (with a lower upper-limit). Signed Integers The two's complement operation is used to negate a number. 1. Start with the positive number represented in binary; 2. flip all the bits (0↔1); 3. add one (ignoring overflow). Signed Integers So if we want to find the 8-bit two's complement representation of \(-21_{10}\)… 1. Start with the positive: 00010101 2. Flip the bits: 11101010; 3. Add one: 11101011. So, \(-21_{10} = 11101011_2\). Signed Integers For a two's complement value, the first bit is often called the sign bit because it indicates the sign: 0=positive, 1=negative. Signed Integers It turns out the “flip bits and add one” operation doesn't just turn a positive to a negative. It negates integers in general. If we start with a negative… 1. \(11101011\) 2. flip the bits: \(00010100\) 3. add one: \(00010101_2 = 21_{10}\). So, the two's complement value 11101011 was -21. Signed Integers It turns out the “flip bits and add one” operation doesn't just turn a positive to a negative. It negates integers in general. If we start with a negative… 1. \(11101011\) 2. flip the bits: \(00010100\) 3. add one: \(00010101_2 = 21_{10}\). So, the two's complement value 11101011 was -21. Signed Integers It's not obvious why the “flip bits and add one” operation should be used to negate an integer: it's just the rule we're given and we'll have to trust that there's a reason. [There is.] If it works, doing it twice should get you back where you started (because \(-(-x) = x\)). It's perhaps a surprise that the two's complement operation does that, but it does. Signed Integers Let's try some more 8-bit conversions… Signed Integers For a two's complement value, the largest positive value is 0111…111. For 8 bits, that's \[\begin{align*} 01111111_2 &= 2^6 + 2^5 + \cdots + 2^1 + 2^0 \\ &= 64 + 32 + \cdots + 2 + 1 \\ &= 127_{10}\,. \end{align*}\] Or in general, \(2^{n-1}-1\). Signed Integers The smallest negative value is 1000…000. To find the positive version, flip the bits and add one: 0111…111. Adding one to that causes a carry all the way to the first bit: 1000…000. Signed Integers The 8-bit version: 1. \(10000000\) 2. flip the bits: \(01111111\) 3. add one: \(10000000_2 = 128_{10}\). So the two's complement value \(10000000_2\) represents \(-128_{10}\). In general: \(-2^{n-1}\) up to the largest positive value, \(2^{n-1}-1\). Signed Integers The two's complement operation seems weird but it has one huge benefit: the addition operation is exactly the same. It's perhaps surprising, but if we do the exact same addition operation (ignoring signs), then we get the correct result for two's complement values. We need one additional rule: if there's a “carry out”, a carry from the left-most position, throw it away. Signed Integers Let's consider the 4-bit example from earlier: If throw away the carry-out, the result is 0101. Signed Integers If we do some conversions: … we did correct two's complement addition before without realizing it. Signed Integers Let's try this example again, but think of them as two's complement: \(10100110_{2} = -90_2\) \(00110111_{2} = +55_2\) Hopefully we got -35. Signed Integers Signed integer summary: • Starts with a 0: positive. Starts with a 1: negative. • To negate, do the “two's complement” operation: flip the bits and add one. • To add: like unsigned, but throw away carry-out. • If you're given bits, you can't tell if they're a signed or unsigned integer (or something else): you have to know (or the programming language has to). Floating Point We have seen that Python can handle floating point numbers and said they are close to (but not exactly) real numbers. (The same is true in other programming languages.) We won't cover the details, but the idea is roughly binary values + scientific notation. Floating Point Instead of integer values, floating point values get represented as binary fractions with bits for \(\frac{1}{2}\), \(\frac{1}{4}\), \(\frac{1}{8}\), \(\frac{1}{16}\), \(\frac{1}{32}\), \(\frac{1} {64}\), \(\frac{1}{128}\), …. Then multiply by a power of two. So 40.5 would be represented as (some bits that mean): \[\left(\tfrac{1}{2} + \tfrac{1}{8} + \tfrac{1}{128}\right) \times 2^{6}\,.\] Floating Point Since we have to choose a fixed number of bits to do this, we can't represent every real number exactly. In base 10, we can't represent \(\frac{1}{3}\) exactly (with a finite number of decimal digits): 0.333 is close but not exact. Floating Point The same thing happens in binary: there are often tiny errors in precision that can end up noticable. >>> 0.3 >>> 0.1 + 0.2 It seems like 0.1 + 0.2 should be exactly 0.3, but it's just a little different. Floating Point The same kind of thing is happening as trying to add \(\frac{1}{3}+\frac{1}{3}\) as 0.333 + 0.333 = 0.666 even though 0.667 is closer to \(\frac{2}{3}\). For this course: don't panic. Just know that any floating point calculation can have tiny imprecisions. If you're doing serious numeric simulations or something, learn more. Characters & Strings We have also worked with strings (like "abc") which are made up of characters (like a). Since we know how to represent (unsigned) integers already, characters are “easy”: just come up with a list of all characters and number each one. Then store the numbers. Characters & Strings The character ↔ number mapping is called a character set. The most basic character set computers have used historically: ASCII (American Standard Code for Information Interchange). It defines 95 characters that can represent English text. Characters & Strings A few examples from ASCII: Number Character 32 space 37 % 82 R 114 r The ASCII characters are basically the things you can type on a standard US English keyboard. Characters & Strings The 95 characters of ASCII aren't enough for French (ç) or Icelandic (ð) or Arabic (ج) or Japanese (の) or Chinese (是) or …. Non-English speakers would probably like to use computers too (the computing industry realized eventually). Characters & Strings There's a historical story of character sets that I think is interesting, but I'll admit most people wouldn't. Eventually… The Unicode character set was created to represent all written human language. It defines ∼150k characters currently and can be extended as needed. Characters & Strings Characters are organized into blocks by script/language/use. Also included are various symbols and emoji: ↔, €, ≤, 𝄞, ♳, 😀, 🙈, 🐢, 💩. Characters & Strings Python strings are sequences of Unicode characters. (IDLE doesn't seem to handle Unicode characters perfectly in the editor, but the language handles them fine.) >>> s = " %Rrðجの是↔€≤𝄞♳😀🙈🐢💩" >>> print(s) >>> len(s) Characters & Strings If they're not on your keyboard, you can get characters into Python by copying-and-pasting or with a Unicode characters reference. I looked up “C” and found out it's hexadecimal (base 16) representation is 0043, so >>> "\U00000043" >>> "ab\U00000043de" Characters & Strings Your computer probably can display characters for common languages, but may not have a font that can represent less common characters/scripts. e.g. a Chinese character that was added to Unicode recently (March 2020): >>> "我吃\U00030edd\U00030edd面。" Does it appear in that copy-pasted text for you? Characters & Strings On one computer, it appears just fine for me in IDLE: On another it doesn't: Characters & Strings There are several ways to translate Unicode characters to/from bytes: character encodings. The details can wait for another course, but there's one almost-always-correct choice: UTF-8. Depending on your text editor, there may be an option to select the “character encoding” or “encoding” when saving a file: choose UTF-8. Characters & Strings A text editor is program that lets you type some characters and saves the corresponding bytes to disk. We're using IDLE as a text editor (and also to run our Python code). There are many others (like Notepad++, Sublime Text, Brackets). IDEs (Integrated Development Environments) like IDLE and Visual Studio combine text editing with help on the language or running the code. Characters & Strings Representing ASCII characters in UTF-8 takes one byte each. e.g. a space is character 32 and represented by the byte \[00100000_2 = 32_{10}\,.\] If you open a text editor, type a single space, and save as UTF-8 text, the result should be a one-byte file containing that byte. (Maybe two bytes: some editors automatically insert a line break Characters & Strings The file extension on a text file (.py, .txt) is just a hint about what kind of text the file contains. It's just bytes representing characters either way. Compare something like a Word file (.docx) that contains information about formatting, embedded images, edit history, ….
{"url":"https://ggbaker.ca/120/slide-content/binary.html","timestamp":"2024-11-09T02:52:37Z","content_type":"text/html","content_length":"23363","record_id":"<urn:uuid:944b1847-49c6-48d4-8967-de36b11b1b02>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00434.warc.gz"}
Student Math Circles: Building collaborative places of mathematical inquiry across grades Student Math Circles: Building collaborative places of mathematical inquiry across grades In this article, the authors provide an alternative to traditional math clubs through the formation and facilitation of a student math circle (SMC). Student math circles are a community of students across different grades who engage in open-ended low floor, high ceiling tasks. The authors provide history and motivation for SMCs as well as a few sample problems that have been used. Additionally, benefits of an SMC model are provided. American Institute of Mathematics (2017). Math Teachers’ Circle Network. http://www.mathteacherscircle.org/. Retrieved 1 March 2017. Boaler, J. (2016). YouCubed. Tasks by grade: Low floor high ceiling. https://www.youcubed.org/grade/low-floor-high-ceiling/. Retrieved 2 March 2017. Britton, J. (2016). The Frog Puzzle. http://britton.disted.camosun.bc.ca/frog_puzzle.htm/. Retrieved 2 March 2017. Brown, S. I., & Walter, M. I. (2005). The art of problem posing. Psychology Press. French, D. C., Waas, G. A., Straight, A. L., & Baker, J. A. (1986). Leadership asymmetries in mixed-age children's groups. Child Development, 57(5), 1277-1283. Isaacs, Steven. "The Difference between Gamification and Game-Based Learning." Association for Supervision and Curriculum Development. N.p., 15 Jan. 2015. Web. 27 Feb. 2017. Polya, G. (1957). How to Solve it: A New Aspects of Mathematical Methods. Prentice University Press. Pratt, D. (1983). Age segregation in schools. Paper presented at Annual Meeting of the American Educational Research Association. Montreal, Quebec, Canada. Saul, Mark (2006). "What is a Math Circle". National Association of Math Circles Wiki. Mathematical Sciences Research Institute. Retrieved 1 March 2017. Silver, E. A. (1994). On mathematical problem posing. For the learning of mathematics, 14(1), 19-28. How to Cite Bolognese, C., & Shahani, S. (2017). Student Math Circles: Building collaborative places of mathematical inquiry across grades. Ohio Journal of School Mathematics, 75(1). Retrieved from https:// Copyright (c) 2017 Chris Bolognese, Sonam Shahani
{"url":"https://ohiomathjournal.org/index.php/OJSM/article/view/5753","timestamp":"2024-11-06T17:41:25Z","content_type":"text/html","content_length":"27185","record_id":"<urn:uuid:c280f5c4-a808-4910-98ef-90d5f48406da>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00143.warc.gz"}
How to Plot a CDF in Excel | Online Tutorials Library List | Tutoraspire.com How to Plot a CDF in Excel by Tutor Aspire A cumulative distribution function (CDF) describes the probability that a random variable takes on a value less than or equal to some number. We can use the following function in Excel to calculate cumulative distribution probabilities: =NORM.DIST(x, MEAN, STANDARD_DEVIATION, TRUE) The following example shows how to calculate and plot a CDF in Excel. Example: Calculate & Plot CDF in Excel First, let’s create the following dataset in Excel: Next, let’s specify the mean and standard deviation of the distribution: Next, we can calculate the cumulative distribution probability for the first value in the dataset by using the following formula: =NORM.DIST(A2, $F$1, $F$2, TRUE) Next, we can copy and paste this formula down to every other cell in column B: The CDF is now complete. The way we interpret the values is as follows: • The probability that the random variable will take on a value equal to or less than 6 is .00135. • The probability that the random variable will take on a value equal to or less than 7 is .00383. • The probability that the random variable will take on a value equal to or less than 8 is .00982. And so on. To visualize this CDF, we can highlight every value in column B. Then, we can click the Insert tab along the top ribbon and click Insert Line Chart to produce the following chart: The values along the x-axis show the values from the dataset and the values along the y-axis show the CDF values. Additional Resources CDF vs. PDF: What’s the Difference? How to Make a Bell Curve in Excel How to Calculate NormalCDF Probabilities in Excel Share 0 FacebookTwitterPinterestEmail previous post Pandas: How to Append Data to Existing CSV File You may also like
{"url":"https://tutoraspire.com/cdf-in-excel/","timestamp":"2024-11-07T17:23:47Z","content_type":"text/html","content_length":"349482","record_id":"<urn:uuid:5cec6cb2-7132-40ff-a14b-156ee1fd63f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00340.warc.gz"}
Testing Linear Regression Assumptions in Python Checking model assumptions is like commenting code. Everybody should be doing it often, but it sometimes ends up being overlooked in reality. A failure to do either can result in a lot of time being confused, going down rabbit holes, and can have pretty serious consequences from the model not being interpreted correctly. Linear regression is a fundamental tool that has distinct advantages over other regression algorithms. Due to its simplicity, it’s an exceptionally quick algorithm to train, thus typically makes it a good baseline algorithm for common regression scenarios. More importantly, models trained with linear regression are the most interpretable kind of regression models available - meaning it’s easier to take action from the results of a linear regression model. However, if the assumptions are not satisfied, the interpretation of the results will not always be valid. This can be very dangerous depending on the application. This post contains code for tests on the assumptions of linear regression and examples with both a real-world dataset and a toy dataset. The Data For our real-world dataset, we’ll use the Boston house prices dataset from the late 1970’s. The toy dataset will be created using scikit-learn’s make_regression function which creates a dataset that should perfectly satisfy all of our assumptions. One thing to note is that I’m assuming outliers have been removed in this blog post. This is an important part of any exploratory data analysis (which isn’t being performed in this post in order to keep it short) that should happen in real world scenarios, and outliers in particular will cause significant issues with linear regression. See Anscombe’s Quartet for examples of outliers causing issues with fitting linear regression models. Here are the variable descriptions for the Boston housing dataset straight from the documentation: • CRIM: Per capita crime rate by town • ZN: Proportion of residential land zoned for lots over 25,000 sq.ft. • INDUS: Proportion of non-retail business acres per town. • CHAS: Charles River dummy variable (1 if tract bounds river; 0 otherwise) • NOX: Nitric oxides concentration (parts per 10 million) • RM: Average number of rooms per dwelling • AGE: Proportion of owner-occupied units built prior to 1940 • DIS: Weighted distances to five Boston employment centers • RAD: Index of accessibility to radial highways • TAX: Full-value property-tax rate per $10,000 • PTRATIO: Pupil-teacher ratio by town • B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town □ Note: I really don’t like this variable because I think it’s both highly unethical to determine house prices by the color of people’s skin in a given area in a predictive modeling scenario and it irks me that it singles out one ethnicity rather than including all others. I am leaving it in for this post to keep the code simple, but I would remove it in a real-world situation. • LSTAT: % lower status of the population • MEDV: Median value of owner-occupied homes in $1,000’s import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets %matplotlib inline Real-world data of Boston housing prices Additional Documentation: https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html data: Features/predictors label: Target/label/response variable feature_names: Abbreviations of names of features boston = datasets.load_boston() Artificial linear data using the same number of features and observations as the Boston housing prices dataset for assumption test comparison linear_X, linear_y = datasets.make_regression(n_samples=boston.data.shape[0], noise=75, random_state=46) # Setting feature names to x1, x2, x3, etc. if they are not defined linear_feature_names = ['X'+str(feature+1) for feature in range(linear_X.shape[1])] Now that the data is loaded in, let’s preview it: df = pd.DataFrame(boston.data, columns=boston.feature_names) df['HousePrice'] = boston.target │ │ CRIM │ ZN │INDUS│CHAS│ NOX │ RM │AGE │ DIS │RAD│ TAX │PTRATIO│ B │LSTAT│HousePrice │ │0│0.00632│18.0│2.31 │0.0 │0.538│6.575│65.2│4.0900│1.0│296.0│15.3 │396.90│4.98 │24.0 │ │1│0.02731│0.0 │7.07 │0.0 │0.469│6.421│78.9│4.9671│2.0│242.0│17.8 │396.90│9.14 │21.6 │ │2│0.02729│0.0 │7.07 │0.0 │0.469│7.185│61.1│4.9671│2.0│242.0│17.8 │392.83│4.03 │34.7 │ │3│0.03237│0.0 │2.18 │0.0 │0.458│6.998│45.8│6.0622│3.0│222.0│18.7 │394.63│2.94 │33.4 │ │4│0.06905│0.0 │2.18 │0.0 │0.458│7.147│54.2│6.0622│3.0│222.0│18.7 │396.90│5.33 │36.2 │ Initial Setup Before we test the assumptions, we’ll need to fit our linear regression models. I have a master function for performing all of the assumption testing at the bottom of this post that does this automatically, but to abstract the assumption tests out to view them independently we’ll have to re-write the individual tests to take the trained model as a parameter. Additionally, a few of the tests use residuals, so we’ll write a quick function to calculate residuals. These are also calculated once in the master function at the bottom of the page, but this extra function is to adhere to DRY typing for the individual tests that use residuals. from sklearn.linear_model import LinearRegression # Fitting the model boston_model = LinearRegression() boston_model.fit(boston.data, boston.target) # Returning the R^2 for the model boston_r2 = boston_model.score(boston.data, boston.target) print('R^2: {0}'.format(boston_r2)) # Fitting the model linear_model = LinearRegression() linear_model.fit(linear_X, linear_y) # Returning the R^2 for the model linear_r2 = linear_model.score(linear_X, linear_y) print('R^2: {0}'.format(linear_r2)) def calculate_residuals(model, features, label): Creates predictions on the features with the model and calculates residuals predictions = model.predict(features) df_results = pd.DataFrame({'Actual': label, 'Predicted': predictions}) df_results['Residuals'] = abs(df_results['Actual']) - abs(df_results['Predicted']) return df_results We’re all set, so onto the assumption testing! I) Linearity This assumes that there is a linear relationship between the predictors (e.g. independent variables or features) and the response variable (e.g. dependent variable or label). This also assumes that the predictors are additive. Why it can happen: There may not just be a linear relationship among the data. Modeling is about trying to estimate a function that explains a process, and linear regression would not be a fitting estimator (pun intended) if there is no linear relationship. What it will affect: The predictions will be extremely inaccurate because our model is underfitting. This is a serious violation that should not be ignored. How to detect it: If there is only one predictor, this is pretty easy to test with a scatter plot. Most cases aren’t so simple, so we’ll have to modify this by using a scatter plot to see our predicted values versus the actual values (in other words, view the residuals). Ideally, the points should lie on or around a diagonal line on the scatter plot. How to fix it: Either adding polynomial terms to some of the predictors or applying nonlinear transformations . If those do not work, try adding additional variables to help capture the relationship between the predictors and the label. def linear_assumption(model, features, label): Linearity: Assumes that there is a linear relationship between the predictors and the response variable. If not, either a quadratic term or another algorithm should be used. print('Assumption 1: Linear Relationship between the Target and the Feature', '\n') print('Checking with a scatter plot of actual vs. predicted.', 'Predictions should follow the diagonal line.') # Calculating residuals for the plot df_results = calculate_residuals(model, features, label) # Plotting the actual vs predicted values sns.lmplot(x='Actual', y='Predicted', data=df_results, fit_reg=False, size=7) # Plotting the diagonal line line_coords = np.arange(df_results.min().min(), df_results.max().max()) plt.plot(line_coords, line_coords, # X and y points color='darkorange', linestyle='--') plt.title('Actual vs. Predicted') We’ll start with our linear dataset: linear_assumption(linear_model, linear_X, linear_y) Assumption 1: Linear Relationship between the Target and the Feature Checking with a scatter plot of actual vs. predicted. Predictions should follow the diagonal line. We can see a relatively even spread around the diagonal line. Now, let’s compare it to the Boston dataset: linear_assumption(boston_model, boston.data, boston.target) Assumption 1: Linear Relationship between the Target and the Feature Checking with a scatter plot of actual vs. predicted. Predictions should follow the diagonal line. We can see in this case that there is not a perfect linear relationship. Our predictions are biased towards lower values in both the lower end (around 5-10) and especially at the higher values (above II) Normality of the Error Terms More specifically, this assumes that the error terms of the model are normally distributed. Linear regressions other than Ordinary Least Squares (OLS) may also assume normality of the predictors or the label, but that is not the case here. Why it can happen: This can actually happen if either the predictors or the label are significantly non-normal. Other potential reasons could include the linearity assumption being violated or outliers affecting our model. What it will affect: A violation of this assumption could cause issues with either shrinking or inflating our confidence intervals. How to detect it: There are a variety of ways to do so, but we’ll look at both a histogram and the p-value from the Anderson-Darling test for normality. How to fix it: It depends on the root cause, but there are a few options. Nonlinear transformations of the variables, excluding specific variables (such as long-tailed variables), or removing outliers may solve this problem. def normal_errors_assumption(model, features, label, p_value_thresh=0.05): Normality: Assumes that the error terms are normally distributed. If they are not, nonlinear transformations of variables may solve this. This assumption being violated primarily causes issues with the confidence intervals from statsmodels.stats.diagnostic import normal_ad print('Assumption 2: The error terms are normally distributed', '\n') # Calculating residuals for the Anderson-Darling test df_results = calculate_residuals(model, features, label) print('Using the Anderson-Darling test for normal distribution') # Performing the test on the residuals p_value = normal_ad(df_results['Residuals'])[1] print('p-value from the test - below 0.05 generally means non-normal:', p_value) # Reporting the normality of the residuals if p_value < p_value_thresh: print('Residuals are not normally distributed') print('Residuals are normally distributed') # Plotting the residuals distribution plt.subplots(figsize=(12, 6)) plt.title('Distribution of Residuals') if p_value > p_value_thresh: print('Assumption satisfied') print('Assumption not satisfied') print('Confidence intervals will likely be affected') print('Try performing nonlinear transformations on variables') As with our previous assumption, we’ll start with the linear dataset: normal_errors_assumption(linear_model, linear_X, linear_y) Assumption 2: The error terms are normally distributed Using the Anderson-Darling test for normal distribution p-value from the test - below 0.05 generally means non-normal: 0.335066045847 Residuals are normally distributed Now let’s run the same test on the Boston dataset: normal_errors_assumption(boston_model, boston.data, boston.target) Assumption 2: The error terms are normally distributed Using the Anderson-Darling test for normal distribution p-value from the test - below 0.05 generally means non-normal: 7.78748286642e-25 Residuals are not normally distributed Assumption not satisfied Confidence intervals will likely be affected Try performing nonlinear transformations on variables This isn’t ideal, and we can see that our model is biasing towards under-estimating. III) No Multicollinearity among Predictors This assumes that the predictors used in the regression are not correlated with each other. This won’t render our model unusable if violated, but it will cause issues with the interpretability of the Why it can happen: A lot of data is just naturally correlated. For example, if trying to predict a house price with square footage, the number of bedrooms, and the number of bathrooms, we can expect to see correlation between those three variables because bedrooms and bathrooms make up a portion of square footage. What it will affect: Multicollinearity causes issues with the interpretation of the coefficients. Specifically, you can interpret a coefficient as “an increase of 1 in this predictor results in a change of (coefficient) in the response variable, holding all other predictors constant.” This becomes problematic when multicollinearity is present because we can’t hold correlated predictors constant. Additionally, it increases the standard error of the coefficients, which results in them potentially showing as statistically insignificant when they might actually be significant. How to detect it: There are a few ways, but we will use a heatmap of the correlation as a visual aid and examine the variance inflation factor (VIF). How to fix it: This can be fixed by other removing predictors with a high variance inflation factor (VIF) or performing dimensionality reduction. def multicollinearity_assumption(model, features, label, feature_names=None): Multicollinearity: Assumes that predictors are not correlated with each other. If there is correlation among the predictors, then either remove prepdictors with high Variance Inflation Factor (VIF) values or perform dimensionality reduction This assumption being violated causes issues with interpretability of the coefficients and the standard errors of the coefficients. from statsmodels.stats.outliers_influence import variance_inflation_factor print('Assumption 3: Little to no multicollinearity among predictors') # Plotting the heatmap plt.figure(figsize = (10,8)) sns.heatmap(pd.DataFrame(features, columns=feature_names).corr(), annot=True) plt.title('Correlation of Variables') print('Variance Inflation Factors (VIF)') print('> 10: An indication that multicollinearity may be present') print('> 100: Certain multicollinearity among the variables') # Gathering the VIF for each variable VIF = [variance_inflation_factor(features, i) for i in range(features.shape[1])] for idx, vif in enumerate(VIF): print('{0}: {1}'.format(feature_names[idx], vif)) # Gathering and printing total cases of possible or definite multicollinearity possible_multicollinearity = sum([1 for vif in VIF if vif > 10]) definite_multicollinearity = sum([1 for vif in VIF if vif > 100]) print('{0} cases of possible multicollinearity'.format(possible_multicollinearity)) print('{0} cases of definite multicollinearity'.format(definite_multicollinearity)) if definite_multicollinearity == 0: if possible_multicollinearity == 0: print('Assumption satisfied') print('Assumption possibly satisfied') print('Coefficient interpretability may be problematic') print('Consider removing variables with a high Variance Inflation Factor (VIF)') print('Assumption not satisfied') print('Coefficient interpretability will be problematic') print('Consider removing variables with a high Variance Inflation Factor (VIF)') Starting with the linear dataset: multicollinearity_assumption(linear_model, linear_X, linear_y, linear_feature_names) Assumption 3: Little to no multicollinearity among predictors Variance Inflation Factors (VIF) > 10: An indication that multicollinearity may be present > 100: Certain multicollinearity among the variables X1: 1.030931170297102 X2: 1.0457176802992108 X3: 1.0418076962011933 X4: 1.0269600632251443 X5: 1.0199882018822783 X6: 1.0404194675991594 X7: 1.0670847781889177 X8: 1.0229686036798158 X9: 1.0292923730360835 X10: 1.0289003332516535 X11: 1.052043220821624 X12: 1.0336719449364813 X13: 1.0140788728975834 0 cases of possible multicollinearity 0 cases of definite multicollinearity Assumption satisfied Everything looks peachy keen. Onto the Boston dataset: multicollinearity_assumption(boston_model, boston.data, boston.target, boston.feature_names) Assumption 3: Little to no multicollinearity among predictors Variance Inflation Factors (VIF) > 10: An indication that multicollinearity may be present > 100: Certain multicollinearity among the variables CRIM: 2.0746257632525675 ZN: 2.8438903527570782 INDUS: 14.484283435031545 CHAS: 1.1528909172683364 NOX: 73.90221170812129 RM: 77.93496867181426 AGE: 21.38677358304778 DIS: 14.699368125642422 RAD: 15.154741587164747 TAX: 61.226929320337554 PTRATIO: 85.0273135204276 B: 20.066007061121244 LSTAT: 11.088865100659874 10 cases of possible multicollinearity 0 cases of definite multicollinearity Assumption possibly satisfied Coefficient interpretability may be problematic Consider removing variables with a high Variance Inflation Factor (VIF) This isn’t quite as egregious as our normality assumption violation, but there is possible multicollinearity for most of the variables in this dataset. IV) No Autocorrelation of the Error Terms This assumes no autocorrelation of the error terms. Autocorrelation being present typically indicates that we are missing some information that should be captured by the model. Why it can happen: In a time series scenario, there could be information about the past that we aren’t capturing. In a non-time series scenario, our model could be systematically biased by either under or over predicting in certain conditions. Lastly, this could be a result of a violation of the linearity assumption. What it will affect: This will impact our model estimates. How to detect it: We will perform a Durbin-Watson test to determine if either positive or negative correlation is present. Alternatively, you could create plots of residual autocorrelations. How to fix it: A simple fix of adding lag variables can fix this problem. Alternatively, interaction terms, additional variables, or additional transformations may fix this. def autocorrelation_assumption(model, features, label): Autocorrelation: Assumes that there is no autocorrelation in the residuals. If there is autocorrelation, then there is a pattern that is not explained due to the current value being dependent on the previous value. This may be resolved by adding a lag variable of either the dependent variable or some of the predictors. from statsmodels.stats.stattools import durbin_watson print('Assumption 4: No Autocorrelation', '\n') # Calculating residuals for the Durbin Watson-tests df_results = calculate_residuals(model, features, label) print('\nPerforming Durbin-Watson Test') print('Values of 1.5 < d < 2.5 generally show that there is no autocorrelation in the data') print('0 to 2< is positive autocorrelation') print('>2 to 4 is negative autocorrelation') durbinWatson = durbin_watson(df_results['Residuals']) print('Durbin-Watson:', durbinWatson) if durbinWatson < 1.5: print('Signs of positive autocorrelation', '\n') print('Assumption not satisfied') elif durbinWatson > 2.5: print('Signs of negative autocorrelation', '\n') print('Assumption not satisfied') print('Little to no autocorrelation', '\n') print('Assumption satisfied') Testing with our ideal dataset: autocorrelation_assumption(linear_model, linear_X, linear_y) Assumption 4: No Autocorrelation Performing Durbin-Watson Test Values of 1.5 < d < 2.5 generally show that there is no autocorrelation in the data 0 to 2< is positive autocorrelation >2 to 4 is negative autocorrelation Durbin-Watson: 2.00345051385 Little to no autocorrelation Assumption satisfied And with our Boston dataset: autocorrelation_assumption(boston_model, boston.data, boston.target) Assumption 4: No Autocorrelation Performing Durbin-Watson Test Values of 1.5 < d < 2.5 generally show that there is no autocorrelation in the data 0 to 2< is positive autocorrelation >2 to 4 is negative autocorrelation Durbin-Watson: 1.0713285604 Signs of positive autocorrelation Assumption not satisfied We’re having signs of positive autocorrelation here, but we should expect this since we know our model is consistently under-predicting and our linearity assumption is being violated. Since this isn’t a time series dataset, lag variables aren’t possible. Instead, we should look into either interaction terms or additional transformations. V) Homoscedasticity This assumes homoscedasticity, which is the same variance within our error terms. Heteroscedasticity, the violation of homoscedasticity, occurs when we don’t have an even variance across the error Why it can happen: Our model may be giving too much weight to a subset of the data, particularly where the error variance was the largest. What it will affect: Significance tests for coefficients due to the standard errors being biased. Additionally, the confidence intervals will be either too wide or too narrow. How to detect it: Plot the residuals and see if the variance appears to be uniform. How to fix it: Heteroscedasticity (can you tell I like the scedasticity words?) can be solved either by using weighted least squares regression instead of the standard OLS or transforming either the dependent or highly skewed variables. Performing a log transformation on the dependent variable is not a bad place to start. def homoscedasticity_assumption(model, features, label): Homoscedasticity: Assumes that the errors exhibit constant variance print('Assumption 5: Homoscedasticity of Error Terms', '\n') print('Residuals should have relative constant variance') # Calculating residuals for the plot df_results = calculate_residuals(model, features, label) # Plotting the residuals plt.subplots(figsize=(12, 6)) ax = plt.subplot(111) # To remove spines plt.scatter(x=df_results.index, y=df_results.Residuals, alpha=0.5) plt.plot(np.repeat(0, df_results.index.max()), color='darkorange', linestyle='--') ax.spines['right'].set_visible(False) # Removing the right spine ax.spines['top'].set_visible(False) # Removing the top spine Plotting the residuals of our ideal dataset: homoscedasticity_assumption(linear_model, linear_X, linear_y) Assumption 5: Homoscedasticity of Error Terms Residuals should have relative constant variance There don’t appear to be any obvious problems with that. Next, looking at the residuals of the Boston dataset: homoscedasticity_assumption(boston_model, boston.data, boston.target) Assumption 5: Homoscedasticity of Error Terms Residuals should have relative constant variance We can’t see a fully uniform variance across our residuals, so this is potentially problematic. However, we know from our other tests that our model has several issues and is under predicting in many We can clearly see that a linear regression model on the Boston dataset violates a number of assumptions which cause significant problems with the interpretation of the model itself. It’s not uncommon for assumptions to be violated on real-world data, but it’s important to check them so we can either fix them and/or be aware of the flaws in the model for the presentation of the results or the decision making process. It is dangerous to make decisions on a model that has violated assumptions because those decisions are effectively being formulated on made-up numbers. Not only that, but it also provides a false sense of security due to trying to be empirical in the decision making process. Empiricism requires due diligence, which is why these assumptions exist and are stated up front. Hopefully this code can help ease the due diligence process and make it less painful. Code for the Master Function This function performs all of the assumption tests listed in this blog post: def linear_regression_assumptions(features, label, feature_names=None): Tests a linear regression on the model to see if assumptions are being met from sklearn.linear_model import LinearRegression # Setting feature names to x1, x2, x3, etc. if they are not defined if feature_names is None: feature_names = ['X'+str(feature+1) for feature in range(features.shape[1])] print('Fitting linear regression') # Multi-threading if the dataset is a size where doing so is beneficial if features.shape[0] < 100000: model = LinearRegression(n_jobs=-1) model = LinearRegression() model.fit(features, label) # Returning linear regression R^2 and coefficients before performing diagnostics r2 = model.score(features, label) print('R^2:', r2, '\n') print('Intercept:', model.intercept_) for feature in range(len(model.coef_)): print('{0}: {1}'.format(feature_names[feature], model.coef_[feature])) print('\nPerforming linear regression assumption testing') # Creating predictions and calculating residuals for assumption tests predictions = model.predict(features) df_results = pd.DataFrame({'Actual': label, 'Predicted': predictions}) df_results['Residuals'] = abs(df_results['Actual']) - abs(df_results['Predicted']) def linear_assumption(): Linearity: Assumes there is a linear relationship between the predictors and the response variable. If not, either a polynomial term or another algorithm should be used. print('Assumption 1: Linear Relationship between the Target and the Features') print('Checking with a scatter plot of actual vs. predicted. Predictions should follow the diagonal line.') # Plotting the actual vs predicted values sns.lmplot(x='Actual', y='Predicted', data=df_results, fit_reg=False, size=7) # Plotting the diagonal line line_coords = np.arange(df_results.min().min(), df_results.max().max()) plt.plot(line_coords, line_coords, # X and y points color='darkorange', linestyle='--') plt.title('Actual vs. Predicted') print('If non-linearity is apparent, consider adding a polynomial term') def normal_errors_assumption(p_value_thresh=0.05): Normality: Assumes that the error terms are normally distributed. If they are not, nonlinear transformations of variables may solve this. This assumption being violated primarily causes issues with the confidence intervals from statsmodels.stats.diagnostic import normal_ad print('Assumption 2: The error terms are normally distributed') print('Using the Anderson-Darling test for normal distribution') # Performing the test on the residuals p_value = normal_ad(df_results['Residuals'])[1] print('p-value from the test - below 0.05 generally means non-normal:', p_value) # Reporting the normality of the residuals if p_value < p_value_thresh: print('Residuals are not normally distributed') print('Residuals are normally distributed') # Plotting the residuals distribution plt.subplots(figsize=(12, 6)) plt.title('Distribution of Residuals') if p_value > p_value_thresh: print('Assumption satisfied') print('Assumption not satisfied') print('Confidence intervals will likely be affected') print('Try performing nonlinear transformations on variables') def multicollinearity_assumption(): Multicollinearity: Assumes that predictors are not correlated with each other. If there is correlation among the predictors, then either remove prepdictors with high Variance Inflation Factor (VIF) values or perform dimensionality reduction This assumption being violated causes issues with interpretability of the coefficients and the standard errors of the coefficients. from statsmodels.stats.outliers_influence import variance_inflation_factor print('Assumption 3: Little to no multicollinearity among predictors') # Plotting the heatmap plt.figure(figsize = (10,8)) sns.heatmap(pd.DataFrame(features, columns=feature_names).corr(), annot=True) plt.title('Correlation of Variables') print('Variance Inflation Factors (VIF)') print('> 10: An indication that multicollinearity may be present') print('> 100: Certain multicollinearity among the variables') # Gathering the VIF for each variable VIF = [variance_inflation_factor(features, i) for i in range(features.shape[1])] for idx, vif in enumerate(VIF): print('{0}: {1}'.format(feature_names[idx], vif)) # Gathering and printing total cases of possible or definite multicollinearity possible_multicollinearity = sum([1 for vif in VIF if vif > 10]) definite_multicollinearity = sum([1 for vif in VIF if vif > 100]) print('{0} cases of possible multicollinearity'.format(possible_multicollinearity)) print('{0} cases of definite multicollinearity'.format(definite_multicollinearity)) if definite_multicollinearity == 0: if possible_multicollinearity == 0: print('Assumption satisfied') print('Assumption possibly satisfied') print('Coefficient interpretability may be problematic') print('Consider removing variables with a high Variance Inflation Factor (VIF)') print('Assumption not satisfied') print('Coefficient interpretability will be problematic') print('Consider removing variables with a high Variance Inflation Factor (VIF)') def autocorrelation_assumption(): Autocorrelation: Assumes that there is no autocorrelation in the residuals. If there is autocorrelation, then there is a pattern that is not explained due to the current value being dependent on the previous value. This may be resolved by adding a lag variable of either the dependent variable or some of the predictors. from statsmodels.stats.stattools import durbin_watson print('Assumption 4: No Autocorrelation') print('\nPerforming Durbin-Watson Test') print('Values of 1.5 < d < 2.5 generally show that there is no autocorrelation in the data') print('0 to 2< is positive autocorrelation') print('>2 to 4 is negative autocorrelation') durbinWatson = durbin_watson(df_results['Residuals']) print('Durbin-Watson:', durbinWatson) if durbinWatson < 1.5: print('Signs of positive autocorrelation', '\n') print('Assumption not satisfied', '\n') print('Consider adding lag variables') elif durbinWatson > 2.5: print('Signs of negative autocorrelation', '\n') print('Assumption not satisfied', '\n') print('Consider adding lag variables') print('Little to no autocorrelation', '\n') print('Assumption satisfied') def homoscedasticity_assumption(): Homoscedasticity: Assumes that the errors exhibit constant variance print('Assumption 5: Homoscedasticity of Error Terms') print('Residuals should have relative constant variance') # Plotting the residuals plt.subplots(figsize=(12, 6)) ax = plt.subplot(111) # To remove spines plt.scatter(x=df_results.index, y=df_results.Residuals, alpha=0.5) plt.plot(np.repeat(0, df_results.index.max()), color='darkorange', linestyle='--') ax.spines['right'].set_visible(False) # Removing the right spine ax.spines['top'].set_visible(False) # Removing the top spine print('If heteroscedasticity is apparent, confidence intervals and predictions will be affected')
{"url":"https://jeffmacaluso.github.io/post/LinearRegressionAssumptions/","timestamp":"2024-11-03T07:10:08Z","content_type":"text/html","content_length":"102603","record_id":"<urn:uuid:60d6e3af-9160-46ba-a270-368554d2ec8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00754.warc.gz"}
Computer Science Archives - Page 6 of 6 - Know the Code Let’s talk about the age of 16-bit and new numbering system. We need to compute larger numbers. Working in binary and octal is mind-numbing. Bam, welcome the hexadecimal (or hex) notation, which you use every day for CSS color codes. Your key takeaways in this episode are: Hexadecimal (hex) more condensed notation Allows up 16 digits: 0-9 + A-F One hex notation is the same as 4-bit groupings in binary Useful for larger numbers Color codes use hex Study Notes Remember: Larger numbers = more power The process of converting binary to decimal takes a few steps. It is more […]
{"url":"https://knowthecode.io/catalog/computer-science/page/6","timestamp":"2024-11-03T16:17:07Z","content_type":"text/html","content_length":"52683","record_id":"<urn:uuid:85a488bd-1f85-44a3-9676-d1efd82834a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00323.warc.gz"}
Radius Calculator Use our radius calculator to calculate the radius of a circle given its diameter, circumference, or area. How to Calculate the Radius of a Circle A circle is a symmetrical, round, two-dimensional shape with each point along the edge being equidistant from its center point. The size of a circle is defined by several key properties: the radius, diameter, circumference, and area. The radius is the distance from the center point of the circle to the outer edge. The diameter is the longest distance from one edge to the other that passes through the center point. The circumference is the length around the circle’s outer edge. This is the same as the perimeter of the circle. The area is the total space inside the circle. The diagram above shows the radius, diameter, & circumference of a circle Given any of these properties, you can calculate the radius of the circle using a formula. How to Calculate the Radius Given the Diameter The diameter of a circle is equal to twice the length of the radius. So, you can use the following formula to calculate the radius when given the diameter: r = d/2 Thus, the radius of a circle r is equal to the diameter d divided by 2. For example, let’s calculate the radius for a circle with a diameter of 6. r = 6/2 = 3 So, this circle has a radius of 3. You can also find this answer using our circle calculator. How to Calculate the Radius Given the Circumference You can calculate the radius of a circle if you know its circumference using a similar formula. The formula to calculate the radius given the circumference is: r = C/2π The radius r is equal to the circumference C divided by 2 times pi. For example, let’s calculate the radius of a circle with a circumference of 14. 4 = 14/2π = 2.23 The radius of this circle is equal to 2.23. You can use our circumference calculator to find the circumference of a circle, given the radius. How to Calculate the Radius Given the Area Just like the previous conversions, you can use a formula to calculate the radius if you know the area of a circle. The formula to calculate the radius given the area of a circle is: r = √(A ÷ π) The radius r of a circle is equal to the square root of the area A divided by pi. For example, let’s calculate the radius of a circle with an area of 12. r = √(12 ÷ π) = 1.95 The radius of this circle is 1.95. You can also use our circle area calculator to find the area of a circle, given the radius.
{"url":"https://www.inchcalculator.com/radius-calculator/","timestamp":"2024-11-07T02:24:28Z","content_type":"text/html","content_length":"70171","record_id":"<urn:uuid:0c38cbe0-d83f-4d27-9530-b3f9f8ae5ce7>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00191.warc.gz"}
Equation of elipse Bing users found our website today by entering these keywords : │enter algebra problems and get answer │equation for elipse on graphing calculator │programmed algebra textbooks │cat problems on logarithms │ │question papers to solve programms in java │Download aptitude questions and answers │where can I find algebra charts │graph my algebra formula │ │rules on how to manually solve for square roots │printable algebra 2 algebraic fraction worksheets │to calculate online maths sums │math eog sample review │ │add subtract expressions calculator online │algebra 2 an integrated approach │finding one number of another number │Completing the Square TI-83 Program │ │mental math printouts │free boolean algebra solver software │adding and subtracting+integers+worksheets │translation pratice math │ │common algebraic formulas │addition worksheet 13-18 │ti-83 plus emulator │free printable 9th grade level │ │Mathematical CAT Question Paper │radical factor calculator │review 6th grade math free sol worksheets │free teacher addition glencoe advanced │ │ │ │ │mathematical concepts │ │who invented exponentials │Princeton hall Trigonometry chapter 10 │how to use "log" on TI │Algebra problem solver │ │logarithmic equation solver │ADD SUBTRACT INTEGERS, CHART │technique calculator │completing the squares of multiple variables │ │how to do radicals on a TI-83 plus │equation solver,Software,complex numbers │holt algebra two online graphing calculator │introductory algebra study materials │ │simplify each expression lesson plan │trigonometric methods answer generator │Algebra with pizzazz! worksheet answers page│integers, worksheet │ │ │ │232 │ │ │10th grade math examples │Rational expressions and Trig identities │online gr.6 test work sheets │square root solution set calculator │ │simplying factorials │1st grade addition games lesson │algebra basic ks2 │download CAT sample papers │ │gr. 8 algebra quiz │SAT exam paper for grade 1 │extrapolation in algebra │adding and subtracting integers rule │ │algebra homework /assessment worksheet 10 │free algebra problem solver │parabola algebra │calculator program roots of quadratic real │ │chemistry balancing equations-grade 10 │FREE 6TH AND 7TH GRADE PRINTABLE WORKSHEETS │Free Math Answers Problem Solver │linear combination and permutation equation │ │worksheets │ │ │ │ │solving for the slope │how the cheat electronic scale │inequalities printable worksheet │pratice science problems for highschool │ │how to simplify square roots │test of genius worksheet │free polynomial solver │complete the square calculator │ │software for solving 5th-degree polynomial │grade 9 triganometry in canada │worksheet on finding square roots using │solve multiple variables equation with matlab │ │ │ │factorization │ │ │mcdougal littell inc worksheets │lowest common denominator calculator │free biology worksheets for the kid │THIRD GRADE MATH FAX │ │rationalizing the denominator calculator │How to solve algebraic fractions ks3 │trigonometry test ch 13 combinations │free equations using integers games to practice │ │ │ │permutation │ │ │Online College Algebra Calculator │how to turn mix fraction to decimal │how parabola used in real life │printable least common denominator │ │hyperbola formulas equations │florida textbooks math 8 │9th grade algebra worksheets │T1-84 curve of best fit │ │CPM I Algebra I Unit 13 Test │algerbra 1 how to │how to teach algebra │free pre algebra review for 8th grade │ │f1 math exam │advanced algebra instruction software │systems of inequalities worksheet │pre-algebra definitions │ │square root sample problems simplification │McGraw economics Principles and practices ch11 cheat │holt algebra student edition │how do you add and subtract integers │ │ │sheet │ │ │ │expressions and equations worksheet │interger worksheet │holt physics practice problem answers │Decimal to Fraction Formula │ │converting a mixed fractions to percents │solving rational +equtions with binomial denominators │Adding and subtracting dividing and factor │elementry maths │ │ │ │polynomials │ │ │College Algebra worksheet │Easy Route solving systems of equations with │subtracting negative numbers worksheets │glencoe Algebra 1 answers │ │ │elimination method powerpoint │ │ │ │online ti 84 calculator │monomial solver │multiplying integers with different signs │imperfect square root │ │'how to teach algebra I' │free math pizzazz worksheets │TI-86 cubed function │aptitude online test for kids │ │decimals to square root │solving trigonomic equations │what is the greatest common factor for 36, │how to work a liner equation │ │ │ │74, and 144 │ │ │quadratic equations in vertex form │free aptitude books │"addition and subtraction equations" │how to do radical expressions and functions │ │ │ │integers │ │ │Quadratic Inequalities Word Problem Worksheets │Pre-algebra with pizzazz! │log on ti 89 │simultaneous equations with 3 unknowns │ │why does Charles at algebra 2 │simultaneous equations in 4 unknowns │worksheet review linear functions │solve college algebra problems online │ │Log Application for TI 83 │basic algebra trivia questions │algebra 1 worksheets │a free online test for algebra 1 and the answer │ │ │ │ │key │ │algebra help: percentages │Least Common Multiple solver for free │free homework ks3 sheets │ti 84 emulators │ │online factoring program │examples of simple step equations │solved examples permutations combinations │glencoe teachers edition algebra answers │ │fraction problems for 1st grade class │aptitude test paper with answer │TI-83 graphing calculator online use │how to calculate probablity with ti-86 │ │solve my algrebra problem │KS2 maths and language assessment sheeta │Solving Binomial Equations │6th grade eog sample reading questions │ │what formula measure square footage into lineal │+free math lab for radical expressions,equations, and │an online graphing calculator │integers activity free │ │square footage │functions │ │ │ │square root of x cubed │simplify radical expressions worksheet │Adding and Subtracting Fractions calculator │9th grade practice finals │ │ │ │that shows work │ │ │converting fractions to decimals worksheet │Who Invented the Radical Sign │solving fraction equations with "two │maths volume worksheets KS3 │ │ │ │absolute values" │ │ │beginner algebra expressions │Algebrator │saxon math 8/7 placement test free online │simplify radicals fractions │ │ │ │study guide │ │ │holt algebra 2 pascal's triangle │how to calculate inverse tangent using second order │online factorise equations │learn free algebra online │ │ │equation │ │ │ │free pdf McDougal Littell books │dividing fractions word problems │finding common denominators │fraction subtractor │ │solving 3 simultaneous equation │log with base on ti-89 │graphic organizer positive negative integers│geometry skills for 10th gr │ │ │ │add subtract │ │ │algebra exercise for beginners │hardest math worksheet calculus │QUADRATIC SQUARE │ks3 math quiz │ │square root in the denominator │system equation maple add roots │integers worksheets │highest common factor of 22 and 74 │ │Worksheet beginning algerbra multiply division │Algebra McDougal Littell Chapter 12 Test │mcDougal littell algebra 2 End- of year test│Learn Permutation and Combination │ │algabra tutor │algebra II macdougal littell answers to even │trig answers │math geometry trivia with answers │ │pre algebra math book answers │easy algebra │worlds hardest chemical equation to balance │chapter 10 test, form 2d glencoe algebra 2 │ │free online released sat tests 5th grade │fastest way to do algebra? │greatest common factor algebra calculator │prentice hall mathematics algebra 1 │ │practice sheet on dividing and multiplying │solving non linear 1st order differential equations │function machines algebra worksheet │substituting algebra worksheet │ │monomials │ │ │ │ │hardest maths equations │5th grade printables │answers to Algebra 1: Concepts and Skills │mcdougal littell algebra 1 worksheet answers │ │ │ │California Edition │ │ │ks3 maths scale │free download excel calculation work exercise │accounting papers printable │permutations worksheets │ │java code for computing significant digits │solve radicals with calculator │algebra 2 mcdougal littell │algebra exercises Grade 8 │ │maths quizzes for 7 grade │Integrals 5th grade │hyperbola shifting from center │nonlinear algebraic equation excel │ │use an online calculator that converts mixed │how to pass the algebra 1 SOL │GLENCOE ACCOUNTINg answers │square root property calculator │ │numbers to fractions │ │ │ │ │second order homogeneous difference equations │solving equations + division + printable │simplifying exponential │how to turn decimal into fraction on calculator │ │prentice hall math book │multiply rational expressions calc │Hard math equations │trigonometry formulas for dummies │ │glencoe workbook equations │math integer tests 6th grade │fractions least common denominator │linear equations with two variables worksheets │ │ │ │calculator │ │ │intermediate algebra guide │algebra like terms worksheets │worksheet on expression │beginning algebra worksheet │ │English reading HW for 4th grader online tools │addition and subtraction on negative numbers │saxon math books online source algebra 1 │percent equation │ │ │elementary │ │ │ │berkeley prealgebra assessment │third order roots │how to simplify by factoring │north carolina algebra II text book │ │factor algebra equations │free homework help - Completing the Square - Algebra │solving system of two equations excel │holt science and technology directed reading │ │ │ │ │worksheets chapter 25 │ │free printable adding integers worksheets │number base convert open source java │conic online practice │formula sheet for finding area │ │pre algebra credit by exam │parabola calculator │barrons intermediate algebra nys exams │free downloadable books on differential │ │ │ │ │equations │ │factor trees+worksheets free │how to solve quadratic formula on ti89 │phoenix source code for ti 83 │4th grade fractions practice sheet │ │online calculators simultaneous equations │formulas in accounting level 2 │add and subtract integers calculators │free 10 th grtade reading worksheets │ │trigonometry.ppt │division of algebraic radicals │differential equations ppt │algebra 1 part unit 12 test b cheat sheet │ │free primary school science exam │free online calculator dividing │polynomial division free online │teach algebra 5th graders │ │online cubic root calculator │mcdougal littell answers challenge problem │simplifying radical fraction │algebra 2 online textbook edition │ │radical expression solver calculator │math worksheets, cubic units volume, 2nd grade │immediate calculator solvers │calculation worksheet for kids │ │CAT\3 toronto "grade 5" test │simplifying radicals using the ti 84 │download Graphing Calculator "chemistry │add subtract simplify radical expression │ │ │ │cheat sheet" │calculator │ │daily mathematics homework │simplify rational expression worksheet │free printable maths tests papers for grades│cube roots of decimals │ │ │ │6-7 │ │ │Mathematics 3 2nd edition answers CPM │algebraic fraction solver │how to change quadratic functions into │the greatest quadratic equation calculator │ │ │ │vertex form │ │ │6th grade english worksheets │condensing and expanding logarithms practice problems │free aptitude e-book │third grade math free printable │ │73376081451839 │McDougal-Mathematics- Algebra 1 Workbook │algebra calculator rational expressions │math worksheets fractions multiply divide │ │contemporary abstract algebra text download │5th grade teaching worksheets │Elements, Compounds & Mixtures revision-yr 7│ALGEBRA PROBLEM SOLVER │ │online conics equation solver │factorising calculator │maths puzzles yr 10 │free printable worksheets measuring volume in │ │ │ │ │cubic units │ │create a formula to convert decimal to weight │kumon worksheet │3rd +grad math word problems │scale games math │ Bing users came to this page today by using these keywords : Online algebra expression calculator, simplifying radicals equations, Algebra Structure & Method Book 1 quiz, calculator for fractions to decimal with a simplify button. Free help for combination and permutation algebra, free factoring program showing steps, ellipse hyperbola parabola grapher. Ti-85 HOW TO SAVE FORMULAS, multiplying cubed polynomials, worksheet, distributive property, algebra 1 answers. Ti 84 quadratic formula, expanding 2 variable equations, mathematica algebra substitution, algebra word problems in our daily lifes. Practise ks3 stas science paper, answers for Radical expressions and functions, how to do polar complex numbers on ti-83 plus, poem using algebra words. Solving multistep equations worksheets free, cool math for kids.com, real life applications of radical exponents, simplify a rational expression ti 86, maths test for year 8, Progams for solving cube Graphing ordered pairs worksheet puzzles, ti-84 trigometric identities store, florida mathematic textbook sixth grADE. Solving linear, quadratic, exponential and log function, Engineer equation solve free, physics cheat sheet ti-89. "linear programing" examples solutions, online maths help yr 8, maths work sheet for year 3, glencoe mathematics algebra 1. "Glencoe/McGraw-Hill Algebra worksheet 13-3", glencoe algebra 2 answers, abstract algebra solution, LCM MATLAB, conic equation for basketball. Printable maths questions ks2, integrated algebra-practice tests for regents examinations-answers, simultaneous equations calculator 3 unknowns, "coordinate plane" worksheet free, michigan algebra Software program for quadratic equations having real solutions, free downloadable algebra by herstein, pre algebra with pizzazz answers worksheets. Free downloadable Cat exam book, 8th grade history worksheets, McDougal Littell Practice Workbook Course 2-answers. 6th grade decimals review page, Algebra with Pizzazz Answers, how to simplify algebraic expressions using the distributive property, binomial theory problem solver. Online graphing ti-83 calculator, Advanced Math test for 7th grade, ppt on fluid mechanics concepts. Factoring + tic tac toe, prentice hall math books, Download Algebrator, printable ks2 sats papers, high school algebra. Algebra free online math problem solver, matlab factorization of equation, exponents and multiplication, powerpoint on degrees, algebra, Pre-Algebra Quizzes, beginners algebra, solve square roots with indexes. Prentice hall math and algabra, my fast ks2 science questions, ninth grade algebra 1, glencoe online answers algebra 2 self check quiz. Converting a mixed fraction to a decimal, answers to the algerbra 1 workbook, intermediate accounting textbook pdf free download, aptitude test papers+answers. Worksheet adding and subtracting mixed fractions, McDougal Littell Geometry, maths ks3 pythagoras, Grammer English ebook exercise. Free pre-algebra and algebra for 6 and 7 grades, solving second order differential equations u(t), polynomials of multiple variables in matlab, linear equation calculator from given points. Real-life samples involving polynomials, yr 9 maths, online year 8 math tests. Square root maths for idiots, area formula sheet middle school, 6th grade mcdougal littell middle school chapter 10 test answers, free online fraction 5th grader. Worksheet on Factoring trinomial squares, how to factor math problems, online calculator to simplify fractions with variables, math solver rational expressions. Nth term worksheets, formula for volume worksheet with answer, exponential and log maths question solver, practice combination and permutation. Free Logarithm worksheet, operations with radicals puzzle, free math solutions, where can i buy glencoe algebra 2 teachers addition, Free Worksheet for grade 5, understanding decimals fractions and percentages gcse print off, graphing ellipses. Precalculus review worksheets, complex factoring, adding and subtracting integers problems, Parabola Formula Algebra 2, free worksheets of grade 9 algebra. Online maths quiz and solutions, money printable worksheets 1st grade, math printouts, use the ti83 for free, easy way to do logarithms. Worksheet on graphing percents, "RADICAL OPERATIONS", WORKSHEET, algebraically solving definite integrals, textbook answers cheat, cubed factoring, lesson master 7th grade math 9-1 answers. Multiplying rational expressions calculator, free algebra worksheets notes, add,divide,subtract,multiply fractions games, ti-83 programming check exponential answer. First order linear differential equations + existence and uniqueness, free online maths tests percentages ratio, SAT testing for 6th graders, Excel Solving Equations. College algebra worrksheet, sixth edition accounting 1 workbook answers, iqtests ged math 10th, hyperbola equation life. McDougal Littell Algebra 2, functions difference linear equations, north carolina eoc principle of business answers, Square root of a perfect square monomial calculator, Standard Form Calculator, algebra order of operations practice sheets. Adding radical expressions fraction, power fraction, free aptitude questions pdf, how to calculate cubed root, maths module 8 revision sheets, equation for hyperbola in calculator. PRINT OFF PRE. ALGEBRA PROBLEMS, graphing ellipses online, fun word problems for square root, algebra 2 chapter 5 project holt mathematics. Ti89 delta function, adding and subtracting polynomial games, combinations permutations practice, how to solve radicals, Algerbra problems to practice/free. Worksheet graphing integers connect dots, y x graph equation rectangle, Finding the Volume Worksheets, factoring the difference of 2 squares 8th grade, Glencoe Algebra 1 answers, physics practice test Copyright © by Holt, Rinehart and Winston, 6th Grade math solving proportions printable. Foiling fractions, prealgebra- Saxon Math answers and book guide online cheats, function table worksheets, indiana state standards worksheets for 4th grade, basic identities of class7 for maths subject, MCQs in aptitude tests solved, logarithms practice sheet. Permutations and combination problems and solutions, find lcm algebraic expressions, math test generator common factor, pre algebra scale model. Ti 84 plus tips .pdf, alg2 test questions, conceptual physics textbook online 3 edition chapter 33, free usable online calculator, math eog questions for 6th grade, free printable grade 8 sat Software for factorization 512, lattice worksheets, Math Functions For Dummies. Glencoe/McGraw-Hill algebra 1 answer key, +free 7th grade english worksheets, glencoe algebra 1 ch 10 test, management aptitude questions with solutions. Decimal Square root, real life graph functions, adding negative and positive numbers calculator, graphing an cubed exponent. Formula equations for math, the differences between linear and nonlinear differential equations, math+evaluation vs. simplification of an expression, algebra 1 games worksheets, polynomial worksheets grade 10, solving fractions with square roots. How to factorise 3rd order equation, non linear simultaneous equation using basic gauss elimination method, advanced algebra review pdf, Free pre Algebra websites. Conic structure picture on ti-83, algebra substitution printable worksheets, algibra#, integers free 5th grade lesson plans, practice tests about operation of radicals in algebra, finding discriminant on TI84. 2004 8th grade Math SOL Exam, +Algerbra to do without a Calculator, ti 84 meaning of symbols, Order Least to Greatest Fractions, radical expressions with variables calculator. 7th grade math gaMES help SLOPE, volume mathe sheets, teaching algebraic equations, free homework cheats. Holt middle school math chapter 5 cumulative test, form 4 add maths differentiation exam paper, Glencoe Algebra final exam, Simplifying Expressions with Exponents Worksheets, ratio to whole numbers, math simplification. Factor tree, addition, online Algebra 2 chapter test answer key, free algebra for beginners, iowa algebra test example, Slope y- intercept Calculator with division, math problems about the application of algebra in our daily life, algebra software. Fraction expression solver, simplify radical by factor calculator, statistic graphing calculator program, precalculus equation solver, maths exams for year 8's. All ratio formaula, maths aptitude questions, answers to mcdougal littell world history book, chapter 4 factoring worksheet area perimeter foiling. Algebra 1 answers cheat, SUBTRACTING NEGATIVE FRACTIONS, math help algebra cheats and answers, algebra 1 state exam practice, laplace transform calculator, algebra worksheet. Examples 12-5 prentice-hall math 6th grade, "basic math lessons", excel slope expression, high school algebra 2 math books, convert percent to decimal calulator, solving equations by multiplying and dividing worksheets. Creative publication pre- algebra with pizzazz, free exponent of number chart, radical calculator, equation in vertex form, quadratic solver third degree. TEACHERS CAFE WORKSHEET, slope algebra calculator, quadratic formula real nonreal answers program ti 83. Shortcut formulas used solve maths apptitude, free precalulus practice sheets, british method math notes, convert from dec to bin with written code using java. Formulas square root(a+b+c), multiple choice generator pure math, java programs about fractions, math linear equations worksheet. Free math printouts for first graders, free 7th basic math skill puzzles printable, 9th grade math probability with restrictions, Use Online Glencoe Chemistry Book for free, matric calculator. 8th and 9th grade quiz and review with answers math, algebra square root addition, complicated algebra formula. Adding subtracting multiplying and dividing rational numbers, mixed factorising worksheet, prime factorization of a denominator, ks3 year 8 maths factorise. Maths factor problems, equations with variables only, Engineering test printouts, online logarithmic calculator, saxon math algebra 2 answers. Two-step problems algebra, negative fraction equations, show me year 10 maths program. Decimals free works sheets, 6th and 7th grade just math, factor math, TWO STEP EQUATION WORKSHEETS MATH 6TH GRADE, easy to learn math online free, radical expression simplify square root factions Javascript geometric equation solver code, mcdougal littell geometry, Equation Factoring Calculator, math for dummys, grade 6th mathematic problems on slope, how to learn algebra, word problems 1 grade printabe. STATistic CHEATS, alebra tutoring, graphing equalities. Factoring calculator programs, how to solve algebraic equations on ti 84, factoring algebra, practice interval notation game, E-books 5th grade, solution step mixture problems, Time and Distance Addition and Subtraction Worksheets. Grade 9 trigonometry practise, math exam paper A level online, “how to pass algebra 1”, graphing parabola calculator, graphing program showing foci and ellipse conics, PLANE GEOMETRY FOR DUMMIES. Simplifying expressions calculator, probability and statistics review sample final question midterm permutation combination, algebrator free download math, simplifying a conic, simple equations Online algebraic factoring, solving mixture problems algebra 1-2, help solving algebra homework problems. What is the prime factorization of a denominator, probability cliffnotes, matlab solve algebra equation, "Algebra 1 - Factoring Flow Chart", math book solution manual, probability review sheets grade Answers to Math Homework, how to pass college algebra online, adding tenths on a number line worksheet. Edhelper pythagaras triangle free printables, grade 10 maths cheat sheet, trinomial factor solver, Algebra 2 chapter 12 help, algebric question, free easy ways to learn algebra, free online word problem solver. Free algebra puzzle worksheets, cube nets printable 11, using calculator to graph circles. FACTORING OF DIFFERENCE OF TWO SQUARE, FACTORIZATION of quadratic expressions, adding radicals calculator, gr 10 EXAMPLER QUESTION PAPERS, elementary algebra 101 free. Simplify cube root expression, "math sheet""free""elementary", algebra elimination calculator, online grade 8 algebra test. 9th grade math quiz, like terms in algebraic form, scientific calculator with cubic root, teahcing like terms. Parabola graphing programs, pre-algebra with pizzazz! creative publications, quiz on simplifying rational expression, free primary five exam papers, free algebra word equations calculator. Who invented the rational sign, how to cheat on EOG, "symmetry games for kids", free math pizzazz worksheets on percents, free algebra photos, Practice Grade Nine Probability Problems. Power point presentation of numerical linear algebra lecture notes, change the subject algebra T1-83, free aptitude ebooks. Solving complex fractional exponents help, take algebra quizzes online, Free math programs for 7th graders, exponents and roots calculator, Grade 8 math worksheets, free math worksheets conic Free solving algebra equations, simplifying algebraic expressions calculator, trivia questions for 3rd grade, Factor online equations, quotient of a binomial calculator. What number is used to complete the square calculator, stats download for Ti-84+, texas 2nd grade work sheets. Chemistry Addison wesley workbook answers, free online reading and math for frist grade, examples complex college algebra problems, algebraic formulas. Term powerpoint, college algebra, log, exponential calculator, allgebra maths test example, Algebra Trigonometry Addison Wesley Paul answers, math scale. Simplifying Fraction Radicals Cubed, pizzazz puzzles answers for permutations, probAbility problems work 6th grade EOG, free saxon algebra half answers, subtracting and adding negative number test, ks3 math SAT test. Pictograph worksheet, hyperbola real-world, third degree hyperbola, +table of trigonomic functions. Chemistry hALF LIFE WORKSHEET, algebra I software tutorial, interger worksheets, free printable third grade work sheets, math permutation examples for kids, what is the easiest way to find the lcm, math solving program. Grade 3 fractions worksheets, poems about math/algebra, best way to learn the six trigonometric functions, multiply fractions with variables calc, how to learn algrebra. Log calculator exponential, TI-84 greatest common factor, real world application of surface or volume/math problems, 7 grade Trigonometry questions, list of fourth roots. Holt physics tests, free & pdf downloadable math ACTIVITY books for kids, simultaneous equations in two variables, free essay on the comparison of traditional math and sixth grade math, texas ti 83 rom download. Printable algebra final study guide, 3-digit additon worksheets, trigonometric simplifications grade 8, SOL review for 9th grade world history, on-line free past gcse maths foundation papers, how to program a formula TI-84. Answer key to High Marks: Regents Chemistry Made Easy - The Physical Setting, subtracting integers worksheets, practice-hall, inc., ti calculator programs interpolation ti 83 plus, solve LCM algebra, 6th grade tree diagram worksheets. Free usable online graphing calculator, glencoe algebra 1 teacher edition, T1 calculator rom, children revision stats exam papers, free 7 grade editing sheets, solving and graphing linear equation. Software to solve calculations with bearing and range in latitude and longitude, solution exercises for dummit algebra abstract, algebra simplification help, free online graph maker with "best fit line", holt algebra 1 worksheets. "TI-89 vs 9850", notes in abstract algebra, square root variable squared, factor polynomials cubed, using ti-89 surds, free mcdougal littell algebra 1 worksheet answers. Foil math formula, math game using rational expressions, algrebra dummies, TI 84 silver Show asymptotes, free printable integer worksheets. Mathematics online exercise malaysia middle school, how to simplify square root on your calculator, "probability examples" without replacement sixth grade. Algebra readiness test 6th, practice college algebra 3, square root first graders, 10th grade geometry worksheets, pre algebra software. Solving simultaneous equations kids, algebra 2 resource book, printable 8th grade TAKS charts, how to do two step equations with integers. Math made easy elemetary worksheet, maths non linear equations example, square root to the third, rational expressions and adding, college mathematics clep practice answers explained, college algebra, How to multiply fractions with a square root. Intermediate algebra equations, iowa prealgebra test, how to solve algebraic fraction sums, a computer programme example in solving cubic equation using the bisection method of numerical methods. Math game free ks3, simplifying radicals with variable calculator, learn algebra online free problems answers, grade 8 algebra equations, how to solve a quadratic knowing the roots, permutation combination problems. Graphing inequalities ti 89, adding subtracting multiplying and dividing exponents, how to solve radical expressions fractions, how to find lcm on ti83. Free printable 8th grade practice tests, identifying terms pre algebra, graph hyperbolas online. Free download accounting books, hbj algebra 2 with trigonometry, tests on algebra, elementary worksheets for distributive property. Help with Algebra 2: Integration, Applications, Connections, free copy of excel cheats using the function keys, printouts on circumference diameter and radius. Gre notes, Rocket Mathcad worksheets, pre-algebra real-world word problems, middle school permutations combinations, in order least to greatest a fraction number line, quadratic formula TI89 radical answer, free online maths papers gcse. Grade five probability algebra, factoring equations on the TI-83 calculator, rational zero, synthetic divison and complex numbers. Coordinate Plane - powerpoint, PARABOLA+ALGEBRA, convert 0.01 to a fraction. Quadratic equation app for casio calculator, parabola+algebra, adding radical expression calculator, practicing beginner exponents, tutor online for 4th standard kid. Simplify square roots with exponents, matlab code quadratic roots, multiplying integers and worksheets, slope free worksheets, ALEGBRA PRACTICE, holt alegbra 1, how to do vertex form for algebra 2. Maths exercises for 8 year old children, slove algebraic questions, McDougal Littell Inc Resource Book, math tutor sol virginia. Ti-38 texas instruments save data, hardest math equation in the world, lowest common denominator fourth grade. Prentice hall chemistry quiz worksheet pdf, factor trees math worksheets, NC McDougal Littell Algebra EOC Test, simplify radical numbers in fraction form, Inequality Algebra 1 McDougal Littell INC, matlab solving simultaneous equations, Answer cheats holt physical science. Lcm of exponents, tricks to remember algebraic formula, log, simultaneous equations with complex number, solving multistep equations worksheets, a website that gives you the answers to your algebra 1 problems, printable fraction chart for kids. Multiplying integers worksheet, simple math homework for grade 1, fraction formulas, preston Hall math textbooks, algebrat de baldor, year 9 mathematics quizzes. Algebra sheets to print, comparison between abstract algebra and boolean algebra, +elementary +algebra +worksheet, Permutation Combination Problems Practice, Factoring of Polynomials calculator, printable college algebra worksheets. Aptitude questions and answers, mcdougal littell integrated math book answers, print off isometric paper online, geometry tests samples worksheets, math yr.6, mentle solve simple square root, online calculator x intercepts cubic. Statistics Algebra 2 questions, subtraction equations for 6th grade math, How Are ellipse Used in Real Life, excel equations, math model worksheet, calculator factoring, ascending numbers fractions. Quadratic formula setup, hyperbola lesson plan, simplify equation on matlab, graphing picture on a calculator. Maple solve implicit function, worksheets about multiplying and dividing integers, math exam +grade 9 +math +copy +ontario. Online graphing calculator step by step, fraction to square root, printable factor chart, algebra equation solver, calculating proportions. Bracket operations ks3 worksheets, solving the quadratic problems on the ti 84+, mcdougal littell algebra teacher's edition "teacher access code", factor of 68 in Maths, online math printouts, mcdougal answers. Glencoe algebra 1 equations, prentice hall mathematics homework help, how to solve trinomials. Fraleigh abstract algebra solutions, solving algebra equations with factoring, online TI-83 emulator, calculators for simplifying rational expressions, holt middle school math worksheets answer keys. Solving two steps equations quiz, answers to mcdougall littell workbook, maths 2008 gcse non cal past paper. Saxon algebra 2 answer key, SOL Practise, positive negative integers printable worksheet, gr 10 maths hyperbola, 7th Grade practice EOG Online Test, graphing utility vertex. Factoring polynomials used in everyday life, converting problem from standard to vertex form, mathematical problem solver. Prime-factored form of the number, volume worksheets from middle schools, maths exercise for grade 5 to 8 in free cost. Test for volume.math, divide quadratic, graphing calculator hyperbola, Algerbra problems to practice, 6th grade math work sheets and test, general aptitude questions, McDougal Littell. Online algebra questions, 'sample modern college math problems', 3rd garde lesson plans, math permutation examples simplified, online ti-83 emulator, example of games and puzzles factors of production, decomposition in pre algebra. College Algebra fourth edition by Alan s. Tussy, year 8 optional maths test specimen papers, proportion worksheets, algebra with pizzazz worksheet answers 102, online simultaneous equation calculator two or three variables, e mathe exam, free downloads 8th grade math problems with answer keys. Animation in bonding with orbitals, pythagorous formula, Why is it hard to understand algebra II, dividing rational expressions calculator. Online practise tests for florida, Converting decimals into fractions, algebra scale worksheet, parallel lines + identify + free worksheet, Algebra 2 workbook online by Ron Larson, "my son is having trouble in 8th grade math", fifth grade daily math warm ups oregon. Adding Fractions with like Denominators Worksheet, fraction solving step by step, poems about pythagorean theorem?. Formula sheet for ged test, excel partial fraction expansion, glencoe algebra final test, free practice Sat test for 6th grade, download a ti 83 rom. Fx-83 quadratic, java Greatest common denominator modulo program, trig past paper model answer, middle school math with pizzazz! worksheets, Heath Algebra 2 home, solving equations in terms of single variable calculator. Factoring square roots, homework on ring theory, trigonometric identities solver. Algebra 1 book answers, math equasions, year 8 algebra test, free algebra problem solvers. Linear algebra math problem solver, multiply rational expression calculator, pre-algerbra. Trigonometric ratios free worksheet, how to simplify radical expressions fraction, Saxon Math Cheats, radical expressions and equations calculator. Factorize square root radical, writing equation of hyperbola when foci are given, store notes on ti 89, gallian solutions, free algebra1 examples, Algebra I Solve compound inequalities. What is the order of operations when using exponents and a variable?, conceptual physics lessons, percentage formula, geometry worksheet 3rd grade, what is the difference between solving and evaluating, Glencoe Algebra 1 Florida, Worksheets. Pre-algerbra with pizzazz!, convert decimal to fraction calculator, an you get decimals when your doing two step equations. Slope of a straight line online calculator, math + neurological changes, enter difference quotient ti-89, Completing the Square: Circle Equations practice. Passing college algebra clep, geometry practice problems for grade 7, algebra calculator online divide rational expression, grade 6 math in ontario, Solving Quadratics worksheets with answer keys, scale factor worksheets, free learning algebra. "DIVIDING POLYNOMIALS" ti 89, free printable practice 6th grade math test, adding forces in graphs for kids, 6th grade algebra worksheets, convert decimal to improper fraction, adding and subtracting integers games. Java converting double to time, online maths test ks3, solve third order equation solver excel, keystage3 free exam papers science, cost accounting: foundations and evolutions 7th edition solutions, grade 5 exam paper. Algebra free printouts, Liner Algebra new methods teaching, rearranging formula calculator, cost accounting*.ppt. Holt physics book test, math poems,order of operations, convert percent to fraction and decimal worksheet, "lattice math worksheet", adding subtracting integer "work sheet". Pre algebra calculator inactive, Aptitude Question paper, mcdougal littell chemistry chapter 20 matching, hardest maths problem. Algebra game worksheets, glencoe algebra 2 chapter 9 page 7, square root of a fraction. Algebra 2 conics and circles, middle math printable area nets, how to understand algerbra, mcdougall little brown algebra 2 review, advance clock problems in algebra, glencoe teachers edition algebra Printable 3rd grade performance math test, free printable algebra problems, graphing algebraic equations powerpoints, introduction to sets in math.ppt, holt rinehart and winston texas homework and practice workbook, algebra 2 exams. Casio graphic calculator download emulator, solve quadratic equations using square root rule, help on the addition method, the hardest equation in the world, alebra help, "online calculator" "radical Quadradic equasion, trinomial in four terms, pre-algebra tutor software, 7th grade eog reading comprehension printable tests. How to solve algrebra, How Do I Turn a Mixed Fraction into a Decimal, combining like terms, arithematic, glencoe Accounting I answer key, Intermediate Algebra help, gr. 7 Math Expressions Vol 2 Houghton Mifflin. Teach me algebra 2 conics, Free 8th Grade Math Worksheets, multiplication and division of rational expressions, how to solve algebra matrix. Writing algebraic expressions equations worksheet, formulas used in advance algebra trig, formulas of mathematics of class 9th, online simultaneous equation solver three variables, symbolic methods to solve equations. Difference quotient made easy, free online algebraic formulas, printable math adding and subtracting integers practice, algebra solvers step by step online free, logarithm graph base2, ti 89 solve differential equations. Glencoe Algebra 2 test, tennessee prentice hall mathematics algebra 2 teachers edition on online, 8th grade math solving algebra problems using graphing, ti 83 algebra apps full, fluid mechanics concepts with applications, "standard form" subtraction, free online sequence solver. Geometry math solver, pearson education algebra 2 chapter test 11 form B answers, online algebra calculator (substitution). Teach linear equation to beginners, estimation worksheets, addition of a fraction formula, how to solve a graphic liner equation, answer key for mcdougal littell Algebra 1 workbook, advanced mathematical concepts merrill pre answer. Algebra +6th +printable, solve algebra questions, cailfornia 7th grade entrance exam sample. 8th grade algerbra work sheets, multiplying dividing integers worksheets, 6 grade equation worksheets, HOW TO SOLVE FOR PROBABILITY, algebraic expressions middle school quiz, factor tree worksheet. Lcm equation solver, middle school cryptography worksheets, formula to generate permutations in excel, year 8 maths forming equations, Ti-84 Plus differential equations program, algerbra games for 5th graders. Homework sheets for algebra, 8% decimal, cross product algebra fx2. Online graphics calculator using solver, Algebra 1 Chapter 6 Resource Book answers, foil worksheet algebra, linear algebra done right solution manual, pictures of multiplying integers, divide cube Year 8 math test, Gr. 10 polynomial and algebraic expressions, finding the factor of a number with a ti-83 calculator, transposition of equations involving square root, easy way to learn algebra. Hill estimator matlab, fraction decimal converter formula, finding parabola roots, algebra 2 an integrated approach answers, solving problems with parabola vertex, program solve+polynomial, how to solve math ratios. Help and explain my algebra homework, middle school Multiplication Chart, algebra quizzes for beginners, lineal metre definition, C# Equation set solver, Adding, subtracting, multiplying, and dividing integers, worksheets linear equations. Algebra and trigonometry mcdougal littell california standard example problems, worksheet translations rotations ks2, kids math test to print out. Adding and subtracting rule patterns worksheets, powerpoints on solving equations, trigonometric ratios automatic solver. How to do square roots, pre-algebra trig worksheet, algebra software learning programs, binomial formula on TI 83 plus, how to simplify radicals 8th grade math. Factorable Denominators calculator, set theory math symbols for integrated math regent, factoring trinomials calculator, how to study for the iowa algebra abilities test. Cubed radical fractions math problems, download estágio kumon, antiderivative program, probability + advanced algebra, free practice compass test online. Online quadratic factor calculator, sample mahts paper for sixth standard student, 9th grade algebra games, Calculator w/ radicals, algebra II Mcdougal littell. Slope intercept form on ti-89, negative radical calculator, permutations and combinations word problem practice, fifth grade algebra practice, need a graphing calculator to solve equation, Multiplying dividing and adding and subtracting. How to use graphing calculator for ellipses, word problems in simultaneous equations with 2 variables free worksheets, algebra combinations, fun way to review adding, subtracting, multiplying, and dividing decimals, FORMULAS TO CONVERT PERCENTAGES TO FRACTIONS, how do i program a cheat sheet into my ti-84 calculator?, adding and subtracting integers with one unknown. Decimals and algebra worksheets, Mcdougal Littell Geometry Notes, take log of different base on TI-83, quadratic simultaneous equations. Holt Pre-Algebra answers, linear equations worksheets, factoring TI 83, master fraction strip, solution exercices for dummit algebra abstract, qudratic factoring calculator, calculator that solves radical equations. Algebra 2 parabola problems, Quadratic formula fun worksheet, GMAT permutation formula, problems on combinations for grade 5, residual algebra calculate, cost accounting book free doenload. Hyperbola solving systems, calculator on solving equations with fractions, binomial solver online. Scott foresman 3rd grade math worksheets, free math games for first grade +onl line, sixth grade tutorials, AP inter old maths IIB solved papers, help+-in+-math+-with+-slopes, online mcdougal littell algebra 2 book, prentice hall algebra 1 answers free. 7th grade hrw mcgraw hill social studies glossary, mathematical puzzle for 8th class kids, conics formula chart, polynomial gcf calculator, algebra 2 tutor. Gcse maths algebraic proofs, simplify, square root, example, KS2 long multiplication free worksheets, algerba 2 material. Solving linear equalities, math. work sheets volume, high school math review printouts. Mcdougal littell biology answer key, online algerbra calculator, Inequalities Worksheet sixth grade. California pre algebra practice workbook, how to multiply polynomials on the TI-83, trigonometry answers, ti-84 radical form, presentation in multiplication + proplem solving. Times square root calculator, algebra help with factoring, free printable math worksheets 8th grade, math pictures of order of operation, free online texas instruments graphing calculator, algebra TI83 Least common denominator program, Answers for the physics workbook, explanation of multiplication of integers, how to solve permutation and combination problems, method to find factor quadratic Rational expressions number games, 2nd order differential graph, algebra I prentice hall worksheets, how do i solve algebraic equations, permutations and combinations worksheets glencoe, 10th Grade Algebra printable worksheets. Converting decimels to words, online calculator third root, writing a percentage as a fraction, cubic formulas for dummies, variables on both sides online quiz. Math worksheets multiplying and dividing decimals, square root of 48, simultaneous equations tool, tip Ti-84 for trigonometry, free lcm solver, algebraic expressions calculator. ALGEBRA PRATICE, root discriminant equation complex, square root practice, Rationalizing in gcse maths video tutorial. Ti-83 calculator worksheet, radicals & quotients, glencoe algebra 1 chapter test, ken. Absolute value solver, mcdougal littell calculator activities, easy algebra questions, sat 's practice papers (6th grade), DownLoad TI84 calculator games. Radical expression simplifying solver, online practice math test ks3, word problem online calculator, who invented the radical in mathematics, scientific t-83 calculator, saxon test generator, whats a short cut to dividing polynominals. Holt algebra 1 math books, math simplified radical form, graphing inequalities on a number line powerpoint. Hardest math question, free 8th grade math worksheets, math for third grade, texas instrument t83 balancing program, solve for x with multiple variables, pre-algebra practice sheets +interactive. Simplifying compond logorithmic functions, MIDDLE SCHOOL MATH WITH PIZZAZZI BOOK D answers, 5th grade algebra activities, compare saxon algebra II 2 teaching textbooks algebra 2 II. Complex fractions worksheet, TI-84 factoring section, chemistry powerpoints. Free Printable Math sheets for 9th graders, zero factor property with fractions, High School TAKS review worksheets, how to multiply mixed integers, algebra calc. Second order differential, pre algebra school work to do in the ninth grade level, 6th grade math work sheets. Hot mathcom, what are 10 questions for equations and functions, emulator for ti84 graphing calculator. Geometry McDougal Littell Inc. worksheets, convert mixed number to percent, test papers ks3 online, geometry and trigonometry source code programs for ti 83 plus, solving linear equation worksheets. Mathmatical calculations for circles, addition and subtraction of surds worksheet, vertical line restrictions algebra 2, area +printable +free +maths +K12. How to find cube root on TI-83 plus, sets and subsets ti 89, polynomial solver,Software,complex numbers, algebra 2: how to do probability, solving algebra parabolas, square root property of equations definition, phoenix calculator game cheats. How to solve square root in decimal, math worksheet cheats, cube factoring. Solving equations for y and distributive property, 8th grade math practice eog test, subtraction worksheet grade 6, test prep for 7th grade pre algebra, math test yr 9, ks2 maths sample revision bond papers, program for area of triangular prism TI-84 calculator. Calculator /pre algebra, free pdf algebra worksheets from grade 6 to grade 9, How to Pass math Aptitude Tests, algebra two step math worksheet. Plot polynomial equation, solve exponent x^3 radical, math fifth grade work sheets to print for free, x^4+y^4=z^4,graph, subtracting negatives sheet. Sixth grade permutations+combinations+probability, Algera Font, factorising fourth order equation, what is the percentage of people who prefer dogs over cats, free algebra 1 puzzles. Glencoe 7th grade math book workbook sheets for florida, adding,multiplying,subtracting and dividing, Chapter 5 Project in holt mathematics algebra 2, samples of free algebra problems, printable 3 grade fun sheets, algebra online calculator simplify. Maths for students+scale factor, Prentice hall Florida algebra 1 answers, ti 83 plus download chemistry equations, year 8 maths exams, calculating the square root in excel. Free worksheets adding and subtracting negative numbers, GMAT Permutations, read pdf in ti89. Maths yr 9, simplifying binomials calculator, answers to scott foresman science test for teachers, a year 8 online mental maths test. Dividing decimals 5th grade, sats revision papers to print for KS3, free+math+worksheets+for+six+graders, Simple Equations Quizzes for second graders, grade 8 tests math algebra, ti-89 base convert. Solve radical expression calculator, pythagoras problems solutions, Ti-84 Emulator download, mental math game online for ks3. How hard is it to solve radical equations, holt pre algebra workbook answers, mixed decimals to fractions, "square root in green globs". Fifth grade review worksheets math, algebra 1 problem solver, HELP LEARNING Logarithms, Multiplying Radicals and square roots adding and subtracting, real roots polynomial ti 89 calculator, solve nonlinear ode, sample aptitude paper. Boolean Algerbra, sample basic algebra worksheet, accounting books download. Holt chemistry standard test prep answers, "MRI Graphing Calculator" download free, 10th grade trigonometry problem, holt middle school math answers lesson. Solve system equations graphing worksheets, coordinate plane and 5th grade and printables, FREE DOWNLOADING COST ACCOUNTING BOOK .PDF, pythagorean worksheets 7th grade, simplifying variables worksheet basic, comics dilation math. Answers for glencoe algebra 2 test, foil math calculator, College Algebra intro to trig worksheet, dummit and foote and course page and math and solutions. Nonlinear equation solver C#, how do you solve equations by factoring when X is cubed?, adding and subtracting integers worksheets, common factor cheats, need help to calculate a fraction, 6th grade NC games, florida free teacher edition algebra 2. 6 grade math mcdougal littell middle school answers for chapter 10 test (cheating), 8th grade algebra 2 placement test, quadratic factorise calculator, math study sheets algebra 2. Download free c++ programs for calculating LCM, dividing fractions with polynomials calculators, college math printable worksheets. How to convert decimals to mix numbers, Highest Common Factor calculator, FREE PRINTABLE HIGH SCHOOL algebra II WORKSHEETS, 7th grade eog reading printable tests, integer subtracting calculator, expanding binomial worksheet. Elementary addition and subtraction negative numbers worksheet, simplify boolean expression calculator, convert a decimal to a fraction, solving differential equations 2nd order, middle school math with pizzazz free algebra mathsheets. Trigonometry and its use in daily life, Kumon answers, cubed square root calculator, free advanced mathematical concepts teacher edition, 8 en decimal, pre allgebra, partial fractions example Accounting ratio formula, hard algebraic question and solution, foil math, barrons intermediate algebra nys, prentice hall algebra I worksheets. Algebra simultaneous equations 4 unknowns, how is algebra used in everyday living, ti-38 save data, answerkey for Glencoe/McGraw- Hill geometry workbook, "cost accounting"and"free books", pre algebra slope quiz, math worksheets + 9th grade. How to factorize with your calculator, preparation questions for the college algebra clep, |-5| * |9| =, factoring a quadratic equation with a leading coefficient, worksheet subtraction with negative numbers, answers to prentice hall dittos, simplifying polynomial radicals. Sample questions to simplify a cube root, scale factor lessons, GAUSS JORDAN ELIMINATION METHODIN EXCEL SHEET. Calculator worksheets for 3rd grade, factoring in algebra 1 answers, java code "connect four" extracting data from board, year 8 mental maths tests, least common denominator variable, Contemporary Abstract Algebra - gallian - Instructor's Solutions Manual. Aptitude questions book free download in pdf file, division of fractional exponents, Textbook answers to Glencoe/McGraw-hill Algebra 1 Student edition. Easy rational expression and equation solver, integer values, with radical sighns, multiplication of variable exponents worksheets questions, mixed number into decimal, basic geometry 7 grade First grade free homework, algebra equations for combination and permutation, grade 5 math workbook to work with, who invented algebra slopes. Holt algebra 1 online, answers to math books, simplifying expressions calculator, the steps of balancing chemical reactions, free algebra variable worksheets, real-life problems involving Calculator programs trig, difference between power and square root, hard mathematic ecuation, lattice method worksheets. Surd mathematics-GRE, online rational equation calculator, free algebra solutions, trig chart, diamond problems algebra calculators. Mathematics: Applications and Connections, Course 3 Worksheet Answers, how to solve a square root to an exponent, plus two level maths free quizes, convert decimal to square feet, college algebra for Cost accounting linear equations, project on linear equation in two variable, Holt Chemistry 2004 answers, redox for kids, adding and subtracting integers, worksheets, how to do algebraic expression. Math algebra subtract, union and and example, algebra radical expressions web inquiry 86 answers, base 16 to 10 convert code java. Printable trig tables, high school inequality help, using TI-89 to solve algebra, chemical reactions animations of precipitation, binomial factor calculator, how to solve simultaneous equations with complex numbers matlab. Free printable worksheets ks3, algebra green equation books, algebra equation generator calculators, online rational equation solver. Year seven maths help, solve my equations.com, combing like term worksheet for 7th graders, holt algebra 1 Chapter 9 Form A answers, solve system of equations from graph. Complete algebra 2 study guide reference sheet, solving Inequality involving quadratic functions, fractioning trinomials steps. Convert function to convert exponents in excel worksheet, glencoe algebra 1 ch 10 test answers, problem solving quadratic equations by factoring, how algerbra works, how to solve a graphic linear equation, adding radical calculator, past examination papers of ca module"c". Multiplying/dividing exponents, how to solve cubic equation using a table, houghton mifflin middle school algebra text table of contents. Ratio simplifier, solving equations worksheet, football algebra problem, kumon worksheets, ppt law of probability boolean equation, solve for you algebraic equations. Algebra calculator find X, math worksheets on coordinate planes, square with exponents, enthalpy formulas, algebra 1 holt workbook, compound angle math. Solving multi variable differential equations with matlab, solving absolute value equations algebraically, d = rt printable worksheets, Adding and subtracting integers for 5th graders, exponent fractions and order of operations quiz, Trigonometry Practice Booklet in 7th grade. Square roots of variable expressions calculator, algebric formulas, pre algebra with pizzazz-free ebook, java game of hands of equations. Simplifying square roots, online vertex calculator, factor tree worksheet, graphing calculator pictures, fun order of operations dittos, difference between abstract and boolean algebra. Explain square root radical division, SAT Practice worksheets georgia, answers to math homework, question bank for class viii of kerala. Mathematics exercise for 6th standard, learning algebra for free on line, free math word problems primary, biology exam papers grade 10, writing algebraic equations in excel, glencoe economics principles and practices cheat chapter 1, matlab solve nonlinear system. Free oline exercise of mathematics, Rational exponents+worksheets, Prentice Hall Mathematics: Course 2 worksheets. India standard math free worksheets, maths fraction mixed operations worksheets printable, ratio to percents, free math.pdf. Solve nonlinear ode homogeneous, i need a concrete strategy to explain solving equations with a variable, iowa algebra readiness test. Extraneous solutions square roots, online graphing calculators for ellipses, equations of polar graph pictures. 'algebra free ebook', solve convert to polar form with ti-89, Prentice Hall Conceptual Physics chapter 33, pre algebra with pizzazz! Answers, iq test for 9th graders instant results, free printouts first grade. Exponents pizzazz worksheets, EXAM WORK SHEET MATH, Find the inverse using TI-83 Plus, free mcdougal littell algebra 1 worksheet answer key, 9th Grade Algebra1 symbols, "kumon english download". Ti-84 plus midpoint formula, multiplying rational expressions activities, Solve Rational Equation calculator, "second order differential equation solver", solving chemical equations with the TI-83 calculator, simultaneous equation solver, when simplifying a rational expression, why do you need to factor the numerator and the denominator?. Ti 84 plus downloads, solving quadratic radical equations, free kumon papers, print free 5/6 saxon math worksheets, how to solve maths sums. Mcdougall Littell worksheets, algebra formula sheet[doc,pdf], ti-89 root exponential function, algebra II problem solver help, Algerbra Calcutaor. Root key on a calculator, number roots exponent, addition of mixed number table, completing the square with multiple variables, elementary algebra worksheets, how to use ratio and proportions to slove real life problems, using zero factor property. Exponent equation, mixed decimal, Substitution Method of Algebra, algebra solution(parabolas), COLLEGE ALGEBRA FOR DUMMIES, program math formulas on ti84. Glencoe/mcgraw-hill worksheets on EXPONENTIAL MODELS, polynomial fraction solver, convert whole numbers to percentage calculator. Algebra poems, worksheets about solving linear equations, worksheets fractions decimals grade 7, slope formulas in excel. Orleans-Hanna Algebra Readiness practice Test, simplify radical calculator, graph hyperbola, fraction equation worksheets, solve for an unknown in an equation for 5th grade, algebraic expressions and variables explained, simplify square roots by factoring. Summation calculator online, ti-89 surds, best algebra books, Glencoe Algebra 1 notes. 6th grade math EOG, factors sheet for 6th grade math, learn trigonometry for CAT practice. Aptitude paper+free, texas 8th grade final exam amth, math slope worksheets, cheats and answers for pre-algebra, completing the square calculator. Subtracting Integers Worksheet, online calculator with the square route to the cubed, rational expression solver, 5th grade fraction problems solving. Best 6th grade math answers, greatest and least common denominator, mixed decimals into fractions. How to find the scale factor examples, online limits calculator, fraction power, free pre algebra worksheets, "scientific calculator for PPC", examples of grafic linear equations, analyzing the quadratic converters. Addition and subtraction trig formula, multiplying integers worksheets, algebra with pizzazz! worksheets, Algebra helper. Solving algebraic problems, formula find domain in quadratic equation, ti89 base conversion, absolute value worksheets. Algebra math 8A workbooks, free algebra problems, practice test math percents for 5th grade lesson plan to print, real homework cheats, ucsmp algebra 1 volume 2. Alg solving equations fractions games, adding and subtracting rational expressions calculator, How is dividing a polynomial by a binomial similar to or different from the long division you learned in elementary school?, solving simultaneous equations in Excel, Multiplying Binomial Radicals, adding and subtracting negative numbers worksheets, adding subtracting signed numbers. Free online maths games for ks3, ks2 algibra, trinomial solver, how to save formulas in TI85, pearson education algebra 2 chapter 8 worksheets, do my college algebra homework, 5th grade algebra prognosis test. Printable page from 2004 mcdouglas littell algerbra 1: concepts and skills, least common multiple worksheets & elementary, formula for percent of number, free maths questions, maths area work, calculate cubed root. Fluid mechanics with maple, functions, statistics, and trigonometry book answers, percentage equations, statistics questions for GMAT, algebra lang, math adding and subtracting integers practice. Adding and Subtracting Word Problems for 1st grade, printable worksheets on pictographs, algebra mixture. Translating math verbal expression exercises, year 8 maths book chapter and different 10 word, answers for geometry mcdougal littell inc worksheets, step by step algebraic expressions and formulas, fun lessons english gcse, prealgebra crossword puzzle end of course, solving simultaneous equations with basic. Mathematics formula chart, pre algebra practice testing with answers, discrete mathematics and its applications 6th edition download, inequality gcse exam questions, sound f "physic formula", how to pass a 9th grade math test, math tests ks3. Decimal to square foot conversion, calculator cubic ecuation download, how to get rid of a square root in the numerator, simplifying cube roots, log base calculate. Greatest common factor calculator with variables, t1 89, quadratic formula, glencoe chemistry chapter test answers, Online "Year 8 Revision Games" Biology, Formulas for Greatest Common Factor. Online free algebra for dummies mathematics, virginia sixth grade practise test, converting mixed fractions into decimals, Free Integer numberline Worksheets. Free algebra learning, open source linear solver visual studio, holt workbook, free maths aptitude test for9 yr old, Solve for x fraction caluclator, grade nine math worksheets. Factoring and foiling worksheets, hard math test, Greatest Common Factor equations, monomial simplifier. Least common multiple variable, My Alegbra Teacher, factoring binomial equations, poems on mathematics (algebra), Saxon Pre-Alg permutation worksheet. 5th grade worksheets on linear measurement, online math calculator simplify, finding ratio using GCD Method formula, fraction calculator that shows work, Factoring on the TI-84, grade function in matlab, Simulation for prealgebra free lesson. Fraction problem solving worksheet, addition with grouping + worksheet, printable volume and area mathematics test. Simplifying polynomial expressions with exponents, e-book on calculas, fun maths grade 10. "first course in differential equations" 8th online solutions, slope and y-intercept...problems with answers, Grade 10 polynomials+word problems. Online factorise, least common denominator of x+3 and x-2, second grade math adding and subtracting three digits worksheets, algebra II, quadratics, ellipse and algebra II eoc. MATHS,PIE CHARTS, solving using properties of power, math for dummies online, 6th grade math help combinations, printable word ladders 5th grade. Maths tests on statistics, arithmetic series/sequence, subtracting fractions with unlike denominators worksheet, ks3 maths work sheets, trigonometry poem, free ti 89 online calculator. Chapter 15 in Prentice Hall World History Connections To Today answers, contemporary abstract algebra solution, holt algebra I, how to convert polar to rectangular coordinates on a casio graphics calculator, adding and subtracting negative numbers+printable worksheets. Why is it important to simplify radical expressions before adding or subtracting?, display sample substitution method in algebra, Teaching Beginners Algebra, 10th grade logarithm exercises, worksheets on combinations math. Yr 9 algebra worksheet, pre-algebra addition equations calculator, basic ellipses practice problems, lcd calculator, free answers to algebra problems, basic chemistry worksheets+printable. Polynomials definition in math.ppt+for 10th class, percentage price algebra, math poems about variables, permutations and combinations math worksheets for high school, graph hyperbolas on ti-83. 2.6 distributive property examples, steps to graph a rational function using ti-84, when factoring,what is the first thing you should look for ?, limit erfi function, free science worksheets for 8th grade, quadratic function is used in real life, free college algebra clep study guide. Maths subtracting fractions different dominators, online calculator antiderivative, algebra 1word problems practice examples, geometry with pizzazz!, perimeter and algebra ks2. Focus of a circle, teaching aptitude questions, where is the absolute value function on the T1-89, glencoe mathematics chapter 11 algebra 1 "resource book" pdf, wave equation first-order, how to do linear combinations, 2 STEP MATH EQUATIONS WORKSHEETS. Pre-algebra/prentice hall/sample, worksheets for permutation, trig identities solver, solving simultaneous equation quadtratic, Adding And Subtracting Decimal Games, the hardest math equation, 4th square roots. Calculator cu radical, The worlds hardest easy Algebra problem, 8th grade math What is a monomial?, adding and subtracting integers calculator. Vector equation maple, scale factor worksheet, Greatest Common Factor/Least Common Multiple Worksheet, simple fractions, log base calculator texas. Houghton Mifflin Mathemetics chapter 5 Measurment and Integers work sheet, How to Change a Mixed Number into a Decimal, algebrator, trinomials calculator, how to pass college algebra, fun way of adding, subtracting, multiplying, and dividing decimals, help with college algebra. 3rd gradework sheets, elementary algebra practice problems, sat exercises in algebra fraction, numerator solver, +solving exponential equations in additional maths, aptitude questions +solved. Poems dealing with basketball, free games for ti 84 plus, trigonometry for dummies, dividing fractions how to solve, trinomial factoring worksheet generator. Ti-83 imaginary, positive and negative numbers 4th grade, aDDING, mULTIPLYING, DIVIDING AND SUBTRACTING FRACTIONS, math fractions answers, ti-84 online calculator. Graphing in algebraic equations, solver+polynomial expression of degree 3, Ellipse program ti 83, Algebra 1 study skills for solving radicals, download 6th grade math test, simplify square root Algebra 1 games to print out, TRIG CALCULATOR, free printable 8th grade worksheets. Glencoe Mathematics florida edition algebra 1 textbook, gcse bitesize interpolation and extrapolation, teach me basic algebra for free, alg. 2 solving age equations. Least common denominator with variables, basic physics + free download, example of steps how to solve a first degree equation, when graphing a linear inequality, how do you know if the inequality represents the area above or belwo the line?, lowest common denominator calc, cool maths percentages, mcgraw hill math power 8 answer sheet. Google visitors found us yesterday by typing in these math terms : │sixth grade exam study sheet for math │Algebra 2 answers │7th grade trivia math │8th grade pre algebra │ │slope formulas │real life graphs ppt │solving number patterns │accounting real life problems calculus formulas │ │algebra square root calculator? │differentiation calculator │worksheet answers │algerbra expressions │ │exponents formulas │grade 6 mcdougal littell middle school chapter │inequality problems worksheet │how to calculate gcd of 2 numbers │ │ │10 answers │ │ │ │homework printable │how to solve probability algebra 2 question │math scale factor │graphing linear equations+interactive │ │What are some formulas, theorems, and │advance calculas │practice masters, algebra, structure and │integer algebra + worksheet │ │properties of Logarithms? │ │method, book 1 │ │ │free ebooks on aptitude │trinomial factoring solver │modeling mathematics free workbook │square root algebra calculator │ │Prentice Hall Mathematics: Pre-Algebra 2004/│advanced algebra chaper 9 self test │ucsmp algebra 1 volume 2 math quizzes │math worksheets on slope │ │7 Student Home Page │ │ │ │ │graphing hyperbola equations step by step │california course 1 math book answers │trigonometry identity answers for problems │graph calculator ellipse │ │ │ │example │ │ │UCSMP algebra │6th grade math printable exercises │finding the nth term multiplying │math algebra for elementary sixth grade │ │simplfying radical expressions before adding│ti-89 laplace formula │TI-84 plus for use online │what is the highest common factor of 32 and 54 │ │or subtracting │ │ │ │ │ti 83+ rom download │free maths 11-16 worksheets │difference of the square │math games for 6th graders to do for free │ │greatest common factor table │"lattice math sheets" │+"project" +"algebra students" +"factoring" │Algabra │ │great common factor calculator │combining like terms pre-algebra │Conceptual Physics: The High School Physics │math formulas for percents │ │ │ │Program answers │ │ │sample lesson plan for 6th grade math │simultaneous equations MATLAB │9th grade SOL biology Practice tests │scale factor finding calculator │ │simplifying common variables worksheets │geometry proportions worksheet │pre-algebra practice work │palindrome java practical │ │steps to Factoring Algebra │intermediate algebra online tutor │mathmatical equations │"uniform distribution" ti-84 │ │Online quiz over algebra 1 book │TI-83 Plus programs general form to standard │free six grade math work and reading work │hard 6th grade geometry worksheets │ │ │form │sheet │ │ │equations to make a picture using a domain │cubed polynomials │7th grade trigonometry │radical exponent equation calculators │ │on a TI 83 │ │ │ │ │solve and graph │c aptitude questions │combinations and permutations online │ │ │ │ │calculators │ │ │math high school "solve by elimination" │"symbolic simplification" "visual basic" │graphing algebra grade 8 │solved Aptitude questions │ │proportions worksheet │free 7th grade algebra problems │simplifying polynomial degree solver │order of operations decimal worksheet │ │addition radical expressions calculator │decimal( 2,4) java │free middle school geometry worksheets │a fraction as a power │ │basic fractions convert decimals │how to solve binomial │grade 2 free fraction worksheets │QUARATIC FORMULA CLIP ART │ │Balancing equations and using them in │ks3 year 8 maths formula │second order differential equations test │importance of algebra 2 │ │mathematics │ │ │ │ │factorising & expanding equations worksheets│solving difference quotient │simplifying calculator │mcdougal littel algebra 2 book │ │grade 9 how to find the point of │solved aptitude test paper │quadratic equation solver show steps │How to convert parameter in meter in to perimeter lineal │ │intersection with 2 equations │ │ │meter │ │free worksheets on circle,ellipse, hyperbola│Glencoe/McGraw-Hill algebra answer key gr.9 │chemisitry workbook answers │java 2.71828183 │ │and parabola │ │ │ │ │Algebra 2 for dummies │qudratic equation │online math equations Factoring to Solve │maths KS3 papers to print off free │ │ │ │Equations │ │ │Combination formula solving problems │review exam McDougal Littell Middle School Math│PDF Adding and Subtracting integers │algebra one practice final exam cheat sheet │ │ │Course 2 │worksheet │ │ │laplace transform ti 84 │How Do I Solve a Quotient │decimal into radical form │trigonometry games and activities │ │steps solving for probability │quadratic factor calculator │half life algebra 2 help │simplify algebra fractions calculator │ │learning math tutuors for 6th to 9th grade │the factoring of differences of cubes used in │solving systems of equations worksheets │children stats exam papers │ │ │everyday life │ │ │ │factoring cubed │Holt algebra 1 │Polynomial Solver │80 percent into math formula │ │how to do combination method │pre algrebra │functions 2 unit mathematics year 11 for │Mcdougall littell geometry answer key │ │ │ │dummies │ │ │algebra questions and examples yr11 │mulitplying and dividing powers │children's explanation on how to use │greatest common factor finder │ │ │ │exponents │ │ │college homwork worksheets │practice sheets on dividing monomials │online questions and answers for eog │EOC Algebra II sample questions with explanations │ │ │ │practice │ │ │9th grade EOC math practice │worksheets for 6 th grade eog test practice │solve algebra problem │combination permutation Gmat │ │ │sheets │ │ │ │CLEP college algebra practice │trig calculator graphs │saxon math course 1 answer key for 6th grade│Algebra step by step │ │decimal to mixed number │quadratic word problems │algebra 2 cheat sheet │algebra hard questions │ │Algebra Answer Key │integer worksheet │how to solving algebraically equations │square root calculator algebra │ │9th grade math book online │simplified radicals │convert decimal to degree TI84 │Quick Mathematic formulas for Aptitude │ │solving second order ordinary differential │free printable triangle worksheet angles │java convert from base 2 to 10 code │finding a quadratic equation from a table of data │ │equation │ │ │ │ │ti 83 log │what is the vertex of a quadratic function in │sums on quadratic equation+ sample paper for│Eog test questions for pythagorean theorem │ │ │vertex form │standard 7 │ │ │mcdougal littel quiz │PRENTICE-HALL, INC CHEMISTRY WORKSHEETS CHAPTER│adding and subtracting integers worksheet │kumon, math, 7th grade math │ │ │11 │ │ │ │online calculator that reduces percents and │how to add and subtract radical expression │Contemporary Abstract Algebra - Instructors │pre-calculus problem solver │ │fractions │ │Solutions Manual 6th Ed │ │ │5th grade math quiz │program graphics calculator moles │sums of algebra │Free Intermediate Algebra │ │ti 84 plus emulator │the least common multiple of 17 and 13 │math transformation quizzes for eight grade │divide polynomials lesson plan │ │uses of quadratic equation on everyday life │fIFTH GRADE FACTORIALS EXPLAINED │finding quadratic equations given a table │TI 83 emulator download │ │hbj math │free addition and subtraction to 18 worksheets │six grade math quiz │6th math exam │ │Applications of trigonometry in our daily │math work papers │how to get fraction part of decimal in java │free worksheets for consumer math │ │life │ │ │ │ │80 percent into math equation │free printable mathworksheet fraction and │derive formula for linear graph │Beginning Algebra Final Study Sheet │ │ │decimals │ │ │ │year 11 general free worksheets │writing linear inequalities worksheet word │worksheets pictographs grade 1 │emacs with calc gnuplot problem on Windows │ │ │problems │ │ │ │aptitude question paper │solve math problems.com │Free 7th Grade Math Homework Problems │6th grade math exam │ │integers worksheet with answer key │KS3 Algebra │common algebraic formula for number grid │conceptual physics answer book │ │printable elipse hyperbola worksheet │factorising polynominals │online balancing equation │who invented decimals │ │completing the square for ti-84 plus │cost accounting solutions chapter 3 │Factoring to solve quadratic equations on │1st grade math printouts │ │ │ │the TI-84 │ │ │grade 8 math multiple-step linear equations │how do you divide │when when is FOIL math used in real life │Va math workbook workbook answers │ │and inequalities solving │ │ │ │ │9th grade math REVIEW │fulcrum formula + Math help │time distance formula worksheets │how to solve 3rd order equation │ │add binary ti 89 │maths worksheets year8 │math online revision tests │how to do factorial on a ti calculator │ │radical exponent calculator │free math sheets pre algebra │learn grade 11 physics online free │algebra rationalizing │ │fourth grade fractions │basic maths scale factor │algebra 1 problems and tests prentice hall │adding & subtracting negative numbers for 4th grade │ │ │ │georgia │ │ │Solving Equation by Adding or subtracting │free printable worksheets on ordered pairs │Pearson Prentice hall algebra 1 workbook │adding calculator online │ │positive and negative │ │ │ │ │difference quotient with radicals │graphing equation tool exponent │system differential equation+runge kutta │Creative Publications Test of Genius │ │ │ │4th+matlab │ │ │how to do mathematics COMPLEX MATH order of │free holt algebra 2 answers │numerical modeling with polynomial equation │basic trig worksheets │ │operations step by step │ │(*pdf) │ │ │square root polynomial │probability worksheets for 6th graders │maths trivia │Without solving the equation, use the graph to determine │ │ │ │ │the solution(s) to the equation ? │ │properties of radical expressions │rate of change formula │ratio and percents worksheets │functions, statistics, and trigonometry answers │ │algabra formulas │rational equation solver │9th grade math QUESTIONS AND ANSWERS │least common multiple of 86 and 5 │ │solving algebra 2 quadratics interactive │find common denominators calculator │decimals worksheet grade 8 │please solve me maths sums problem of 11th class │ │answers to Algebra 1 Standardized Test │calculator +worksheets +practice │calculator for evaluating an algebric │lineal metre calculate to m2 │ │Practice Workbook │ │expression │ │ │solutions gallian chapter 10 excercises │free decimal addition revision │subtracting, adding, multiplying, dividing │free step by step help with pre algebra │ │ │ │integers │ │ │free 6th grade math worksheet odds for or │algebra combining like terms worksheets │TI89 inverse log │7th grade honors practice math tests │ │against │ │ │ │ │Scientific Notation calculation subtraction │graphing linear equations made easy │free calculas on line help │best book for Algebra │ │free workshets on ratio and proportion │aptitude questions with solutions │"glencoe physics'" chapter review answers │factor trinomial calculator │ │graphing radical expressions │mayan algebra │algebra rule for adding and subtracting │percentage algebra equations │ │ │ │negative postive numbers │ │ │grade 8 algebra sheets │Multiplication and division of positive and │simplify double angle worksheet │algebra 2mcdougal littell 2007 online │ │ │negative integers worksheets │ │ │ │meaning statistics and algebra │solving algebra problems in a quick way to │6th grade notes about circle graphs │measurement conversions worksheets year 7 │ │ │solve │ │ │ │pre-algebra equations calculator │integers math adding, subtracting, multiplying,│free printable test 5th │simplify expression calculator │ │ │dividing │ │ │ │lowest common demoninator finder │Radical Expressions Online Calculator │algebra solver explanation │trig online calculator │ │McDougal littell jr high workbook answers │uneven square roots into radical form │mcdougal littell inc. albegra 1 resource │free maths maths for dummies │ │ │ │book │ │ │GED printable math tests │trinomial online factoring calculator │intermediate algebra notes │use of trigonometry in daily life │
{"url":"https://softmath.com/math-com-calculator/reducing-fractions/equation-of-elipse.html","timestamp":"2024-11-12T01:54:19Z","content_type":"text/html","content_length":"157471","record_id":"<urn:uuid:13128941-3514-47c5-bc96-bcde517d7ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00606.warc.gz"}
[Solved] Calculate the missing values for the prom | SolutionInn Calculate the missing values for the promissory notes described in problem. Issue date Face value ($) Term Calculate the missing values for the promissory notes described in problem. Transcribed Image Text: Issue date Face value ($) Term Interest rate (%) Maturity value ($) Nov. 5 4350 75 days 4445.28 Fantastic news! We've Found the answer you've been seeking! Step by Step Answer: Answer rating: 76% (17 reviews) Answered By Utsab mitra I have the expertise to deliver these subjects to college and higher-level students. The services would involve only solving assignments, homework help, and others. I have experience in delivering these subjects for the last 6 years on a freelancing basis in different companies around the globe. I am CMA certified and CGMA UK. I have professional experience of 18 years in the industry involved in the manufacturing company and IT implementation experience of over 12 years. I have delivered this help to students effortlessly, which is essential to give the students a good grade in their 3.50+ 2+ Reviews 10+ Question Solved Students also viewed these Mathematics questions Study smarter with the SolutionInn App
{"url":"https://www.solutioninn.com/study-help/fundamentals-of-business-mathematics-in-canada/calculate-the-missing-values-for-the-promissory-notes-described-in-846057","timestamp":"2024-11-03T01:04:52Z","content_type":"text/html","content_length":"80287","record_id":"<urn:uuid:8f248393-96fa-4f24-ab2f-84d89068327f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00022.warc.gz"}
Protein Aggregation Criticality in Protein Aggregation: A Graph Theoretic Aproach Proteins are the fundamental unit of computation and signal processing in biological systems. Unfortunately, when proteins malfunction, biological systems tend to grind to a halt. Take for instance, the unnatural aggregation of proteins, which is reponsible for many neurodegenerative diseases including Alzheimer's disease and Huntington's disease. It is commonly observed that protein aggregates tend to form suddenly and nonlinearly, in response to some experimental perturbation. Here I desribe a simple staistical model to describe phase transitions and critical phenomena in protein aggregation processes. This model intentionally ignores all the relevant biophysics, and instead aims to capture the statistcal features of such processes using the theory of random graphs put forth by Erdos and Renyi. Graph Theory Preliminaries A graph is a collection of nodes connected by edges. The degree of a particular node is the total number of the edges connected to it. An adjacency matrix \({\bf A}_{i,j}\) tabulates whether nodes \ (i\) and \(j\) are connected by an edge. The degree of the \(i^{th}\) node can be easily calculated from the adjacency matrix, \(k_i= \sum_j {\bf A}_{i,j}\). The degree distribution of a graph summarizes the graph in a natural way. The fraction of nodes which are of degree \(k\) is denoted \(p_k\). As an example, if I find an edge at random and follow to the nearest node, the probability of arriving at a node of degree \(k\) is \(p_k n \frac{k}{\sum_i k_i}\). A clique, or connected component, is a set of nodes which are all connected by edges. For every node in a clique, one can reach every other node in that clique by traveling along edges. The giant component is the largest such clique in a graph. We suppose that a finite number, n, of proteins exist in close proximity, yet are of such low concentration that they do not appreciably interact. Further, we model the interaction of two proteins as being attractive and binary. That is, we ignore the physically realistic electrostatic forces between proteins and simply keep track of whether a pair of proteins interact or not. If a pair of proteins interact, then we assume attractive forces between them and say that they form an aggregate (or clique) of size two. Any two proteins will interact with probability \(p\), and all interactions are identical and independent. With these simplifications, we model the collection of proteins as a graph with n nodes. In dilute solution, the n proteins do not interact, the graph has no edges and the degree distribution of the network is a delta function at \(k=0\). By increasing the concentration of the proteins (by any experimental method), one increases the probability that any two proteins may interact attractively. With this model, the act of increasing protein concentration is analogous to adding edges, at random, to the nodes of the network. With probability \(p\), each possible edge between nodes \(i\) and \ (j\) is formed and the expected number of interactions is \(\binom{n}{2}p\). Let \(c\) be the average degree of the whole network, \(c=\langle k \rangle = (n-1)p\), thus the interaction probability is \(p=\frac{c}{n-1}\). By increasing the protein concentration, we are increasing \(c\) and thus \(p\), the probability of two proteins interacting. We can derive an expression for the degree distribution of this random graph. The probability of any two nodes having an edge is \(p\), and the probability of any node having \(k\) edges is proportional to \(p^k\). Following this reasoning, it easy to see that the node edges will be binomially distributed, \[ p_k = \binom{n-1}{k} p^k (1-p)^{n-k}. \] If the size of the network is large, we can apply Poisson's approximation to binomial distributions, \[ \frac{(n-1)!}{((n-1)-k)! k!} \approx \frac{(n-1)^k}{k!}\\ p_k \approx \frac{(n-1)^k}{k!} \frac{c^k}{(n-1)^k} (1 - \frac{c}{n-1})^{(n-1)}\\ = \frac{c^k}{k!}e^{-c} . \] Thus, the degree distribution of this random graph is Poisson. When the interaction probability (and c) is low, then most of the mass of the distribution is concentrated around low \(k\) and a very small fraction of the network has high degree. As the interaction probability increases, this degree distribution shifts and most of the mass is concentrated at high \(k\) with a vanishing fraction of the network having \(k=0\) or \(k=1\). By increasing the protein concentration, the interaction probability is increased, and the degree distribution shifts toward more proteins having many As proteins form aggregates at random, we want to track the fraction of the graph that is a member of an aggregate of a particular size. If \(p=0\) then all proteins are non-interacting and there are \(n\) aggregates of size \(1\). Conversely, if \(p=1\) then all proteins are connected and there is one aggregate of size \(n\). Call \(S\) the fraction of the network which is in the largest aggregate (the giant component of the graph). \[ u=1-S = (1-p + pu)^{(n-1)} = (1 + p(u-1))^{(n-1)} = \left( 1 + \frac{c(u-1)}{n-1} \right)^{(n-1)} \] As \( n \to \infty \), \[ u= e^{c(u-1)}\\ S = 1- e^{cS}. \] This transcendental equation describes the size of the largest aggregate as a function of interaction probability. Solutions to this equation are shown in the figure below on the left. The intersections of the curves (highlighted with red circles) describe the size of the giant componenet for each value of \(c\). Note that for small values of \(c\), solutions are tangent only at \(S=0 \) and thus no giant component is formed. The relationship between giant component size and interaction probability is shown on the right. The figure shows (Right) shows that for low values of \(c\) (low interaction probability), there is no significant large aggregate. When interaction probability is low, any aggregates that form are small and tend not to combine. At \(c=1\), a phase transition is seen, and for \(c>1\) it will be the case that a significant fraction of all the proteins will aggregate. Obviously, this relationship saturates for very large \(c\) since eventually every available protein is a member of the largest aggregate. By increasing protein concentration, one increases the probability that two proteins might interact favorably. At low concentrations, interactions are sufficiently rare that aggregates remain small and unconnected to each other. At a certain critical concentration, these interactions become significant and larger clusters of interacting proteins (aggregates) will form. Increasing the concentration further serves to connect the remaining proteins to the aggregate. From this statistical analysis, the thresholding behavior commonly observed in protein aggregation experiments can be easily understood from the probability of large populations of proteins interacting.
{"url":"http://keeganhin.es/blog/agg.html","timestamp":"2024-11-10T11:08:01Z","content_type":"text/html","content_length":"9793","record_id":"<urn:uuid:476825a7-88c9-4c6e-8360-9548478a3055>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00099.warc.gz"}
Chad Giusti: Research This page is badly out of date; a new version is in the works. Why algebraic topology and neuroscience? What can the function of a system tell us about its structure? Consider place cells in the hippocampus, which are known to have firing rates modulated by the spatial location of an animal in its environment; that is, these neurons have receptive fields corresponding to locations in their environment. Because we understand their receptive fields, it is possible to use their activity to do things like decode position or recover a map of the animal's environment as it moves. However, in the absence of a known behavioral correlate, can we determine whether a population of neurons encode such information? Using tools from applied algebraic topology, we answer this question in the affirmative. We further study a range of biological network models through this lens, providing a blueprint for future applications. What can the structure of a system tell us about its function? Suppose we know something about the structure of a neural system or that of the information it encodes. What can we conclude about its (combinatorial) code, i.e., collection of possible coactivation patterns? Two such structures which are surprisingly deeply related are neural populations with convex receptive field structures and one-layer feedforward networks. Applying the classical nerve theorem, we obtain a class of local obstructions in codes which preclude the existence of either structure. However, in real biological networks where codes are sparse, we can restrict our attention to only maximal coactivity patterns. We show that any such sparse code can be realized by a one-layer feedforward network, so such networks are universal simplicial complex approximators. In the context of convex receptive fields, we show that only the absence of intersections of such maximal patterns obstruct such an architecture, and explicitly construct realizations when all such are present. The human brain is a complex, multi-scale information processing system. At the macroscopic level, activity in anatomical brain regions correlates with behavioral function, suggesting that we can think of these regions as local processing units whose interactions underlie cognition. Understanding the architecture of this system is therefore a fundamental first step toward mechanistic models of cognitive processes. Diffusion imaging data, a proxy for the white matter tracts believed to form the "structure" of this network, provides a picture of this architecture. We studied the interaction between densely connected "local processing" units called and the interplay between strong and weak connections between cliques, represented by uncovering a range of non-local features which may support complex cognitive tasks. Densely packed granular media (gravel, for example) do not exhibit homogeneous distribution of internal forces. Rather, the inter-particle normal forces form a complex pattern of so-called force chains, whose structure is conjectured to be responsible for many of the physical properties of the material. Due to the intricate nature of the networks, it is useful focus on intrinsically mesoscale statistical properties, which have intuitive physical interpretation. To this end, we developed the Topological Compactness Factor of the collection of force chains, which roughly measures rotational stability of the chain across its branch points, and demonstrated that this measurement can be used to recover the external pressure applied to the system. What is the shape of the space of point clouds? The space of point clouds -- that is, of configuration spaces -- is a classical subject of study in algebraic topology which finds its way into theory and applications throughout the sciences. One application of particular interest is that the limiting family of configurations in infinite-dimensional Euclidean space provide models for classifying spaces, and by studying the family as a whole one can extract new understanding of the structure of families of groups. Using the geometry of these models, we have computed the mod-two cohomology of both symmetric groups (another subject of classical study, here understood in a different way) and of alternating groups (the first such computation for a family of simple groups). Can we approximate (spaces of) knots with simpler objects? In his development of the theory of finite type invariants of knots, Vassiliev utilized a sequence of finite-dimensional polynomial approximations of the space of knots. Due to the intricate geometry both of the individual spaces and of the maps between them, these spaces have proved difficult to analyze directly. To put the study of these invariants on a more solid, geometric footing, I developed the spaces of plumbers' knots , which decompose into combintorial cell complexes, and which fit together into a directed system through which one can follow cells. These spaces provide a new basis for undestanding "unstable" finite type invariants, as well as opening the door to computational approaches to Vasilliev theory. I would like to thank the following organizations for their generous (current or former) funding support.
{"url":"https://www.chadgiusti.com/research.html","timestamp":"2024-11-14T20:09:13Z","content_type":"text/html","content_length":"46030","record_id":"<urn:uuid:dfc4deed-05e0-47d1-aa9d-21598e41ec40>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00350.warc.gz"}
(PDF) Mathematical Methods For Mechanics - Eckart W. Gekeler - 1st Edition Mathematical Methods for Mechanics – Eckart W. Gekeler – 1st Edition Mathematical Methods for Mechanics: A Handbook with MATLAB Experiments Eckart W. Gekeler • 3540692797 • 9783540692799 • 1st Edition • eBook • English The interaction between mathematics and mechanics is a never ending source of new developments. This present textbook includes a wide –ranging spectrum of topics from the three body problem and gyroscope theory to bifurcation theory, optimization, control and continuum mechanics of elastic bodies and fluids. For each of the covered topics the reader can practice mathematical experiments by using a large assortment of Matlab-programs which are available on the author’s homepage. The self-contained and clear presentation including Matlab is often useful to teach technical details by the program itself (learning by doing), which was not possible by just presenting the formula in the past. The reader will be able to produce each picture or diagram from the book by themselves and to arbitrarily alter the data or algorithms. Recent Review of the German edition ‘This book introduces the engineering-oriented reader to all the mathematical tools necessary for solving complex problems in the field of mechanics. The mathematics- oriented reader will find various applications of mathematical and numerical methods for modelling comprehensive mechanical-technical practical problems. View more Leave us a comment No Comments 0 Comments Inline Feedbacks View all comments | Reply Warning: Undefined variable $isbn13 in /home/elsoluci/public_html/tbooks.solutions/wp-content/themes/el-solucionario/content.php on line 207 1. Mathematical Auxiliaries. 2. Numerical Methods. 3. Optimization. 4. Variation and Control. 5. The Road as Goal. 6. Mass Points and Rigid Bodies. 7. Rods and Beams. 8. Continuum Theory. 9. Finite Elements. 10. A Survey on Tensor Calculus. 11. Case Studies. • Citation □ Mathematical Methods for Mechanics: A Handbook with MATLAB Experiments □ 3540692797 □ 9783540692799 □ 1st Edition □ Mathematics | Numerical Methods □ eBook □ English
{"url":"https://www.tbooks.solutions/mathematical-methods-mechanics-eckart-w-gekeler-1st-edition/","timestamp":"2024-11-15T04:04:09Z","content_type":"text/html","content_length":"126354","record_id":"<urn:uuid:8002ef6c-3ba9-4728-9ca3-1bbe67af1211>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00329.warc.gz"}
IBM Introduces 'Quantum Volume' to Track Progress Towards the Quantum Age Quantum computing companies are racing to squeeze ever more qubits into their devices, but is this really a solid sign of progress? IBM is proposing a more holistic measure it calls “quantum volume” (QV) that it says gives a better indication of how close we are to practical devices. Creating quantum computers that can solve real-world problems will require devices many times larger than those we have today, so it’s no surprise that announcements of processors with ever more qubits generate headlines. Google is currently leading the pack with its 72 qubit Bristlecone processor, but IBM and Intel aren’t far behind with 50 and 49, respectively. Unsurprisingly, building a useful quantum computer is more complicated than simply adding more qubits, the building blocks used to encode information in a quantum computer (here’s an excellent refresher on how quantum computers work). How long a machine can maintain fragile quantum states for, the frequency of errors, and the overall architecture can dramatically impact how useful those extra qubits will be. That’s why IBM is pushing a measure first introduced in 2017 called quantum volume (QV), which it says does a much better job of capturing how these factors combine to impact real-world performance. The rationale for QV is that the most important question for a quantum computer is how complex an algorithm it can implement. That’s governed by two key characteristics: how many qubits it has and how many operations it can perform before errors or the collapse of quantum states distort the result. In the language of quantum circuits, the computational model favored by most industry leaders, these characteristics can be described as width and depth (more specifically achievable circuit depth). Width is important because more complex quantum algorithms able to exceed classical computing’s capabilities require lots of qubits to encode all the relevant information. A two-qubit system that can run indefinitely still won’t be able to solve many useful problems. Greater depth is important because it allows the circuit to carry out more steps with the qubits at its disposal, and thus run more complex algorithms than a shallower circuit could. The IBM researchers have therefore decided that rather than just counting qubits, they are going to treat width and depth as equally important, and so QV essentially measures the largest square-shaped circuit—i.e. one with equal width and depth—that a quantum computer can successfully implement. What makes the approach so neat is that working out the depth requires you to consider a host of other metrics researchers already use to assess the performance of a quantum computer. It then boils all that information down to a single numerical value that can be compared across devices. Those measures include coherence time—how long qubits can maintain their quantum states before interactions with the environment cause them to collapse. It also takes account of the rate at which the quantum gates used to carry out operations on qubits (analogous to logic gates in classical computers) introduce errors. The frequency of errors determines how many operations can be carried out before the results become junk. It even encompasses the architecture of the device. When qubits aren’t directly connected to each other, getting them to interact requires extra gates, which can introduce error and therefore impact the depth. That means the greater the connectivity the better, with the ideal device being one where every qubit is connected to every other one. To validate the approach, the company has tested it on three of its machines, including the 5-qubit system it made publicly available over the cloud in 2017, the 20-qubit system it released last year, and the Q System 1 it released earlier this year. What they found was a doubling in QV every year, from 4 to 8 to 16, a pattern the company takes pains to stress is the same as Moore’s Law, which has governed the exponential improvement in classical computer performance over the last 50 years. The relevance of that factoid will depend on whether other companies adopt the measure; progress in quantum computing isn’t an internal IBM process. But the company is actively calling for others to get on board, pointing out that condensing all this information into a single number should make it easier to draw comparisons across the highly varied devices being explored. But while QV is undoubtedly elegant, it is important to remember the potential value of being the one to set the benchmark against which progress in quantum computing is measured. It’s unclear yet how other companies’ devices would fair on the measure, but there’s a natural incentive for IBM to promote metrics that favor its own technology. That doesn’t mean you should automatically discount it, though. Veteran high-performance computing analyst Addison Snell described the metric as “compelling” to HPCwire, which also noted that rival quantum computing firm Rigetti has reportedly implemented QV as a measure. Ars Technica’s Chris Lee also thinks it could achieve widespread adoption. Whether QV becomes the quantum equivalent of the LINPACK benchmark used to speed test the world’s most powerful supercomputers remains to be seen. But hopefully it will start a conversation about how companies can compare notes and start to peel away some of the opaqueness surrounding the race for quantum supremacy. Photo by Graham Carlow, IBM Research / CC BY ND-2.0
{"url":"https://singularityhub.com/2019/03/13/ibm-introduces-quantum-volume-to-track-progress-towards-the-quantum-age/","timestamp":"2024-11-10T21:21:34Z","content_type":"text/html","content_length":"373114","record_id":"<urn:uuid:056576a0-a1f9-4fe0-a71b-b72ee5180ec2>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00244.warc.gz"}
Terrain Algorithms from Scratch There are plenty of tools to calculate slope, aspect, and hillshading from elevation data, but if you’ve ever been curious about how they’re calculated, this post goes through the process of implementing those algorithms from scratch in Python using just Numpy. A DEM height map of Mount St. Helens that we’ll use for calculating topography. The concept of slope is simple: How much does elevation change within an area? Steep areas have lots of elevation change over a short distance while flat areas have very little. However, applying that concept to a 2D array raises the question, how do we define change within an area? There have been many different slope algorithms created to answer that question, and each produces slightly different results. We’re going to use the Horn 1981 algorithm since it is widely used (supported by GDAL, GRASS, Whitebox Tools, ESRI, etc.) but if you want to dive into the rabbit hole of slope algorithm comparisons, check out this site or this paper. Horn’s algorithm calculates slope for each pixel using the following equation, where $\frac{dz}{dx}$ is the east-west gradient of neighboring pixels and $\frac{dz}{dy}$ is the north-south gradient. $$ slope _{percent} = \sqrt {\frac{dz}{dx}^2 + \frac{dz}{dy}^2} $$ Let’s break that equation down by calculating the slope of a single pixel. One Pixel at a Time Imagine we want to calculate slope for the center pixel of a 3x3 pixel neighborhood $w$ with the following elevations: East-West Gradient The first term in the Horn algorithm, the east-west gradient $\frac{dz}{dx}$, describes how elevation changes between the east and west side of the pixel neighborhood. To solve it, we’ll break it down into the change in elevation $dz$ over the horizontal distance $dx$. The vertical distance between pixels, $dz$, is calculated with the following equation, where the northwest pixel in the neighborhood is labeled $z_{nw}$, the southwest pixel is labeled $z_{sw}$, and so on. $$ dz = \frac{(z_{nw} + z_{sw} + 2z_{w}) - (z_{ne} + z_{se} + 2z_{e})}{8} $$ There are a few things to notice in the equation above. 1. We’re calculating the difference between the sum of the western and eastern pixels. 2. The directly west and east pixels, $z_{w}$ and $z_{e}$, are multiplied by 2 to increase their weight over the diagonal pixels. 3. The result is divided by 8 to normalize the value based on the input weights. Here’s $dz$ in code: dz = ((w[0][0] + w[1][0] * 2 + w[2][0]) - (w[0][2] + w[1][2] * 2 + w[2][2])) / 8 The horizontal distance between pixels, $dx$, is simply the raster resolution (for this example, 30). Now we can calculate $\frac {dz}{dx}$, the change in elevation over the $x$ dimension, by simply dividing the two terms. North-South Gradient The second term in the Horn algorithm, the north-south gradient $\frac{dz}{dy}$, describes how elevation changes between the north and south side of the pixel neighborhood. It’s solution is nearly identical to the east-west gradient, after swapping in the appropriate pixels. $$ dz = \frac{(z_{sw} + z_{se} + 2z_{s}) + (z_{nw} + z_{ne} + 2z_{n})}{8} $$ And in code: dz = ((w[2][0] + w[2][1] * 2 + w[2][2]) - (w[0][0] + w[0][1] * 2 + w[0][2])) / 8 Assuming our image has square pixels, the distance between pixels in the north-south dimension $dy$ will be the same as $dx$. The last step in calculating the north-south gradient is to divide the change in elevation over the $y$ dimension by the horizontal distance between cells. Putting It Together With the east-west and north-south gradients calculated, $\frac{dz}{dx}$ and $\frac{dz}{dy}$ respectiely, solving the Horn algorithm is straightforward. Just plug the two solved terms into the original equation. $$ slope _{percent} = \sqrt {\frac{dz}{dx}^2 + \frac{dz}{dy}^2} $$ slope_pct = np.sqrt(dz_dx ** 2 + dz_dy ** 2) With a little more work, we can convert the percent slope into more familiar degrees of slope. $$ slope _{degrees} = \arctan \left(slope _{percent}\right) * \left(\frac {180}{\pi}\right) $$ slope = np.arctan(slope_pct) * (180 / np.pi) For convenience, let’s simplify the code above and package it into a function that will calculate the slope of a single pixel given it’s 3x3 neighborhood of pixels. def pixel_slope(w, resolution): dz_dx = ((w[0][0] + w[1][0] * 2 + w[2][0]) - (w[0][2] + w[1][2] * 2 + w[2][2])) / (8 * resolution) dz_dy = ((w[2][0] + w[2][1] * 2 + w[2][2]) - (w[0][0] + w[0][1] * 2 + w[0][2])) / (8 * resolution) return np.arctan(np.sqrt(dz_dx ** 2 + dz_dy ** 2)) * (180 / np.pi) With the fundamentals of Horn’s algorithm down, the challenge now is simply to calculate it for each pixel. Scaling Up To calculate slope from our elevation data, we’ll iterate over each row and column in the DEM^1, grab the 3x3 window of neighboring pixels, use the pixel_slope function to calculate the slope of the center pixel, and store the result in an empty slope array. First, we’ll create the empty array to store slope values. We’ll make it two pixels smaller than the DEM in the x and y dimensions to account for the fact that edge pixels don’t have the eight required neighbors. slope = np.empty((dem.shape[0] - 2, dem.shape[1] - 2)) Now we’ll iterate over rows and columns (dropping one pixel from each side to account for edges) and calculate slope for each neighborhood of pixels. for row in range(1, dem.shape[0] - 1): for col in range(1, dem.shape[1] - 1): w = dem[row-1:row+2, col-1:col+2] slope[row-1][col-1] = pixel_slope(w, 30) Finally, let’s see what our slope map looks like, with flat areas in blue and steep areas in red. Aspect is closely related to slope, describing orientation rather than steepness. Since we’ve already implemented the Horn slope algorithm, we’ll use that for calculating aspect as well, with the following equation. $$ aspect = \arctan2 \left( \frac{dz}{dx} , \frac{dz}{dy} \right) $$ The east-west and north-south gradients, $\frac{dz}{dx}$ and $\frac{dz}{dy}$ respectiely, are calculated identically to slope. The only difference is that instead of taking the square root of their sum to get the overall slope, we use the arctangent to calculate the angle between them. Here’s that equation in code, plus conversion to degrees and rescaling to compass bearings: def pixel_aspect(w, resolution): """Calculate the aspect of a pixel in degrees given its 3x3 neighborhood `w` and cell resolution.""" dz_dx = ((w[0][0] + w[1][0] * 2 + w[2][0]) - (w[0][2] + w[1][2] * 2 + w[2][2])) / (8 * resolution) dz_dy = ((w[2][0] + w[2][1] * 2 + w[2][2]) - (w[0][0] + w[0][1] * 2 + w[0][2])) / (8 * resolution) aspect = np.arctan2(dz_dy, dz_dx) * (180 / np.pi) # Convert to compass bearings 0 - 360 aspect = 450 - aspect if aspect > 90 else 90 - aspect return aspect We calculate aspect for each pixel the same way we did slope, by iteratively filling an empty 2D array with aspects calculated from each pixel’s 3x3 neighborhood. aspect = np.empty((dem.shape[0] - 2, dem.shape[1] - 2)) for row in range(1, dem.shape[0] - 1): for col in range(1, dem.shape[1] - 1): w = dem[row-1:row+2, col-1:col+2] aspect[row-1][col-1] = pixel_aspect(w, 30) With slope and aspect calculated, generating hillshading to visualize the terrain is simple. The formula to calculate hillshading is below, with all units in radians. The zenith and azimuth parameters describe the position of the simulated light source, and can be tuned to adjust the hillshading effect. $$ hillshade = \cos(zenith) * \cos(slope) + \sin(zenith) * \sin(slope) * \cos(azimuth - aspect) $$ Here’s that equation in code, using the slope and aspect arrays we calculated earlier. azimuth = 315 altitude = 45 # Convert solar altitude to zenith and convert everything to radians zenith_rad = 90 - altitude * np.pi / 180 azimuth_rad = azimuth * np.pi / 180 slope_rad = slope * np.pi / 180 aspect_rad = aspect * np.pi / 180 # Calculate hillshade and scale to 8-bit hs = 255 * (np.cos(zenith_rad) * np.cos(slope_rad) + np.sin(zenith_rad) * np.sin(slope_rad) * np.cos(azimuth_rad - aspect_rad)) # Clip out-of-bounds values hs = np.clip(hs, 0, 255) Wrapping Up Now that we know how to implement terrain algorithms from scratch, the next step is to uninstall QGIS, WhiteboxTools, GDAL, and any other geospatial tools we no longer need! Okay, probably not. There are faster and more convenient ways to generate terrain data than rolling your own implementations, but getting a glimpse at the underlying algorithms does provide some interesting insights into how they work. 1. Using Python loops to apply our calculations to each pixel window is good for demonstration, but very slow in practice. If performance was a factor, you’d want to vectorize this to do as much work in C as possible. ↩︎ #Python #Algorithms
{"url":"https://www.aazuspan.dev/blog/terrain-algorithms-from-scratch/","timestamp":"2024-11-07T09:51:06Z","content_type":"text/html","content_length":"33576","record_id":"<urn:uuid:bbc49fbb-b03f-4fdd-9261-4a3eeb1c5577>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00628.warc.gz"}
Achieving Desired Resolution in Column Separation Process What is the process of calculating the number of theoretical plates needed to achieve a resolution of 1.5 for the separation of two compounds with partition coefficients of 15 and 18? To calculate the number of theoretical plates needed for a desired resolution in a separation process, we can use the following equation: N = (4 * Rs^2) / (alpha^2 * (1 - alpha)^2) where: - N is the number of theoretical plates - Rs is the desired resolution - alpha is the separation factor, defined as the ratio of the larger partition coefficient to the smaller partition coefficient (k2 / k1) In this case, we are given: - Rs = 1.5 (desired resolution) - k1 = 15 (partition coefficient of compound 1) - k2 = 18 (partition coefficient of compound 2) Therefore, we can calculate the separation factor: alpha = k2 / k1 = 18 / 15 = 6/5 Now, plugging all the values into the equation: N = (4 * 1.5^2) / ((6/5)^2 * (1 - 6/5)^2) ≈ 1317.69 Therefore, you would need approximately 1317.69 theoretical plates to achieve a resolution of 1.5 for the separation of these two compounds with partition coefficients of 15 and 18. Understanding the Calculation Process Theoretical Plates: The number of theoretical plates in a column separation process reflects the efficiency of the separation. A higher number of theoretical plates indicates better separation and resolution. Desired Resolution (Rs): The desired resolution represents the degree of separation needed between two compounds in a mixture. A higher resolution value indicates that the compounds are more effectively separated. Separation Factor (alpha): The separation factor is a crucial parameter in column chromatography. It is defined as the ratio of the partition coefficient of the compound with higher affinity to the stationary phase to the partition coefficient of the compound with lower affinity. Calculation Process: The formula N = (4 * Rs^2) / (alpha^2 * (1 - alpha)^2) is derived from the Van Deemter equation, which describes the relationship between the number of theoretical plates, plate height, and flow rate in chromatography. In the given scenario, we have partition coefficients of 15 and 18 for the two compounds, along with a desired resolution of 1.5. By substituting these values into the equation and calculating the separation factor, we can determine the number of theoretical plates required for the separation process. By understanding and applying the formula for calculating theoretical plates, chromatographers can optimize their column chromatography methods to achieve the desired separation efficiency and resolution for various compounds.
{"url":"https://www.signofthewhaleca.com/chemistry/achieving-desired-resolution-in-column-separation-process.html","timestamp":"2024-11-11T07:11:26Z","content_type":"text/html","content_length":"24184","record_id":"<urn:uuid:88b54655-6683-4af6-b563-3548b6465685>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00113.warc.gz"}
NCERT Solutions For Class 11 Download PDF free NCERT Solutions For Class 11 Download PDF Download NCERT Solutions for Class 11 available in PDF for all chapters given in your NCERT textbooks for Class 11. All solutions have been designed by expert teachers based on the latest syllabus issued by CBSE and NCERT for the current academic year. All questions which have been given in the NCERT books for Class 11 and their chapters have been carefully solved and detailed step by step answers have been provided by us which will help the Class 11 students to understand how the solutions were derived. You can click on the subject wise links below and download the solutions for Class 11 all subjects free. Class 11 NCERT Solutions Students can click on the following subject wise links to download in PDF the best and latest solutions for all subjects in class 11 prepared by expert teachers by providing detailed explanations and answers so that you are able to clearly understand how the questions are solved. Its always suggested to review these NCERT solutions prior to your exams too. NCERT solutions for class 11 are very critical for students who are studying in class 11 as these books are really important books and are being utilized in almost all affiliated schools in India. These ebooks have been prepared based on the syllabus which has been issued by CBSE. It is needless to mention that the student should concentrate on these books as they have been specially designed for all types of students and we have also seen that questions in the examinations generally come from the Class 11 books issued by NCERT. NCERT Solutions for Class 11 Free PDF Download We have designed the solutions for the questions which are given at the end of each chapter for each of the subjects in the textbooks for class 11. These solutions have been designed by expert teachers based on the latest pattern which is followed in various schools to ensure that you are able to understand the solutions in a step-by-step manner and also when you are using these for doing your classwork or homework then you should be able to properly present the solutions in a manner which helps you to get good marks in your classes. We have made sure that you don’t have to pay for any of these solutions for Class 11 therefore we have provided all the solutions free for you students and you don’t even have to read these online as we have provided simple to download links for class 11 so that you can just click and download and read it as per your convenience. Best solutions for NCERT Class 11 All the solutions for the textbook of NCERT class 11 have been provided for each subject and its chapter in different PDF files so that you can simply click on the download button and get the solutions that you are searching for. Our teachers have made sure that we provide the latest and the best quality of solutions so that you can use them in your school and also share them with your Importance of NCERT Solutions for Class 11 As you all know that the NCERT books have been implemented in almost all the states and their schools in India therefore this further increases the importance of NCERT solutions for class 11 for you. You should be really careful while studying your textbook and make sure that you understand all the concepts so that you can easily solve the questions which are given at the end of the exercise of each chapter. To help all class 11 students we have provided solutions for all the questions which have been provided at the end of each chapter in a very simple and easy-to-understand format so that you have no issues in understanding the solutions. These solutions are really important as you can expect many questions coming in your exams from these NCERT questions themselves. There are a lot of useful exercises which have been given at the end of each chapter which has some very important questions which help the students to clearly understand the concepts of that chapter in class 11. We have provided solutions for every such question to make sure that the students are able to understand the concepts clearly. Free Solutions for Class 11 NCERT Textbook It is very important for the students that they do not waste money on buying answers to the questions which have been given in the chapters in NCERT class 11 and only refer to the best source for getting all the answers. Over here you will get all the solutions free as all the study material which has been provided by us on our website is absolutely free for all students and you don’t have to pay anything for downloading any of the PDF files which is available over here. All the files can be easily downloaded in PDF format without paying anything. Should you study NCERT Books and Solutions for Class 11 ? Many times lot of students ask us this question that whether they should be studying the NCERT book and solutions for class 11. Our trained teachers have already provided the answer to almost all students that NCERT books are very useful and not even in schools, rather in any other competition exam too, Almost everywhere you will find that the concepts which have been provided in the E-textbooks for NCERT are being utilized and questions are asked from the chapters and concepts which have been provided in these chapters for Class 11 NCERT. The Class 11 students should be enthusiastic while reading these textbooks and in case you face any issues in downloading the solutions for NCERT for any of the chapters given above then you can contact us using the contact us form and we will be more than happy to help you. Students for class 11 can also refer to a lot of other useful study material that we have provided here for free in PDF format. Where can I download NCERT Solutions for Class 11 in PDF ? You can download the latest solutions for class 11 from https://cbsencertsolutions.com/ free which have been designed based on the latest syllabus and curriculum issued by CBSE and NCERT I want detailed solutions for all questions given in my textbooks for Class 11, can I download them from your website? Yes, you have come to the right place as we have provided answers to all questions which are given at the end of each chapter in your book Can these solutions be downloaded in PDF format, do I need to pay for it? Yes, all solutions provided here can be easily downloaded in PDF format without paying anything My teacher has asked me to solve questions given at the end of each chapter in my textbook for Class 11, where can I get the solutions to do my homework? We are here to help you as we have provided solutions and answers to all questions given in your class 11 NCERT textbooks for all subjects. Jut click on the link of your subject and download the solutions in PDF I don’t have internet, how can I read the solutions provided here ? We have provided all solutions in PDF and can be downloaded easily. Please go ahead and download all solutions in single click now. Questions given in Class 11 books are very difficult to solve, please help? We have provided all solutions to questions given in your books of Class 11. All answers have been provided in a detailed and step by step manner so that you are able to understand the concepts and the way answers have been derived.
{"url":"https://www.cbsencertsolutions.com/ncert-solutions-for-class-11-download-pdf/","timestamp":"2024-11-01T20:38:05Z","content_type":"text/html","content_length":"148012","record_id":"<urn:uuid:10ec3a40-5e1f-48a4-af66-41648f0e72b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00432.warc.gz"}
CA3Dm – Paul Sikora The application allows for point-based forecasting of ground surface subsidence as a result of underground mining (longwall mining by default). The calculations are based on the method of cellular automata and its mathematical characteristics described in scientific publications by Dr. Paweł Sikora. The application allows calculating only the subsidences for the case of a flat deck. A deterministic, lossless transition function was used. The applied algorithm is characterized by the principle of superposition of mining influences and the distribution of subsidence consistent with the t-student distribution.
{"url":"https://my-hw.org/ca3dm-paul-sikora/","timestamp":"2024-11-06T09:15:45Z","content_type":"text/html","content_length":"33743","record_id":"<urn:uuid:284b4192-920a-4d10-a955-5d4bf1185f6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00377.warc.gz"}
Understanding Coulomb's Law Formula Coulomb's Law is a fundamental formula in the study of electricity and magnetism. Developed by French physicist Charles-Augustin de Coulomb in the late 18th century, this law describes the force between two charged particles. It is an essential concept for understanding the behavior of electrically charged particles and the principles behind electric fields and forces. In this article, we will delve into the details of Coulomb's Law formula, its significance, and its applications in the field of physics. Whether you are a student or a curious reader, this article will provide you with a comprehensive understanding of this important formula. So let's dive into the world of electricity and explore the intricacies of Coulomb's Law. Electricity and magnetism are two of the fundamental forces that govern our universe. In order to understand the complex interactions between these forces, scientists have developed various formulas to help explain their behavior. One of the most important formulas in this field is Coulomb's Law formula. Named after the French physicist Charles-Augustin de Coulomb, this formula describes the force between two charged particles. It is a crucial concept in the study of electricity and magnetism, and it has numerous practical applications in modern technology.In this article, we will dive deep into the world of Coulomb's Law and explore its significance in the realm of physics. We will break down the formula and explain its components, as well as provide real-world examples of its application. Whether you are a student learning about electricity and magnetism for the first time, or a seasoned physicist looking for a refresher, this article will provide a comprehensive understanding of Coulomb's Law formula.Join us on this journey as we unravel the mysteries behind one of the most fundamental concepts in physics - Coulomb's Law. Coulomb's Law Formula is an essential concept in the field of physics, particularly in the study of electricity and magnetism. It is a fundamental law that governs the relationship between electric charges and the force they exert on each other. In this article, we will delve into the details of Coulomb's Law Formula and its significance in the world of physics. The history behind Coulomb's Law Formula dates back to the late 18th century when French physicist Charles-Augustin de Coulomb discovered it. He was conducting experiments with electric charges and noticed that the force between two charges was directly proportional to the product of their magnitudes and inversely proportional to the square of the distance between them. This led him to formulate what is now known as Coulomb's Law Formula. The formula can be written as F=k*q1*q2/r^2, where F is the force between two charges, q1 and q2 are the magnitudes of the charges, r is the distance between them, and k is a constant of proportionality. The magnitude of the force is directly proportional to the product of the two charges and inversely proportional to the square of the distance between them. Let's break down each variable and understand its significance in the formula. The magnitude of the charges, q1 and q2, determines the strength of the force between them. The larger the charges, the greater the force. The distance between the charges, r, also plays a crucial role. As per the inverse-square law, the farther apart two charges are, the weaker their force of attraction or repulsion will be. The constant of proportionality, k, is a value that depends on the medium in which the charges are located. In a vacuum, this value is known as Coulomb's constant and is approximately equal to 9 x 10^9 N*m^2/C^2.In different mediums, this value may vary, which is why the formula takes into account the specific medium in which the charges are present. Coulomb's Law Formula applies to both like and unlike charges. In the case of like charges, such as two positive charges or two negative charges, the force between them will be repulsive. This means that they will push away from each other. On the other hand, in the case of unlike charges, such as a positive and a negative charge, the force between them will be attractive. This means that they will pull towards each other. As mentioned earlier, Coulomb's Law Formula is closely related to the inverse-square law. This law states that as the distance between two charges increases by a factor of n, the force between them decreases by a factor of n^2.This relationship is crucial in understanding the behavior of electric charges and how they interact with each other. In conclusion, Coulomb's Law Formula is an essential concept in physics, particularly in the study of electricity and magnetism. It was discovered by French physicist Charles-Augustin de Coulomb in the late 18th century and has been a fundamental law in understanding the relationship between electric charges ever since. Its application to both like and unlike charges, as well as its connection to the inverse-square law, makes it a crucial formula in the field of physics. Coulomb's Law Formula is an essential concept in the field of physics, particularly in the study of electricity and magnetism. The formula was discovered by French physicist Charles-Augustin de Coulomb in the late 18th century, making it over 200 years old. The discovery of Coulomb's Law Formula was a significant breakthrough in the understanding of electromagnetism. Prior to this, scientists were struggling to explain the behavior of electric charges and how they interacted with each other. It wasn't until Coulomb conducted his experiments with charged spheres that he was able to formulate a mathematical equation to describe their behavior. The formula itself is relatively simple, yet its implications are far-reaching. It states that the force between two point charges (q1 and q2) is directly proportional to the product of their magnitudes and inversely proportional to the square of the distance between them (r). This relationship is represented by the constant of proportionality, known as Coulomb's constant (k). Breaking down the formula further, we can see that the magnitude of the charges (q1 and q2) plays a crucial role in determining the strength of the force between them. Similarly, the distance between the charges (r) also plays a significant role. As the distance increases, the force decreases due to the inverse-square law, which states that the force decreases with the square of the distance. Coulomb's Law Formula applies to both like and unlike charges. In the case of like charges, they will repel each other, while unlike charges will attract. This can be seen in everyday objects such as magnets or when using static electricity to pick up small pieces of paper. The formula also helps explain the behavior of lightning, as opposite charges build up in the clouds and on the ground until they are strong enough to overcome the resistance of the air and create a discharge. In conclusion, Coulomb's Law Formula is a crucial concept in the world of physics, particularly in the study of electricity and magnetism. Its discovery by Charles-Augustin de Coulomb revolutionized our understanding of electric charges and their interactions. By breaking down the formula and understanding its variables, we can gain a deeper understanding of the forces at play in our everyday lives. Applications of Coulomb's Law Formula Coulomb's Law Formula has numerous real-world applications that demonstrate its importance in the field of physics. One of its primary uses is in calculating the force between two charged particles. This is particularly useful in understanding the behavior of atoms and molecules, as well as in predicting the interactions between them. Another application of Coulomb's Law Formula is in determining the strength of an electric field. This is crucial in designing and analyzing various electrical systems, such as circuits and motors. By using the formula, engineers can accurately predict the behavior of the electric field and make necessary adjustments to improve its efficiency. Furthermore, Coulomb's Law Formula plays a critical role in understanding the behavior of electrically charged objects in a magnetic field. This phenomenon, known as electromagnetic induction, is essential in many modern technologies, including generators, transformers, and electric motors. Applications of Coulomb's Law Formula Coulomb's Law Formula is an essential concept in the field of physics, particularly in the study of electricity and magnetism. In this article, we have delved into the details of Coulomb's Law Formula and its significance in the world of physics. Now, let's take a closer look at some real-world applications of this formula. One of the most common applications of Coulomb's Law Formula is calculating the force between two charged particles. This is important in understanding the behavior of electric charges and how they interact with each other. By plugging in the values of the charges and distance between them, we can determine the magnitude and direction of the force they exert on each other. Another important application is determining the strength of an electric field. Electric fields are created by charged particles and can be used to manipulate and control the movement of charged objects. By using Coulomb's Law Formula, we can calculate the strength of an electric field at any given point, which is crucial in many practical applications. Lastly, Coulomb's Law Formula also helps us understand the behavior of electrically charged objects in a magnetic field. When an electrically charged object moves through a magnetic field, it experiences a force called the Lorentz force. This force can be calculated using Coulomb's Law Formula, allowing us to predict and analyze the motion of charged particles in a magnetic field. Applications of Coulomb's Law Formula The applications of Coulomb's Law Formula are vast and can be seen in various real-world scenarios. This fundamental law has numerous practical uses in the field of physics, particularly in the study of electricity and magnetism. One of the most common applications of Coulomb's Law is in calculating the force between two charged particles. For instance, if we have two particles with known charges, we can use Coulomb's Law Formula to determine the force between them. This is crucial in understanding the behavior of electrically charged objects and predicting their movement. Another important application of Coulomb's Law Formula is in determining the strength of an electric field. By using this formula, we can calculate the electric field strength at a specific point in space due to one or more charged particles. This is essential in analyzing and designing electrical systems. Coulomb's Law Formula also plays a significant role in understanding the behavior of electrically charged objects in a magnetic field. By using this formula, we can calculate the force exerted on a charged particle by a magnetic field, which helps us understand the motion of charged particles in a magnetic field. In conclusion, Coulomb's Law Formula is a crucial concept in understanding the fundamental principles of electricity and magnetism. It allows us to quantify the force between electric charges and has numerous practical applications. For those interested in pursuing a career in physics, it is essential to have a thorough understanding of this formula.
{"url":"https://www.onlinephysics.co.uk/electricity-and-magnetism-formulas-coulomb-s-law-formula","timestamp":"2024-11-07T16:35:57Z","content_type":"text/html","content_length":"179237","record_id":"<urn:uuid:142c0fa9-7605-472f-986b-f826267d4ac7>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00571.warc.gz"}
Theorems on Error Variance in the Black-Scholes Framework | HackerNoon Table of Links 1.2 Asymptotic Notation (Big O) 1.5 Monte Carlo Simulation and Variance Reduction Techniques 2. Methodology 3.2 Theorems and Model Discussion 3.2 THEOREMS AND MODEL DISCUSSION The three steps we take are outlined in Theorems 1 through 3. We begin by deriving an equation for the hedge error variance in the case when each call position in the portfolio has a standard delta hedge applied to it [22]. Following that, To protect against changes in the value of the option portfolio, we create an analogous formula for the scenario in which an arbitrary portfolio in the underlying values is retained [23]. As a result, we are able to show how selecting the right hedge portfolio helps lower hedge error variances. We last provide our major theoretical finding, which demonstrates that by focusing only on the linear and higher order exposures to the systematic risk factor [24], it becomes possible to design a perfect static hedge portfolio in finite time, provided that the portfolio size grows infinitely [25]. Hedge error is defined by โ ๐ ป as The hedge portfolio is the total of the hedge portfolios for each individual position when using the hedging approach described above [26]. We create a power series by expanding the hedge mistake (14) [27]. Regarding the duration of the hedging period โ ๐ ก, See Leland (1985) as well as Mello and Neuhaus (1998). The predicted hedging error and its variation under the current conventional delta hedging method are as follows [28]. Theorem 1: The hedging error โ ๐ ป in (14), which is obtained through delta hedging of the individual option holdings, satisfies Essentially, for N = 1, we obtain the square of the option's gamma, which is the well-known calculation for the hedge error variance [3]. This variance's explicit representation clearly indicates that the option portfolio's return is no longer reproduced riskfree in a defined amount of time using the Black-Scholes hedging strategy. Thus, it appears that the option portfolio does not have a unique preference free pricing. With this unconventional approach to hedging, we get the following outcome for the hedge error variance. There is a trade-off between insuring all idiosyncratic risk (in the limit ๐ โ ๐ ก โ 0 , the traditional Black-Scholes technique) and hedging market risk exclusively (in the limit ๐ โ ๐ ก โ โ ) for finite portfolio size ๐ and finite revision time โ ๐ ก. Clearly, with bigger portfolio sizes and revision intervals, there are more departures from the traditional hedging technique in terms of hedge ratios and variance reduction. Now that we have established the general conclusion, we may argue that it is better to hedge higher order systematic risk rather than linear idiosyncratic risk in big option portfolios. The subsequent theorem does this. One can demonstrate a similar outcome if idiosyncratic risk is valued [3]. In that scenario, the quantity of securities required to build the hedging portfolio grows quadratically with ๐ . Furthermore, there are now prospects for arbitrage since the projected return of this hedging approach need not be zero. This is to be expected because arbitrage is possible in a market structure that doesn't even have options [24]. Theorem 3's result shows that if the ๐ ๐ s are different, we can create a riskless hedging strategy for finite time โ ๐ ก in the limit ๐ โ โ . Stated otherwise, the risk incurred by switching to a discrete time setup entirely disappears: by selecting the suitable non-standard hedging approach, the systematic risk component can be eliminated to any arbitrary order of โ ๐ ก. Diversification causes the idiosyncratic risk component to vanish at the same time. The hedge portfolio is selected so that, up to a high enough order in โ t, its expectation conditional on the systematic risk component corresponds with its unconditional expectation. This is the key to the demonstration. By matching the higher order properties of the systematic risk exposure, this may be proven. The portfolio loadings ๐ ๐ must be subject to a series of restrictions in order to meet the two sorts of expectations. In the ๐ ๐ s, each of these restrictions is linear. The coefficients of these constraints entail powers of the systematic volatility ๐ ๐ and higher order derivatives of the Black-Scholes price (to capture the proper curvature). The number of securities, ๐ โ ฅ ๐ , must be sufficiently large in order for this set of constraints to have a solution. Secondly, the limitations system must not be unique. Requiring that at least ๐ of the ๐ ๐ s be different and not equal to zero ensures the latter. In the current discrete time framework, one can question if the Black-Scholes pricing remain accurate, given the stark differences between the behavior of the conventional and portfolio hedging approaches. They are, according to the next corollary. Corollary 1 (the price is right): If only market risk is priced and the number of securities diverges (๐ โ โ ), and if the ๐ ๐ s are different such that we can set the approximation order n arbitrarily high (๐ โ โ ), then the only arbitrage-free price of the options is equal to their Black-Scholes price, except for a set of measure zero. According to Corollary 1, arbitrage opportunities are those in which there is an almost certain chance of earning a return greater than the risk-free rate [3]. The hedge portfolio is priced the same as Black-Scholes by construction. This suggests that the total of the Black-Scholes prices and the options prices are equal, based on the outcome of Theorem 3. However, Theorem 3 also holds for all call option subseries and their underlying values, meaning that the set of options with prices that deviate from the Black-Scholes price has measured zero. (1) Agni Rakshit, Department of Mathematics, National Institute of Technology, Durgapur, Durgapur, India ([email protected]); (2) Gautam Bandyopadhyay, Department of Management Studies, National Institute of Technology, Durgapur, Durgapur, India ([email protected]); (3) Tanujit Chakraborty, Department of Science and Engineering & Sorbonne Center for AI, Sorbonne University, Abu Dhabi, United Arab Emirates ([email protected]). This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
{"url":"https://hackernoon.com/theorems-on-error-variance-in-the-black-scholes-framework","timestamp":"2024-11-11T17:11:28Z","content_type":"text/html","content_length":"275382","record_id":"<urn:uuid:e95d2cdf-43f7-4813-aacf-9037439542d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00247.warc.gz"}
TS Inter 2nd Year Physics Question Paper May 2018 Thoroughly analyzing TS Inter 2nd Year Physics Model Papers and TS Inter 2nd Year Physics Question Paper May 2018 helps students identify their strengths and weaknesses. TS Inter 2nd Year Physics Question Paper May 2018 Time: 3 Hours Maximum Marks: 60 Section – A (10 × 2 = 20) Note : • Answer ALL questions. • Each question carries TWO marks. • ALL are very short answer type questions. Question 1. A concave mirror of focal length 10 cm is placed at a dis-tance 35 cm from a wall. How far from the wall should an object be placed so that its real image is formed on the wall ? Given f = -10 cm (concave) V = -35 cm; d = ? using \(\frac{1}{\mathrm{u}}\) + \(\frac{1}{\mathrm{v}}\) = \(\frac{1}{\mathrm{f}}\) \(\frac{1}{\mathrm{u}}\) – \(\frac{1}{35}\) = \(\frac{-1}{10}\) \(\frac{1}{\mathrm{u}}\) = \(\frac{1}{35}\) = – \(\frac{1}{10}\) = \(\frac{-25}{35 \times 10}\) ∴ u = -14 cm. ∴ Distance between wall and object, d = 35 – 14 = 21 cm. Question 2. How do you convert a moving coll galvanometer into an ammeter? A small resistance is connected in parallel to the moving coil galvanometer, then it converts to Ammeter. Question 3. Magnetic lines form continuous closed loops. Why? Magnetic lines of force always start from north pole and forming curved path, enter south pole and travel to north pole inside the magnet. Thus lines of force are forming closed loops. Question 4. Define Magnetic declination. Magnetic Declination (D) : The angle between the true geographic north and the north shown by a compass needle is called magnetic declination or simply declination (D). Question 5. What is transformer ratio ? The ratio of secondary e.m.f to the primary e.m.f (or) num¬ber of turns in secondary to the number of turns in the primary is called the transformer ratio. Transformer ratio = \(\frac{\mathrm{V}_{\mathrm{S}}}{\mathrm{V}_{\mathrm{P}}}\) = \(\frac{\mathrm{N}_{\mathrm{S}}}{\mathrm{N}_{\mathrm{P}}}\) Question 6. If the wavelength of electromagnetic radiation is doubled, what happens to the energy of photon ? Photon energy (E) = hv = \(\frac{\mathrm{hc}}{\lambda}\) E ∝ \(\frac{1}{\lambda}\) λ[1] = 1, λ[2] = 2λ, E[1] = E \(\frac{E}{E_2}=\frac{2 \lambda}{\lambda}\) E[2] = \(\frac{E}{2}\) ∴ The energy of photon reduces to half of its initial value. Question 7. Write down Einstein’s photoelectric equation. K[max] = \(\frac{1}{2}\) m V^2[max] = hυ – Φ[0] Question 8. State Heisenberg’s uncertainty principle. Uncertainty principle states that “it is impossible to measure both position (Δx) and momentum of an electron (Δp) [or any other particle] at the same time exactly”, i.e., Δx, Δp ≈ h where Δx is uncertainty in the specificatipn of position and Δp is uncertainty in the specification of momentum. Question 9. What is a p-type semiconductor ? What are the majority charge carriers in it ? If a trivalent impurity is added to a tetravalent semiconductor, it is called p-type semi-conductor. In p-type semiconductor majority charge carriers are holes and minority charge carriers are electrons. Question 10. Define modulation. Why is it necessary ? Modulation: The process of combining low frequency audio signal with high frequency carrier wave is called modulation. The audio frequency signals cannot be transmitted over long distances faithfully. Therefore they are combined with high frequency waves and transmitted. Section – B (6 × 4 = 24) Note : • Answer ANY SIX of the following questions, • Each question carries FOUR marks. • ALL are short answer type questions. Question 11. Explain the formation of a mirage. In a desert, the sand becomes very hot during the day time and it rapidly heats the layer of air which is in its contact. So density of air decreases. As a result the successive upward layers are denser than lower layers. When a beam of light travelling from the top of a tree enters a rarer layer, it is refracted away from the normal. As a result at the surface of layers of air, each time the angle of incidence increases and ultimately a stage is reached, when the angle of incidence becomes greater than the critical angle between the two layers, the incident ray suffers total internal reflection. So it appears as inverted image of the tree is formed and the same looks like a pool of water to the observer. Question 12. Derive the expression for the intensity at a point where in-terference of light occurs. Arrive at the conditions for maximum and zero intensity. Let y[1] and y[2] be the displacements of the two waves having same amplitude a and Φ is the phase difference between them. y[1] = a sin ωt …………. (1) y[2] = a sin (ωt + Φ) ……….. (2) The resultant displacement y = y[1] + y[2] y = a sin ωt + a sin (ωt + Φ) y = a sin ωt + a sin ωt cos Φ + a cos ωt sin Φ y = a sin ωt [1 + cos Φ] + cos ωt (a sin Φ) …………. (3) Let R cos θ = a(l + cos Φ) …………. (4) R sin θ = a sin Φ …………. (5) y = R sin ωt. cos θ + R cos ωt. sin θ y = R sin (ωt + θ) …………. (6) Where R is the resultant amplitude at P squaring equations (4) and (5), then adding R^2 [cos^2 θ + sin^2 θ] = a^2 [1 + cos^2 Φ + 2 cos Φ + sin^2 Φ] R^2 [1] = a^2 [1 + 1 + 2 cos Φ] I = R^2 = 2a^2 [1 + cos Φ] = 2a^2 × 2 cos^2\(\frac{\phi}{2}\); I = 4a^2 cos^2 \(\frac{\phi}{2}\) ……………. (7) i) Maximum intensity (I[max]) cos^2 \(\frac{\phi}{2}\) = 1 Φ = 2nπ where n = 0, 1, 2, 3 ……………. Φ = 0, 2π, 4π, 6π …………. ∴ I[max] = 4a^2. ii) Minimum intensity (I[min]) cos^2 \(\frac{\phi}{2}\) = 0 Φ = (2n + 1)π where n = 0, 1, 2, 3 ……………. Φ = π, 3π, 5π, 7π …………. I[min] = 0. Question 13. Derive the equation for the moment of couple acting on an electric dipole in a uniform electric field. • A pair of opposite charges separated by a small distance is called dipole. • Consider the charge of dipole are -q and +q coulomb and the distance between them is 2a. • Then the electric dipole moment P is given by P = q × 2a = 2aq. It is a vector. It’s direction is from -q to +q along the axis of dipole. • It is placed in a uniform electric field E, making an angle 0 with field direction as shown in fig. • Due to electric field force on +q is F = +qE and force on -q is F = -qE. • These two equal and opposite charges constitute torque or moment of couple. i.e., torque, τ = ⊥^r distance × magnitude of one of force ∴ τ = (2a sin θ) qE = 2aqE sin θ = PE sin θ In vector form, \(\vec{\tau}=\overrightarrow{\mathrm{P}} \times \overrightarrow{\mathrm{E}}\) Question 14. Derive an expression for the capacitance of a parallel plate capacitor. Consider two parallel plates A and B separated by a distance d in air. The upper plate A is given a charge +q and a lower plate B given a charge -q. By using Gauss law, first we calculate the value of elec- trie field E. Consider a Gaussian surface PQRS. The flux through the ends PR and QS is zero, (∵ angle between E and ds is 90°). Electric flux through the surface PQ is zero, because inside the conductor charge is The only contribution of electric flux is due to the surface RS. Total electric flux through the entire Gaussian surface is Φ[E] = \(\oint \mathrm{E} \cdot \mathrm{dS}=\oint \mathrm{EdS} \cos \theta\) (∵ Angle between E and dS is zero, i.e., θ = 0) Φ[E] = \(\oint \mathrm{E} \cdot \mathrm{dS}\) (∵ \(\oint \mathrm{dS}=\mathrm{A}\)) Let A be the area of each plate Φ[E] = E.A …………. (1) Applying Gauss law Φ[E] = \(\frac{1}{\varepsilon_0}\) q ………… (2) From eq’s (1) and (2), EA = \(\frac{\mathrm{q}}{\varepsilon_0}\) E = \(\frac{\mathrm{q}}{\varepsilon_0 \mathrm{~A}}\) …………. (3) The potential difference V between the plates can be written as V = E.d = \(\frac{\mathrm{q}}{\varepsilon_0 \mathrm{~A}}\).d (∵ C = \(\frac{\mathrm{q}}{\mathrm{v}}\)) \(\frac{\mathrm{q}}{\mathrm{v}}\) = \(\frac{\varepsilon_0 \mathrm{~A}}{\mathrm{~d}}\) ∴ C = \(\frac{\varepsilon_0 \mathrm{~A}}{\mathrm{~d}}\) Question 15. A circular coil of wire consisting of 100 turns, each of radius 8.0 cm cames a current of 0.40 A. What is the magnitude of the magnetic field B at the centre of the coil ? Given, n = 100; r = 8 cm = 8 × 10^-2m; i = 0.40 A The magnetic field B at the centre of coil, B = \(\frac{\mu_0 \mathrm{ni}}{2 \mathrm{r}}\) ⇒ B = \(\frac{4 \pi \times 10^{-7} \times 100 \times 0.4}{2 \times 8 \times 10^{-2}}\) = \(\frac{4 \times 3.14 \times 4 \times 10^{-6}}{16 \times 10^{-2}}\) ∴ B = 3.14 × 10^-4T. Question 16. Apair of adjacent coils has a mutual inductance of 1.5 H. If the current in one coil changes from 0 to 20 A in 0.5 s, what is the change of flux linkage with the other coil ? Given M = 1.5 A di = 20 – 0 = 20 A dt = 0.5 sec e = M\(\frac{\mathrm{di}}{\mathrm{dt}}=\frac{\mathrm{d} \phi}{\mathrm{dt}}\) dΦ = M.di = 1.5 × 20 dΦ = 30 wb Question 17. What are limitations of Bohr’s theory of hydrogen atom? Limitations of Bohr’s theory of Hydrogen atom: 1. This theory is applicable only to simplest atom like hydrogen, with Z = 1. The theory fails in case of atoms of other elements for which Z > 1. 2. The theory does not explain why orbits of electrons are taken as circular, while elliptical orbits are also possible. 3. Bohr’s theory does not say anything about the relative intensities of spectral lines. 4. Bohr’s theory does not take into account the wave properties of electrons. Question 18. Describe how a semiconductor diode is used as a half wave rectifier. Working of half wave rectifiers : A single diode is used in Half wave rectifier. It rectifies only positive half cycles of the input AC signals. The circuit diagram is as shown in fig. The input AC signed is applied across the primaiy trans-former P. It induces AC voltage in secondary then the DC output is obtained across the load resistance ‘R[L]‘. The AC voltage across the secondary changes polarity after every half cycle. During the positive half cycle, the diode is in forward biased and current flows through it. During the negative half cycles, the diode is reverse biased and current does not flow through it. So, that current flows through the diode during positive half cycles only, current flows in R[L] through one direction. Thus a Half wave rectifier gives discontinuous and pulsative DC output across the load resistance as shown in fig. The number of DC pulses per second is equal to frequency of the applied AC. Efficiency of Rectifier: “The ratio of DC power output to the applied input AC power is known as Rectifier efficiency. It is denoted by ‘η’ ∴ η = \(\frac{\text { DC power output }}{\text { AC power input }}\) For Half wave Rectifier η = \(\frac{0.406 \times \mathrm{R}_{\mathrm{L}}}{\mathrm{r}_{\mathrm{f}}+\mathrm{R}_{\mathrm{L}}}\) Section – C (2 × 8 = 16) • Answer ANY TWO of the following questions. • Each question carries EIGHT marks. • ALL are long answer type questions. Question 19. Explain the formation of stationary waves in stretched strings and hence deduce the laws of transverse waves in stretched strings. A string is a metal wire whose length is large when compared to its thickness. A stretched string is fixed at both ends, when it is plucked at mid point, two reflected waves of same amplitude and frequency at the ends are travelling in opposite direction and overlap along the length. Then the resultant waves are known as the standing waves (or) stationary waves. Let two transverse progressive waves of same amplitude a, wave length k and frequency V, travelling in opposite direction be given by y[1] = a sin (kx – ωt) and y[2] = a sin (kx + ωt) where ω = 2π v and k = \(\frac{2 \pi}{\lambda}\) The resultant wave is given by y = y[1] + y[2] y = a sin (kx – ωt) + a sin (kx + ωt) y = (2a sin kx) cos ωt 2a sin kx = Amplitude of resultant wave. It depends on ‘kx’. If x = 0, \(\frac{\lambda}{2}, \frac{2 \lambda}{2}, \frac{3 \lambda}{2}\) … etc, the amplitude = zero These position are known as “Nodes”. If x = \(\frac{\lambda}{4}, \frac{3 \lambda}{4}, \frac{5 \lambda}{4}\) ……. etc, the amplitude = maximum (2a). These positions are called “Antinodes”. If the string vibrates in ‘P’ segments and ‘l’ is its length then length of each segment = \(\frac{l}{\mathrm{P}}\) Which is equal to \(\frac{\lambda}{2}\) ∴ \(\frac{l}{\mathrm{P}}=\frac{\lambda}{2}\) ⇒ λ = \(\frac{2 l}{\mathrm{P}}\) Harmonic frequency v = \(\frac{v}{\lambda}=\frac{v P}{2 l}\) v = \(\frac{v \mathrm{P}}{2 l}\) ………….. (1) If ‘T’ is tension (stretching force) in the string and ‘μ’ is linear density then velocity of transverse wave (v) in the string is v = \(\sqrt{\frac{T}{\mu}}\) ……. (2) From the Eqs (1) and (2) Harmonic frequency v = \(\frac{\mathrm{P}}{2 l} \sqrt{\frac{\mathrm{T}}{\mu}}\) If P = 1 then it is called fundamental frequency (or) first harmonic frequency ∴ Fundamental Frequency v = \(\frac{1}{2 l} \sqrt{\frac{\mathrm{T}}{\mu}}\) ………. (3) If p = 2 then it is first overtone (or) second harmonic frequency. v[1] = \(\frac{2}{2 l} \sqrt{\frac{\mathrm{T}}{\mu}}\)2v Similarly if p = 3 then second overtone (or) third harmonic frequency. v[2] = \(\frac{3}{2 l} \sqrt{\frac{\mathrm{T}}{\mu}}\)3v from the Eqs (3), (4) and (5) The ration of the frequencies of harmonics are v : v[1] : v[2] : = v : 2v : 3v = 1 : 2 : 3 ∴ The frequencies of the overtones in a given vibrating length, are integral multiples of the fundamental in the same length. Laws of Transverse Waves Along Stretched String: Fundamental frequency of the vibrating string v[2] = \(\frac{1}{2 l} \sqrt{\frac{\mathrm{T}}{\mu}}\) First Law: When the tension (T) and linear density (μ) are constant, the fundamental frequency (v) of a vibrating string is inversely proportional to its length. ∴ v ∝ \(\frac{1}{\mathrm{~T}}\) ⇒ vl constant, when ‘T’ and ‘μ’ are constant. Second Law: When the length (l) and its, linear density (m) are constant the fundamental frequency of a vibrating string is directly proportional to the square root of the stretching force (T). ∴ v ∝ √T ⇒ \(\frac{v}{\sqrt{T}}\) constant, when ‘l’ and ‘+’ are constant. Third Law: When the length (1) and the tension (T) are constant, the fundamental frequency of a vibrating string is inversely proportional to the square root of the linear density (M). v ∝ \(\frac{1}{\sqrt{\mu}}\) ⇒ v√μ constant, when ‘l’ and ‘T’ are constant. Question 20. State Kirchhoff s laws for an electrical network. Using these laws deduce the condition for balance in a Wheatstone bridge. TWo resistors of resistances 10 Ω and 15 Ω are connected in parallel. Find the effective resistance of their combination. 1) Kirchhoffs first law (Junction rule or KCL) : The algebraic sum of the currents at any junction is zero. ∴ ΣI = 0 The sum of the currents flowing towards a junction is equal to the sum of currents away from the junction. 2) Kirchhoffs second law (Loop rule or KVL) : The algebraic sum of potential around any closed loop is zero. ∴ Σ(IR) + ΣE = 0 Wheatstone bridge: Wheatstone’s bridge circuit consists of four resistances R[1], R[2], R[3] and R[4] are connected to form a closed path. A cell of emf e is connected between the point A and C and a galvanometer is connected between the.points B and D as shown in fig. The current through the various branches are indicated in the figure. The current through the galvanometer is I[g] and the resis¬tance of the galvanometer is G. Applying Kirchhoff’s first la at the junction D, I[1] – I[3] – I[g] = 0 ………….. (1) at the junction B, I[2] + I[g] I[4] = 0 ………… (2) ⇒ Applying Kirchhoff s second law to the closed path ADBA, – I[1]R[1] – I[g]G + I[2]R[2] = 0 ⇒ I[1] R[1] + I[g]G = I[2]R[2] ⇒ to the closed path DCBD – I[3]R[3] + I[4]R[4] = I[g]G = 0 ⇒ I[3]R[3] – I[g]G = I[4]R[4] ⇒ When the galvanometer shows zero deflection the points D and B are at the same potential so I[g] = 0. Substituting this value in (1), (2), (3) and (4). I[1] = I[3] ………… (5) I[2] = I[4] …………….. (6) I[1]R[1] = I[2]R[2] …………… (7) I[3]R[3] = I[4]R[4] …………… (8) ⇒ Dividing (7) by (8) \(\frac{I_1 R_1}{I_3 R_3}=\frac{I_2 R_2}{I_4 R_4}\) ⇒ \(\frac{R_1}{R_3}=\frac{R_2}{R_1}\) [∵ I[1] = I[3] & I[2] = I[4]] ∴ Wheatstone’s Bridge principle : R[4] = R[3] × \(\frac{\mathbf{R}_2}{\mathrm{R}_1}\) Problem: R[p] = \(\frac{R_1 R_2}{R_1+R_2}\) Resistance of the long wire 4R Hence, resistance of the half wire = \(\frac{4 \mathrm{R}}{2}\) = 2R When these two are connected in parallel, then the effective resistance R[p] = \(\frac{2 R \times 2 R}{2 R+2 R}=\frac{4 R^2}{4 R}\) = R Problem: Two resistors of resistances 10 Ω and 15 Ω are connected in parallel. Find the effective resistance of their combination. Given, R[1] = 10Ω; R[2] = 15; R[p] = ? Effective resistance, in parallel R[p] = \(\frac{R_1 R_2}{R_1+R_2}\) = \(\frac{10 \times 15}{10+15}\) = \(\frac{10 \times 15}{25}\) = 6Ω Question 21. What, is radioactivity ? State the law of radioactive decay. Show that radioactive decay is exponential in nature. The half life period of radium is 1600 years. How much time does 1 g of radium take to reduce to 0.125 g ? 1. Radioactivity: The nuclei of certain elements disintegrate spontaneously by emitting alpha (α), beta (β) and gamma (γ) rays. This phenomenon is called Radioactivity or Natural radioactivity. 2. Law of radioactivity decay: “The rate of radioactive decay (\(\frac{\mathrm{dN}}{\mathrm{dt}}\)) (or) the number of nuclei decaying per unit time at any instant, is directly proportional to the number of nuclei (N) present at that instant is called law of radioactivity decay.” 3. Radioactive decay is exponential in nature : Consider a radioactive substance. Let the number of nuclei present in the sample at t = 0, be N[0] and let N be the radioactive nuclei remain at an instant t. \(\frac{\mathrm{dN}}{\mathrm{dt}}\) ∝ N ⇒ N\(\frac{\mathrm{dN}}{\mathrm{dt}}\) = -λ dN = -λNdt ………….. (1) The proportionality constant λ is called decay constant or disintegration constant. The negative sign indi cates the decrease in the number of nuclei. 4. From eq. (1) \(\frac{\mathrm{dN}}{\mathrm{N}}\) = – λ dt …………. (2) 5. Integrating on both sides \(\int \frac{\mathrm{dN}}{\mathrm{N}}=-\lambda \int \mathrm{dt}\) ln N = -λt + C ………… (3) Where C = Integration constant. 6. At t = 0 N = N[0] Substituting in eq. (3), we get ln N[0] = C ∴ ln N = -λt + ln N[0] ln N – ln N[0] = -λt ln (\(\frac{N}{N_0}\)) = -λt ∴ N = N[0]e^-λt The above equation represents radioactive decay law. 7. It states that the number of radioactive nuclei in a radioactive sample decreases exponentially with time. The half life period of radium s 1600 years. How much time does lg of radium take to reduce to 0.125 g. Half life of radium = 1600 years Initial mass = 1g Final mass = 0.125 g The quantity remaining after ‘n’ half lifes is \(\) of the initial quantity. In this problem, \(\frac{1}{2^n}\) = \(\frac{\text { quantity remaining }}{\text { Initial quantity }}\) \(\frac{1}{2^n}\) = \(\frac{1}{4}+\frac{1}{v}=\frac{1}{\mathrm{f}}\) ∴ n = 3 ∴ Time taken = ‘n’ half-lifes t = nx^t 1/2 = 3 × 1600 = 4,800 years Leave a Comment
{"url":"https://apboardsolutions.com/ts-inter-2nd-year-physics-question-paper-may-2018/","timestamp":"2024-11-05T02:52:48Z","content_type":"text/html","content_length":"92562","record_id":"<urn:uuid:9c2ca47e-57c9-4783-bf36-b0b878fe6702>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00189.warc.gz"}
Custom color paint is mixed by adding $1441_w9_h37.png$ gallon of coloring to $1$ gallon of white base. A room requires $4592_w19_h37.png$ gallons of custom color paint. How many gallons of coloring are needed to mix with $3176_w19_h37.png$ gallons of white base? 1212If one gallon requires a certain amount of coloring, So, this is a multiplication problem: multiply the amount of coloring needed for 1 gallon by Convert You could also use a proportion to solve this problem, but it is not as simple as the above method.
{"url":"https://passemall.com/question/custom-color-paint-is-mixed-by-adding-1441w9h37png-gallon-of-coloring-to-5373682155257856/","timestamp":"2024-11-04T14:32:26Z","content_type":"text/html","content_length":"92028","record_id":"<urn:uuid:c5ce569b-24b5-4256-bde3-4ecacd42db7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00542.warc.gz"}
Damghan University Press Iranian Journal of Astronomy and Astrophysics 2322-4924 8 1 2021 03 01 Dust Acoustic Shock Waves in Strongly Coupled Dusty Plasmas with Superthermal Electrons and Ions 1 13 218 10.22128/ijaa.2021.482.1104 EN Hamid Reza Pakzad Department of Physics, Ferdowsi University of Mashhad, 91775-1436, Mashhad, Iran 0000-0001-8514-3155 Journal Article 2021 12 12 We use reductive perturbation method and derive the Kordeweg-de Vries-Burgers (KdV-Burgers) equation in coupled dusty plasmas containing ions and electrons obeying superthermal distribution. We discuss the effect of the plasma parameters on the shock wave structure. It is simply shown how soliton profile is converted into shock structure when the coupling force increases. In fact as long as the dispersive term and the dissipative term as well as the nonlinear term are balanced, the shock wave structure forms; otherwise, the soliton forms due to the balance between the dispersive term and the nonlinear term. We show that the effect of superthermal electrons is more influence in comparison with the superthermal ions on the behavior of the shock waves. It is also seen that increasing relative density ( ) decreases the amplitude of shock wave except for very small value of . Our investigation is of wide relevance to astronomers and space scientists working on interstellar space plasmas. https://
{"url":"https://ijaa.du.ac.ir/?_action=xml&issue=55","timestamp":"2024-11-06T04:30:48Z","content_type":"application/xml","content_length":"19623","record_id":"<urn:uuid:f1ee6338-06a8-454a-9121-5fbd73b1f39e>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00536.warc.gz"}
Equalizing IIR filters for a constant group delay. In the previous post, I designed an audio equalizer. In that design, I used 3 different FIR filters to split 3 different frequency bands in order to send them to different speakers. Very low frequencies will be sent to the sub-woofer, medium frequencies to the woofer, and high frequencies to the tweeter. FIR filters are a good option to perform this kind of processing since, having their symmetric coefficients, they delay all the frequencies the same amount of time, N/2, so if we use 3 FIR filters with the same number of taps, all the frequencies of our system will have the same delay, that means that our system will be a delayed linear system (the input frequencies are present in the output signal at every moment), I other words, the out filter has linear phase. On the other side, in order to achieve high attenuation slopes, we need high-order FIR filters. Is common in signal processors the capability to implement high-order FIR filters, for example, the Analog Devices SHARC processors can implement up to 1024th-order FIR filters. To achieve the desired attenuation with a lower-order filter we can use IIR filters instead of FIR. IIR filters have the advantage that we can achieve high attenuation with a significantly lower order than FIR filters. As we have seen in this post, this kind of filter has poles and zeroes, so they can be unstable in certain cases. In this post, we won’t focus on the stability, since we assume that the filter and its quantification are stable, but in its group delay. The group delay can be described as the delay for each frequency, knowing that a signal is an addition of several sinus signals with different frequencies. When we are working with FIR filters, all sinusoidal components of the signal spend the same amount of time to pass through the filter, this makes the mix of frequencies at every moment will be the same at the input and the delayed output, the system is linear. In the case of IIR filters, this characteristic is not fulfilled, so if our signal information is composed of several frequencies, the use of this kind of filter will be discouraged. There are several cases where the information is composed of different frequencies, one of them is the communications lines, where we can have different bands on the same channel and we need synchronism between them. Another case is when we are processing audio signals, where a sound produced by different instruments has different frequencies, but we need to listen to all of them at the same time. In order to make that possible, we can use FIR filters, or we can use equalized IIR filters, to achieve linear phase at least in the pass band, using all-pass filters. If we see the frequency response of an all-pass filter, we will see a constant magnitude of 0 dB in all frequencies, and a phase that goes from 0 to 2pi. This kind of filter is used to change the phase of a signal keeping its amplitude. As we said in the previous post, the group delay is the derive of the phase, so if we can change the phase of the signal, we can change its group delay, and in some cases, configuring this all-pass filters correctly, we can achieve systems that have linear phase. To do this, we are going to create a chain of biquad filters, where some of them are in charge to filter and change the magnitude of the signal, and others will correct the phase delay. For this example, we will design a pass-band filter to the band from 4 to 8 kHz. Configuring the all-pass filter to obtain a linear phase in the pass band can be very tedious, so for this post, I will use the tool Filter designer tool from the company Advanced Solutions Nederland B.V. that allows doing this configuration graphically. In order to continue with the previous post, we will redesign the medium frequency band, this time using an IIR filter. As this filter can achieve greater attenuation than low-order FIR filters, we will define our filter as a band-pass filter a little bit more restrictive, with a pass band from 500hz to 5kHz, and an attenuation of at least 27 dB at 200 Hz and 8kHz. First, we need to configure the band-pass filter. We can do this by entering the frequency values on the table, or by moving the points in the bode diagram. The configuration of the filter is the On the left of the window, we can see the position of the filter’s poles and zeroes. At this point, we can see how the group delay on the pass band between around 40 and 5 samples. To verify the response of the filter, we can do it inside the application by opening the built-in signal analyzer. The signal we will use to verify the response will contain 2 harmonics in 2 kHz and 4 kHz. We can see the response in the next figure. We will notice that the shape of the output signal has changed with respect to the input signal. This effect is due to the different delay that each signal experiment. To improve this response, we need to configure the all-pass filters, To do this, there are several methods based on mathematics, and also come methods based on machine learning. In this example, we will use a powerful feature of this software. On the top of the window, we can click over the 3 points to enable the addition of all-pass filters biquad to our system. Now on the bode diagram, we can add points that are corresponding with pole-zeroes pairs of all-pass biquad filters, which will change the phase of the filter. In this case, I have added 3 pairs, that are corresponding with 3 biquads, and the group delay variation will decrease significantly in the pass band. On the other side, the global delay has been increased due to the addition of 3 more stages to the filter. The order of the final system will be the order of the initial passband filter added to 2 times the number of biquads added for the equalization. In the new response, we can see that the signal is inverted with respect to the input, but if we focus on the shape of the signal, we will see that is very similar to the input signal. That means that the components of the signal are the same at every moment. With the help of MATLAB we can apply an inversion to the signal, and also correct the phase delay to verify the correct behavior. These corrections can be done offline, but when we use this kind of filter within an audio system, the shape of the signal is the only one that matters to produce the corresponding sound. Regarding the implementation in the FPGA, we have to add to the block design both I2S transmitter and receiver, Slice and Concat blocks to match the widths and the 8 biquads, 5 for the band-pass filter and 3 for the all-pass filter. In order to make easiest the introduction of the coefficients, I have generated a new biquad module where the coefficients are parameters, and the multiplications are registered `axis_biquad_v1_1.v^. Complete block design is shown in the next image, with the biquads grouped in 2 hierarchy blocks. Regarding the resources used, for 20-bit width coefficients and 24-bit width for the input and output, the amount of DSP blocks used is 66, that is a high number for only equalize one band. The use of the high amount of DSP Slices unregistered in series has other problems. DSPs are located in some place of the FPGA, more precisely in columns. If we want to use a lot of them and connect them in series, we will generate a very long path between the input of the first DSP Slice and the output of the last DSP Slice, which will report timing errors. To avoid that we need to register the outputs of the DSP Slices, and notice that an extra delay will be added. As point, we have seen how we can correct the group delay of a filter by adding all-pass biquads to the system. For some systems where synchronization is critical, like communication systems, we have no choice, and we have to use linear systems, but the goal of this post is to design an audio system. If we analyze the variation of the group delay in the pass band of the non-equalized filter, we can see a delta of 40 samples between the fastest frequency to the slowest. In a system with a sampling frequency of 44 ksps, these 40 samples represent less than 1cms of delay. On the other side, regarding the implementation, increasing the number of biquads, also increase the number of resources used in the FPGA. So, need we to make the filters linear in audio systems? As always, it depends, but in general, delays of 1 ms in audio frequencies can be acceptable, even delays of 10 ms for the lower frequencies can be acceptable without quality loss. Related posts
{"url":"https://www.controlpaths.com/2021/07/12/equalizing-iir-filters-for-a-constant-group-delay/","timestamp":"2024-11-04T22:05:01Z","content_type":"text/html","content_length":"49940","record_id":"<urn:uuid:4911c230-2692-4564-81eb-767c9f789738>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00558.warc.gz"}
There are many misconceptions surrounding the relationship between risetime and bandwidth, in oscilloscopes. These misconceptions extend to time-domain reflectometer (TDR) instruments in the form of questions about spatial resolution of impedance profile plots. Things get even more confusing when thinking of impedance profile plots generated from instruments like the vector network analyzer (VNA). This paper will clarify the thinking about these concepts. The Relationship Between Risetime and Bandwidth In oscilloscopes, based on the stated bandwidth, there is an expectation on the risetime of the instrument, as measured when a very fast step is applied to the instrument. This expectation is expressed as a multiplier m, such that bandwidth · risetime = m, where m = 0.35 is most commonly used. It turns out that this multiplier is valid only under a special case. Therefore, it is important to understand the source of this expectation. Various threshold crossing times are tabulated in table 1. For a single-pole system, the step response, while starting at zero, never actually reaches its final value, as the response is infinite in duration. Therefore, on the first row in table 1, one can see that it crosses the threshold at zero time, but never crosses the threshold at 1. second and third rows are the most important when talking about risetime. Usually, risetime is specified as the time it takes the signal to traverse either the 10% and 90% thresholds (the 10–90 risetime), or the 20% and 80% thresholds (the 20–80) risetime, with the 10–90 risetime being the most important. WavePulser 40iX Risetime While the absolute risetime of the TDR is indicative of the frequency content in the TDR pulse, it is only the frequency content relative to the noise floor that is really important for the quality of the measurements made insofar as dynamic range is concerned. [1] Otherwise, it has no effect on the effective risetime implied by the s-parameters nor on the spatial resolution of the instrument. That being said, the WavePulser 40iX utilizes an impulsive stimulus, whose typical characteristics are shown in figure 2. The incident impulse is shown in figure 2a, and the spectral content of the impulse is shown in figure 2b, where it is seen to have an essentially flat content of -62dBm/GHz out to 40GHz. The wiggles in the response are due to the small impedance discontinuity of the launch, which is seen at around 300ps in figure 2a. It is important to note that the frequency content in figure 2b is actually the product of the pulser frequency content and the frequency response of the sampler, so the actual frequency content of the impulse is probably higher. Because the WavePulser employs an impulsive stimulus, it has higher dynamic range at high frequency, but it must be integrated for comparison with a traditional TDR. The integrated step response is shown in figure 2c and zoomed in figure 2d. In figure 2c, it is seen to reach a nominal amplitude of 2pV·s.^1 In figure 2d, markers are placed at the 20% point and the 80% point. The 20% threshold is 0.4pV·s, which is reached at -2.685ps and the 80% threshold is 1.6pV·s, which is reached at 4.545ps for a 20–80 risetime measurement of 7.23ps. Multiplying this risetime by the 40GHz instrument end frequency gives 0.289, which is higher than for the single-pole system shown in the third row of table 1. This is because the response characteristic drops more sharply than a single-pole response, as seen in figure 2b; the response characteristic affects the multiplier. Despite the fast risetime of the WavePulser, the time-domain plots generated by the instrument are calculated from the measured s-parameters in much the same manner as a VNA would generate them. In the case of the VNA, the DC point is extrapolated, which sometimes causes problems. The WavePulser directly measures the DC point, which is critical to proper time-domain analysis. Because the time-domain plots are generated from calibrated s-parameter measurements, the frequency response shape from which the time-domain waveforms are generated can be considered to have unity gain up to the end frequency and then ends abruptly. Brick-wall Limited Systems In high-end oscilloscopes (i.e., those with bandwidths exceeding 15GHz), the frequency response rolls off very quickly after the 3dB bandwidth is reached, which affects the multiplier that relates bandwidth to risetime. The most extreme situation is the brick-wall limited system. A brick-wall system is one that passes frequency content with unity gain right up to a given frequency and no content after that frequency. It turns out that a brick-wall system is exactly what one obtains when examining time-domain effects from s-parameters, whether measured using a VNA or TDR. When considering s-parameters, the end frequency defines an effective sample rate for the time-domain waveforms. [2] Thus, for a given end frequency Fe, the effective sample rate is Fs = 2·Fe (i.e., the end frequency is considered the Nyquist rate) and the sample period is Ts = 1/Fs = 1/(2·Fe). A perfect system would have a step response that rises from 0 to 1 in exactly one sample. Thus, given s-parameters to 40GHz (an effective sample rate of 80GS/s), the step would rise in one sample period, or 12.5ps. One might think that this, therefore, is the effective risetime, but this is not the whole picture. Using a normalized end frequency of 0.5, allows for the various threshold crossing times and risetimes to be calculated in samples, where the sample period is calculated from the end frequency as previously described. Such a normalized system is shown in figure 3, where figure 3a shows the frequency response, and figure 3b shows the step response. These results are tabulated in table 2. There are several things to note: • The multiplier used in the 10–90 bandwidth–risetime relationship is 0.446, as opposed to the commonly stated 0.35 multiplier for single-pole systems. • The multiplier used in the 20–80 bandwidth–risetime relationship is 0.317, as opposed to the 0.221 multiplier in single-pole systems. • The WavePulser multiplier of 0.289 for the 20–80 risetime is somewhere in between the single-pole system and brick-wall system. • Both the 10–90 and 20–80 risetimes are less than one sample. One might erroneously think that the brick-wall system provides the worst case multiplier to be used, but it does not. This is because of the phase response of the system. The phase of the brick-wall response is linear phase, while most responses, such as the single-pole system, are minimum phase. This is the topic of another discussion. [4] Spatial Resolution and Propagation Velocity The relationship between TDR risetime and spatial resolution is recommended by the IPC^3 Test Methods Manual [5] for measuring the characteristics of lines on printed circuit boards by TDR. In the manual, the temporal resolution is defined as half the 10–90 risetime of the instrument. In order to convert this to physical length, the propagation velocity must be known. Since this document was written, however, it is more common for controlled impedance traces to be constructed as either stripline, or covered microstrip with advanced laminates. A stripline example with an advanced laminate is provided in the table 3, which gives the WavePulser a 1× resolution of 0.870mm. For this example, a trace length in excess of 3.479mm is recommended for accurate impedance measurements. As the IPC document points out, TDR effects other than spatial resolution must be considered in making impedance measurements, including ringing and other aberrations. Certainly, measurements that result from converting frequency-domain s-parameters to time-domain impedance profiles will exhibit these effects because of the nature of sin ¹xº/x interpolation. Fortunately, this can be mitigated within the WavePulser instrument by specifying a risetime to apply to the time-domain measurements. To illustrate this, the open-source software SignalIntegrity [6] is used to simulate TDR waveforms applied to a 40Ω 1× and 4× structure with applied risetimes ranging between 0 and 20 ps, shown in figure 4. The 1× and 4× structure simulation is shown in figures 4a and 4b, respectively. It is important to understand that the risetimes applied in the simulations do not by themselves determine any spatial resolution. In other words, 0ps risetime applied to 40GHz s-parameters will have the spatial resolution of 40GHz s-parameters. In some sense, the risetime of the applied step in the simulation adds in quadrature with the risetime inherent to the s-parameters. In order to see the effects of the incident step risetimes more clearly, zooms of the simulation waveforms are shown in figures 4c and 4d. In both of these plots, markers are placed at the correct measurement points; two vertical lines are placed at the beginning and ending time location of the 40Ω discontinuity, and a horizontal line marks the correct 40Ω impedance. In figure 4c, it is clear that the time boundaries of the measurement along with the actual impedance measured are incorrect, however the instrument has no problem time locating and, for the most part, measuring the discontinuity. The error is 2Ω with 0ps incident risetime. The instrument measures half the difference between 40Ω and the 50Ω discontinuity using 20ps incident risetime. While this error is not insignificant, it allows the instrument to identify the impedance change. It is the opinion of the author that the 20ps risetime setting measures the impedance properly within the spirit of temporal or spatial resolution for the 1× structure, given that the 4× structure is recommended to make the precise impedance measurement. This is further demonstrated by examining figure 4d, which is on the same scale as figure 4c. Here, it is seen that the time boundaries of the structure are properly measured with all incident risetimes, but the 20ps risetime setting shows the 40Ω impedance perfectly measured at 200ps with no aberrations present in the waveform. Bandwidth and risetime are related in a manner that depends on the thresholds used to measure the risetime and in the shape of the magnitude and phase response of the system. Spatial resolution of TDR instruments depend on the risetime, but for instruments that compute time-domain impedances from s-parameters, the dependency is actually on the end frequency. The WavePulser 40iX measures from true DC to 40 GHz, giving it an effective 10–90 risetime of 11.15ps, although the actual risetime is much faster. This provides the ability to resolve impedances that are 5.575ps in electrical length, which is approximately 1mm of resolution for microstrip on FR4, and < 1mm in more common situations. Adjusting the incident risetime in the time-domain measurement results, such as the impedance profile traces, reduces ringing and aberrations and improves impedance measurements.
{"url":"https://de.teledynelecroy.com/doc/rise-time-and-spatial-resolution","timestamp":"2024-11-13T14:42:17Z","content_type":"text/html","content_length":"48062","record_id":"<urn:uuid:d424f6e5-87fc-4f2c-868d-7dfdee210b46>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00501.warc.gz"}
Linear Equations With Brackets Worksheet - Equations Worksheets Linear Equations With Brackets Worksheet Linear Equations With Brackets Worksheet – The aim of Expressions and Equations Worksheets is to aid your child to learn more efficiently and effectively. These worksheets are interactive and challenges that are based on the sequence of operations. Through these worksheets, kids can grasp both simple as well as complex concepts in brief amount of time. These PDF resources are free to download and could be used by your child to learn math concepts. These resources are beneficial for students in the 5th to 8th grades. Download Free Linear Equations With Brackets Worksheet Some of these worksheets are designed for students from the 5th-8th grades. These two-step word problems are constructed using decimals or fractions. Each worksheet contains ten problems. They can be found at any print or online resource. These worksheets are an excellent method to learn how to reorder equations. These worksheets can be used to practice rearranging equations , and also help students understand equality and inverse operations. These worksheets are suitable for use by fifth and eighth graders. These worksheets are ideal for students who have difficulty learning to compute percentages. There are three types of problems you can choose from. You can select to solve single-step issues that involve decimal or whole numbers or use word-based methods to do fractions or decimals. Each page is comprised of 10 equations. These worksheets for Equations are suggested for students from 5th to 8th grade. These worksheets are an excellent tool for practicing fraction calculations and other concepts related to algebra. You can select from a variety of different kinds of problems that you can solve with these worksheets. You can pick a word-based problem or a numerical. It is crucial to select the correct type of problem since every challenge will be unique. Each page contains ten problems which makes them an excellent aid for students who are in 5th-8th grade. These worksheets help students understand the relationship between numbers and variables. These worksheets allow students to practice with solving polynomial equations or solving equations, as well as learning about how to use them in everyday life. If you’re looking for an excellent educational tool to discover the basics of equations and expressions and equations, start by looking through these worksheets. They will assist you in learning about the different types of mathematical issues and the various kinds of symbols used to express them. These worksheets can be extremely helpful for children in the first grade. These worksheets can teach students how to solve equations as well as graph. The worksheets are great for practicing polynomial variable. These worksheets will assist you to simplify and factor the process. There are plenty of worksheets to aid children in learning equations. The best way to get started learning about equations is to do the work yourself. There are many worksheets to help you understand quadratic equations. Each level has its own worksheet. These worksheets are a great way to test your skills in solving problems up to the fourth degree. After you’ve solved a step, you’ll be able proceed to solve other kinds of equations. After that, you can focus at solving similar-level problems. For example, you can solve a problem using the same axis in the form of an extended number. Gallery of Linear Equations With Brackets Worksheet Solving Equations With Brackets Teaching Resources MEDIAN Don Steward Mathematics Teaching Expanding Brackets Quadratic Solving Equations Involving Brackets Maths Worksheet And Answers GCSE Leave a Comment
{"url":"https://www.equationsworksheets.net/linear-equations-with-brackets-worksheet/","timestamp":"2024-11-09T05:48:27Z","content_type":"text/html","content_length":"65792","record_id":"<urn:uuid:30c46846-a22c-49e9-ac2f-1fcb3191ca24>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00115.warc.gz"}
I need sparknotes for online deposit bonuses. It seems like online casino deposit bonuses are purposefully confusing/misleading. I gave up trying to figure out Bovada's system after reading all of their literature, running a couple hundred through some games trying to make sense of it, and ultimately coming out more confused. Even for the simpler systems I feel like there is something I must be missing..... Example: Online Casino X offers 50% bonus play that is unlocked after your buy-in/bonus is wagered 30 times (for table games/VP). So a $100 deposit would net a $150 bankroll that can be cashed out after $4,500 coin-in. Say I'm playing BJ or 9/6 JOB poorly at a 99% return. Wouldn't that put the EV for a $100 deposit at $105+comps? Sorry if I am not understanding something obvious, but a quick explanation of the popular bonus systems would be very helpful. • Threads: 169 • Posts: 22520 Joined: Oct 10, 2012 Quote: gamerfreak It seems like online casino deposit bonuses are purposefully confusing/misleading. I gave up trying to figure out Bovada's system after reading all of their literature, running a couple hundred through some games trying to make sense of it, and ultimately coming out more confused. Even for the simpler systems I feel like there is something I must be missing..... Example: Online Casino X offers 50% bonus play that is unlocked after your buy-in/bonus is wagered 30 times (for table games/VP). So a $100 deposit would net a $150 bankroll that can be cashed out after $4,500 coin-in. Say I'm playing BJ or 9/6 JOB poorly at a 99% return. Wouldn't that put the EV for a $100 deposit at $105+comps? Sorry if I am not understanding something obvious, but a quick explanation of the popular bonus systems would be very helpful. Sounds good, but don't forget to add in any processing fees. BV was a good place to play until they added in makeup on the wagering requirements even if you lost. Last edited by: AxelWolf on May 4, 2017 ♪♪Now you swear and kick and beg us That you're not a gamblin' man Then you find you're back in Vegas With a handle in your hand♪♪ Your black cards can make you money So you hide them when you're able In the land of casinos and money You must put them on the table♪♪ You go back Jack do it again roulette wheels turinin' 'round and 'round♪♪ You go back Jack do it again♪♪ Quote: gamerfreak It seems like online casino deposit bonuses are purposefully confusing/misleading. I gave up trying to figure out Bovada's system after reading all of their literature, running a couple hundred through some games trying to make sense of it, and ultimately coming out more confused. Even for the simpler systems I feel like there is something I must be missing..... Example: Online Casino X offers 50% bonus play that is unlocked after your buy-in/bonus is wagered 30 times (for table games/VP). So a $100 deposit would net a $150 bankroll that can be cashed out after $4,500 coin-in. Say I'm playing BJ or 9/6 JOB poorly at a 99% return. Wouldn't that put the EV for a $100 deposit at $105+comps? Sorry if I am not understanding something obvious, but a quick explanation of the popular bonus systems would be very helpful. A few problems with some bonuses. 1. Getting paid is sometimes nearly impossible 2. Some bonuses are not cashable. I had a casino years ago where I put in 200 for a 200 bonus on slots only. I hit a mini jackpot of $800 or so. I tried cashing out and forfeiting the bonus. They would not allow me to and I ended up losing almost all of it getting to the min wagering. 3. Variance can kill your bankroll and your bonus. If you put in 100 pfor 50 bonus. There is a decent chance that you bust before getting the bonus. Expect the worst and you will never be disappointed. I AM NOT PART OF GWAE RADIO SHOW Many casinos don't allow VP or table games on deposit bonuses. Those that do allow increase the play through needed before you can cash out. If you can find an honest casino that allows vp or table games at 30 times wagering you should jump at the offer. 50-50-90 Rule: Anytime you have a 50-50 chance of getting something right, there is a 90% probability you'll get it wrong The 30% play through is on both the deposit and bonus so it would be 9000 play through not 4500 50-50-90 Rule: Anytime you have a 50-50 chance of getting something right, there is a 90% probability you'll get it wrong Quote: vegas The 30% play through is on both the deposit and bonus so it would be 9000 play through not 4500 I took that into account, and also adjusted the numbers for how the casino credits VP and BJ play. 30 x $150 = $4500 I used different (but equivalent) numbers from the actual offer that I PM'd you so it would be easier to explain. Quote: GWAE 3. Variance can kill your bankroll and your bonus. If you put in 100 pfor 50 bonus. There is a decent chance that you bust before getting the bonus. Right, I understand the ROR doing this offer with only $100 is very high. I just picked and easy number to explain the math.
{"url":"https://wizardofvegas.com/forum/gambling/online/28771-i-need-sparknotes-for-online-deposit-bonuses/","timestamp":"2024-11-13T22:45:41Z","content_type":"text/html","content_length":"57030","record_id":"<urn:uuid:5827eb6c-c3ce-4ac9-a712-5b904a7fc57d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00223.warc.gz"}
Mathematical Sciences Research Institute Minimal Torsion Curves in Geometric Isogeny Classes February 02, 2023 (02:00 PM PST - 03:00 PM PST) Speaker(s): Abbey Bourdon (Wake Forest University) Location: SLMath: Eisenbud Auditorium, Online/Virtual Primary Mathematics Subject Classification Secondary Mathematics Subject Classification No Secondary AMS MSC Minimal Torsion Curves In Geometric Isogeny Classes Let $E/\mathbb{Q}$ be an elliptic curve. By Mordell's 1922 theorem, the points on $E$ with coordinates in $\mathbb{Q}$ form a finitely generated abelian group. In particular, the torsion subgroup of $E(\mathbb{Q})$ is a finite abelian group, and the groups that occur as $E(\mathbb{Q})_{\text{tors}}$ are known due to work of Mazur in 1977. The past decade has seen a renewed interest in studying torsion points on rational elliptic curves: from identifying the torsion subgroups that can arise on $E/\mathbb{Q}$ under base extension to a field of higher degree to a near complete classification of the image of $\ell$-adic Galois representations associated to elliptic curves over $\mathbb{Q}$. In this talk, we will discuss recent results which leverage this knowledge to begin to understand a new class of elliptic curves, namely, those geometrically isogenous to an elliptic curve defined over $\mathbb{Q}$. Motivated by the problem of producing low degree points on modular curves, we seek to characterize the elliptic curves within a fixed geometric isogeny class producing a point of prime-power order in least possible degree. Supplements No Notes/Supplements Uploaded Video/Audio Files Minimal Torsion Curves In Geometric Isogeny Classes
{"url":"https://legacy.slmath.org/workshops/976/schedules/32957","timestamp":"2024-11-12T16:37:17Z","content_type":"text/html","content_length":"36686","record_id":"<urn:uuid:82e6a0ba-7385-4374-a989-a0f00341c700>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00488.warc.gz"}
This is ancient version of left here for archival purposes. All calculators should still work, but you can't be sure. We also don't update them anymore, so expect outdated info. We suggest to move to new version of Calculla.com by clicking the link: Speed (velocity) Speed (velocity) units converter - converts units between metric (kilometres per hour, meters per second and many more), british-american (miles per hour, foot per second and many more), nautical (knots) and some other (machs, speed of light etc.) Some facts • Velocity is a vector size. • It specifies the change of the position vector in time. The concept of velocity was formalized with the development of calculus. Since then, the velocity is defined as the position vector derivative with time i.e. v = dr / dt, where v is a velocity vector r position vector and t is a time. • In common parlance - when we use the word speed - we normally refer to the scalar size, representing the value of the velocity vector (its "length"). • The velocity by definition only applies to singl point in time. Sometimes, in order to emphasize this fact (and rule out a possible confusion with the average speed) it is called instantaneous • There is also concept of average velocity, which is ratio of distance to time, in which this distance has been traveled. • Average velocity is sometimes colloquially called speed, but it is not a phrase used by physicists. • The basic unit of velocity in the SI system is m/s (meters per second). • According to Einstein's theory of relativity the highest attainable speed in nature is the speed of light amounting to 299 792 458 m/s. □ The speed of light constant exists in many physical formulas e.g. equation desribing the equivalence of energy and mass E=mc^2. □ Einstein's special theory of relativity gives a more general sense of the speed of light as limit velocity of energy transport (or otherwise velocity of impact) in the universe.. □ The light is electromagnetic wave with the frequency that is visible to the human eye. However, the speed of light concerns to all of electromagnetic waves and does not depend on their frequency. This means that for example radio or wifi signals are transmited with the speed of light. • Other common velocity constants are for example: □ First cosmic velocity - the smallest horizontal velocity to be given to the body relative to the celestial body attracts them to the body is moved along a closed orbit. In other words, it is the speed needed to became a satellite. □ Second cosmic velocity - the velocity needed to "break free" from the gravitational attraction of the given orb (for example Earth).. □ Third cosmic velocity - the initial velocity which a body has to have to leave the Solar System. □ Fourth cosmic velocity - the initial velocity needed to leave the Milky Way. How to convert? • Enter the number to field "value" - enter the NUMBER only, no other words, symbols or unit names. You can use dot (.) or comma (,) to enter fractions. • Find and select your starting unit in field "unit". Some unit calculators have huge number of different units to select from - it's just how complicated our world is... • And... you got the result in the table below. You'll find several results for many different units - we show you all results we know at once. Just find the one you're looking for. value: unit: decimals: metric miles per hour [mph] miles per minute kilometres per hour [km/h] miles per second [mps] kilometres per minute [km/min] foot per hour [fph] kilometres per second [km/s] foot per minute [fpm] metres per hour [m/h] foot per second [fps] metres per minute [m/min] inch per hour [iph] metres per second [m/s] inch per minute [ipm] inch per second [ips] furlong per fortnight speed of light in vacuum [c] speed of sound in air mach [M] knot [kn] Links to external sites (leaving Calculla?) Tags and links to this site Tags to Polish version:
{"url":"http://v1.calculla.com/velocity?menuGroup=Health","timestamp":"2024-11-12T22:54:22Z","content_type":"application/xhtml+xml","content_length":"71024","record_id":"<urn:uuid:10b92ff0-a89e-41b9-bc12-069519d7f440>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00386.warc.gz"}
Acceleration of an Elevator, Hydraulic An educational, fair use website Acceleration is defined as the rate of change of velocity with respect to time. It is measured in SI units by meter/second^2. One common unit of acceleration is known as g, the acceleration due to gravity of the Earth. An accelerometer is an instrument used to measure acceleration and the effects due to gravity. Materials Used: • low-g accelerometer • LabPro • laptop • masking tape Experiment 1 In this experiment we rode the elevator at Midwood High School and using an accelerometer that was connected to the laptop through the LabPro.We zeroed the accelerometer and let the Logger Pro software collect the acceleration of the elevator. It collected the elevator's acceleration at 0.1 second intervals for a total of 20 seconds. In this experiment we started collecting the data on the 3rd floor as the elevator travelled down to a stop in the basement. The results are compiled in this table. • The acceleration vs. time graph shows that the peak acceleration of 0.64 m/s^2 was reached at 1.9 s, dropped to 0 m/s^2 while the elevator was traveling at a constant speed, and decelerated to 0.71 m/s^2 at 18.9 s until the elevator came to a rest. • We applied the integral function to the acceleration graph to graph the velocity vs. time graph. • The velocity vs. time graph shows that the elevator started from rest and accelerated in the downward position reaching a peak speed of 0.820 m/s at 6.1 s, until it reached a constant velocity. When we reached the basement the graph shows that the elevator decelerated and came to a stop. • We again applied the integral function to the velocity graph to graph the displcement vs. time graph. • The displacement vs. time graph shows that the elevator started from rest and accelerated in the downward position a distance of 14.309 m, the distance from the 3rd floor to the basement of Midwood High School. Experiment 2 In this experiment we rode the elevator at Midwood High School, but instead of going down, we decided to go up.We zeroed the accelerometer and let the Logger Pro software collect the acceleration of the elevator. It collected the elevator's acceleration at 0.1 second intervals for a total of 20 seconds. In this experiment we started collecting the data on the 2nd floor as the elevator travelled up to a stop at the 4th floor. The results are compiled in this table. • The acceleration vs. time graph shows that the elevator decelerated from rest to 0.66 m/s^2, then did not accelerate until it reached a peak acceleration of 0.74 m/s^2. • When we graphed the velocity, the graph showed that speed increased, reached a constant value for a few seconds, then increased and kept increasing even when the elevator stopped. Therefore, we adjusted our speed values by utilizing the function • Using our adjusted velocity values, we produced a new graph which showed that the speed increased, remained constant and then dropped to zero when the elevator came to a rest. • The distance vs. time graph derived from utilizing the integral function for our adjusted velocity graph showed that the total displacement the elevator travelled was 8.176 m, about two floors of our school building. Olga Strachna, Diana Kuruvilla, Dorothy Soo -- 2005
{"url":"https://hypertextbook.com/facts/2005/elevator.shtml","timestamp":"2024-11-03T20:24:49Z","content_type":"text/html","content_length":"25206","record_id":"<urn:uuid:1d9c1696-7efc-455b-bc0f-832755e72472>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00560.warc.gz"}
How to Calculate Sales Quantity Variance? (Definition, Formula, and Example) - CFAJournal How to Calculate Sales Quantity Variance? (Definition, Formula, and Example) A sales volume variance arises when there is a deviation between the actual number of units sold and the estimated number of units expected to be sold. This could be due to various reasons, as listed below: • Changes in the price of a substitute good would affect the demand for goods of the company as per the law of demand. • Changes in the price of complementary goods would also affect the demand for goods. • Reduced quality of goods would harm customer loyalty and negatively affect the demand for goods. • Demand for a particular good can also be affected by changes in trends and fashion. A sales quantity variance analysis is performed because even though it’s almost impossible for the company to achieve the targeted sales, it needs to keep improving its operations to prosper. Through the variance analysis, companies depict the reasons behind deviations between standard/ideal data and actual data. The reasons are then improved or eliminated to try harder to achieve the targeted data. A sales quantity variance is performed when an entity sells more than one commodity. It is an extension of the sales volume variance, which reflects the changes in standard contribution or profit due to variations in budgeted unit sales and actual units sold. Sales Volume Variance = Sales Quantity Variance + Sales Mix Variance Sales quantity variance can be calculated through the following formula: Marginal costing: (Budgeted sales – Sales at standard mix) Standard contribution Absorption costing: (Budgeted Sales – Sales at standard mix) Standard profit How to calculate sales quantity variance? Step#1 Find out the standard mix ratio the sales should be in. Step#2 Proportion of the actual units sold by the company in the standard mix ratio. Step#3 Apply the formula. Step#4 Add both variances. Let me illustrate these steps through an example below, DDphones Inc is a company that manufactures two products as follows: • Earphones – to be inserted inside the ears for audibility sold at a selling price of $5 • Headphones – must be worn overhead for audibility and sold at a selling price of $10 DDphones had expected to sell 3500 units of earphones and 1500 units of headphones at a standard contribution of $6 and $8, respectively. However, at the end of the year, it managed to sell 4200 units of earphones and 1800 units of headphones. Calculate the sales quantity variance. The sales quantity variance can be calculated as follows: Step#1 Standard Mix Ratio: The standard ratio of the number of earphones and headphones sold is 70:30 Earphones = 4,200 units Headphones = 1,800 units Step#2 Proportion actual sales into the standard mix: This step involves calculating the number of units of each commodity that should have been sold to achieve the targeted profit. It can be done as shown below. Actual total sales = 4,200 + 1,800 = 6,000 Sales at standard mix (earphones) = 6,000 70% = 4200 Sales at standard mix (headphones) = 6,000 30% = 1800 Step#3 Apply the formula: Calculate the sales quantity variance of both commodities separately as follows: (3,500 – 4,200) * $6 = $4,200 (F) for earphones (1,500 – 1,800) * $8 = $2,400 (F) for headphones Step#4 Add variances of both commodities: Sales quantity variance = $4,200 + $2,400 = $6,600 (F) A favorable sales quantity variance occurs when the budgeted sales are less than actual sales at the standard mix, meaning that the company earned more contributions than expected. An unfavorable quantity variance occurs when the budgeted sales are more than actual sales at the standard mix, meaning that the company earned less contribution than expected. Is Sales Volume Variance the Same as Sales Quantity Variance? Sales volume variance and sales quantity variance are two different types of metrics used when measuring the performance of a business’s sales strategy. While they are related, some distinct differences between the two terms should be understood by those evaluating them. Volume variance is a measure of how successful a company’s pricing strategy was during a particular period, based on the volume of sales achieved relative to the number of units sold. This can give insight into whether customers found the price attractive enough to purchase more or if it was too high for them to purchase. On the other hand, quantity variance measures how well a company has sold enough units regardless of price. This metric looks at how close actual sales were to expected results, which can help identify areas where improvements need to be made in either processes or products. While volume and quantity variances involve evaluating sales performance over time, they measure different aspects and provide different insights into the factors at play in any given situation. By understanding what each type can tell you about your own business you can use them together to maximize profits and gain detailed insights into customer behavior over time. Advantages of Using Sales Quantity Variance Analyses Sales quantity variance analysis is a useful tool for businesses looking to gain insight into the effectiveness of their sales strategies. This type of analysis focuses on whether or not companies are selling enough units relative to their expectations and can help them identify areas of improvement. Here are some advantages of using sales quantity variance analyses: 1. Provides an accurate picture of actual performance – Not only does this type of analysis measure how to close sales were to projections, but it also takes into account any pricing changes that may have occurred, giving a complete picture of what happened. 2. Highlights potential issues quickly – By tracking sales quantity variance over time, businesses can quickly recognize problems in their strategy or product quicker to adjust as needed. 3. Helps ensure efficient use of resources – By understanding where the demand lies and discrepancies between expected results and reality, companies can allocate resources most effectively to maximize profits. Overall, a comprehensive sales quantity variance analysis provides valuable insight into customer behavior which helps companies make better decisions about pricing strategy and product development. With the ability to react and adjust quickly to changing conditions, organizations can remain ahead of their competition and stay profitable. Limitations of Sales Quantity Variance Analysis Sales quantity variance analysis is a powerful tool for analyzing sales performance, but it has limitations. While this type of analysis can help businesses identify areas of improvement and where the demand lies, it cannot provide a complete picture of customer behavior. Some key limitations to consider when using this form of analysis include the following: 1. Does not take into account other factors – Variance analysis measures discrepancies between projected sales and actual performance based on the number of units sold, but it does not take into account any other factors that may affect buyer decisions such as discounts or promotions. 2. Not always an accurate reflection – Since variance analyses compare expected results to actual outcomes, there could be issues with the assumptions made about the demand, resulting in an inaccurate reflection of what occurred. 3. Difficult to interpret – As variance analyses rely heavily on numerical data, they can be difficult to interpret correctly without careful consideration and a thorough understanding of the metrics involved. While sales quantity variance analysis is a valuable tool for evaluating performance and customer behavior, it should not be used in isolation as it may lead to inaccurate conclusions or missed opportunities due to its inherent limitations.
{"url":"https://www.cfajournal.org/how-calculate-sales-quantity-variance/","timestamp":"2024-11-07T05:38:37Z","content_type":"text/html","content_length":"158426","record_id":"<urn:uuid:52258450-8ab1-4503-b315-4b5e3e17d3a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00058.warc.gz"}
KB TO GB CONVERSION ONLINE FREE CALCULATOR | CONVERTER IN 2024 KB TO GB CONVERSION ONLINE | Let's learn how to convert KB to GB by free kilobyte in a gigabyte online calculator in 2024. Do you ever find yourself struggling to convert KB to GB? If so, you're not alone. Many people find this task to be confusing and difficult. Enter Kilobyte/s(KB)value- = GB(in Decimal) = GB(in Binary) 1. KB to MB online converter 2. MB in GB converter 3. MB to KB converter But with a little bit of understanding, it can be a breeze. Let's take a look at how to convert KB to GB. First, let's understand what a kilobyte (KB) and a gigabyte (GB) are. A kilobyte is a unit of measurement that equals 1,000 bytes. A gigabyte, on the other hand, is a unit of measurement that equals 1,000,000,000 bytes. So, one gigabyte is equal to one thousand times one thousand kilobytes. To convert from KB to GB, you simply need to divide the number of kilobytes by 1,000,000. For example, if you have 2,000,000 KB, you would divide this by 1,000,000 to get 2 GB. Keep in mind that you can also use this method to convert from MB to GB. To do this, you would simply need to divide the number of megabytes (MB) by 1,000. So, there you have it - a quick and easy guide to converting KB to GB. With this knowledge, you'll be able to convert between these units of measurement with ease. A gigabyte (GB) is a measure of computer storage capacity. It is equal to 1,000 megabytes (MB), or 1,000,000 kilobytes (KB). A kilobyte (KB) is a measure of computer storage capacity. It is equal to 1,024 bytes. 1 byte = 8 bits. 1 Kilobyte = 1024 bytes. 1 Megabyte = 1024 Kilobytes. 1 Gigabyte = 1024 Megabytes. 1 / 1000000 = 0.000001 gigabytes in Decimal KB/1000000 = Gigabytes WHAT ARE THE BENEFITS TO CONVERT KB TO GB WITH THE CONVERTER TOOL IN 2023? Are you looking for a way to convert your KB to GB in 2023? If so, then you may want to consider using a converter tool. There are many benefits to using a converter tool, including: 1. They are easy to use. 2. They are usually free. 3. They can help you convert your files quickly and easily. If you are looking for a way to convert your KB to GB in 2023, then a converter tool may be the best option for you. There are many common uses for gigabytes (GB) and kilobytes (KB). GB are often used to measure the size of large files or data sets, while KB are more commonly used to measure the size of smaller Some common uses for GB include measuring the size of video files, audio files, and images. GB can also be used to measure the amount of storage space available on a hard drive or other storage Some common uses for KB include measuring the size of text files, web pages, and email messages. KB can also be used to measure the amount of bandwidth used by a website or internet connection. A gigabyte is often used to measure the capacity of a computer's hard drive, while a kilobyte is typically used to measure the size of a computer file. 1 MB as 1,000,000 (10^6) bytes and 1 GB as 1,000,000,000 (10^9) bytes. Networking - A Gigabit Ethernet (GbE) connection transmits data at a rate of 1,000,000,000 bits per second (1,000 Mbps). A Terabit Ethernet (TbE) connection transmits data at a rate of 1,000,000,000,000 bits per second (1,000 Gbps). A Petabit Ethernet (PbE) connection transmits data at a rate of 1,000,000,000,000,000 bits per second (1,000 Tbps). A Petabit Ethernet connection can transmit the equivalent of 250 DVDs worth of data every second. │1 kB (kilobyte) = 1000^1 B = 10^3 bytes │ │1 MB (megabyte) = 1000^2 B = 10^6 bytes │ │1 GB (gigabyte) = 1000^3 B = 10^9 bytes │ │1 TB (terabyte) = 1000^4 B = 10^12 bytes │ │1 PB (petabyte) = 1000^5 B = 10^15 bytes │ │1 gigabyte = 1 billion bytes 1 terabyte = 1 trillion bytes│ │1 KiB (kibibyte) = 10241 B = 210 bytes│ │1 MiB (mebibyte) = 10242 B = 220 bytes│ │1 GiB (gibibyte) = 10243 B = 230 bytes│ │1 TiB (tebibyte) = 10244 B = 240 bytes│ │1 PiB (pebibyte) = 10245 B = 250 bytes│ When it comes to understanding internet usage, there are two key units of measurement that are often used: kilobytes (KB) and gigabytes (GB). So, what's the difference between the two, and how can you convert KB to GB? A kilobyte is actually a multiple of the unit byte; specifically, it is equal to 1,024 bytes. A gigabyte, on the other hand, is 1,024 times larger than a kilobyte, coming in at 1,048,576 bytes. So, if you want to convert KB to GB, you simply need to divide the number of kilobytes by 1,024. For example, let's say you want to know how many GB are in 2,560 KB. To calculate this, you would divide 2,560 by 1,024, which would give you 2.5 GB. It's important to understand the difference between KB and GB when it comes to internet usage, as your internet plan is typically billed based on the amount of data you use each month. For instance, if you have a 1 GB data plan, you would be able to use 1,024 MB of data per month (which is the same as 1,048,576 KB). Keep in mind that 1 MB is actually equal to 1,024 KB, so when you see internet plans that are advertised as, say, 50 GB, they are really giving you 50,000 MB (which is the same as 51,200,000 KB). So, there you have it! A quick guide to understanding the difference between KB and GB, and how to convert KB to GB. understanding internet usage is key to choosing the right data plan, so be sure to remember these tips when looking for a plan that's perfect for you. 1. Choose a plan that meets your needs First and foremost, it's important to choose a data plan that meets your needs. If you rarely use data, then a plan with a low data cap may be sufficient. However, if you use data frequently, then you'll likely need a plan with a higher data cap. 2. Consider your usage patterns It's also important to consider your usage patterns when choosing a data plan. For example, if you often use data while traveling, you'll need a plan that offers international roaming. Alternatively, if you only use data at home, a plan with a limited nationwide data allowance may be more appropriate. 3. Review the plan's terms and conditions Before signing up for a data plan, be sure to review the plan's terms and conditions. This will help you understand the plan's restrictions and what you can and can't do. 1. Use the website's search bar to input "kilobytes to gigabytes." 2. Enter the number of kilobytes you would like to convert into gigabytes. Automatically this conversion tool converts the number of kilobytes you entered into gigabytes. There are several benefits to using a website to convert between kilobytes and gigabytes. First, it is convenient because the website can be accessed from any internet-connected device. Additionally, the website is user-friendly and easy to use, making it a great resource for people who are not familiar with the metric system. Finally, the website is updated regularly with the latest conversion rates, so users can be confident that they are getting accurate information. HOW TO CONVERT BETWEEN GIGABYTES AND KILOBYTES IN M.S EXCEL? To convert between gigabytes and kilobytes in M.S Excel, you can use the following formula: =convert(number, "GB", "KB") Where "number" is the number you want to convert, and "GB" and "KB" are the units you want to convert between. 1 MILLION KB TO GB This is a common question when dealing with large amounts of data. How many gigabytes are in a million kilobytes? A million kilobytes is actually only 1,000 megabytes (MB). So there are 1,000 MB in a GB. This means that there are 1,000,000 KB in a GB. Therefore, there are 1,000 gigabytes (GB) in a million kilobytes (KB). This can be a useful conversion to know when dealing with large amounts of data. For example, if you have a file that is 1.5 GB in size, you know that it is 1,500 MB or 1.5 million KB. There is a big difference between KB and GB when it comes to data usage. KB stands for kilobytes and GB stands for gigabytes. A gigabyte is about 1000 times bigger than a kilobyte. That means that if you have a 1 GB file, it would take up about 1000 times more space than a 1 KB file. When it comes to data usage, this difference can be significant. If you have a lot of data to use, you will want to make sure you have a plan that gives you enough GB to cover your usage. If you only have a few KB of data to use. Is a KB bigger than a GB? A kb is 1024 bytes and a GB is 1024 megabytes so a kb is not bigger than a GB. What is bigger 1 GB or 1 KB? This is a question that often comes up when discussing computer storage. The answer is actually quite simple - 1 GB is bigger than 1 KB. This is because 1 GB is equal to 1,000,000,000 bytes, while 1 KB is only equal to 1,000 bytes. So, when you are looking at storage capacity, 1 GB is definitely bigger than 1 KB. Of course, this doesn't mean that 1 KB is worthless. It just means that 1 GB can hold more data than 1 KB. So, when you are choosing a storage option for your computer, make sure to pick one that has the right capacity for your needs. How many KB is 2 GB of data? 2 GB of data is equal to 2048kb. This means that if you have 2 GB of data, you have 2048kb of data. So, that 2000000 KB is equal to 2 GB. What is the website's name? The website's name is kilobyte to gigabyte. What is the website's purpose? The website's purpose is to help you convert between kilobytes and gigabytes. How do I use the website? To use the website, simply enter the desired value into the text field and click the "Convert" button. The website will display the corresponding value in gigabytes and kilobytes. What is KB to GB calculator? If you're wondering how to convert Kilobytes to Gigabytes, you're in luck - there's a handy tool for that, and it's called a KB to GB calculator. There's a few things you need to know before you use the calculator. First, one Kilobyte is equivalent to 1,024 Bytes. Second, one Gigabyte is equivalent to 1,024 Megabytes. With that information in mind, using the KB to GB calculator is simple. All you need to do is enter the number of Kilobytes you want to convert, and the calculator will do the rest. The calculator will give you the answer in Gigabytes - all you need to do then is round up or down to the nearest whole number. And that's all there is to it! Converting Kilobytes to Gigabytes is easy with a KB to GB calculator. How to calculate 1000000 KB to GB? 1000000 kb = 1000000 bytes 1000000000 bytes = 1 GB So, 1000000 kb is equal to 1 GB Conclude - Here in this article, I have explained in full detail about the difference between KB (kilobytes) and GB (gigabytes) and how to convert KB to GB online with a free calculator in 2023 for beginners with examples as well as a chart.
{"url":"https://www.numbersintowordsconverter.in/2022/07/kb-to-gb-conversion-online.html","timestamp":"2024-11-06T02:26:00Z","content_type":"application/xhtml+xml","content_length":"312382","record_id":"<urn:uuid:6cecf0f7-97ca-4436-97e1-3f7812fc8c8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00456.warc.gz"}
Structural equation modeling Figure 1. An example structural equation model after estimation. Latent variables are sometimes indicated with ovals while observed variables are shown in rectangles. Residuals and variances are sometimes drawn as double-headed arrows (shown here) or single arrows and a circle (as in Figure 2). The latent IQ variance is fixed at 1 to provide scale to the model. Figure 1 depicts measurement errors influencing each indicator of latent intelligence and each indicator of latent achievement. Neither the indicators nor the measurement errors of the indicators are modeled as influencing the latent variables.^[1] Figure 2. An example structural equation model before estimation. Similar to Figure 1 but without standardized values and fewer items. Because intelligence and academic performance are merely imagined or theory-postulated variables, their precise scale values are unknown, though the model specifies that each latent variable's values must fall somewhere along the observable scale possessed by one of the indicators. The 1.0 effect connecting a latent to an indicator specifies that each real unit increase or decrease in the latent variable's value results in a corresponding unit increase or decrease in the indicator's value. It is hoped a good indicator has been chosen for each latent, but the 1.0 values do not signal perfect measurement because this model also postulates that there are other unspecified entities causally impacting the observed indicator measurements, thereby introducing measurement error. This model postulates that separate measurement errors influence each of the two indicators of latent intelligence, and each indicator of latent achievement. The unlabeled arrow pointing to academic performance acknowledges that things other than intelligence can also influence academic performance. Structural equation modeling (SEM) is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences but it is also used in epidemiology,^[2] business,^[3] and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself. SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.^[4] The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.^[5]^[6]^[7]^[8]^[9] SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.^[10] A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.^[11] Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables.^[12]^[13]^[14] The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book^[15] and SEM blossomed in the late 1970's and 1980's when increasing computing power permitted practical model estimation. In 1987 Hayduk^[6] provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).^[16] Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopman and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation, and closed form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Services, LISREL^[17]^[18]^[19] embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables. Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates.^[20]^[21] Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between factor analytic and path analytic traditions continue to surface in the literature. Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain.^[22]^[23] The economic version of SEM can be seen in SEMNET discussions of endogeneity, and in the heat produced as Judea Pearl's approach to causality via directed acyclic graphs (DAG's) rubs against economic approaches to modeling.^[4] Discussions comparing and contrasting various SEM approaches are available^[24]^[25] but disciplinary differences in data structures and the concerns motivating economic models make reunion unlikely. Pearl^[4] extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.^[25] SEM analyses are popular in the social sciences because computer programs make it possible to estimate complicated causal structures, but the complexity of the models introduces substantial variability in the quality of the results. Some, but not all, results are obtained without the "inconvenience" of understanding experimental design, statistical control, the consequences of sample size, and other features contributing to good research design. General steps and considerations The following considerations apply to the construction and assessment of many structural equation models. Model specification Building or specifying a model requires attending to: • the set of variables to be employed, • what is known about the variables, • what is presumed or hypothesized about the variables' causal connections and disconnections, • what the researcher seeks to learn from the modeling, • and the cases for which values of the variables will be available (kids? workers? companies? countries? cells? accidents? cults?). Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies: • which effects and/or correlations/covariances are to be included and estimated, • which effects and other coefficients are forbidden or presumed unnecessary, • and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2). The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.^[26] The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations.^[19]^[6]^[16]^[27] Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names, re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices. Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEMs latent structural connections. Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide a scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used.^[27]^[6]^[16] The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure. There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effect, and other causal loops, may also interfere with estimation.^[28]^[29]^[27] Estimation of free model coefficients Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on: a) the coefficients' locations in the model (e.g. which variables are connected/disconnected), b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear), c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables), and d) the measurement scales appropriate for the variables (interval level measurement is often assumed). A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model was correctly specified, namely if all the model's estimated features correspond to real worldly features. The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.^[27] One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the other effect being stronger than the one, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate,^[29] but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification.^[28] Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly.^[28] Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent. Model assessment This article may benefit from being shortened by the use of summary style. Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider: • whether the data contain reasonable measurements of appropriate variables, • whether the modeled case are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.) • whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that • whether the estimates are statistically justifiable, (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.) • the substantive reasonableness of the estimates, (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.) • the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a χ^2 (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small χ^2 probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model χ^2 test).^[ 30] Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification. Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence. Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data. A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification serchers must be viewed with caution."^[31] Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.^[32] "Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Anderson, and Glaser who addressed the mathematics behind why the χ^2 test can have (though it does not always have) considerable power to detect model misspecification.^[33] The probability accompanying a χ^2 test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small χ^2 probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, McCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ^2. The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazkerka-Robinson, Cummings, Levers and Beres^[34] who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ^2 testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.^[35] Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ^2 increases (and hence χ^2 probability decreases) with increasing sample size (N). There are two mistakes in discounting χ^2 on this basis. First, for proper models, χ^2 does not increase with increasing N,^[30] so if χ^2 increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, χ^2 increase with N provides the good-news of increasing statistical power to detect model misspecification (namely power to detect Type II error). Some kinds of important misspecifications cannot be detected by χ^2,^[35] so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration.^[36]^[30] The χ^2 model test, possibly adjusted,^[37] is the strongest available structural equation model test. Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency.^[35] Models with different causal structures which fit the data identically well, have been called equivalent models.^[27] Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment. This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data,^[36] but several forces continue to propagate fit-index use. For example, Dag Sorbom reported that when someone asked Karl Joreskog, the developer of the first structural equation modeling program, "Why have you then added GFI?" to your LISREL program, Joreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose."^[38] The χ^2 evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seems not to have noticed that "accepting a model" (on the basis of "satisfying" an index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model. Whether or not researchers are committed to seeking the world’s structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit, introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline’s substance. The discipline ends up paying a real costs for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.^[32] The considerations relevant to using fit indices include checking: 1. whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency); 2. whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criterion based on factor structured models are only appropriate if the researcher's model actually is factor structured); 3. whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criterion are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables); 4. whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.); 5. whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time); 6. whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler^[39] report that some common indices function inappropriately unless they are assessed together.); 7. whether a model test is, or is not, available. (A χ^2 value, degrees of freedom, and probability will be available for models reporting indices based on χ^2.) 8. and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (E.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ in the context of medical, business, social and psychological contexts.). Some of the more commonly used fit statistics include • Chi-square □ A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.^[30] • Akaike information criterion (AIC) □ An index of relative model fit: The preferred model is the one with the lowest AIC value. □ ${\displaystyle {\mathit {AIC}}=2k-2\ln(L)\,}$ □ where k is the number of of the model. • Root Mean Square Error of Approximation (RMSEA) □ Fit index where a value of zero indicates the best fit.^[40] Guidelines for determining a "close fit" using RMSEA are highly contested.^[41] • Standardized Root Mean Squared Residual (SRMR) □ The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.^[42] • Comparative Fit Index □ In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.^[42] The following table provides references documenting these, and other, features for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Confirmatory Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions.^[27] For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.^[30] Features of Fit Indices RMSEA SRMR CFI Index Name Root Mean Square Error of Approximation Standardized Root Mean Squared Residual Confirmatory Fit Index Formula RMSEA = sq-root((χ^2 - d)/(d(N-1))) Basic References ^[43]^[44]^[45] Factor Model proposed wording .06 wording?^[39] for critical values NON-Factor Model proposed wording for critical values References proposing revised/changed, ^[39] ^[39] ^[39] disagreements over critical values References indicating two-index or paired-index ^[39] ^[39] ^[39] criteria are required Index based on χ^2 Yes No Yes References recommending against use ^[36] ^[36] ^[36] of this index Sample size, power, and estimation Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients.^[27] Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances.^[32] Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators. The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data. Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.^[26] SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes.^[15] The meaning and interpretation of specific estimates should be contextualized in the full model. SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happens remains unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models. Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two effected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause.^[15] (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance.^[47] Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled.^[4]^[48] As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.^[15]^[16]^[6]^ The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by R^2, though the Blocked-Error R^2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.^[49] The caution appearing in the Model Assessment section warrants repeat. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features. Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency.^[29] The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.^[29] Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables.^[27] Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.^[6]^[29] Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients.^[50] Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.^[34]^[51]^[52]^[53] The multiple ways of conceptualizing PLS models^[54] complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R^2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle^[54] point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation. Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions—maybe it does maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses but even a randomized experiments cannot fully rule out threats to causal claims. No research design can fully guarantee causal Controversies and movements Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinates the indicators, it also checks whether that same latent simultaneously appropriately coordinates each latent’s indictors with the indicators of theorized causes and/or consequences of that latent.^[29] If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and a scale or factor-scores purporting to measure that latent is questioned. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser^[20] followed by several comments and a rejoinder,^[21] all made freely available, thanks to the efforts of George Marcoulides. These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett^[36] who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” ^[36](page 821). Barrett’s article was also accompanied by commentary from both perspectives.^[50]^[55] The controversy over model testing declined as clear reporting of significant model-data inconsistency becomes mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports.^[30] The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing “endogeneity” – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models.^[56] The comments by Bollen and Pearl regarding myths about causality in the context of SEM^[25] reinforced the centrality of causal thinking in the context of SEM. A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007),^[57] for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016)^[27] remain disturbingly weak in their presentation of model testing.^[58] Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available. An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012)^[32] discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time,^[51] but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective. Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect Extensions, modeling alternatives, and statistical kin • Categorical dependent variables • Categorical intervening variables • Copulas • Deep Path Modelling ^[59] • Exploratory Structural Equation Modeling ^[60] • Fusion validity models^[61] • Item response theory models • Latent class models • Link functions • Longitudinal models ^[62] • Measurement invariance models ^[63] • Multilevel models, hierarchical models (e.g. people nested in groups) • Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.) • Multi-method multi-trait models • Random intercepts models • Structural Equation Model Trees • Structural Equation Multidimensional scaling^[65] Structural equation modeling programs differ widely in their capabilities and user requirements.^[66] See also • Causal model – Conceptual model in philosophy of science • Graphical model – Probabilistic model • Multivariate statistics – Simultaneous observation and analysis of more than one outcome variable • Partial least squares regression – Statistical method • Simultaneous equations model – Type of statistical model • Causal map – A network consisting of links or arcs between nodes or factors • Bayesian Network – Statistical model 1. . 2. . 3. . 4. ^ ^a ^b ^c ^d ^e Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Second edition. New York: Cambridge University Press. 5. ) 6. ^ ^a ^b ^c ^d ^e ^f Hayduk, L. (1987) Structural Equation Modeling with LISREL: Essentials and Advances. Baltimore, Johns Hopkins University Press. ISBN 0-8018-3478-3 7. . 8. . 9. . 10. . 11. ^ MacCallum & Austin 2000, p. 209. 12. ^ Wright, Sewall. (1921) "Correlation and causation". Journal of Agricultural Research. 20: 557-585. 13. . 14. ^ Wolfle, L.M. (1999) "Sewall Wright on the method of path coefficients: An annotated bibliography" Structural Equation Modeling: 6(3):280-291. 15. ^ ^a ^b ^c ^d Duncan, Otis Dudley. (1975). Introduction to Structural Equation Models. New York: Academic Press. ISBN 0-12-224150-9. 16. ^ ^a ^b ^c ^d Bollen, K. (1989). Structural Equations with Latent Variables. New York, Wiley. ISBN 0-471-01171-1. 17. ^ Jöreskog, Karl; Gruvaeus, Gunnar T.; van Thillo, Marielle. (1970) ACOVS: A General Computer Program for Analysis of Covariance Structures. Princeton, N.J.; Educational Testing Services. 18. ^ Jöreskog, Karl Gustav; van Thillo, Mariella (1972). "LISREL: A General Computer Program for Estimating a Linear Structural Equation System Involving Multiple Indicators of Unmeasured Variables" (PDF). Research Bulletin: Office of Education. ETS-RB-72-56 – via US Government. 19. ^ ^a ^b Jöreskog, Karl; Sorbom, Dag. (1976) LISREL III: Estimation of Linear Structural Equation Systems by Maximum Likelihood Methods. Chicago: National Educational Resources, Inc. 20. ^ ^a ^b Hayduk, L.; Glaser, D.N. (2000) "Jiving the Four-Step, Waltzing Around Factor Analysis, and Other Serious Fun". Structural Equation Modeling. 7 (1): 1-35. 21. ^ ^a ^b Hayduk, L.; Glaser, D.N. (2000) "Doing the Four-Step, Right-2-3, Wrong-2-3: A Brief Reply to Mulaik and Millsap; Bollen; Bentler; and Herting and Costner". Structural Equation Modeling. 7 (1): 111-123. 22. ^ Westland, J.C. (2015). Structural Equation Modeling: From Paths to Networks. New York, Springer. 23. . 24. ^ Imbens, G.W. (2020). "Potential outcome and directed acyclic graph approaches to causality: Relevance for empirical practice in economics". Journal of Economic Literature. 58 (4): 11-20-1179. 25. ^ . 26. ^ . 27. ^ ^a ^b ^c ^d ^e ^f ^g ^h ^i Kline, Rex. (2016) Principles and Practice of Structural Equation Modeling (4th ed). New York, Guilford Press. ISBN 978-1-4625-2334-4 28. ^ ^a ^b ^c Rigdon, E. (1995). "A necessary and sufficient identification rule for structural models estimated in practice." Multivariate Behavioral Research. 30 (3): 359-383. 29. ^ ^a ^b ^c ^d ^e ^f ^g Hayduk, L. (1996) LISREL Issues, Debates, and Strategies. Baltimore, Johns Hopkins University Press. ISBN 0-8018-5336-2 30. ^ . 31. . 32. ^ . 33. ^ Browne, M.W.; MacCallum, R.C.; Kim, C.T.; Andersen, B.L.; Glaser, R. (2002) "When fit indices and residuals are incompatible." Psychological Methods. 7: 403-421. 34. ^ . Note the correction of .922 to .992, and the correction of .944 to .994 in the Hayduk, et al. Table 1. 35. ^ . 36. ^ ^a ^b ^c ^d ^e ^f ^g Barrett, P. (2007). "Structural equation modeling: Adjudging model fit." Personality and Individual Differences. 42 (5): 815-824. 37. ^ Satorra, A.; and Bentler, P. M. (1994) “Corrections to test statistics and standard errors in covariance structure analysis”. In A. von Eye and C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Thousand Oaks, CA: Sage. 38. ^ Sorbom, D. "xxxxx" in Cudeck, R; du Toit R.; Sorbom, D. (editors) (2001) Structural Equation Modeling: Present and Future: Festschrift in Honor of Karl Joreskog. Scientific Software International: Lincolnwood, IL. 39. ^ ^a ^b ^c ^d ^e ^f ^g ^h Hu, L.; Bentler, P.M. (1999) "Cutoff criteria for fit indices in covariance structure analysis: Conventional criteria versus new alternatives." Structural Equation Modeling. 6: 1-55. 40. ^ Kline 2011, p. 205. 41. ^ Kline 2011, p. 206. 42. ^ ^a ^b Hu & Bentler 1999, p. 27. 43. ^ Steiger, J. H.; and Lind, J. (1980) "Statistically Based Tests for the Number of Common Factors." Paper presented at the annual meeting of the Psychometric Society, Iowa City. 44. ^ Steiger, J. H. (1990) "Structural Model Evaluation and Modification: An Interval Estimation Approach". Multivariate Behavioral Research 25:173-180. 45. ^ Browne, M.W.; Cudeck, R. (1992) "Alternate ways of assessing model fit." Sociological Methods and Research. 21(2): 230-258. 46. ^ Herting, R.H.; Costner, H.L. (2000) “Another perspective on “The proper number of factors” and the appropriate number of steps.” Structural Equation Modeling. 7 (1): 92-110. 47. ^ Hayduk, L. (1987) Structural Equation Modeling with LISREL: Essentials and Advances, page 20. Baltimore, Johns Hopkins University Press. ISBN 0-8018-3478-3 Page 20 48. ^ Hayduk, L. A.; Cummings, G.; Stratkotter, R.; Nimmo, M.; Grugoryev, K.; Dosman, D.; Gillespie, M.; Pazderka-Robinson, H. (2003) “Pearl’s D-separation: One more step into causal thinking.” Structural Equation Modeling. 10 (2): 289-311. 49. ^ Hayduk, L.A. (2006) “Blocked-Error-R2: A conceptually improved definition of the proportion of explained variance in models containing loops or correlated residuals.” Quality and Quantity. 40: 50. ^ ^a ^b Millsap, R.E. (2007) “Structural equation modeling made difficult.” Personality and Individual Differences. 42: 875-881. 51. ^ ^a ^b Entwisle, D.R.; Hayduk, L.A.; Reilly, T.W. (1982) Early Schooling: Cognitive and Affective Outcomes. Baltimore: Johns Hopkins University Press. 52. ^ Hayduk, L.A. (1994). “Personal space: Understanding the simplex model.” Journal of Nonverbal Behavior., 18 (3): 245-260. 53. ^ Hayduk, L.A.; Stratkotter, R.; Rovers, M.W. (1997) “Sexual Orientation and the Willingness of Catholic Seminary Students to Conform to Church Teachings.” Journal for the Scientific Study of Religion. 36 (3): 455-467. 54. ^ . 55. ^ Hayduk, L.A.; Cummings, G.; Boadu, K.; Pazderka-Robinson, H.; Boulianne, S. (2007) “Testing! testing! one, two, three – Testing the theory in structural equation models!” Personality and Individual Differences. 42 (5): 841-850 56. ^ Mulaik, S.A. (2009) Foundations of Factor Analysis (second edition). Chapman and Hall/CRC. Boca Raton, pages 130-131. 57. ^ Levy, R.; Hancock, G.R. (2007) “A framework of statistical tests for comparing mean and covariance structure models.” Multivariate Behavioral Research. 42(1): 33-66. 58. . 59. ) 60. . 62. . 63. . 64. , retrieved 2023-11-03 65. . 66. . Further reading External links
{"url":"https://findatwiki.com/Structural_equation_modeling","timestamp":"2024-11-05T07:14:25Z","content_type":"text/html","content_length":"281910","record_id":"<urn:uuid:b6858108-70dd-422d-9538-298659f4b144>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00018.warc.gz"}
Division in Action: Real-World Applications and Examples - Ka Bratli Posted inMilitary Division in Action: Real-World Applications and Examples Division is a fundamental arithmetic operation that involves breaking a whole into equal parts. In the real world, division is used in a variety of situations to distribute resources, calculate proportions, and create balanced teams. In this article, we will explore some real-world applications and examples of division in action. Share Equally One common real-world application of division is when dividing a group of items or resources equally among a group of people. For example, if you have 12 cookies and want to share them equally among 4 friends, you would divide the total number of cookies by the number of friends to determine how many each person will receive. In this case, each friend would receive 3 cookies. 12 ÷ 4 = 3 This is a simple example of how division can be used to distribute resources evenly among a group of people. Proportional Allocation Another common application of division in real-world scenarios is to calculate proportions or percentages. For example, if a recipe calls for 1 cup of sugar for every 2 cups of flour, you would use division to determine the ratio of sugar to flour. 1 ÷ 2 = 0.5 In this case, the ratio of sugar to flour is 0.5, meaning that for every cup of flour, you need half a cup of sugar. Team Building Division is also used in team-building exercises to create balanced teams. For instance, if you have a group of 16 people and want to divide them into 4 teams of equal size, you would use division to determine how many people should be on each team. 16 ÷ 4 = 4 In this case, each team would have 4 members, ensuring that the teams are balanced in size. Financial Planning Division is also commonly used in financial planning to calculate budgets, expenses, and investments. For example, if you have a budget of $1,000 and want to divide it evenly among 5 different categories, you would use division to determine how much money should be allocated to each category. $1,000 ÷ 5 = $200 In this case, $200 would be allocated to each category, ensuring that the budget is evenly distributed. Engineering and Science Division is a key operation in engineering and science, where it is used to calculate quantities such as speed, distance, and energy. For example, in physics, division is used to calculate velocity by dividing the distance traveled by the time taken. Distance = 500 meters Time = 10 seconds Velocity = Distance ÷ Time Velocity = 500 ÷ 10 = 50 m/s In this case, the object is traveling at a velocity of 50 meters per second. Survey Data Analysis Division is also used in survey data analysis to calculate percentages and proportions. For example, if you conduct a survey with 100 respondents and want to calculate the percentage of respondents who selected a certain option, you would use division to determine the proportion of respondents who chose that option. Number of respondents who chose Option A = 25 Percentage of respondents who chose Option A = (25 ÷ 100) x 100% = 25% In this case, 25% of respondents chose Option A in the survey. Division is a versatile arithmetic operation that has numerous real-world applications and examples. Whether it be distributing resources, calculating proportions, building teams, or analyzing survey data, division plays a crucial role in many aspects of everyday life. By understanding how division works and how it can be applied, we can improve our problem-solving skills and make more informed decisions in various situations. So the next time you encounter a situation that requires division, remember the real-world applications and examples we’ve discussed in this article. No comments yet. Why don’t you start the discussion?
{"url":"https://ka-bratli.com/division-in-action-real-world-applications-and-examples/","timestamp":"2024-11-05T20:26:14Z","content_type":"text/html","content_length":"58399","record_id":"<urn:uuid:447e7901-bb2c-4f14-a92e-257cc732fdf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00802.warc.gz"}
Learned bloom filters At the intersection of two domains: machine learning and data structures lie learned data structures. Learned data structures offer improved performance as compared to their classical counterparts by making use of trends in the input data. We will explore Learned Bloom Filters which build on classical Bloom Filters and offer trade-offs which can be used to achieve optimum performance in some Bloom Filters A membership query returns a yes/no answer to a question of the form “Is this element \(x\) present in a given set \(S\)?” Searching algorithms like linear search and binary search can be used to answer this... read more Write a comment ...
{"url":"https://dcll-research.iiitd.edu.in/Learned-Bloom-Filters.html","timestamp":"2024-11-14T18:36:06Z","content_type":"text/html","content_length":"90328","record_id":"<urn:uuid:b36d9926-e9f8-4152-bb69-64b1f0af7624>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00146.warc.gz"}
Pre-Video Questions The slope of the line tangent to the graph of $y = f(x)$ at the input value $x = 7$ is shown in the graph below. The slope of this tangent line is 1. Which of the following statements most accurately conveys what happens to the value of $f(x)$ as the value of $x$ varies by an infinitesimal amount $dx$ away from $x = 7$? The value of $f(x)$ changes by 1. The value of $f(x)$ changes by 1 times 7. The value of $f(x)$ changes by 1 divided by 7. The value of $f(x)$ changes by 1 times $dx$. The value of $f(x)$ changes by 1 divided by $dx$.
{"url":"https://ximera.osu.edu/calcvids2019/nin/v/secanttangent/secanttangent/intro","timestamp":"2024-11-08T19:06:51Z","content_type":"text/html","content_length":"28536","record_id":"<urn:uuid:895aaa42-eaef-443a-a7b7-044a6a9bbdbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00177.warc.gz"}
Tea Total I'm moving to a new bat place and new bat format: Numerical Notes. This move is spurred by a combination of technical and personal considerations. On the technical side: I mostly spend my evenings away from computer networks, which means that the times I'm most likely to take a break and blog are also the times that I'm least likely to have the connectivity to post anything. Also, I've decided that I'd really like to be able to write mathematical ideas, and it's very difficult to do that with plain HTML. The new tools give me more control. On the personal side: I would rather correspond with my friends and family (by letters or e-mail) than broadcast news through this blog. On the other hand, essay writing to an audience -- even an unknown audience -- has proven to be a valuable way for me to clarify my thoughts. Expect further essays and computations at the new site, but not so much personal news. I hope I will be better about sending letters with that! Thanks for reading. In How to Solve It, Polya lists four steps for mathematical problem-solving: understand the problem, devise a plan, carry out the plan, and then look back. It seems like I spend most of my time -- or at least most of my interesting time -- in the “understand the problem” or the “look back” phases. In some sense, I spend way more time in the phases of initial understanding and later reflection than in any of the other phases. I spend years on them! The thing is, “devise a plan” and “carry out the plan” are crucial components in solving individual problems. But figuring out a problem that's simultaneously interesting and tractable -- ah, this is not so much a matter of planning as a matter of observation. I've already mentioned my experience with geometry: took a graduate Riemannian geometry course, felt completely bewildered, came out... and “got it” over the next several years in a series of “ah!” moments (where “moments” should be very loosely interpreted as “time periods ranging between a second and twelve hours”). I'm less sure if I've mentioned other such experiences. The time between initial exposure and any personal feeling of competence is often a couple years, at least: for linear algebra, for numerical analysis, for functional analysis, for ODEs, for PDEs... the list goes on. But what really constitutes a satisfying feeling of basic competence? I don't think it's the ability to ace an exam, or even to do difficult problems; those are side-effects at best. I've received good grades in all my mathematical courses, really; I saw stuff, I did the problems, I put together the pieces in my head, and at the end of those courses, I knew the material as well as anyone could expect. But there's a difference between knowing something at the end of a semester and knowing something so well that you can use it intuitively in a complicated argument, so well that it's not just something that you remember, but something you couldn't possibly forget. Of course, effortless recall of mathematical concepts is no more sufficient to do interesting mathematics than effortless use of English vocabulary is sufficient to craft good prose. It's a starting point. It is, to me, the real meat of the “understanding the problem” phase. And to do it right either takes someone much smarter than me, or someone willing to devote years of thought -- though not continuous thought -- to really grok the concepts. Sometimes I run into people who claim expertise before they've spent the time to get to what I would consider a starting point (for whatever reason this happens more in CS than in math), and who are undone by a few simple questions. I've been in that position, too! But I feel a little bad when I see it happen to someone else, and then I see that person walk away still feeling like an expert. It's not so much that fake experts hurt others, or that they hurt the reputation of their “field of expertise,” though unfortunately such things do happen; but when I've met such folks, they seem to have a certain smugness which inures them to the feeling of accomplishment which only comes (for me) after years of feeling like the brain knows what's going on, but the guts haven't quite caught up. For the record: I consider myself expert in certain aspects of numerical software development. I would grudgingly grant that I might become an expert in certain aspects of numerical linear algebra. There's no way that I'd let you call me an expert in Riemannian geometry; it's just that I've spent enough years with certain ideas floating around in my mind -- things like manifolds and atlases, differential forms and vector bundles and tensor bundles -- that I feel a certain comfort with the material. And the same is true of other things. Looking back is just as important, and just as time consuming, as the initial effort of understanding mathematical ideas. The initial understanding and the looking back blur together. Let me give an example, one that partly inspired this whole line of thought. One of the first topics covered in a compilers class is lexical analysis: how does one break up a string like “1 + a12(b*c)” into meaningful tokens, things like the number 1 or the identifier a12? The usual way of approaching this is through finite automata: one starts with regular expressions, which are translated into nondeterministic finite automata, which are then converted into deterministic automata, which are then compacted. Then you generate some executable representation of the automata -- a state table and an interpreter, perhaps -- and away you go, tokenizing files with gleeful abandon. Or maybe you just say “to heck with this,” and do it in an ad-hoc way, in which case you'll probably re-invent either state machines or recursive-descent parsing. For the lexer the students were required to write, we gave them a code base which was written in a recursive-descent style; but it turns out that a DFA can be converted directly into an LL(1) grammar, so... six of one, half a dozen of the other. So all these ideas come into a seemingly simple task of chopping up a file into individual tokens. And it's all very beautiful, but it can be sort of confusing, particularly since the students were really writing an LL(1) parser (a recursive-descent type parser) before they were ever formally introduced to the ideas of LL(1) parsing. So one day, as we were walking out of section, one of my students asked “why should we learn the NFA -> DFA algorithm, if we're not coding it directly, and if we there are well-established tools that will do such conversions for us?” It was a fair question, but while my brain was sorting among the reasons I could give, my gut interjected. “If you can understand NFA -> DFA conversion,” I said, “then you'll be able to understand the construction of state machines for LR(k) parsing. And you want to understand the construction of those state machines, even if you're using an automated tool; otherwise, you'll have no understanding of how to avoid conflicts when they occur.” My brain agreed that this was a pretty good reason, and started to expand; but by that time the student had thanked me and we'd parted ways. Guess what? Before he asked, I'd never really thought about it before. But it really is true. The technical point: the NFA -> DFA algorithm and the LR(1) parser construction algorithm are similar in flavor, in that both involve manipulating states that represent sets of things (sets of NFA states in one case, and sets of productions in the other). Further, you can mimic the effects of the NFA -> DFA algorithm by converting an NFA into a certain type of grammar and then generating an LR(k) parse table. And stuff like this keeps happening! But sometimes it goes off in odd directions. I write an LL(1) parser generator, I learn something interesting about Lisp. I try to come up with a non-trivial regular expression for a class exercise, I end up reminding myself of some interesting number theory... and then wishing that I had time to go learn a bit more about analytic number theory, since I've been thinking about analytic function theory in a different context. I think about code optimization shortly after thinking about a problem in floating point error analysis, and I find myself thinking “gee, I could probably write an optimizer that new my favorite floating point error analysis tricks, and was able to generate code for maximal accuracy rather than for maximal performance.” Oh, so many wonderful connections there are! The best thing is that I've learned that I can mention some of these things in class, and at this point most of the students seem to understand what I've said -- and a few of them get curious enough that they start playing with the ideas and making connections and asking questions. O, brave new world, that has such people in it! But what is this activity, really? It's not problem-solving, really, though sometimes it results in good problems. It's something of a hybrid: exploration to discover new problems, or to discover how to understand problems, followed by reflection to steer further exploration. Were I a poet, I might mention the phoenix here, with new attacks and approaches rising from the ashes of old problems. But I am not a poet, so I shan't mention the phoenix. Except I've mentioned the phoenix just in order to bring attention to what I'm not mentioning. So am I waxing poetic? Ah, the joy of it all! • Adventures of a Mathematician (S. Ulam) Did I finish this two weeks ago? In any case, it was interesting to read. In addition to the autobiographical and historical comments, Ulam says a great deal about mathematics and mathematical ways of thinking. Though he sometimes mentions specific technical areas, almost all of the book is accessible to a general audience. • Fool's Errand (R. Hobb) The books in the Farseer trilogy kept me up past when I should have gone to sleep. This one did, too. But I think I will postpone picking up Golden Fool, the second book in this new trilogy. I think the bleakness of the characters is a bit overdone. • Forty Signs of Rain (K. S. Robinson) I can't think of any books by KSR that I haven't enjoyed (although Icehenge wasn't as much fun as Years of Rice and Salt, for instance). I have a few quibbles, mostly involving his description of the idea of an algorithm, but I like his cast of characters and of ideas, and I really like the writing style. There wasn't much plot, yet, but it didn't feel like a deficiency; and anyhow, I expect the plot will develop further in the other books. I do not think the similarity of the events in the book to current political and meteorological events is particularly coincidental. • The Education of Henry Adams (H. Adams) I haven't gotten more than thirty pages into it, but so far I've enjoyed it. There's a dry humor there that appeals to me. • Best American Science Writing 2005 (Edited by A. Lightman) Haven't started it yet, but I'm looking forward to it. I usually enjoy anthologies of popular science writing. • Fundamentals of Electroacoustics (Fischer) A short, plainly written description of different types of electromechanical interactions, and of circuit-style models of coupled electrical and mechanical systems (speakers and microphones, mostly). Translated from the German original. Why couldn't I have found this book a couple years ago? But it's on my shelf now. • Mathematics of Classical and Quantum Physics (F.W. Byron and R.W. Fuller) A Dover reprint with two volumes bound as one. Includes sections on variational calculus, Hilbert spaces and spectral theory, analytic function theory, Green's functions and integral equations, and symmetry groups. I have other books that treat most of these topics in greater detail, but many of those books make little or no useful mention of physical applications or motivations. At the same time, Byron and Fuller have written an essentially mathematical book: there are no rabbit-from-a-hat tricks (or when there are, they are preceded by an apology and a reference), and certain details which seem subject to frequent abuse in physics and engineering texts -- like the treatment of the Dirac delta -- are handled with appropriate rigor. An aside: I'm much better as a mathematician than as a physicist. When I think about physical effects, I tend to think of them as concrete approximations of certain mathematical abstractions; and for the most part, my intuition for the mathematical things is better than my intuition for the physical things. This is a constant source of frustration for me when I'm learning about physics, since hand-waving appeals to “physical intuition” generally confuse me more than they help, at least initially. Nonetheless, I like learning about physics, and about connections between mathematics and physics. Furthermore, I like to learn about mathematical and scientific history, and a great deal of mathematics has historically been inspired by descriptions of physics: the calculus of variations coming from classical mechanics, many ideas in functional analysis coming from quantum mechanics, many ideas in group theory coming from mechanical symmetries, many ideas in graph theory connecting to circuit analysis, and so on. • Modulated Waves (Ostrovsky and Potapov) I've mentioned this book before. I read the first several chapters over the course of a couple weeks a while back (early summer), and since then I've been re-reading chapters or reading new chapters as the whim or the need takes me. This, too, is a mathematical book; but like Byron and Fuller's book, it contains a nice infusion of physical ideas and applications. I picked up my copy of Ostrovsky and Potapov this weekend to compare their presentation of dispersion relations to the corresponding presentation in Byron and Fuller. I wish I had a better idea how to effectively use a Windows box at a distance. Now that my old laptop is retired, I no longer have such easy access to a Windows machine. The department has a server that I can use with rdesktop, but it's of limited usefulness: I can't run MATLAB on it to compile my codes; I can't print from it; and “for security reasons,” I can't access it from the wireless Even were I able to get remote access to a Windows machine which was not so restricted, I know that I'd still find it an irksome thing to use. No, I don't think Windows is counter-intuitive, nor that it's immensely buggy. Since the introduction of Windows 2000, I think the operating system has become immensely more stable; and Microsoft does well enough at making most of its software usable. I just wish the system was a little less interactive! Effective computing is largely about constructive laziness. For the compilers class for which I'm an instructor this semester, we have a lot of code that's distributed to students in compiled form. The compiled files vary from vendor to vendor, platform to platform, and version to version. So, in the spirit of constructive laziness, I wrote a build script that shuttles files back and forth between three different computers, then runs Lisp programs to generate compiled files for seven different platforms, then moves all those files to the appropriate places on the class web page. Seven different platforms, just by typing make all! But then there's an eighth platform, which is Allegro CL 7.0 under Windows; and to produce those compiled files, I defer to the professor. It would be much more convenient for both of us if I could just add another line or two to my build file; but I don't know how to do that, and while I could probably write more code to get around the issue, it's not worth the bother. (Incidentally, this is also a good reason for distributing programs as source code -- then you can defer to someone else the work of building the program on whatever platform is of interest! Clearly, though, this is not such a good option when you're trying to provide a student with a model solution without actually giving away all the details.) I think it may be possible for me to run a PC emulator on my Mac laptop, and on that Windows emulator to install enough UNIX-like utilities that I can autobuild there. Also, in the unlikely event that someone reading this has experience running gcc as a cross-compiler with a Windows target, I'd like to hear about it. 1. The word you're thinking of is “teetotal.” It's an adjective that describes one who does not drink. It is a pun, since I prefer tea to alcohol (though I've decided a cup of cider from time to time is okay, too). 2. There are two aspects of MATLAB EXternal interface (MEX) programming that cause most of your headaches. First: when you use the mex script, the compiler flags are set in such a way that most of the warnings are turned off. This is bad mojo. Where possible, put most of the functionality for your program into a separate file, so that you can compile it to object files (and perhaps even link it to a testing stub) without testing mex. Then use mex to link everything together. Second: mex creates dynamically loadable modules, and some of the dependency checks for those modules are not performed until load time or (if you're on OS X) possibly even until the unlinked routines are hit at run time. That's why you get errors about “unresolved symbols” and “invalid mex files.” Check the order of the libraries on your link line, and check to make sure you have all the libraries you need. 3. You don't want to write your own eigensolver, LU factorization routine, etc. Not in C++, not in C, not in Fortran. For dense numerical linear algebra, go look at LAPACK and a good BLAS implementation (BLAS = Basic Linear Algebra Subroutines). If you're writing in C++, and you have trouble linking to a Fortran code like LAPACK, make sure you use the extern “C” directive in order to turn of the C++ compiler's name-mangling. If you're writing in C on most Unix-y platforms, you can call Fortran code by simply appending an underscore; e.g. call dgemm_ from C instead of dgemm. Of course, this is somewhat platform dependent. There is a C translation of LAPACK, which I was responsible for a while back, but I think it's worth the effort of learning to link to Fortran. Still, you might also check out the other packages on Netlib, to see if any of them fit your needs. For large, sparse systems, you'll want something different from what you use for dense stuff. I use Tim Davis's UMFPACK system for sparse linear solves, and ARPACK for computing eigenvalues of sparse matrices. 4. When I wrote before about tensors and duality, the code was there purely for illustration. It is not how I would typically structure a computation, or at least it's not how I'd structure it if I wanted to run something big in a reasonable amount of time. You might check out the Matlisp bindings if you're interested in something serious. 5. Am I really so pessimistic about my code? For those searching on “crash,” check out Valgrind. This tool will help you. It surely helps me. 6. If you really are interested in code to do number-theoretic computations, check out Magma, Maxima, or perhaps Mathematica or Maple. If you want to crunch numbers, consider MATLAB, Octave, or perhaps R. There exist C++ number theory libraries, but I'm not sure how much use they'll be (unless you're trying to write a distributed code-cracking system or something -- in which case it's already been done, and you should probably use existing sources). 7. For information on the Euler totient function, the lengths of repeating fractions, or a variety of other such things, I refer you to Eric Weisstein's Mathworld. 8. If you want a Lisp lexer generator or LL(1) parser generator, you can grab one from my software page. But for the parser generator, you may prefer something that handles LALR grammars, and for the lexer -- why bother? Time out for reading There is a new Half Price books in Berkeley, along Shattuck Avenue a couple blocks west of campus. Curiously, they're in the same space as the dollar store where I got some pans and utensils just after I moved to Berkeley. As part of their opening celebration, they have an additional 20% off. So I wandered in, saw some familiar faces, and picked up a few books: • Adventures of a Mathematician (S. Ulam) -- This autobiographical book is the only one that I've started. There's a preface that describes Ulam's work, which covers a broader range of pure and applied mathematics than I'd realized. And I thought, “excellent, this will be very interesting.” Then I went to the Au Coquelet cafe to read the prologue and sip something warm, and I realized that this will be very interesting. This book didn't start life as an autobiography; initially, Ulam thought to write a biography of von Neumann. But (on page 5): When I started to organize my thoughts, I realized that up to that time -- it was about 1966, I think -- there existed few descriptions of the unusual climate in which the birth of the atomic age took place. Official histories do not give the real motivations or go into the inner feelings, doubts, convictions, determinations, and hopes of the individuals who for over two years lived under unusual conditions. A set of flat pictures, they give at best only the essential facts. Thinking of all this in the little plane from Albuquerque to Los Alamos, I remembered how Jules Verne and H. G. Wells had influenced me in my childhood in books I read in Polish translation. Even in my boyish dreams, I did not imagine that some day I would take part in equally fantastic undertakings. The result of all these reflections was that instead of writing a life of von Neumann, I have undertaken to describe my personal history, as well as what I know of a number of other scientists who also became involved in the great technological achievements of this age. What follows is a remarkable book, part autobiography, part biography, and part introspection on the workings of memory and of the mathematical mind. • The Education of Henry Adams (H. Adams) -- I'm not sure where I first heard about this book. I think it was from reading Jacques Barzun, either Teacher in America or A Stroll with William James; that it would be from Barzun is a pretty good guess, though, since Barzun's books are usually crammed with references to books that I decide I want to read (and then quickly lose in my list). Either way, I know in advance to expect more than basic autobiography from this one. • Fool's Errand (R. Hobb) -- A little lighter reading. I enjoyed the first trilogy, but every time I looked for the start to the second trilogy on the local bookshelves, I could only find the second book and on. So now I have something for the next time I feel like a novel. Right now I'm switching back and forth between Ulam's book and the most recent SIAM Review for my evening reading. This issue of SIAM Review is a really good one, both for technical content and for style. J.P. Boyd, who has written a book on spectral methods which I've mentioned before, has an article on “Hyperasymptotics and the Linear Boundary Layer Problem” which is both informative and highly entertaining. To quote again: Unfortunately, asymptotics is usually taught very badly when taught at all. When a student asks, “What does one do when x is larger than the radius of convergence of the power series?”, the response is a scowl and a muttered “asymptotic series!”, followed by a hasty scribbling of the inverse power series for a Bessel function. “But of course, that's all built-in to MATLAB, so one never has to use it any more.” Humbug! ... Arithmurgy [number-crunching] hasn't replaced asymptotics; rather, number-crunching and asymptotic series are complementary and mutually enriching. The article refers several times to a longer article (98 pages) entitled “The Devil's invention: Asymptotics, superasymptotics, and hyperasymptotics,” which I think I would have to put on my reading list just for the title, even if I didn't know that I found the author so entertaining. But given the state of my reading list right now, perhaps it will have to go onto the longer list rather than the shorter one. Most of my reading time recently has gone to technical material (excepting Tools for Teaching by B. Davis, which is the textbook for the teaching course). This is unfortunate, because what I read tends to strongly influence what things I think about for casual conversation. So if I'm only reading about number theory, Gershgorin disks, and the error analysis of non-self-adjoint PDE eigenvalue problems, I tend to end up talking about those things to anyone who will listen. Those topics are really cool, and I do have buddies who are willing to go have a coffee, or a cider and some pizza, and have a conversation that wanders in and out of technical realms. Nevertheless: a little light reading in the evenings seems to make conversations in the day go more smoothly. You probably learned this trick in grade school: to see whether a number is divisible by three, add all the digits and check if the sum is divisible by three. The same thing works for nine. Ever wonder why? Actually, three and nine are just special cases of something very general. What does it mean if q divides n evenly? It means that there is ultimately no remainder in the division. So if r represents the remainder at each step of long division, then I should be able to write r = 0 for d = digits r = mod(r * 10 + d, q) If r = 0 at the end of the loop, then the number represented by the given string of digits is evenly divisible by q. Now, it's not too difficult to show that I could rewrite the update of r as r = mod(r * (10 + k * q) + d, q) for any integer k. In particular, this means that I could let p = mod(10,q), and write the update formula as r = mod(r * p + d, q) The trick with division by three and division by nine works because mod(10,3) = mod(10,9) = 1. Once you realize how this works, you can think of quick checks for all sorts of divisibility properties. For example, take the number 86834; is it evenly divisible by 11? Yes! I can tell because 10 and -1 are equivalent modulo 11, so that my remainder update looks like r = mod(-r + d, 11) I can thus check divisibility by 11 by looking at alternating sums of the digits. So since 8 - 6 + 8 - 3 + 4 = 11 is divisible by 11, so also is 86834. Or if I wanted to know whether a number written in octal (base 8) was divisible by 7, I could add the octal digits and see whether seven divided the sum. Or I could play around for five minutes to find that the binary representation of any multiple of three matches the regular expression (0 | 1 (00)* (1|01))* Hooray! A meaningful regular expression which isn't absolutely trivial to parse! So such toy problems in number theory not only amuse me, but they also actually serve some use as a source of exercises for my students. We'll see whether any of them appreciate the entertainment value as I do. • Currently drinking: Mint tea A friend pointed out this, which I think is one of the most entertaining uses of GIF animation that I've seen in a long time.
{"url":"https://teatotal.blogspot.com/","timestamp":"2024-11-03T23:32:32Z","content_type":"text/html","content_length":"73692","record_id":"<urn:uuid:19922b90-4faa-40fd-9236-c1a912cccc2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00718.warc.gz"}
☆ org.hipparchus.geometry.euclidean.oned.IntervalsSet □ Nested Class Summary ☆ Nested classes/interfaces inherited from interface org.hipparchus.geometry.partitioning.Region □ Constructor Summary Constructor Description IntervalsSet(double tolerance) Build an intervals set representing the whole real line. IntervalsSet(double lower, double upper, double tolerance) Build an intervals set corresponding to a single interval. IntervalsSet(Collection<SubHyperplane<Euclidean1D>> boundary, double tolerance) Build an intervals set from a Boundary REPresentation (B-rep). IntervalsSet(BSPTree<Euclidean1D> tree, double tolerance) Build an intervals set from an inside/outside BSP tree. □ Method Summary ☆ Methods inherited from class org.hipparchus.geometry.partitioning.AbstractRegion applyTransform, checkPoint, checkPoint, checkPoint, checkPoint, contains, copySelf, getBarycenter, getBoundarySize, getSize, getTolerance, getTree, intersection, isEmpty, isEmpty, isFull, isFull, setBarycenter, setBarycenter, setSize ☆ Methods inherited from class java.lang.Object clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait □ Constructor Detail ☆ IntervalsSet public IntervalsSet(double tolerance) Build an intervals set representing the whole real line. tolerance - tolerance below which points are considered identical. ☆ IntervalsSet public IntervalsSet(double lower, double upper, double tolerance) Build an intervals set corresponding to a single interval. lower - lower bound of the interval, must be lesser or equal to upper (may be Double.NEGATIVE_INFINITY) upper - upper bound of the interval, must be greater or equal to lower (may be Double.POSITIVE_INFINITY) tolerance - tolerance below which points are considered identical. ☆ IntervalsSet public IntervalsSet(BSPTree<Euclidean1D> tree, double tolerance) Build an intervals set from an inside/outside BSP tree. The leaf nodes of the BSP tree must have a Boolean attribute representing the inside status of the corresponding cell (true for inside cells, false for outside cells). In order to avoid building too many small objects, it is recommended to use the predefined constants Boolean.TRUE and Boolean.FALSE tree - inside/outside BSP tree representing the intervals set tolerance - tolerance below which points are considered identical. ☆ IntervalsSet public IntervalsSet(Collection<SubHyperplane<Euclidean1D>> boundary, double tolerance) Build an intervals set from a Boundary REPresentation (B-rep). The boundary is provided as a collection of sub-hyperplanes. Each sub-hyperplane has the interior part of the region on its minus side and the exterior on its plus side. The boundary elements can be in any order, and can form several non-connected sets (like for example polygons with holes or a set of disjoints polyhedrons considered as a whole). In fact, the elements do not even need to be connected together (their topological connections are not used here). However, if the boundary does not really separate an inside open from an outside open (open having here its topological meaning), then subsequent calls to the checkPoint method will not be meaningful anymore. If the boundary is empty, the region will represent the whole space. boundary - collection of boundary elements tolerance - tolerance below which points are considered identical. □ Method Detail ☆ buildNew public IntervalsSet buildNew(BSPTree<Euclidean1D> tree) Build a region using the instance as a prototype. This method allow to create new instances without knowing exactly the type of the region. It is an application of the prototype design pattern. The leaf nodes of the BSP tree must have a Boolean attribute representing the inside status of the corresponding cell (true for inside cells, false for outside cells). In order to avoid building too many small objects, it is recommended to use the predefined constants Boolean.TRUE and Boolean.FALSE. The tree also must have either null internal nodes or internal nodes representing the boundary as specified in the getTree method). Specified by: buildNew in interface Region<Euclidean1D> Specified by: buildNew in class AbstractRegion<Euclidean1D,Euclidean1D> tree - inside/outside BSP tree representing the new region the built region ☆ getInf public double getInf() Get the lowest value belonging to the instance. lowest value belonging to the instance (Double.NEGATIVE_INFINITY if the instance doesn't have any low bound, Double.POSITIVE_INFINITY if the instance is empty) ☆ getSup public double getSup() Get the highest value belonging to the instance. highest value belonging to the instance (Double.POSITIVE_INFINITY if the instance doesn't have any high bound, Double.NEGATIVE_INFINITY if the instance is empty) ☆ asList public List<Interval> asList() Build an ordered list of intervals representing the instance. This method builds this intervals set as an ordered list of Interval elements. If the intervals set has no lower limit, the first interval will have its low bound equal to Double.NEGATIVE_INFINITY. If the intervals set has no upper limit, the last interval will have its upper bound equal to Double.POSITIVE_INFINITY. An empty tree will build an empty list while a tree representing the whole real line will build a one element list with both bounds being infinite. a new ordered list containing Interval elements ☆ iterator public Iterator<double[]> iterator() The iterator returns the limit values of sub-intervals in ascending order. The iterator does not support the optional remove operation. Specified by: iterator in interface Iterable<double[]>
{"url":"https://hipparchus.org/apidocs-3.0/org/hipparchus/geometry/euclidean/oned/IntervalsSet.html","timestamp":"2024-11-03T21:59:19Z","content_type":"text/html","content_length":"36645","record_id":"<urn:uuid:b7591fe8-7b9e-4a4c-a555-bc9e2942ae7b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00069.warc.gz"}
Theorem of the Day Welcome to the 214th edition of Carnival of Mathematics! CoM Explained! CoM on Twitter CoM official site Other Carnivals at Theorem of the Day Welcome to the 214th edition of Carnival of Mathematics, meta-hosted by the remarkable aperiodical.com. In order to appear cool and up-to-the-minute I thought I should ask ChatGPT to do the traditional Carnival honours and provide some facts about the number 214. Here's a screenshot: I thought yada yada yada ... WAIT A MINUTE! 214 is the sum of WHAT? So is 214 actually the sum of the first 23 anything? I'll append your valid answers to this page if you email me them (no, not you, ChatGPT!) So straight to submissions! Well, here's a ChatGPT gem from Tony Mann which I wanted to submit but hesitated because it's on Facebook and I'm never sure what that means in terms of durability/ Three breakthroughs in mathematics in March: on 16th Gil Kalai's Combinatorics and More posted rumours of a big announcement from a Cambridge seminar and added the big reveal the day after: a reduced upper bound for Ramsey numbers that has been sought since 1935. Timothy Gowers, who was at the seminar, reported on Twitter that there was the same feeling as at Andrew Wiles's history-making 1993 Perhaps unfairly, the huge significance of an upper bound reduction from \(4^k\) to \((4-\varepsilon)^k\) was never going to hold the headlines in competition with The Hat, announced a few days later. This from the University of Waterloo was closely followed by a 3D-printout at thingiverse.com from Dave Richeson (divbyzero); an interactive Hat tiler from Mathigon.org; a complete github monotiling bundle from Christian Lawson-Perfect; quilting instructions from fractalkitty; and a slew of first-rate blog explanations: The Aperiodical, Gödel's Lost Letter, Gil Kalai again, ... And the third announcement? Two high school girls coming up with a trig proof of Pythagoras. One that didn't depend (circularly) on \(\cos^2x+\sin^2x=1.\) This was a bit more low-key because the actual proof wasn't available. There was just a Guardian article and a regional AMS talk. But Youtube channel MathTrain cleverly reverse-engineered the proof from pictures of slides from the AMS talk. Then for good measure, Akshar Varma on mathstodon produced a diagrammatic version. More trig: it was the day formerly known as Pi day on the 14th. Ioanna Georgiou had this very nice infinitelyirrational.com podcast about Archimedes, as seen through the lens of her clever popular mathematics writing (see here). More directly Pi-related was an announcement on twitter by John C. Beach regarding calculations encoded by the Egyptians. He has this follow-up. Andrius Kulikauskas submits a very different take on geometry: A Geometry of Moods, Evoked by Wǔjué Poems of the Táng Dynasty. Patrick Honner also goes from a poem to geometry: "Neg-a-tive b, plus or minus / The square root of b squared / ...", a quadratic formula mnemonic, apparently. Why is cubic less accessible than quadratic to algorithmic solution? This is a contribution to Quanta magazine's teaching wing: Quantized Academy, and very beautifully done it is too, with some well-judged self-test questions at the end. But I can't let him get away with "But unlike the quadratic formula, [the cubic formula] has no catchy tune to sing along to". Who does not know Tartaglia's timeless aria "Quando che’l cubo con le cose appresso"? An English translation by Kellie Gutman helpfully brought to us by poetrywithmathematics.blogspot.com. Quanta magazine is a force of nature (at least in mathematics; some of their physics coverage has been controversial). Magazine, Academy, Podcasts, ... Super-explainer Steven Strogatz hosts The Joy of Why (a play on one of his book titles) and in "Is There Math Beyond the Equal Sign?" talks to category theorist Eugenia Cheng (author of The Joy of Abstraction). Listen to the audio — a transcript is helpfully provided, but the spoken word here brings everything to life. No transcripts for Youtube clips, normally, but polymathematic's explantion of intransitive dice is wonderfully visual and provides a beautiful little case study in applied probability. Bartosz Ciechanowski has a "weekend hobby" which would be a full-time job for lesser mortals. So lesser am I that full-time is just assimilating his incredibly rich and interactive description of the mechanics of riding a bicyle. In awe! Some nice submissions on factoring: John D Cook has this on Conway's method, with lots of useful pointers to related things; Matt Henderson has a cute gif on twitter for visualising factoring; and he has another gif: a way of telling if a maze is divisible by two! Meanwhile, if you want to divide into two equal factors and don't trust ChatGPT (I don't), well, squarerootcalc.com has just what you Richard Fisher trusts ChatGPT because he used it "to research trusted sources and calculate parts of" The numbers that are too big to imagine, a well-written BBC Future stroll through the mathematics of big. "For the sake of clarity", the BBC primly points out, "[we don't] use generative AI as a primary source or to replace the journalism needed for our articles." Let's see how long that lasts. Jay Daigle certainly isn't taking ChatGPT on trust and his blog has a couple of very thorough explorations of its implications for mathematics and mathematics education. One of the great things about doing Mathematics Carnival is the serendipidous blog reading entailed. The actual submission from Jay Daigle's blog was on Euler's method for getting approximate evaluations of ODE solutions and how this is a different way of looking at the Riemann integral and the Fundamental Theorem of Calculus. Daigle's blog posts tend to finish with a step back: "what's this actually amount to?" which I find Thanks for reading! Submit here to Carnival 215, to be hosted by Cassandra at Cassandra Lee Yieng’s Blog. Email me what 214 is the sum of the first 23 of!
{"url":"https://theoremoftheday.org/SpecialEvents/CoM214.html","timestamp":"2024-11-04T18:20:05Z","content_type":"text/html","content_length":"16391","record_id":"<urn:uuid:04712be9-f663-4eee-bd44-594d8a19964b>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00297.warc.gz"}
Characterizing Spatial Structure in Climate Model Ensembles 1. Introduction Much of our understanding of the climate system is derived from climate model experiments, which also underpin most projections of future climate based on specified scenarios of greenhouse gas emissions and socioeconomic development (Chen et al. 2023). Climate models may be used individually, as when using general circulation models (GCMs) or Earth system models (ESMs) to provide global simulations at a relatively coarse spatial resolution, or in combination, as when using GCMs or ESMs to provide boundary conditions for high-resolution regional climate model (RCM) simulations. Climate model experiments are associated with multiple sources of uncertainty (Rougier and Goldstein 2014), which are often studied using collections (“ensembles”) of model runs. These include ensembles that explore internal variability, often by varying initial conditions in a single model (e.g., Rodgers et al. 2021; Maher et al. 2021; von Trentini et al. 2020); perturbed physics ensembles (Collins et al. 2006) that vary the representation of key processes, again within a single model; and multimodel ensembles (Meehl et al. 2007; von Trentini et al. 2019) that can be regarded as attempting to characterize the range of outcomes that could be anticipated on the basis of current scientific knowledge, as represented by the models considered. In any such ensemble, the sampled uncertainties induce variation between ensemble members. It is therefore of interest to characterize this variation: for example, by identifying sources of uncertainty to which the outputs are most sensitive and hence suggesting priorities for future model improvements, or by identifying representative subsets of members for use in applications where it is not feasible to process the entire ensemble. The latter situation can arise when climate projections are used as inputs to other models of climate change impacts: if the impacts models themselves are computationally expensive or if resources are otherwise limited, then this limits the number of ensemble members that can be used (Cannon 2015). A substantial body of literature is devoted to quantifying the relative importance of different sources of variation within an ensemble. For example, Hawkins and Sutton (2009) applied a heuristic approach to projections from the World Climate Research Programme’s Coupled Model Intercomparison Project phase 3 (CMIP3) multimodel ensemble, to partition the variation into components attributable respectively to GCMs, emissions scenarios, and internal variability of the climate system. This approach was placed on a more formal footing using analysis of variance (ANOVA) techniques by Yip et al. (2011). A drawback of these authors’ fixed-effects ANOVA methodology, however, is that the unambiguous partitioning of variation requires a balanced ensemble, which in this case means that each GCM is run the same number of times with each emissions scenario. The CMIP3 ensemble does not meet this condition, so Yip et al. (2011) discarded some ensemble members and applied their methodology to the largest possible balanced subset. Of course, discarding ensemble members incurs a loss of information: this can be avoided using a random-effects ANOVA (Northrop and Chandler 2014) or by considering the problem as one of a balanced ensemble with some values missing, and estimating (“imputing”) these missing values (Evin et al. 2019, 2021). The latter approaches both use Bayesian methods and require the use of Markov chain Monte Carlo (MCMC) techniques that are computationally intensive. This perhaps disincentivizes their routine use in situations where the outputs of interest are high dimensional (i.e., involve many quantities of interest), for example when analyzing maps involving large numbers of spatial locations. In high-dimensional settings, a further potential concern with ANOVA approaches is that they are designed to analyze scalar-valued quantities. Most applications of ANOVA in climate science have therefore considered each quantity separately. For example, Christensen and Kjellström (2020) applied fixed-effects ANOVAs to a subset of the EuroCORDEX regional model ensemble (Jacob et al. 2014) separately for each spatial location within the EuroCORDEX study region, and mapped the results. However, a comprehensive analysis of variation in high-dimensional ensemble outputs requires methods that are specifically designed for a high-dimensional setting. This has rarely been attempted to date, a notable exception being Sain et al. (2011), who used functional ANOVA methods to analyze ensembles of maps within a Bayesian framework, which, as noted above, can be computationally expensive. Other exceptions include applications of clustering methods to find representative subsets of ensemble members for use in impacts studies (e.g., Cannon 2015; Casajus et al. 2016): these methods aim to produce a simplified representation of the variation by clustering the ensemble members into groups that are relatively homogeneous and distinct. Alternative simplified representations can be obtained via dimension reduction, for example by applying principal components analysis (PCA) or empirical orthogonal function (EOF) analysis across ensemble members (e.g., Li and Xie 2012; Yim et al. 2016; Wang et al. 2020). Such “intermodel EOF” analyses seek to identify dominant modes of spatial variation across the ensemble: the cited references demonstrate how these modes may be of interest in their own right, as well as how the associated scores (see below) can be used to identify contrasting models for further study. A potential difficulty, however, is that ensembles often contain structure that EOF analyses are not designed to handle. In regional ensembles for example, structure is induced by the GCM and RCM combinations used to generate each member. Similarly, in the CMIP ensembles, direct EOF analyses will be dominated by the models with the most runs. To avoid this problem when analyzing CMIP5 outputs, Yim et al. (2016) worked with the average of each model’s runs, while Wang et al. (2020) analyzed a single run from each model. However, no formal justification was provided for either Against this background, the main contribution of the present paper is to formalize the application of dimension reduction techniques to climate model ensembles, while also accounting appropriately for ensemble structure. The resulting approach, which we call ensemble principal pattern (EPP) analysis, reduces to intermodel EOF analysis when applied to unstructured ensembles containing a single member per model. It is intended primarily as a tool to enable the rapid exploration of ensembles, either to select contrasting or representative members for use in impacts studies or to highlight distinctive modes of variation that may merit further investigation. To motivate our proposal and fix ideas subsequently, section 2 presents some outputs from the EuroCORDEX regional ensemble over the United Kingdom and discusses some questions that could be asked about these outputs. The basic intermodel EOF methodology is described in section 3 as a reference. In section 4 the methodology is extended, in conjunction with multivariate ANOVA (MANOVA) techniques, to handle ensembles with more complex structures: the extension is illustrated using the EuroCORDEX example. Section 5 concludes and suggests further potential applications. 2. Motivating example: The EuroCORDEX regional ensemble We consider the bias in simulated summer (JJA) mean daily maximum surface temperature (“tasmax”) across the United Kingdom, over the period 1989–2008, for 64 EuroCORDEX ensemble members (Jacob et al. 2014) that were produced using combinations of 10 RCMs and 10 GCM runs from the CMIP5 experiment (Taylor et al. 2012). The GCM runs, from six unique GCMs, were conditioned on CMIP5 historical forcings until 2005, and on the RCP8.5 emissions scenario from 2006 to 2008. Over this limited period the RCP8.5 forcings deviate very little from the 2005 levels (van Vuuren et al. 2011) and hence can be considered as plausible proxies for the historical values. To calculate the biases in JJA tasmax, the 1989–2008 mean tasmax values were first calculated for each ensemble member on the native grid of the corresponding RCM. Next, these mean values were regridded to the 12 × 12 km^2 grid used by the HadUK gridded observational dataset (Perry et al. 2009), using a conservative area-weighting scheme (Jones 1999): The 12-km grid resolution is similar to that of each EuroCORDEX RCM. Finally, the biases for each ensemble member, shown in Fig. 1, were computed by subtracting the 1989–2008 mean HadUK tasmax values. Fig. 1. Bias in mean simulated surface temperature (°C) over the United Kingdom, for the period 1989–2008, from 64 members of the EuroCORDEX regional ensemble (see main text for details). Rows and columns correspond to RCMs and GCMs, respectively. Citation: Journal of Climate 37, 3; 10.1175/JCLI-D-23-0089.1 The columns of Fig. 1 represent GCM runs, and the rows represent RCMs. To simplify the subsequent presentation, the 10 GCM runs will be considered as representing separate models throughout: this is not required for the methodology, however. With this convention, each GCM and RCM pair contributes at most a single member to the ensemble. The ensemble is unbalanced however, with 36 of the 100 possible pairs missing. The missing pairs include four that are available but have been excluded deliberately: three because they are superseded in the ensemble by runs from later versions of the same RCMs driven by the same GCMs, and one because it contains inconsistent metadata regarding the driving GCM. As noted in section 1, lack of balance causes problems when attributing variation to its potential sources, whence analyses are sometimes restricted to the largest balanced subset of an ensemble. Here, the largest such subset involves eight RCMs and four GCM runs, hence including just 32 of the 64 available ensemble members. It is clearly undesirable to restrict attention to such a small Apart from the “missing” runs, the most obvious feature of Fig. 1 is the sixth column, corresponding to the HadGEM2-ES GCM: all ensemble members driven by this GCM appear to have a warm bias. Closer inspection also suggests potential cool biases associated with the HIRHAM5, RACMO22E, and RCA4 RCMs (fourth, sixth, and seventh rows): However, it is hard to be confident about this based on a purely visual inspection. Moreover, it is hard to identify any systematic and detailed spatial structure from the maps. This creates difficulties for several potential users of the ensemble. Consider, for example, a regional climate modeler who sees EuroCORDEX as an opportunity to explore the simulation of summer temperatures by different RCMs. Apart from the apparent cool biases noted above, Fig. 1 does not obviously provide useful information to support such a comparison. Another potential user is the climate impacts modeler with limited computational resources. Such a user might produce the equivalent of Fig. 1, showing future changes in some impact-relevant quantity under a specified emissions scenario, and they may wish to choose a small number of ensemble members spanning the range of potential impacts. This could be done by computing spatial averages of relevant quantities for each ensemble member, and then choosing members spanning the range of the resulting averages. However, spatial averages are not necessarily the most relevant quantities when considering impacts that may be localized in space (e.g., associated with urban areas). It would therefore be helpful to find an alternative visualization that allows users to identify detailed spatial structures in an ensemble. Our two notional EuroCORDEX users have different interests and priorities. Nonetheless, both would benefit from a visualization that simultaneously (i) is more compact than Fig. 1 and (ii) allows them to identify potentially interesting modes of variation. This is the aim of our EPP methodology—although, as discussed later, its use is not restricted to regional ensembles. 3. EPP analysis: The simplest case To develop the methodology, we start by considering an unstructured ensemble—for example, a multimodel ensemble with a single member per model, or a perturbed physics ensemble with one member per model variant. The ensemble outputs are considered as a collection of vectors {Y[i]: i = 1, …, n} say, where n is the number of members and $Yi=(Yi1Yi2⋯YiS)′$ is a vector of length S containing a value for the ith member at each of S spatial locations. Moreover, let $Y$ be the n × S “ensemble data matrix” with $Y$[i] as its ith row; let $Y¯=n−1∑i=1nYi$ denote the S × 1 overall ensemble mean vector, with the sth element being $Y¯s=n−1∑i=1nYis$; and let 1 be an S × 1 vector of ones. In this case, as noted earlier, the proposed methodology is equivalent to an intermodel EOF analysis which, for present purposes, is most conveniently presented by considering the singular value decomposition (SVD) of the centered data matrix $Y−1Y¯T=Y˜$ say, with $Yi−Y¯$ as its ith row. Since the approach is widely used already (see section 1), we merely sketch it here to prepare for the extension to structured ensembles in section 4. For more details of the underpinning mathematics, see Krzanowski (1988, section 4.1) or Gentle (2007, section 3.10). When n < S, the centered matrix $Y˜$ typically has rank n − 1 so that n separate maps are needed to visualize the complete ensemble (the ensemble mean, plus n − 1 maps to visualize the centered matrix—or, equivalently, separate maps of each ensemble member). Dimension reduction aims to find an accurate approximation of $Y˜$ of lower rank, say d < n − 1, so that the information can be visualized in just d + 1 maps including the ensemble mean. The SVD of the centered data matrix takes the form are orthogonal matrices with dimensions , respectively, and is a diagonal matrix with nonnegative elements sorted in decreasing order. Denote these elements by is zero because the rank of is at most − 1); then can be rewritten as are vectors of length respectively, containing the th columns of . Denoting the th element of , the th row of is thus so that each row of is a weighted sum of the . These vectors are each of length and, in the current context, represent spatial patterns: moreover, the patterns are uncorrelated because is orthogonal. In fact, these patterns are the principal components of and the weights { = 1, …, = 1, …, } are the corresponding principal component scores. Now consider truncating the right-hand side of Eq. (2) to retain just d < n terms. The result is an n × S matrix, $Y˜*$, say, of rank d and with ith row $λ1ui1v1T+λ2ui2v2T+⋯+λduidvdT,$. This matrix is a reduced-rank approximation to $Y˜$: the corresponding approximation to the original uncentred data matrix is $Y˜*+1YT¯=Y∗$, say. In fact, denoting by $Yis*$ the (i, s)th element of $Y*$, the sum of squared approximation errors $∑i=1n∑s=1S(Yis−Yis*)2$ is smaller than under any other rank d approximation of $Y$; and the jth spatial pattern v[j] accounts for a proportion $λj2/∑i=1nλi2$ of the “total variation” $∑i=1n∑s=1S(Yis−Y¯s)2$. As noted above, the convention in PCA is to define the principal components as the columns of $V$ and to subsume the singular values {λ[j]} into the corresponding scores. In the current context however, it is arguably more interpretable to multiply the {λ[j]} by the {v[j]} instead: the resulting spatial patterns {λ[j]v[j]} have the same units of measurement as the original quantity of interest and hence can be interpreted directly as contributions to the overall variation of that quantity, while the scores {u[ij]} are dimensionless and hence can be interpreted as “weights” attached to each pattern. We therefore define the EPPs as the patterns {λ[j]v[j]}; and the {u[ij]: i = 1, …, n; j = 1, …, d} are the corresponding EPP scores. An ensemble member with a positive (negative) score for the jth EPP will tend to have positive (negative) deviations from the ensemble mean in regions where the corresponding pattern v[j] is positive, and vice versa. A further detail is that $U$ and $V$ in (1) are not uniquely defined, because replacing $U$ and $V$ with −$U$ and −$V$, respectively, does not change the result. We remove this ambiguity by specifying that positive EPP scores are associated overall with above-average values of the quantity of interest. Operationally, this is achieved by multiplying each EPP by the sign of its average value: if this average is negative, the multiplication reverses the signs of the EPP and the corresponding scores. 4. EPPs for structured ensembles As noted in section 1, many ensembles contain structure that is not accounted for by the basic methodology described above. We now extend the methodology to address this, by combining multivariate analysis of variance (MANOVA) techniques with SVDs. For ease of exposition, we focus on regional ensembles involving R RCMs and G GCMs, in which each RCM and GCM combination has been run at most once. Extensions to more general structured ensembles are straightforward in principle: details can be found in the online supplemental material to the paper. a. Regional ensembles: Theory We denote the ensemble data matrix by $Y$ as before. Now however, the individual ensemble members are denoted by {$Y$[rg]}, with r and g indexing RCMs and GCMs respectively. The total number of runs is denoted by n[⋅⋅] so that the dimension of $Y$ is n[⋅⋅] × S, and n[r][⋅] and n[⋅g] denote the numbers of runs involving RCM r and GCM g, respectively. The ensemble structure can be represented using a statistical model: . In represents an overall mean; represent systematic departures from this overall mean for the th GCM and th RCM, respectively; and is a “residual” representing variation that cannot be attributed to systematic contributions from either the GCM or the RCM (e.g., arising from internal variability in the system), treated as though it is drawn independently for each member from a distribution with mean vector and covariance matrix . With the exception of the , these quantities are all vectors of length Equation (3) defines an additive MANOVA model, which is the multivariate analog of the two-way ANOVA model used in several references from section 1. Least squares estimates of the coefficient vectors {α[g]} and {β[r]} can be obtained from the data matrix: the estimation is particularly simple for a balanced ensemble with a run for every RCM and GCM combination so that n[r][⋅] = G, n[⋅g] = R and n[⋅⋅] = RG. Let $Y¯r⋅=nr⋅−1∑g=1GYrg$ and $Y¯⋅g=n⋅g−1∑r=1RYrg$ denote the mean vectors over all available runs for the rth RCM and gth GCM respectively; and let $Y¯⋅⋅=n⋅⋅−1∑r=1R∑g=1GYrg$ denote the overall ensemble mean. Then, following standard derivations e.g., Davison (2003, section 9.2.2), in the balanced case the least squares estimates of μ, α[g], and β[r] are, respectively, $μ^=Y¯⋅⋅, α^g=Y¯⋅g−Y¯⋅⋅$, and $β^r=Y¯r⋅−Y¯⋅⋅$. These are natural summary measures: for example, the estimated mean field for GCM g is $μ^+α^g=Y¯⋅⋅+Y¯⋅g−Y¯⋅⋅=Y¯⋅g$, with a corresponding expression for RCM r. However, the appropriate analogs of these quantities are less obvious in unbalanced ensembles with some missing RCM and GCM combinations. In this case, model (3) provides a natural generalization as we now elaborate. Modern treatments of unbalanced (M)ANOVA models (e.g., Faraway 2014 , section 13.2) exploit their alternative representation as linear models with dummy covariates defining group membership. Specifically, can be written as where, in addition to the vector defined already, is now a vector containing is an × ( − 1) matrix in which the th column contains ones in the rows corresponding to runs in which the th GCM was used, negative ones in the rows where the th GCM was used, and zeroes everywhere else; is an × ( − 1) matrix in which the th column contains ones in the rows corresponding to runs in which the th RCM was used, negative ones in the rows where the th RCM was used, and zeroes everywhere else; is the ( − 1) × matrix in which the rows are the coefficient vectors { = 1, …, − 1}; is the corresponding ( − 1) × matrix containing the coefficient vectors { = 1, …, − 1}; is the matrix in which the rows are the vectors { }; and the matrix , of dimension × ( − 1), is obtained by placing its component parts side by side. The accompanying python script (see the data availability statement) demonstrates these constructions. Under formulation , the least squares coefficient estimates satisfy the equation ( Rao 1973 each side of which is an ( − 1) × matrix. The cost of solving this system increases linearly in so that the solution is feasible on modern computers, even for large spatial domains. Computing $α^G=−∑g=1G−1α^g$ and $β^R=−∑r=1R−1β^r$ now yields a complete set of effect estimates for each GCM and RCM in the ensemble, satisfying the constraints $∑g=1Gα^g=∑r=1Rβ^r= 0$. These estimates can be mapped individually or, to avoid having to inspect G + R maps, subjected to their own SVD decompositions by stacking them into matrices of dimension G × S and R × S, respectively. These matrices are both centered by construction. The SVD decompositions yield two sets of EPPs, summarizing the dominant patterns of variation among the GCMs and RCMs respectively. Moreover, SVDs of the “residual” matrix $Y−1μ^T−XGα^−XRβ^$ (which is also centered) define spatial modes of variation that are not systematically related to either the GCMs or RCMs. We denote this residual matrix by $e$^(2); the superscript is for notational consistency with the supplemental material. A full EPP analysis thus consists of two steps: first, calculation of the GCM, RCM, and residual effects based on (3); and second, application of SVD to each set of effects to obtain the corresponding spatial modes of variation. b. Regional ensembles: Partitioning of variation The development above provides low-dimensional estimates of RCM, GCM, and residual effects in a regional ensemble. To fully understand the ensemble structure however, it is also necessary to quantify the relative importance of these effects. If, for example, the RCMs contribute only a small proportion of the total variation, then the RCM EPPs themselves are arguably of limited interest. We now address this issue. The approach can be regarded either as an extension of the ANOVA methodology of Yip et al. (2011), to handle unbalanced ensembles with multivariate outputs, or as a computationally cheap alternative to the functional ANOVA methodology of Sain et al. (2011). 1) Balanced ensembles Initially, it is again helpful to consider a balanced ensemble with no missing runs. In this case, the output vector for the ( ) run can be written as Consider now the total sum of squares and cross-products which is a matrix of dimension that plays the same role as the total sum of squares in a standard univariate ANOVA ( Krzanowski 1988 , section 13.3). To aid interpretation, note that − 1) is the sample covariance matrix between all pairs of locations across the ensemble. Using , a little algebra shows that the total SSCP can be decomposed as are SSCPs associated with the GCMs and RCMs, respectively, and represents residual unstructured variation. The relative magnitudes of these three SSCPs can therefore be used to summarize the relative importance of the three sources of variation within the ensemble. Note that for the balanced case under consideration, are formed respectively from the least squares estimates of the GCM and RCM effects in model . Note also that under , an unbiased estimator of the residual covariance matrix : the denominator here is the residual degrees of freedom after estimating an overall mean together with − 1 independent RCM effects and − 1 independent GCM effects. Moreover, in the balanced case with , ( + 1) is equal to ( − 1)( − 1) so that and the right-hand side of Eq. can be rewritten as We will revisit this expression later, when considering unbalanced ensembles. Unfortunately, if the number of locations S exceeds 1, there is no unique way to compare the magnitudes of $T$[G], $T$[R], and $T$[E] in (8) (if S = 1, then each of these matrices is a single number and can be expressed as a proportion of the total variation $T$). One option is to produce maps showing the diagonal elements of $T$, and the proportional contributions of $T$[G], $T$[R], and $T$[E] to each of these diagonal elements: these are the “total variability partition” maps in, for example, Fig. 5 of Christensen and Kjellström (2020). Importantly, the diagonal elements can all be calculated without having to compute the SSCPs themselves (see the supplemental material for details), which is helpful because they are of dimension S × S so that storage requirements can be excessive for large S. To supplement the maps just described, the diagonal elements of the matrices in (8) can be summed to obtain their traces. The trace operator is additive, so that trace($T$) = trace($T$[G]) + trace( $T$[R]) + trace($T$[E]) and the total trace is partitioned unambiguously into single-number summaries representing contributions from each source of variation. A potential disadvantage is that the off-diagonal elements of the SSCP matrices do not influence the partitioning: we return to this point in section 5. For the moment however, we highlight a useful alternative interpretation involving the centered data matrix $Y˜$. It is easy to verify that the total SSCP $T$ can alternatively be written as $T=YT˜Y˜$; and that trace($T$) is the sum of the squared elements of $Y˜$, referred to as the “total variation” in section 3. This quantity is the squared Frobenius norm of $Y˜$ (Gentle 2007, section 3.9), a standard measure of the overall magnitude of a matrix. Similarly, trace($T$[G]) and trace($T$[R]) are the squared Frobenius norms of $XGα^$ and $XRβ^$ respectively [the contributions of the GCMs and the RCMs to the centered data matrix, according to (4)]; and trace($T$[E]) is that of the residual matrix $e$^(2). The trace-based partitioning of variation therefore provides an interpretable decomposition focused on the data matrix itself, rather than the SSCPs. Finally, we note that the traces of the components of (8) are related to the SVDs of the components of (4): if the singular values of $Y˜=Y−1μ^T$ are $λ1≥⋯≥λn⋅⋅=0$ as in section 3, then $trace(T)=∑i= 1n⋅⋅λi2$, with corresponding expressions for the remaining traces. The singular values of $XGα^$ are just $R$ times those for the GCM EPPs, while those of $XRβ^$ are $G$ times those for the RCM EPPs and trace($T$[E]) is related to the SVD of the residual matrix e^(2). The proposed methodology therefore provides a hierarchical partitioning of the ensemble variation: the squared singular values of the EPPs for each component of (4) sum to the variation explained by that component and, in turn, these componentwise contributions sum to the total variation trace($T$). 2) Unbalanced ensembles If some (r, g) combinations are missing from a regional ensemble then Eq. (8) no longer holds and as discussed in section 1, the ensemble variation cannot be partitioned unambiguously. In this case, instead of discarding or imputing members to obtain a balanced ensemble, one alternative is to determine the range of variation (RoV) that is potentially attributable to each source. This is done by performing two separate analyses, each based on a sequence of statistical models as described in the supplemental material. In the first analysis, the maximum possible variation is attributed to the GCMs, with the RCM information used only to account for variation that cannot otherwise be explained. In the second analysis, the roles of GCMs and RCMs are reversed. The difference between the results indicates the extent to which the partitioning is affected by the lack of balance: in the balanced case, both analyses recover the unique decomposition (8). An alternative to the RoV approach is to provide a partitioning of variation that maintains a hierarchical relationship with the RCM, GCM and residual EPPs, as found in a balanced ensemble. This can be done via the alternative representation , because in the unbalanced case we still have estimates of the { }, the { }, and as described earlier. We therefore denote the three terms in , and , respectively, and define the SSCP for a complete ensemble as The traces of the components , and can now be used to estimate the relative contributions of each source of variation in a complete ensemble; and the corresponding EPPs provide a hierarchical partitioning of A potential objection to this approach is that it no longer provides an exact decomposition of variation for the observed ensemble. We therefore propose to use it in conjunction with the RoVs, as illustrated in the example below. Importantly, the RoVs are not guaranteed to contain the relative proportions derived from (10): they provide information on potential partitionings of variation in the observed ensemble, whereas (10) aims to account additionally for variation associated with unobserved ensemble members. If the observed members are unrepresentative in some sense—for example, if GCMs with above-average responses tend to be paired with RCMs that have below-average responses—then this lack of representativeness will feed into the RoVs. If the results from (10) lie outside these ranges therefore, this suggests that the characteristics of the available and missing ensemble runs may differ. The use of an estimated partitioning of variation can be regarded as a form of imputation (see section 1). By contrast with other imputation schemes however, it does not require estimation of the missing ensemble members: rather, it reweights the contributions to each SSCP to account for undersampled parts of the complete ensemble. This is similar in spirit to the well-established approach, in survey sampling, of reweighting to handle situations in which subgroups of a population are over- or underrepresented (e.g., Little and Rubin 2020, chapter 3). c. Regional ensembles: An unbalanced example The EPP methodology is now applied to the EuroCORDEX U.K. temperature biases of section 2. The data matrix has n[⋅⋅] = 64 rows corresponding to the ensemble members, and S = 1652 columns corresponding to grid cells. Figure 2 shows the estimated partitioning of variation for a complete ensemble based on Eq. (10). Focusing first on the estimated ensemble standard deviations in Fig. 2a, derived from $T†$, the most obvious features are two local areas of substantially enhanced variation. One of these is in the southeast, corresponding to the Greater London area; the other is in the English Midlands, which is also a major conurbation. Other areas of enhanced variation are less pronounced, and include nonurban regions (e.g., the Scottish Highlands) as well as upland and industrial areas of northern England. This suggests that as far as summer maximum temperature biases are concerned, the ensemble indicates some uncertainty associated with large urban heat islands and also, to a lesser extent, with topography. Fig. 2. Estimated decomposition of variation in the completed EuroCORDEX ensemble, for bias in summer mean daily maximum surface temperature over the U.K. from 1989 to 2008. (a) The ensemble standard deviations at each grid cell [i.e., the square roots of the diagonal elements of $T†/(n⋅⋅−1)$, with $T†$ as in (10)], and (b)–(d) the diagonal elements of $TG†,TR†$, and $TE†$, respectively, as percentages of the corresponding elements of $T†$. The traces of the respective matrices are also quoted, as percentages of trace($T†$). Citation: Journal of Climate 37, 3; 10.1175/JCLI-D-23-0089.1 Figures 2b–d use Eq. (10) to probe the sources of variation in the ensemble. The RCMs account for the highest percentage (53%) of the estimated total variation, although the GCMs also account for 38%: unstructured residual variation is relatively unimportant. These figures can be compared with the RoVs derived from the available ensemble members: these analyses reveal that the RCMs contribute between 35% and 62% of the total variation while the GCMs contribute between 29% and 56%. The GCM and RCM estimates from (10) both fall well within the respective RoVs, whence there is no evidence that the available ensemble members are unrepresentative with respect to biases in summer tasmax. A closer inspection of the maps also reveals that in the two urban areas where the ensemble variation is highest, the RCMs account for up to around 70% of it: this perhaps reinforces recent evidence (Lo et al. 2020) that uncertainties in urban heat island effects are primarily attributable to the To understand the variation in more detail, Figs. 3 and 4 show the GCM and RCM EPPs respectively. In both cases, panel a is the estimate of μ in (3). The first GCM EPP accounts for 92% of the GCM-attributed variation and shows a gentle north–south gradient; the second EPP is much less important. By contrast, the first RCM EPP—accounting for 87% of the RCM-related variation—clearly picks out the pattern corresponding to the effects of urban heat islands and topography. This demonstrates that the uncertainty regarding these effects comes predominantly from the RCMs, and in fact that it is the dominant pattern of RCM-related variation in the ensemble. Moreover, the EPP scores associated with this pattern enable us to identify the RCMs with the smallest and largest such effects, which are RACMO22E and REMO2015 respectively. For users with a particular interest in U.K. summer maximum temperatures in urban heat islands, this analysis therefore helps to identify the EuroCORDEX runs spanning the range of relevant historical biases. Fig. 3. EPP analysis for the estimated GCM effects in the 64-member EuroCORDEX ensemble, for bias in summer mean daily maximum surface temperature from 1989 to 2008. Shown are (a) the ensemble mean; (b),(c) maps of the first and second GCM EPPs, respectively; and (d) the associated EPP scores for each GCM in the ensemble. Subtitles for (b) and (c) indicate the percentage contributions of each EPP to the GCM-attributed variation. Citation: Journal of Climate 37, 3; 10.1175/JCLI-D-23-0089.1 Fig. 4. EPP analysis for the estimated RCM effects in the 64-member EuroCORDEX ensemble, for bias in summer mean daily maximum surface temperature from 1989 to 2008. The panels are directly analogous to those in Fig. 3. Citation: Journal of Climate 37, 3; 10.1175/JCLI-D-23-0089.1 5. Discussion a. Summary of the proposed methodology EPPs are designed to enable rapid exploration of structured climate model ensembles, particularly when the outputs of interest are high-dimensional. They are descriptive rather than inferential: in particular, we do not attempt to quantify uncertainties about sources of variation or to assess their “statistical significance,” which require more time-consuming methods as reviewed in the introduction. Nor do they aim to estimate the underlying properties of a model or system: this contrasts, for example, with superficially similar techniques that have been used to study the dynamical properties of individual climate models (e.g., Maher et al. 2018; Haszpra et al. 2020; Bódai et al. 2021). Estimation of system properties requires appropriately designed ensembles of sufficient size. By contrast, an EPP analysis is essentially an arithmetical decomposition of the ensemble: for balanced ensembles the decomposition is exact, while for unbalanced ensembles it relies on minimal assumptions corresponding to representations such as Eq. (3). For the descriptive analysis of regional ensembles, (3) itself is relatively uncontentious: it makes no distributional assumptions, although one potential restriction is the additive structure in which the effect of a given RCM is the same regardless of which GCM is driving it. This assumption can be relaxed if the ensemble contains multiple runs of each GCM and RCM combination: the supplemental material provides details. The model provides interpretable summaries of the GCM and RCM effects via the least squares coefficient estimates, and provides a framework for reweighting contributions to the SSCPs in unbalanced ensembles via Eqs. (9) and (10). This reweighting does not require the estimation of missing ensemble members, nor does it require that members are discarded to obtain a balanced subset. An open question, however, is to assess the uncertainty associated with the use of $T†,TG†,TR†$, and $TE†$ in place of $T,TG,TR$, and T[E]. A related question is considered by Christensen and Kjellström (2022), who use heuristic arguments to examine the implications of missing ensemble members for estimation of the coefficient vectors ({α[g]} and {β[r]} in the current context). It is not obvious, however, that these arguments can be extended to examine the effect on the partitioning of variation. Although EPP analysis is designed primarily for descriptive purposes, it is natural to ask how it is affected by sources of variation that are not considered explicitly—for example, Eq. (3) contains no direct representation of internal variability, because the EuroCORDEX ensemble does not provide multiple runs of each GCM and RCM pair with varying initial conditions. In such cases the associated unmodelled or unrepresented variation is subsumed into the residual term $T$[E] in (8), or its estimate $TE†$ in (10). A consequence is that if, as in Fig. 2, the residuals account for only a small proportion of the total variation, then the unmodelled sources must themselves contribute relatively little. b. How many EPPs? In Fig. 3, most variation in the GCM effects is dominated by the first two EPPs, which represent respectively a spatial monopole and dipole. A reviewer has pointed out that this situation is common, and queried whether higher-order EPPs may reveal more nuances of structure. We have investigated this (details are in the accompanying code—see the data availability statement). The higher-order EPPs are relatively unimportant (e.g., GCM EPPs 3 and 4 contribute respectively 0.9% and 0.3% of the GCM-attributed variation) and, moreover, exhibit no interpretable spatial patterns. These results are typical for GCM EPPs in our experience—although the monopole/dipole pattern is not so typical for RCM EPPs as exemplified by Fig. 4. At some level, a lack of “interesting” higher-order structure is itself noteworthy: for example, it suggests that identification of representative ensemble members can be done using just the first two EPP scores for each source of variation. Further investigation is needed, however, to determine whether these findings can be replicated across different ensembles, regions, time periods, and quantities of interest. c. Alternative measures of variation To summarize the overall magnitudes of the SSCP matrices involved in the partitioning of total variation, the development above uses their traces. Although this approach is appealing for its relationship with the Frobenius norm of the centered data matrix, it neglects the off-diagonal elements of the SSCPs. These elements are related to the correlations between neighboring locations and thus, in principle, contain information about the spatial extent of “typical” differences between ensemble members. Alternative summaries of the SSCPs have been considered in the MANOVA literature (e.g., Huberty and Olejnik 2006, chapter 3), albeit justified by the assumption (not made in the development above) that the residual vectors [the {ε[rg]} in (3)] have multivariate normal (Gaussian) distributions. One such alternative derives from fact that in the Gaussian case the least squares MANOVA coefficient estimates are also the maximum likelihood estimates (Krzanowski 1988, section 15.2); hence fitted models can be summarized in terms of their maximized log-likelihoods. In particular, the scaled deviance for a model is defined as twice the difference between its maximized log-likelihood and the highest log-likelihood attainable (i.e., the log-likelihood from a model that fits the ensemble outputs perfectly). The scaled deviance is a measure of “lack of fit”: for example, in linear regression models it is proportional to the residual sum of squares (Davison 2003, section 10.2). For a two-way additive MANOVA such as Eq. (3), it can be shown (see the supplemental material) that in a balanced ensemble the scaled deviance can be partitioned into contributions from the GCMs, RCMs, and residuals in exactly the same way as the total trace, and that the proportions of scaled deviance attributable to the GCMs and RCMs are trace(Σ^−1$T$[G])/trace(Σ^−1$T$) and trace(Σ^−1$T$[G] )/trace(Σ^−1$T$), respectively. Estimated extensions to unbalanced ensembles using (10) are immediate. Some care is needed with the calculations in practice however, as described in the supplemental material. We have applied the approach to several quantities derived from the EuroCORDEX ensemble: in all cases, a high proportion of the deviance was attributable to the RCMs, even in situations where the GCMs dominated the trace-based partitioning of variation. The deviance-based partitioning accounts for the spatial correlation structure of the ensemble members via the Σ matrix: any differences compared with the trace-based partitioning must therefore be associated with this correlation structure, implying that the ensemble members contain differing spatial “patches” of above- and below-average values that are associated with inter-RCM variation. This implication is unsurprising and suggests that the trace-based partitioning of variation yields more useful insights than a partitioning based on deviances. Further work is needed to determine whether this conclusion holds in general, or whether it is specific to the ensembles and study region considered in this paper. d. Extensions and other potential applications Our example considers maps of a single climate index (bias in tasmax). The methodology can also be applied to multiple indices simultaneously: all that is required is that the ensemble data matrix $Y$ contains the quantities of interest. For example, if a user wishes to select regional ensemble members for use with an impacts model taking mean winter temperature and total winter precipitation as inputs, one option is to carry out an EPP analysis of the n[⋅⋅] × 2S data matrix containing the relevant values of both temperature and precipitation. An intermodel EOF analysis along these lines is considered by Zhou et al. (2020). In cases involving variables with different units of measurement however, it is desirable to standardize the data for each variable prior to analysis so that the results are not dominated by the contributions from individual variables: see Krzanowski (1988, section 2.2) for a discussion of this in the closely related context of principal components analysis. In climate science, it is also common to standardize each index on a per-grid-square basis before calculating SVDs, when performing PCA or similar. The appropriateness of this for an EPP analysis depends on the context. We did not do it, because our goal was to understand the spatial variation in the ensemble: for example, standardizing the EuroCORDEX temperature biases individually for each grid square would have removed the interesting excess variation associated with urban heat islands and topography in the first panel of Fig. 2. The potential applications of EPP analysis extend beyond the regional ensemble example considered above. Consider, for example, a CMIP ensemble with different numbers of runs for each model (see references in section 1 ). The EPP analysis of such an ensemble starts from the representation now denotes the output from the th run of the th CMIP GCM ( = 1, …, represents an overall mean; represents a systematic departure from this mean for the th GCM; and represents residual variation. Note that is the direct analog of : the least squares estimates of are now respectively, where is the mean of the ensemble members from the th GCM. An EPP analysis thus focuses on the SVDs of the and, if relevant, the residuals . This is essentially the intermodel EOF approach taken, without formal justification, by Yim et al. (2016) section 1 Moving beyond regional and single-scenario CMIP ensembles, the supplemental material describes extensions to ensembles with more complex structures. For example, the approach could be used to explore the spatial structure of GCM- and scenario-specific variation in the CMIP ensembles which, as noted above, are often highly unbalanced: a simple application in this setting would use the GCM EPPs to characterize (dis)similarities between models in terms of their spatial patterns of projected future change. A more sophisticated analysis might focus on scenario effects: in such an analysis it may be reasonable to expect that the first scenario EPP will correspond to an overall pattern of change, and for the corresponding scenario-specific EPP scores to be related to some measure of net radiative forcing. Any departures from this expected pattern could yield interesting insights into the dynamics of the models. Other potential applications are to ensembles in which a single model is used to obtain projections for each of a set of emissions scenarios, starting from each of a common collection of carefully chosen initial conditions: here, an EPP analysis of the residual/ interaction term in an additive representation of scenario and initial condition effects could potentially reveal information about the state-dependence of climate change signals. EPPs can also be used to identify gaps in an existing structured ensemble, and hence to identify design priorities for additional runs. For example, in a regional ensemble each member can be summarized using the first EPP scores for the corresponding RCM and driving GCM respectively: the ensemble structure can then be visualized as a scatterplot of the corresponding pairs of scores. Such a plot will reveal combinations of scores—and hence of characteristic modes of behavior—that are not well represented and hence could be prioritized in subsequent ensemble updates. Finally, we note that EPP analysis has potential applications beyond climate model ensembles. One such application is to gridded data products providing estimates of quantities that are not observed directly at the locations of interest: in such settings, one way to characterize the estimation uncertainty in the data product is to provide multiple samples from its joint uncertainty distribution. The uptake of such techniques by data product providers is currently low, although they are likely to become more widely available in the future (Chandler et al. 2012). EPPs provide one possible route for data product users to choose informed and representative subsets of samples, enabling uncertainty to be propagated through their subsequent analyses. This research was funded under the U.K. Climate Resilience programme, which is supported by the UKRI Strategic Priorities Fund. The programme is co-delivered by the Met Office and NERC on behalf of UKRI partners AHRC, EPSRC, ESRC. The authors acknowledge the World Climate Research Programme’s Working Group on Regional Climate, and the Working Group on Coupled Modelling, former coordinating body of CORDEX and responsible panel for CMIP5. We also thank the climate modelling groups (see column headings in Fig. 1) for producing and making available their model output. We also acknowledge the Earth System Grid Federation infrastructure, an international effort led by the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison, the European Network for Earth System Modelling and other partners in the Global Organisation for Earth System Science Portals (GO-ESSP). Finally, we thank the editor and three reviewers for their careful reading and constructive comments on an earlier draft of the paper. Data availability statement. All figures and analysis can be reproduced using python scripts and example data linked from https://www.ucl.ac.uk/statistics/research/eurocordex-uk. The scripts also include an additional example, demonstrating the EPP analysis of an unstructured ensemble. • Bódai, T., G. Drótos, K.-J. Ha, J.-Y. Lee, and E.-S. Chung, 2021: Nonlinear forced change and nonergodicity: The case of ENSO-Indian monsoon and global precipitation teleconnections. Front. Earth Sci., 8, 599785, https://doi.org/10.3389/feart.2020.599785. • Cannon, A. J., 2015: Selecting GCM scenarios that span the range of changes in a multimodel ensemble: Application to CMIP5 climate extremes indices. J. Climate, 28, 1260–1267, https://doi.org/ • Casajus, N., C. Périé, T. Logan, M.-C. Lambert, S. de Blois, and D. Berteaux, 2016: An objective approach to select climate scenarios when projecting species distribution under climate change. PLOS ONE, 11, e0152495, https://doi.org/10.1371/journal.pone.0152495. • Chandler, R. E., P. Thorne, J. Lawrimore, and K. Willett, 2012: Building trust in climate science: Data products for the 21st century. Environmetrics, 23, 373–381, https://doi.org/10.1002/ • Chen, D., and Coauthors, 2023: Framing, context, and methods. Climate Change 2021: The Physical Science Basis, V. Masson-Delmotte et al., Eds., Cambridge University Press, 147–286, https:// • Christensen, O. B., and E. Kjellström, 2020: Partitioning uncertainty components of mean climate and climate change in a large ensemble of European regional climate model projections. Climate Dyn., 54, 4293–4308, https://doi.org/10.1007/s00382-020-05229-y. • Christensen, O. B., and E. Kjellström, 2022: Filling the matrix: An ANOVA-based method to emulate regional climate model simulations for equally-weighted properties of ensembles of opportunity. Climate Dyn., 58, 2371–2385, https://doi.org/10.1007/s00382-021-06010-5. • Collins, M., B. B. B. Booth, G. R. Harris, J. M. Murphy, D. M. H. Sexton, and M. J. Webb, 2006: Towards quantifying uncertainty in transient climate change. Climate Dyn., 27, 127–147, https:// • Davison, A. C., 2003: Statistical Models. Cambridge University Press, 726 pp. • Evin, G., B. Hingray, J. Blanchet, N. Eckert, S. Morin, and D. Verfaillie, 2019: Partitioning uncertainty components of an incomplete ensemble of climate projections using data augmentation. J. Climate, 32, 2423–2440, https://doi.org/10.1175/JCLI-D-18-0606.1. • Evin, G., S. Somot, and B. Hingray, 2021: Balanced estimate and uncertainty assessment of European climate change using the large EURO-CORDEX regional climate model ensemble. Earth Syst. Dyn., 12 , 1543–1569, https://doi.org/10.5194/esd-12-1543-2021. • Faraway, J. J., 2014: Linear Models with R. 2nd ed. Chapman and Hall/CRC, 286 pp. • Gentle, J. E., 2007: Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer, 530 pp. • Haszpra, T., M. Herein, and T. Bódai, 2020: Investigating ENSO and its teleconnections under climate change in an ensemble view—A new perspective. Earth Syst. Dyn., 11, 267–280, https://doi.org/ • Huberty, C. J., and S. Olejnik, 2006: Applied MANOVA and Discriminant Analysis. John Wiley and Sons, 528 pp. • Krzanowski, W., 1988: Principles of Multivariate Analysis. Oxford University Press, 563 pp. • Little, R., and D. Rubin, 2020: Statistical Analysis with Missing Data. 3rd ed. Wiley, 464 pp. • Lo, Y. T. E., D. M. Mitchell, S. I. Bohnenstengel, M. Collins, E. Hawkins, G. C. Hegerl, M. Joshi, and P. A. Stott, 2020: U.K. climate projections: Summer daytime and nighttime urban heat island changes in England’s major cities. J. Climate, 33, 9015–9030, https://doi.org/10.1175/JCLI-D-19-0961.1. • Maher, N., D. Matei, S. Milinski, and J. Marotzke, 2018: ENSO change in climate projections: Forced response or internal variability? Geophys. Res. Lett., 45, 11390–11398, https://doi.org/ • Maher, N., S. Milinski, and R. Ludwig, 2021: Large ensemble climate model simulations: Introduction, overview, and future prospects for utilising multiple types of large ensemble. Earth Syst. Dyn., 12, 401–418, https://doi.org/10.5194/esd-12-401-2021. • Meehl, G. A., C. Covey, T. Delworth, M. Latif, B. McAvaney, J. F. B. Mitchell, R. J. Stouffer, and K. E. Taylor, 2007: The WCRP CMIP3 multimodel dataset: A new era in climate change research. Bull. Amer. Meteor. Soc., 88, 1383–1394, https://doi.org/10.1175/BAMS-88-9-1383. • Rao, C. R., 1973: Linear Statistical Inference and its Applications. 2nd ed. Wiley, 625 pp. • Sain, S. R., D. Nychka, and L. Mearns, 2011: Functional ANOVA and regional climate experiments: A statistical analysis of dynamic downscaling. Environmetrics, 22, 700–711, https://doi.org/10.1002 • Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, https://doi.org/10.1175/BAMS-D-11-00094.1. • von Trentini, F., M. Leduc, and R. Ludwig, 2019: Assessing natural variability in RCM signals: Comparison of a multi model EURO-CORDEX ensemble with a 50-member single model large ensemble. Climate Dyn., 53, 1963–1979, https://doi.org/10.1007/s00382-019-04755-8. • von Trentini, F., E. E. Aalbers, E. M. Fischer, and R. Ludwig, 2020: Comparing interannual variability in three regional Single-Model Initial-condition Large Ensembles (SMILEs) over Europe. Earth Syst. Dyn., 11, 1013–1031, https://doi.org/10.5194/esd-11-1013-2020. • Wang, C., Y. Hu, X. Wen, C. Zhou, and J. Liu, 2020: Inter-model spread of the climatological annual mean Hadley circulation and its relationship with the double ITCZ bias in CMIP5. Climate Dyn., 55, 2823–2834, https://doi.org/10.1007/s00382-020-05414-z. • Yim, B. Y., H. S. Min, and J.-S. Kug, 2016: Inter-model diversity in jet stream changes and its relation to Arctic climate in CMIP5. Climate Dyn., 47, 235–248, https://doi.org/10.1007/ • Yip, S., C. A. T. Ferro, D. B. Stephenson, and E. Hawkins, 2011: A simple, coherent framework for partitioning uncertainty in climate predictions. J. Climate, 24, 4634–4643, https://doi.org/ • Zhou, S., G. Huang, and P. Huang, 2020: Inter-model spread of the changes in the East Asian summer monsoon system in CMIP5/6 models. J. Geophys. Res. Atmos., 125, 2020JD033016, https://doi.org/
{"url":"https://journals.ametsoc.org/view/journals/clim/37/3/JCLI-D-23-0089.1.xml","timestamp":"2024-11-07T04:40:28Z","content_type":"text/html","content_length":"859786","record_id":"<urn:uuid:5fb4d25c-4d03-4016-a921-daaeb60b5240>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00854.warc.gz"}
Resummation of fermionic in-medium ladder diagrams to all orders A system of fermions with a short-range interaction proportional to the scattering length a is studied at finite density. At any order an, we evaluate the complete contributions to the energy per particle E-(kf) arising from combined (multiple) particle-particle and hole-hole rescatterings in the medium. This novel result is achieved by simply decomposing the particle-hole propagator into the vacuum propagator plus a medium-insertion and correcting for certain symmetry factors in the (n-1)-th power of the in-medium loop. Known results for the low-density expansion up to and including order a4 are accurately reproduced. The emerging series in akf can be summed to all orders in the form of a double-integral over an arctangent function. In that representation the unitary limit a→∞ can be taken and one obtains the value ξ=0.5067 for the universal Bertsch parameter. We discuss also applications to the equation of state of neutron matter at low densities and mention further extensions of the resummation method. • Bertsch parameter • Many-body theory • Resummation of ladder diagrams Dive into the research topics of 'Resummation of fermionic in-medium ladder diagrams to all orders'. Together they form a unique fingerprint.
{"url":"https://portal.fis.tum.de/en/publications/resummation-of-fermionic-in-medium-ladder-diagrams-to-all-orders","timestamp":"2024-11-12T09:23:22Z","content_type":"text/html","content_length":"51048","record_id":"<urn:uuid:8dc1476f-506f-47e2-a833-d9a5cd356b69>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00361.warc.gz"}
Tips and Techniques Getting Started. The number of interesting and attractive patterns is almost limitless, so it is pretty to easy to find new patterns that you've never seen before. The best way to get started is to use some of the default saved patterns and make small changes to the parameters to see what happens. You should clear the figure after each change so you can see what the new pattern looks like. Attractive Patterns. The most attractive patterns are produced when the program draws complex patterns that repeat with a slight variation after each cycle and slowly evolve into different shapes. Producing these kinds patterns requires different strategies depending on which mode your are in. Two Wheel Mode. In Two Wheel mode, the slight variation is accomplished by having difference in the Wheel Size or Wheel RPM. The difference in these values changes the phase relationship between the two wheels with every cycle, which varies the pattern with every cycle. Planetary Modes. In the two Planetary modes, the wheels don't need to be different sizes or turning at different speeds to cause a variation. The variation is accomplished by the relationship between the small wheels and large wheels. For example, if the small wheels have diameter of 50, while the big wheels have a diameter of 100, the small circles will rotate rotate twice degree when the big circles rotates once. However, because the small wheel is rotating along with the big wheel, the relationship between the small wheel and the big wheel is constantly changing and the pattern produces three loops. If the sizes of the wheels are not an even multiples of each other, the relationship become much more complicated. For example, the image to right shows what happens if the size of the small wheel is by just to 52. Instead of a loop with three lobes, that pattern has 38 lobes. Spirographs. Patterns like this are called Spirographs or Guillochι patterns and they are a good starting point for more complex patterns. Here are the basic setting you need to use to produce a Mode: Outer or Inner Planetary Mode Circle Size: Set both circles to the same size, preferably by using the Lock button. Wheel RPM: Set both wheels to the same speed, preferably by using the Lock button. Wheel Angles: Set both wheels to an angle of zero. Using these settings will produce circular, symmetrical patterns. As pointed about above, the number of lobes is controlled by size of the circle. Here are some circles sizes and the number of lobes they produce. 80.0 = 6 Lobes 70.0 = 13 Lobes 65.0 = 27 Lobes 50.0 = 3 Lobes 75.0 = 5 Lobes 68.0 = 33 Lobes 64.0 = 34 Lobes 40.0 = 8 Lobes 72.0 = 31 Lobes 66.6 = 4 Lobes 60.0 = 7 Lobes 33.3 = 5 Lobes More Complex Patterns. The spirograph patterns are enormously sensitive to the starting angles of the wheels. The image to the right was created by changing the starting angles of main and planetary wheels to 28 and 68 degree respectively. Lots of complex and beautiful patterns can be derived by starting with a spirograph and modifying the starting angles of the wheels. Some patterns take many cycles to develop, so don't give up on a pattern too soon.
{"url":"https://fountainware.com/Funware/SuperSpiro/tips_and_techniques.htm","timestamp":"2024-11-14T10:53:41Z","content_type":"text/html","content_length":"24196","record_id":"<urn:uuid:371d84ef-b5a5-464d-b7e2-9dbf567505cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00782.warc.gz"}
Finding the Determinant of the Scalar Multiple of a Matrix Question Video: Finding the Determinant of the Scalar Multiple of a Matrix Mathematics • First Year of Secondary School If π ΄ is a square matrix of order 2 Γ 2 and |2π ΄| = 12, then |3π ΄^(π )| = οΌΏ. Video Transcript If π ΄ is a square matrix of order two by two and the determinant of two π ΄ is equal to 12, then the determinant of three times π ΄ transpose is equal to what. Is it option (A) 18, option (B) 24, option (C) 27? Or is it option (D) 36? In this question, weβ re given some information about the determinant of a two-by-two matrix π ΄. Weβ re told the determinant of two π ΄ is equal to 12, and we need to use this information to determine the determinant of three times the transpose of π ΄. Since weβ re told that π ΄ is a two-by-two matrix, we might be tempted to start by defining π ΄ to be a matrix of four unknowns. We could then substitute our expression for π ΄ into our equation to find an expression for the determinant of π ΄ and then try to use this to find an expression for the determinant of three times the transpose of π ΄. And this would work; however, it would be very complicated. Instead, we need to notice that our equations involve determinants of matrices. So instead, weβ ll start by simplifying by using the properties of determinants. Weβ ll start by simplifying the expression the determinant of two times π ΄. And to do this, weβ ll start by recalling the following property. For any square matrix π ΅ of order π by π and any scalar value π , the determinant of π times π ΅ is equal to π to the π th power multiplied by the determinant of π ΅. In our case, π ΄ is a matrix of order two by two. So, our value of π is two, giving us the determinant of two π ΄ is equal to two squared multiplied by the determinant of π ΄, which is, of course, four times the determinant of π ΄. We can then substitute this expression for the determinant of two π ΄ into the equation weβ re given in the question. This then gives us that four times the determinant of π ΄ is equal to 12. And we can solve for the determinant of π ΄. We divide both sides of the equation by four. This gives us the determinant of π ΄ is equal to However, weβ re not asked to find the determinant of π ΄; weβ re asked to find the determinant of three times the transpose of π ΄. To do this, letβ s try simplifying this expression by using the properties of determinants. First, we recall when we take the transpose of a matrix, we switch the rows with the columns. So, the transpose of matrix π ΄ is also a matrix of order two by two. This means we can once again apply the same property. π ΄ transpose is a two-by-two matrix. Therefore, the determinant of three multiplied by the transpose of π ΄ is equal to three squared multiplied by the determinant of π ΄ transpose. And we can simplify this to get nine multiplied by the transpose of π ΄, but we can simplify this expression even further by using another one of our properties of determinants. We recall for any square matrix π ΅, the determinant of π ΅ transpose is just equal to the determinant of π ΅. And we know π ΄ transpose is a square matrix, so we can replace this with the determinant of π ΄ to get nine multiplied by the determinant of π ΄. And we know what the determinant of matrix π ΄ is. Itβ s equal to three. Therefore, we can just substitute three for the determinants of π ΄ to get nine times three, which is equal to 27, which we can see is given as option (C). Therefore, weβ ve shown if π ΄ is a square matrix of order two by two and the determinant of two π ΄ is equal to 12, then the determinant of three times the transpose of π ΄ is equal to 27.
{"url":"https://www.nagwa.com/en/videos/737171846537/","timestamp":"2024-11-06T12:25:37Z","content_type":"text/html","content_length":"253281","record_id":"<urn:uuid:99b512f9-b03c-4173-98a1-c7eb8410ebf4>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00710.warc.gz"}
How to Use Python to Read Excel Formula Last Updated on March 10, 2022 by Jay Sometimes we want to use Python to read Excel formula instead of the values, and the openpyxl library offers a way to do that easily. Why do we need to read formulas? Because we might want to copy the formula to another place while keeping the column and row unchanged, or we want to tweak the formula but we first need to know what it is. openpyxl is a third-party Python library to work with Excel files. We can download it using pip. Open up a command prompt, then type the following. If you already have pandas, it’s likely openpyxl is already installed. pip install openpyxl Sample Excel file We are going to use a simple Excel file for this illustration so no download is required. See screenshot below: Python Read Excel Formula From Spreadsheet There are three formulas on the <formula> tab in cell B2, B4, and D2. Let’s use openpyxl to read the Excel sheet into Python: import openpyxl wb = openpyxl.load_workbook('Book1.xlsx') ws = wb['formula'] Checking cell B2’s data_type, we get ‘f’ which means this cell contains a formula. Note the .value attribute returns the actual Excel formula, which is ‘=PI()’. Also note the formula cell “value” is actually a String data type in Python. This is nice, but what if our spreadsheet has many formulas and we want actual values instead? Python Read Excel FIle As Value-Only The openpyxl.load_work() method has an optional argument data_only that is set to False by default. We can turn this on by setting it to True, then we’ll get values only instead of formulas. Note the below code, this time the data_type shows ‘n’ which means a number. And indeed we got the value of Pi, which is 3.14159… import openpyxl wb = openpyxl.load_workbook('Book1.xlsx', data_only = True) ws = wb['formula'] Additional Resources How to Access Excel File In Python
{"url":"https://pythoninoffice.com/how-to-use-python-to-read-excel-formula/","timestamp":"2024-11-08T07:28:09Z","content_type":"text/html","content_length":"70159","record_id":"<urn:uuid:b746fde5-6b26-49a3-9c6d-dc201f21dd08>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00761.warc.gz"}
Mastering Formulas In Excel: Which Of The Following Is Not A Valid For Formulas are the lifeblood of Excel. They allow us to perform complex calculations and manipulate data with ease, turning a simple spreadsheet into a powerful tool for analysis and decision-making. However, not all formulas are created equal, and it's important to understand which ones are valid and which ones are not. In this blog post, we'll take a closer look at the topic of invalid formulas in Excel and explore which of the following is not a valid formula. Key Takeaways • Formulas are the lifeblood of Excel, allowing for complex calculations and data manipulation. • Understanding the difference between valid and invalid formulas is crucial for accurate analysis and decision-making. • Common mistakes in formula creation can result in invalid formulas with potential for significant implications. • Recognizing and correcting invalid formulas is essential for maintaining data accuracy and reliability in Excel. • Best practices for creating and double-checking formulas can optimize efficiency and reliability in Excel. Understanding Excel Formulas In today's digital age, Excel has become an indispensable tool for organizations and individuals alike, for managing and analyzing data. One of the key functionalities that make Excel so powerful is its ability to perform complex calculations using formulas. Let's explore the definition of Excel formulas, their importance, and some commonly used examples. A. Definition of Excel formulas Excel formulas are expressions that perform calculations on values in a worksheet. They can range from simple arithmetic operations to more complex functions and references. Formulas in Excel always begin with an equal sign (=) followed by the expression to be calculated. B. Importance of accurate formulas for data analysis and reporting Accurate formulas are crucial for ensuring the reliability and validity of data analysis and reporting. When working with large datasets, even a small error in a formula can lead to significant discrepancies in the results. Therefore, mastering formulas in Excel is essential for producing accurate insights and reports. C. Examples of commonly used Excel formulas Here are some of the most commonly used Excel formulas: • SUM: This formula adds up a range of numbers. • AVERAGE: This formula calculates the average of a range of numbers. • IF: This formula performs a logical test and returns one value if the test is true and another if it is false. • VLOOKUP: This formula searches for a value in the first column of a table and returns a value in the same row from another column. • INDEX and MATCH: These formulas are used in combination to look up a value in a table based on the row and column headings. Valid vs. Invalid Formulas When working with Excel, mastering formulas is essential for efficient data manipulation and analysis. However, it's important to distinguish between valid and invalid formulas to ensure accurate A. Explanation of what constitutes a valid formula in Excel • Formulas that start with an equal sign (=) In Excel, all formulas must begin with an equal sign to indicate that the cell contains a calculation. Omitting the equal sign will result in an invalid formula. • Correct syntax and references Valid formulas in Excel must follow the correct syntax and reference the appropriate cells or ranges. Any mistake in the syntax or referencing will render the formula invalid. • Use of supported functions and operators Excel supports a wide range of functions and operators for performing calculations. Using unsupported functions or operators will lead to an invalid formula. B. Common mistakes that result in invalid formulas • Typographical errors Misspelling function names or cell references, or using incorrect punctuation can lead to a formula being invalid. • Incorrect cell references Mistakenly referencing non-existent or incorrect cells can cause an Excel formula to become invalid and produce erroneous results. • Missing or misplaced parentheses Failure to include or properly place parentheses in a formula can result in an invalid calculation. C. Implications of using invalid formulas in Excel • Erroneous results Using invalid formulas in Excel can lead to inaccurate calculations, potentially compromising data integrity and decision-making. • Data inconsistency Invalid formulas may result in inconsistencies across spreadsheets or reports, leading to confusion and errors in data analysis. • Complexity in error detection Identifying and rectifying invalid formulas can be time-consuming and challenging, especially in large and complex Excel workbooks. Identifying Invalid Formulas When working with Excel, it is important to be able to identify invalid formulas in your spreadsheets. Whether it's a simple typing error or a more complex problem, understanding how to recognize and address invalid formulas is essential for accurate data analysis. This chapter will discuss tips for recognizing invalid formulas, common errors that lead to invalid formulas, and the tools available in Excel for identifying errors in formulas. A. Tips for recognizing invalid formulas • Check for syntax errors: One of the most common reasons for invalid formulas is a syntax error. This could be a missing or misplaced parenthesis, incorrect use of operators, or a typo in a function name. • Look for referencing errors: Invalid formulas can also occur due to incorrect cell references. Make sure that all cell references are accurate and have been updated correctly if the data has been moved or edited. • Understand error messages: Excel provides error messages to help identify and fix invalid formulas. Take the time to understand what these error messages mean and how to address them. B. Common errors that lead to invalid formulas • Typing errors: Simple typing mistakes such as misspelling a function name or using the wrong syntax can lead to invalid formulas. • Incorrect cell references: Failing to update cell references when copying or moving formulas can result in invalid formulas. • Using unsupported functions: Excel has a wide range of functions, but not all of them may be supported or appropriate for the data you are working with. Using unsupported functions can lead to invalid formulas. C. Tools available in Excel for identifying errors in formulas • Error checking: Excel has a built-in error checking feature that helps identify and address errors in formulas. This includes options to trace precedents and dependents, as well as evaluate • Formula auditing: The formula auditing tools in Excel provide a range of options for tracing and evaluating formulas, including the ability to highlight errors and inconsistencies. • Named ranges: Using named ranges in your formulas can help prevent errors and make it easier to identify and fix invalid formulas. Examples of Invalid Formulas When working with Excel, it's important to understand what constitutes a valid formula and what does not. Here are some examples of invalid formulas and their corresponding correct versions, along with explanations of the errors and the potential consequences of using invalid formulas. Sample invalid formulas and their corresponding correct versions • =SUM(A1, B1) - This is an invalid formula because the correct syntax for the SUM function is =SUM(A1:B1). • =IF(A1 > 10, "Yes", "No") - This is an invalid formula because the correct syntax for the IF function requires a comma after the logical test: =IF(A1 > 10, "Yes", "No"). • =AVERAGE(A1, A2, A3) - This is an invalid formula because the correct syntax for the AVERAGE function is =AVERAGE(A1:A3). Explanation of the errors in the invalid formulas These invalid formulas result from incorrect syntax or arguments within the functions. Understanding the correct syntax for each function is crucial in ensuring that the formulas work as intended. Consequences of using the invalid formulas in Excel Using invalid formulas in Excel can lead to incorrect calculations, display errors, and overall data inconsistencies. It is essential to double-check and validate all formulas to avoid any potential issues in your spreadsheets. Best Practices for Creating Formulas in Excel When working with Excel, creating accurate and efficient formulas is crucial for ensuring the reliability of your data analysis. Here are some best practices to keep in mind when creating formulas in A. Tips for avoiding common mistakes in formula creation • Use cell references: Instead of hardcoding values into your formulas, use cell references to make your formulas more flexible and easier to update. • Check for errors: Use the error-checking features in Excel to identify and fix any errors in your formulas, such as circular references or #DIV/0! errors. • Avoid unnecessary complexity: Keep your formulas simple and easy to understand to reduce the likelihood of errors. B. Importance of double-checking formulas for accuracy • Verify input data: Before creating a formula, double-check the input data to ensure its accuracy and completeness. • Test with sample data: Test your formulas with sample data to verify their accuracy before applying them to larger datasets. • Use the evaluate formula tool: Excel's evaluate formula tool allows you to step through the calculation process to identify any inaccuracies or errors. C. Ways to optimize formulas for efficiency and reliability • Minimize volatile functions: Volatile functions, such as NOW() and RAND(), can slow down your spreadsheet, so use them sparingly. • Use array formulas judiciously: While array formulas can be powerful, they can also impact performance, so use them only when necessary. • Consider using helper columns: Instead of creating complex nested formulas, consider using helper columns to break down your calculations into smaller, more manageable steps. Recap: Mastering formulas in Excel is crucial for anyone looking to efficiently analyze and report data. Understanding which formulas are valid and which are not is essential for accurate calculations and results. Final thoughts: Using valid formulas is extremely important in Excel as it ensures the integrity and accuracy of your data analysis and reporting. By familiarizing yourself with the valid formulas and understanding which ones are not, you can avoid errors and produce more reliable results. So, take the time to master the formulas in Excel and watch your data analysis and reporting skills ONLY $99 Immediate Download MAC & PC Compatible Free Email Support
{"url":"https://dashboardsexcel.com/blogs/blog/mastering-formulas-in-excel-which-of-the-following-is-not-a-valid-formula","timestamp":"2024-11-14T23:58:36Z","content_type":"text/html","content_length":"215559","record_id":"<urn:uuid:e842ec01-5e9c-4dbf-be7c-558f61068d9d>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00628.warc.gz"}
Radioactive Decay and Half Life Simulation Redwood High School Name: Period: t was not until the end of the 1800’s that scientists found a method for determining the actual age of rocks, minerals and fossils. They found that radioactive elements decay, or change to other elements by giving off particles and energy at a constant and measurable rate. Scientists also found that for each different radioactive element, the rate of change was fixed and not at all affected by such things as pressure or temperature of the surrounding environment. The decay process is so regular that it can literally be used to determine the passage of time, like the ticking of a clock. In this activity, you will use a mathematical model to study the process of radioactive decay and examine how it can be used to determine the age of ancient earth materials - particularly fossils. It will be helpful to remember that the term “half-life” refers to the time required for half of the atoms of a given mass of substance to decay to a stable end product. Focus Questions • What are half-life and radioactive decay and how are they connected? • What is the relationship between specific elements and their half-lives? • How can radioactive decay and half life be used to calculate the absolute age of fossils? 1. Obtain the necessary materials for your lab group from the supply table. This should include a cardboard box with a lid (either loose or attached) and 100 popcorn kernels. 2. Count the popcorn kernels to be certain there are exactly 100 kernels in your box. Also check to be sure that each side on the inside of the box is numbered 1, 2, 3, or 4. 3. Cover the box. Hold it level and give it a sharp, single shake (up and down, not side to side). 4. Open the box and remove all the kernels that have the small end pointed toward side 1. Count and record the removed corn. Subtract the removed number from the number in the box before the shake (100). Then record the number of remaining kernels in the proper place on the data table. Do not return the removed corn kernels to the box. 5. Replace the lid, shake the box again, and once more remove the corn kernels pointing toward side 1. Count and record the removed corn. Subtract this figure from the number remaining from the previous shake. Record this new figure in the data table under corn remaining. 6. Repeat this process until all of the corn kernels have been removed from the box. You may need to have to add extra rows or some rows may not be used…it will be different for each group. Please include “shakes” where the corn removed is zero. 7. Return all 100 pieces of corn to the box, cover it and repeat the above procedure except, this time, after each shake, remove the kernels that are pointed toward both side 1 and side 2. Count the corn remaining and record each of your observations in the data table. 8. Continue this procedure until all of the corn has been removed from the box. 9. Finally, return all of the corn to the box and repeat the entire procedure for a third time, except this time remove the kernels that are facing sides 1, 2 and 3. Count the corn remaining and record each of your observations in the data table. Repeat until all the corn has been removed from the shoebox. Radioactive Decay Simulation - Data Table Starting Corn Count: Shake # / Side 1 Test / Side 1 and 2 Test Side 1, 2 and 3 Test corn removed / corn remaining / corn removed / corn remaining / corn removed / corn remaining Analysis and Conclusions 1. Construct a graph to illustrate your data on a separate piece of graph paper. The graph should compare number of shakes vs. number of corn kernels remaining. The graph should include the data from all three trials as recorded in each column of your data table. (Remember, bar graphs are used for discontinuous data, line graphs for continuous data.) Attach your graph to the back of your lab. 2. Write a few sentences below comparing the data produced from the three different tests. It will be helpful for you to write this while looking at your graph. 3. a) For the first data set (side 1 only) how many shakes were required before approximately half of the kernels were remaining in the box? b) For the second data set (sides 1 and 2) how many shakes were required before approximately half of the kernels were remaining in the box? c) For the third data set (sides 1, 2 and 3) how many shakes were required before approximately half of the kernels were remaining in the box? 4. Imagine you are a scientist who works to determine the age of fossils. Complete the following table, indicating what each of the components of the lab were simulating with respect to radioactive Component In Simulation What It Represents Corn kernels in box before simulation begins Corn kernels pointing towards any side (to be removed) Corn kernels remaining after any given shake A single shake The change from removing only side one kernels to either sides 1 and 2 or sides 1, 2, and 3 Half Life Use your new understanding of radioactive decay and half-life to perform the following calculations. 5. Suppose a radioactive element has a half-life of 30 days. a. How much of a 4 gram sample will be unchanged (still radioactive) after 60 days? (Show your work.) b. After 90 days? (Show your work.) c. After 120 days? (Show your work.) 6. Suppose a radioactive element has a half-life of 10,000 years. a. What percent of the material will be unchanged (still radioactive) after 20,000 years? (Show your work.) b. After 30,000 years? (Show your work.) 7. Create a graphic (which could be as simple as a data table) that demonstrates the following: 10 grams of a radioactive element, with a half life of 2 million years. Show how much (g) is remaining after 10 million years. 8. What are half-life and radioactive decay and how are they related? Don’t just define! 9. Why might the half-lives of different elements differ? Think about what is happening during the process of decay. 10. Describe how radioactive decay and half-life can be used to calculate the absolute age of fossils? Radioactive Decay and Half Life Simulation
{"url":"https://docsbay.net/doc/1005121/radioactive-decay-and-half-life-simulation","timestamp":"2024-11-13T05:11:23Z","content_type":"text/html","content_length":"19063","record_id":"<urn:uuid:cb3512dc-6e29-47cb-b59f-1ca850eaeb66>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00034.warc.gz"}
In the mid 70’s when people were playing around with early versions of computers, a mathematician was plotting the results of an equation containing a complex number, in a reiterative manner. And to his surprise, the results were quite fascinating: the patterns that emerged were beautiful to look at; but more interestingly, when you zoom into a small part of the pattern, it seemed to bring out a pattern that was so similar to the original. Similar, but different. And the mathematical discipline of fractals was born. It did not take long for people to recognize that this was an important discovery. Fractals had fractal dimensions. A river is not a straight line, mountains are not cones and clouds are not spherical. The new discipline brought out the fractal dimensions of natural structures. Mathematicians could paint trees, mountains, rivers and clouds with equations. And they looked more realistic than any painter could ever imagine. Scientists were suddenly seeing fractals everywhere. Terms such as scaling laws and power laws started appearing in almost all disciplines. Soon mathematicians realized a hidden complexity: fractals that keep changing their dimensions in different directions. And the notion of multi-fractals was born. This too spread to other disciplines like wild fire. Patterns in time and space that hitherto seemed too complex to explain suddenly became simple enough. Even the complexity of literature, music, painting and other art forms came under the purview of this simplicity. The study of science has always tried to model the present phenomena to predict the future. With different fields of the study of science evolving with the same goal, a modern branch of study has developed which finds its basics in the famous quote “The present determines the future, but the approximate present does not approximately determine the future”. Chaos Theory has recently turned fifty, celebrating more than half a century of flapping butterfly wings in Brazil and creating tornadoes in Texas. It was a meteorologist named Edward Lorenz who first outlined why seemingly consistent and knowable systems can still go wildly wrong. Fractals are geometric shapes that are very complex and infinitely detailed. You can zoom in on a section and it will have just as much detail as the whole fractal. They are recursively defined and small sections of them are similar to large ones. One way to think of fractals for a function f(x) is to consider x, f(x), f(f(x)), f(f(f(x))), f(f(f(f(x)))), etc. Fractals are related to chaos because they are complex systems that have definite properties. The word fractal was first introduced by Mandelbrot and Ness (1968) [1] and laid the foundations for fractal geometry. He also advanced fractals by showing that fractals cannot be treated as whole-number dimensions; they must instead have fractional dimensions. Calculation of fractal dimensions or rather measuring self-similarity has been a major area in the field of study of chaos.
{"url":"http://www.dgfoundation.in/chaos-fractals-and-non-linear-studies/","timestamp":"2024-11-13T09:29:04Z","content_type":"text/html","content_length":"15026","record_id":"<urn:uuid:f572008c-3172-45d5-ad62-43c835dabb25>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00534.warc.gz"}
1. FUNCTIONS OF A SINGLE REAL VARIABLE – Limits and continuity – Complete discussion includes proofs of all theorems in the body of the chapter as well as in the exercises. Horizonal asymptotes, limits at infinity, vertical asymptotes and infinite limits are discussed. 1. The limit of a function (56) 2. Theorems on limits of functions (64) 3. One side limits (73) 4. Infinite limits (78) 5. Limits at infinity (88) 6. Continuity of a function at a number (98) 7. Continuity of a composite function and continuity on an interval (107) 8. Continuity of the trigonometric functions and the squeeze theorem (114) 9. Proofs of some theorems on limits of functions (Supplementary) (122) 10. Additional theorems on limits of functions (131) An outline of calculus topics Found my outline! It’s the table of contents for the textbook The Calculus with Analytic Geometry, 6th Edition, by Louis M. Leithold. This was the text from which I learned everything of undergraduate calculus, from limits to Green’s functions and basis vectors and multiple integrals, well over a decade ago. My first copy had eventually become all dogeared and coffee-stained from hours of poring through the text over three semesters of coursework; the second copy fared no better, as it saw me through the last of three semesters, and one more taking in differential equations. I’d typed in the table of contents to serve as my weekly reading guide; those were busy days. What’s particularly good about TCWAG is that it provides clear step-by-step proofs of all theorems and major results, unlike free texts offered up by MIT through it’s Open Courseware program (see Calculus, by Gilbert Strang, at http://ocw.mit.edu/resources/res-18-001-calculus-online-textbook-spring-2005/textbook/). Only rarely did we encounter use of results that could not be developed as part of an introductory calculus course – these were clearly identified as, for example, in the case of the completeness axiom, which could only be fully developed in a course in real analysis. In all there are 18 chapters, which we’d covered at a rate of about six chapters per semester. This outline will serve as my guide to retrieving the calculus in the next few months. 1. FUNCTIONS OF A SINGLE REAL VARIABLE– Limits and continuity – Complete discussion includes proofs of all theorems in the body of the chapter as well as in the exercises. Horizonal asymptotes, limits at infinity, vertical asymptotes and infinite limits are discussed. 1. The limit of a function (56) 2. Theorems on limits of functions (64) 3. One side limits (73) 4. Infinite limits (78) 5. Limits at infinity (88) 6. Continuity of a function at a number (98) 7. Continuity of a composite function and continuity on an interval (107) 8. Continuity of the trigonometric functions and the squeeze theorem (114) 9. Proofs of some theorems on limits of functions (Supplementary) (122) 10. Additional theorems on limits of functions (131) 2. The derivative and differentiation– The classical geometrical interpretation of the derivative, detailed derivative evaluation, concise proofs of theorems on differentiation, derivatives of trig functions, and the Chain Rule. 1. The tangent line and the derivative (139) 2. Differentiability and continuity (148) 3. Theorems on differentiation of algebraic functions (156) 4. Rectilinear motion and the derivative as a rate of change (163) 5. Derivatives of the trigonometric functions (173) 6. The derivative of a composite function and the Chain Rule (181) 7. The derivative of the power function for rational exponents (190) 8. Implicit differentiation (195) 9. Related rates (199) 10. Derivatives of higher order (205) 3. Function extremaTechniques of graphing The differential – Theorems on extrema and graph behavior motivate a complete method for function graph sketching. Differentials are treated in this section, adjacent to differentiation subject matter. 1. Maximum and minimum function values (217) 2. Applications involving an absolute extremum on a closed interval (224) 3. Rolle’s theorem and the Mean-value theorem (230) 4. Increasing and decreasing functions and the first-derivative test (236) 5. Concavity and points of inflection (241) 6. The second-derivative test for relative extrema (249) 7. Drawing the sketch of the graph of a function (254) 8. Further treatment of absolute extrema and applications (260) 9. The differential (269) 10. Numerical solutions Newton’s method (277) 4. The definite integralIntegration – Indefinite integration, area, the definite integral, the Fundamental Theorem of the Calculus. Trapezoidal and parabolic rules, formulas for error bound 1. Antidifferentiation (286) 2. Some techniques of antidifferentiation (295) 3. Differential equations and rectilinear motion (303) 4. Area (312) 5. The definite integral (324) 6. Properties of the definite integral (331) 7. The mean-value theorem for integrals (340) 8. The fundamental theorems of the calculus (344) 9. Area of a plane region (352) 10. Numerical integration (supplementary) (359) 5. Applications of the definite integral– Evaluation techniques and principles are discussed, supported by concise motivation and explanations. Applications in physics. 1. Volumes by slicing, disks, and washers (374) 2. Cylindrical-shell method (383) 3. Length of arc of the graph of a function (388) 4. Center of mass of a rod (394) 5. Centroid of a plane region (400) 6. Work (407) 7. Liquid pressure (supplementary) (413) 6. Inverse, logarithmic and exponential functions– Includes a concise definition of irrational numbers. Applications. 1. Inverse functions (422) 2. Inverse function theorems Derivative of a function inverse (431) 3. The natural logarithmic function (439) 4. Logarithmic differentiation Integrals yielding ln(x) (449) 5. The natural exponential function (455) 6. Other exponential and logarithmic functions (463) 7. Applications (469) 8. First-order linear differential equations (supplementary) (481) 9. Review (492) 7. Inverse trigonometric and hyperbolic functions. 1. The inverse trigonometric functions (496) 2. Derivatives of the inverse trigonometric functions (503) 3. Integrals yielding inverse trigonometric functions (510) 4. Hyperbolic functions (514) 5. Inverse hyperbolic functions (supplementary) (523) 6. Review (527) 8. Techniques of integration– Computational methods encountered in practical problems. Crucial examples illustrate the principles involved. 1. Integration by parts (531) 2. Of powers of sine and cosine (537) 3. Of powers of tangent, cotangent, secant and cosecant (542) 4. By trigonometric substitution (545) 5. By partial fractions: linear (551) 6. By partial fractions: quadratic denom. fact (561) 7. Miscellaneous substitutions (566) 8. Integrals yielding inverse hyperbolic functions (supplementary) (570) 9. Review (575) 9. The conic sections and polar coordinates. 1. The parabola and translation of axes (578) 2. The ellipse (586) 3. The hyperbola (594) 4. Rotation of axes (604) 5. Polar coordinates (608) 6. Graphs of equations in polar coordinates (614) 7. Area of a region in polar coordinates (625) 8. A unified treatment of conic sections and polar equations of conics (629) 9. Tangent lines of polar curves (supplementary) (638) 10. Review (647) 10. Indeterminate forms, improper integrals, and Taylor’s formula– Concepts supporting a discussion on infinite series are assayed. Probability density function. 1. The indeterminate form 0/0 (651) 2. Other indeterminate forms (660) 3. Improper integrals with infinite limits of integration (665) 4. Other improper integrals (673) 5. Taylor’s formula (677) 6. Review (684) 11. INFINITE SERIES– Sequences and infinite series – Theorems. Tests for convergence of a series. 1. Sequences (687) 2. Monotonic and bounded sequences (694) 3. Infinite series of constant terms (700) 4. Infinite series: Four theorems (709) 5. Infinite series of positive terms (713) 6. The integral test (723) 7. Alternating series (726) 8. Absolute and conditional convergence Ratio test Root test (731) 9. Summary of convergence tests for infinite series (738) 10. Review (740) 12. Power series. 1. Introduction to power series (743) 2. Differentiation of power series (750) 3. Integration of power series (760) 4. Taylor series (767) 5. The binomial series (776) 6. Review (780) 13. VECTOR-VALUED AND MULTIVARIABLE FUNCTIONS– Vectors in the plane and parametric equations. 1. Vectors in the plane (783) 2. Dot product (794) 3. Vector-valued functions Parametric equations (801) 4. Calculus of vector-valued functions (808) 5. Length of arc (814) 6. The unit tangent and normal vectors Arc length as parameter (820) 7. Curvature (824) 8. Plane motion (832) 9. Tangential and normal components of acceleration (838) 10. Review (842) 14. 3D Vectors and solid analytic geometry. 1. R^3 space (846) 2. Vectors in R^3 (852) 3. Planes (861) 4. Lines in R^3 (868) 5. Cross product (873) 6. Cylinders and surfaces of revolution (883) 7. Quadric surfaces (888) 8. Curves in R^3 (894) 9. Cylindrical and spherical coordinates (901) 10. Review (905) 15. INTRODUCTION TO MULTIVARIATE CALCULUS– Differential calculus of functions of more than one variable – Extensions of 1V calculus. 1. Functions of more than one variable (908) 2. Limits (917) 3. Continuity (927) 4. Partial derivatives (931) 5. Differentiability Total differentials (939) 6. The Chain Rule (949) 7. Higher-order partial derivatives (956) 8. Sufficient conditions for differentiability (963) 9. Review (968) 16. Directional derivatives, gradients, and applications of partial derivatives– Vector fields. Solution of extrema problems and Lagrange multipliers. Exact differential equation solution. 1. Directional derivatives and gradients (972) 2. Tangent planes and normals to surfaces (979) 3. Extrema of functions of two variables (983) 4. Lagrange multipliers (997) 5. Obtaining a function from its gradient and exact differentials (1003) 6. Review (1011) 17. Multiple integration 1. The double integral (1014) 2. Double and iterated integrals (1019) 3. Center of mass and moments of inertia (1026) 4. The double integral in polar coordinates (1031) 5. Area of a surface (1036) 6. Triple integrals (1041) 7. Triple integrals in cylindrical and spherical coordinates (1046) 8. Review (1052) 18. Introduction to the calculus of vector fields– Intuitive appproach to problems in physics and engineering. 1. Vector fields (1056) 2. Line integrals (1064) 3. Line integrals independent of path (1072) 4. Green’s theorem (1082) 5. Surface integrals (1095) 6. Gauss’s Divergence Theorem and Stokes’s Theorem (1102) 7. Review (1108) I’ll head off to a local bookstore in about thirty minutes to see whether this text is even still in distribution – I’m sure that I’ll at least be able to order a copy from overseas. Biting bullets Flex (http://flex.sourceforge.net/manual/Patterns.html#Patterns) for parsing expressions. Algebra and trigonometry again. Can’t sleep: My mind is wandering about in my skull, a kitty among the bins. I can’t make up my mind how to represent an atomic algebraic expression term. Clearly, an expression term, and an elementary binary operator are two different kinds of objects. What do I make, then, of an expression term such as sin(x)? It’s a function operator taking a single argument. I’d like to be able to manipulate a pair of rational expressions like How about 2x? It’s an elementary product expression consisting of a constant and a variable. I’m beginning to think each subexpression ought to be represented as an object, now – so that I could replace the factor 2 with an expression, for example. This pretty much puts paid to representing an algebraic polynomial term as a struct: struct Monomial { integer coeff; Variable x; Exponent t; Each member coeff, x, and t would then be an Expr: struct Monomial { Expr coeff; Expr x; Expr t; Then I should represent elementary operations as functors as well, perhaps. Hmm. This eliminates the need for a Monomial type: Expr x('x'), X; X = 2 * x ^ 3; cout << X.strict().toString() << endl; // 2*(x^3) cout << X.toHumanString() << endl; // 2x3 cout << X.toMathML() << endl; // Something like 2x(super)3 I need to look into operator associativity rules and find out whether any combination of member and nonmember functions will cause type coercion of the RHS into an Expr type. My goal is to be able to create simple code to generate elementary algebra expressions programmatically, like so: Expr a('a', '1/(x+2)'); Expr b('b', '2/(x+7)'); Expr c; c = a * b; cout << c.toMathML() << endl; cout << c.execute().toMathML() << endl; // 1/(x^2 + 9x + 14) Then a simple, short chunk of code could do and generate an online algebra workbook of arbitrary complexity: string generate_rational_fraction_sum( int terms, int glbpower, int lubpower ) { Expr rfs; Expr numerator; Expr denominator; Expr monomial('x'); for ( int i = 0 ; i < terms ; i++ ) { integer r = rand(glbpower, lubpower); integer q = rand(1,10); for ( int j = 0 ; j < r ; j++ ) { // Generate an order-j polynomial expression for the denominator integer k = rand(1,10); denominator += k * monomial ^ j; numerator = rand(1,10); rfs += numerator / denominator; return rfs.execute().toMathML(); Recovering Lost Sectors Trigonometric identities – gone. Techniques for polynomial factoring – gone. Ditto for logarithmic and exponential functions. Drat! I guess recovering the calculus is basically going to take a wholesale rebootstrap of my entire maths education. Granted, a lot of it is going to be recovery rather than reintroduction. The skills have atrophied, and the surety of technique gone, but the memory of how to construct a proof, for example – the concept of inductive logic – isn’t completely lost to me. This feels like having fallen off a mountain, nevertheless. “Anybody have pitons?” I got pitons. The idea I have is to do both relearn algebra and trigonometry and write the software tools to help me do the math exercises. Instead of just the old pen and paper, I’d like to make use of a LaTeX engine and parser to enable me to solve these problems like I’d do on paper, but onscreen instead. I think of it basically as a way to generate problem sets and grade them on the fly to get feedback quicker (and to offset the tedium of interacting with the computer using an interface – a keyboard and maybe the mouse – that doesn’t allow easy math symbol Much as I’d like to reprise Vance in its’ entirety, I’ll need to use a University course outline to guide the structure of my application. There were bits of Vance that led into complex analysis, real analysis, and linear algebra that I’d like to include as part of my course flow. I’m imagining an interactive Vance that does for algebra and trigonometry what Push Pop Press did for Al Gore’s book on the environment, and digital media should be able to be molded to the purpose: Javascript and HTML5 have evolved quite a bit since 1996, and might just enable a Web interactive application to do just that. Can’t wait to start on elementary physics, a la Angry Birds. Setting up I’ve ripped apart my copy of Wylie and Barrett’s Advanced Engineering Mathematics in nice signature-length sections, and have turned it’s first four chapters into nice iPad albums. Ditto for Harry Lass’s Vector and Tensor Analysis. Over the next four weeks I’ll find either that I’d just wasted eight hours (and one perfectly good copy of W&B) of my previous weekend, or that I can still recover my old maths chops and get on with writing my own MRI scan image synthesis library.
{"url":"https://avahilario.net/blog/category/math/","timestamp":"2024-11-13T19:11:34Z","content_type":"text/html","content_length":"62143","record_id":"<urn:uuid:41a97253-2371-48a7-b6be-180475d67220>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00653.warc.gz"}
The Unit Circle SectionThe Unit Circle Supplemental Videos The main topics of this section are also presented in the following videos: In the previous section, we introduced periodic functions and demonstrated how they can be used to model real life phenomena like the rotation of the London Eye. In fact, there is an intuitive connection between periodic functions and the rotation of a circle. In this section, we will use our intuition and formalize this connection by exploring the unit circle and its unique features that lead us into the rich world of trigonometry. Many applications involving circles also involve a rotation of the circle so we must first introduce a measure for the rotation, or angle, between two rays (line segments) emanating from the center of a circle. The measure of an angle is a measurement between two intersecting lines, line segments, or rays, starting at the initial side and ending at the terminal side. It is a rotational measure, not a linear When measuring angles on a circle, unless otherwise directed, we measure angles in standard position: starting at the positive horizontal axis with a counterclockwise rotation. SubsectionMeasuring Angles in Degrees A degree is a unit of measurement of an angle. One rotation around a circle is equal to 360 degrees. An angle measured in degrees should always include the degree symbol \(^\circ\) or the word "degrees" after the number. For example, \(90^\circ=90\) degrees. Give the degree measure of the angle shown on the circle. The vertical and horizontal lines divide the circle into quarters. Since one full rotation is 360 degrees, each quarter rotation is \(360^\circ / 4 = 90^\circ\text{.}\) Draw an angle of \(30^\circ\) on a circle. An angle of \(30^\circ\) is \(1/3\) of \(90^\circ\) so by dividing a quarter rotation into thirds, we can draw a line showing an angle of \(30^\circ\text{.}\) Note8Going Greek When representing angles using variables, it is traditional to use Greek letters. Here is a list of commonly encountered Greek letters. \(\theta\) \(\phi \hspace{.075in} \text{or} \hspace{.075in} \varphi\) \(\alpha\) \(\beta\) \(\gamma\) theta phi alpha beta gamma Notice that since there are 360 degrees in one rotation, an angle greater than 360 degrees would indicate more than one full rotation. Shown on a circle, the resulting direction in which this angle's terminal side points is the same as another for an angle between 0 and 360. These angles are called coterminal. After completing their full rotation based on the given angle, two angles are coterminal if they terminate in the same position, so their terminal sides coincide (point in the same direction). Find an angle \(\theta\) that is coterminal with \(800^\circ\) where \(0^\circ \leq \theta \lt 360^\circ \text{.}\) Since adding or subtracting a full rotation, or 360 degrees, would result in an angle with the terminal side pointing in the same direction, we can find coterminal angles by adding or subtracting 360 An angle of 800 degrees is coterminal with an angle of \begin{equation*} 800^\circ - 360^\circ = 440^\circ \end{equation*} It is also coterminal with an angle of \begin{equation*} 440^\circ - 360^\circ = 80^\circ\text{.} \end{equation*} Finding the coterminal angle between 0 and \(360^\circ\) can make it easier to see which direction the terminal side of an angle points in. Find an angle \(\alpha\) that is coterminal with \(870^\circ\) where \(0^\circ \leq \alpha \lt 360^\circ \text{.}\) To find angles with the terminal sides pointing in the same direction, we can subtract 360 degrees: \begin{equation*} \begin{aligned} 870^\circ-360^\circ \amp= 510^\circ \\ 510^\circ-360^\circ \amp= 150^\circ \end{aligned} \end{equation*} Therefore \(870\) degrees is coterminal with \(150\) degrees. Note11Negative Angles On a number line, a positive number is measured to the right and a negative number is measured in the opposite direction (to the left). Similarly, a positive angle is measured counterclockwise and a negative angle is measured in the opposite direction (clockwise). Draw an angle of \(-45^\circ\) on a circle and find a positive coterminal angle \(\alpha\) where \(0^\circ \leq \alpha \lt 360^\circ \text{.}\) Since 45 degrees is half of 90 degrees, we can start at the positive horizontal axis and measure clockwise half of a 90 degree angle. We can find a positive coterminal angle by adding 360 degrees. \begin{equation*} -45^\circ+360^\circ=315^\circ \end{equation*} Find an angle \(\beta\) that is coterminal with \(-300^\circ\) where \(0^\circ \leq \beta \lt 360^\circ \text{.}\) Since \(-300^\circ+360^\circ=60^\circ\text{,}\) \(60\) degrees is coterminal with \(-300^\circ\text{.}\) Note14Common Angles in Degrees It can be helpful to know some of the frequently encountered angles in one rotation of a circle. Multiples of 30, 45, 60, and 90 degree angles are commonly encountered in many trigonometric applications. These angles are shown below. Becoming familiar with these angles and understanding how they relate to one another will be useful as we study the properties associated with them. SubsectionMeasuring Angles in Radians While measuring angles in degrees may be familiar, doing so often complicates matters since the units of measure can get in the way of calculations. For this reason, another measure of angles is commonly used. This measure is based on the distance around the unit circle. The Unit Circle The unit circle is a circle of radius 1, centered at the origin of the \((x,y)\) plane. When measuring an angle around the unit circle, we travel in the counterclockwise direction, starting from the positive \(x\)-axis. A negative angle is measured in the opposite, or clockwise, direction. A complete trip around the unit circle amounts to a total of 360 degrees. A radian is a measurement of an angle that arises from looking at angles as a fraction of the circumference of the unit circle. A complete trip around the unit circle amounts to a total of \(2\pi\) Radians are a unitless measure. Therefore, it is not necessary to write the label "radians" after a radian measure, and if you see an angle that is not labeled with "degrees" or the degree symbol, you should assume that it is a radian measure. Radians and degrees both measure angles. Thus, it is possible to convert between the two. Since one rotation around the unit circle equals 360 degrees or \(2\pi\) radians, we can use this as a conversion factor. Converting Between Radians and Degrees Since \(360 \text{ degrees} = 2\pi \text{ radians}\text{,}\) we can divide each side by 360 and conclude that \begin{equation*} \displaystyle 1 \text{ degree} = \frac{2\pi \text{ radians}}{360} = \frac{\pi \text{ radians}}{180} \end{equation*} So, to convert from degrees to radians, we can multiply by \(\displaystyle \ \frac{\pi \text{ radians}}{180^\circ}\) Similarly, we can conclude that \begin{equation*} \displaystyle 1 \text{ radian} = \frac{360^\circ}{2\pi} = \frac{180^\circ}{\pi} \end{equation*} So, to convert from radians to degrees, we can multiply by \(\displaystyle \ \frac{180^\circ}{\pi \text{ radians}}\) Convert \(\displaystyle \frac{\pi}{6}\) radians to degrees. Since we are given an angle in radians and we want to convert it to degrees, we multiply the angle by \(180^\circ\) and then divide by \(\pi\) radians. \begin{equation*} \frac{\pi}{6} \text{ radians} \cdot \frac{180^\circ}{\pi \text{ radians}} = 30^\circ \end{equation*} Convert \(15^\circ\) to radians. In this example, we start with an angle in degrees and want to convert it to radians. We multiply by \(\pi\) and divide by \(180^\circ\) so that the units of degrees cancel and we are left with the unitless measure of radians. \begin{equation*} 15^\circ \cdot \frac{\pi}{180^\circ} = \frac{\pi}{12} \end{equation*} Convert \(\displaystyle \frac{7\pi}{10}\) radians to degrees. Since we are given an angle in radians and we want to convert it to degrees, we multiply by the angle \(180^\circ\) and then divide by \(\pi\) radians. \begin{equation*} \frac{7\pi}{10} \text{ radians} \cdot \frac{180^\circ}{\pi \text{ radians}} = 126^\circ \end{equation*} Note18Common Angles in Radians Just as we listed some frequently encountered angles in degrees on a circle, we should also list the corresponding radian values for the common measures of a circle corresponding to degree multiples of 30, 45, 60, and 90 degrees. As with the degree measurements, it would be helpful to become familiar with these angles in radians and understand how they relate to one another. Above, we explored how to find coterminal angles for angles greater than 360 degrees and less than 0 degrees. Similarly, we can find coterminal angles for angles greater than \(2\pi\) radians and less than 0 radians. Find an angle \(\beta\) that is coterminal with \(\displaystyle \frac{11\pi}{4}\) where \(0^\circ \leq \beta \lt 2\pi \text{.}\) When working in degrees, we found coterminal angles by adding or subtracting 360 degrees, a full rotation. Likewise, in radians, we can find coterminal angles by adding or subtracting full rotations of \(2\pi\) radians. An angle of \(11\pi/4\) is coterminal with an angle of \begin{equation*} \frac{11\pi}{4} - 2\pi = \frac{11\pi}{4} - \frac{8\pi}{4} = \frac{3\pi}{4} \end{equation*} Find an angle \(\phi\) that is coterminal with \(\displaystyle -\frac{11\pi}{6}\) where \(0^\circ \leq \phi \lt 2\pi \text{.}\) An angle of \(-\frac{11\pi}{6}\) is coterminal with an angle of \begin{equation*} -\frac{11\pi}{6} + 2\pi = -\frac{11\pi}{6} + \frac{12\pi}{6} = \frac{\pi}{6} \end{equation*} SubsectionFinding Points on the Unit Circle While it is convenient to describe the location of a point on the unit circle using an angle, relating this angle to the \(x\) and \(y\) coordinates of the corresponding point is an important application of trigonometry. To do this, we will need to apply our knowledge of triangles. Find the \((x,y)\) coordinates for the point on the unit circle corresponding to an angle of 45 degrees or \(\pi/4\) radians. Let's start by drawing a picture and labeling the known information. We want to find the \(x\) and \(y\) coordinates of the point on the unit circle corresponding to an angle of 45 degrees or \(\pi/4\text{.}\) To do this, we can draw a vertical line from the point down to the \(x\)-axis, which forms a right triangle. The hypotenuse of this triangle is 1, since it corresponds to the radius of the unit circle, and the side lengths of this triangle are equal to \ (x\) and \(y\text{.}\) Now, using the Pythagorean Theorem, we get that \begin{equation*} x^2+y^2=1^2 \hspace{.25in} \text{ which simplifies to } \hspace{.25in} x^2+y^2=1 \end{equation*} Since the triangle formed is a 45-45-90 degree triangle, side lengths \(x\) and \(y\) must be equal. Therefore, we can substitute in \(x=y\) into the above equation. \begin{align*} x^2+y^2 \amp = 1 \amp\amp \text{Substitute in } x=y \\ \\ x^2+x^2 \amp = 1 \amp\amp \text{Add like terms } \\ \\ 2x^2 \amp = 1 \amp\amp \text{Divide by 2} \\ \\ x^2 \amp = \frac{1}{2} \amp\amp \text{Take the square root} \\ \\ x \amp = \pm\sqrt{\vphantom{\frac{1^2}{2}}\frac{1}{2}} \amp\amp \text{Since the } x \text{ value is positive, we keep the positive root so} \\ \\ x \amp = \ sqrt{\vphantom{\frac{1^2}{2}}\frac{1}{2}} \amp\amp \end{align*} Often this value is written with a rationalized denominator. Remember that to rationalize the denominator, we multiply by a term equivalent to 1 to get rid of the radical in the denominator, so \begin{equation*} x = \sqrt{\vphantom{\frac{1^2}{2}}\frac{1}{2}} \, \sqrt{\vphantom{\frac{1^2}{2}}\frac{2}{2}} = \sqrt{\vphantom{\frac{1^2}{2}}\frac{2}{4}} = \frac{\sqrt{2}}{2} \end{equation*} and since \(x\) and \(y\) are equal, \(\displaystyle y=\frac{\sqrt{2}}{2}\text{.}\) Thus, the \((x,y)\) coordinates for the point on the unit circle corresponding to an angle of 45 degrees or \(\pi/4\) radians are \begin{equation*} (x,y) = \left(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2}\right)\text{.} \end{equation*} Find the \((x,y)\) coordinates for the point on the unit circle corresponding to an angle of 30 degrees or \(\pi/6\) radians. Let's start by drawing a picture and labeling the known information. We want to find the \(x\) and \(y\) coordinates of the point on the unit circle corresponding to an angle of 30 degrees or \(\pi/6\text{.}\) To do this, we can draw a triangle inside the unit circle with one side at an angle of 30 degrees and another at an angle of \(-30\) degrees. Notice that if we combine the resulting two right triangles to form one large triangle, then all three angles of the larger triangle are equal to 60 degrees. Since all of the angles in this triangle are equal, the sides will all be equal as well. Two of the side lengths of this triangle are equal to 1 because they correspond to the radius of the unit circle. Thus, the other side length must also be equal to 1. This side also has a length of \(2y\text{.}\) Therefore, we can conclude that \(2y = 1\) so \begin{equation*} y=\frac{1}{2} \end{equation*} Now, we can apply the Pythagorean Theorem to one of the right triangles to find the \(x\) value. We get that \begin{align*} x^2+y^2 \amp = 1^2 \\ \\ x^2+\left(\frac{1}{2}\right)^2 \amp = 1 \\ \\ x^2+\frac{1}{4} \amp = 1 \\ \\ x^2 \amp = \frac{3}{4} \\ \\ x \amp = \pm\sqrt{\vphantom{\frac{3^2}{4}}\frac{3} {4}} \\ \\ x \amp = \sqrt{\vphantom{\frac{3^2}{4}}\frac{3}{4}} = \frac{\sqrt{3}}{2} \end{align*} Thus, the \((x,y)\) coordinates for the point on the unit circle corresponding to an angle of 30 degrees or \(\pi/6\) are \begin{equation*} (x,y) = \left(\frac{\sqrt{3}}{2},\frac{1}{2}\right)\text{.} \end{equation*} In the previous example, we applied our knowledge of triangles to find the \(x\) and \(y\) coordinates of the point on the unit circle corresponding to 30 degrees. We can now use symmetry to find the \(x\) and \(y\) coordinates corresponding to an angle of 60 degrees or \(\pi/3\text{.}\) First, we draw a picture of the triangle corresponding to this point on the unit circle. Notice that the triangle shown above is similar to the one formed by the 30 degree angle since the hypotenuse is the same length and both are 30-60-90 degree triangles. Therefore, the \((x,y)\) coordinates for 60 degrees are the same as the \((x,y)\) coordinates for 30 degrees, only switched. Thus, the \((x,y)\) coordinates for the point on the unit circle corresponding to an angle of 60 degrees or \(\pi/3\) are \begin{equation*} (x,y) = \left(\frac{1}{2},\frac{\sqrt{3}}{2}\right)\text{.} \end{equation*} We have now found the coordinates of the points corresponding to all the commonly encountered angles in the first quadrant of the unit circle. Using symmetry, we can find the rest of the coordinates corresponding to common angles on the unit circle. A labeled picture of the unit circle is shown below.
{"url":"https://mathbooks.unl.edu/PreCalculus/unit-circle.html","timestamp":"2024-11-07T04:30:52Z","content_type":"text/html","content_length":"55783","record_id":"<urn:uuid:0def8ff4-af79-4281-85b5-1274e1c5dc77>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00270.warc.gz"}
We know that NTPC being a government aided organization, its recruitment is done through GATE examination. This paper consist of three parts:- a) ENGLISH. b) APTITUDE. c) TECHNICAL. 1. The expression (11.98 x 11.98 + 11.98 x x + 0.02 x 0.02) will be a perfect square for x equal to: 2. Simple interest on a certain sum of money for 3 years at 8% per annum is half the compound interest on Rs. 4000 for 2 years at 10% per annum. The sum placed on simple interest is: 3 A boat running downstream covers a distance of 16 km in 2 hours while for covering the same distance upstream, it takes 4 hours. What is the speed of the boat in still water? 4. A large tanker can be filled by two pipes A and B in 60 minutes and 40 minutes respectively. How many minutes will it take to fill the tanker from empty state if B is used for half the time and A and B fill it together for the other half? 5. Out of 7 consonants and 4 vowels, how many words of 3 consonants and 2 vowels can be formed? 1. Free electrons make current possible. 2. If 750 µA is flowing through 11 k of resistance, what is the voltage drop across the resistor? 3. At the end of a 14 day period, your utility bill shows that you have used 18 kWh. What is your average daily power? 4. A half-watt is equal to how many milliwatts? 5. Series aiding is a term sometimes used to describe voltage sources of the same polarity in series. 6. If there are a total of 120 mA into a parallel circuit consisting of three branches, and two of the branch currents are 40 mA and 10 mA, the third branch current is 7. In a parallel circuit, the branch with the lowest resistance has the most current. 8. What is the Thevenin equivalent (V[TH] and R[TH]) for the circuit given? 9. What is the current through R[2]? 10. Find branch current IR[2]. 11. Third-order determinants are evaluated by the expansion method or by the cofactor method. 12. A 2 F, a 4 F, and a 10 F capacitor are connected in series. The total capacitance is less than 13. Five time constants are required for a capacitor to charge fully or discharge fully. 14. In an RL circuit, the impedance is determined by both the resistance and the inductive reactance combined. 15. A 10 resistor, a 90 mH coil, and a 0.015 F capacitor are in series across an ac source. The impedance magnitude at 1,200 Hz below f[r] is 16. A certain series RLC circuit with a 200 Hz, 15 V ac source has the following values: R = 12 , C = 80 F, and L = 10 mH. The total impedance, expressed in polar form, is 17. Referring to Problem 18, what is the bandwidth of the filter? 18. In an RC differentiator, the sum of the capacitor voltage and the resistor voltage at any instant B. must be equal to the applied voltage C. is less than the applied voltage but greater than zero 19. If the RC time constant of an integrator is increased, as the time constant is increased A. the capacitor charges more during a pulse and discharges less between pulses B. the capacitor charges less during a pulse and discharges more between pulses C. the capacitor charges more during a pulse and discharges more between pulses D. the capacitor charges less during a pulse and discharges less between pulses 20. In an RL integrating circuit, the output voltage is taken across the inductor. Tell us Your Queries, Suggestions and Feedback
{"url":"https://blog.oureducation.in/placement-paper-for-ntpc/","timestamp":"2024-11-08T05:12:40Z","content_type":"text/html","content_length":"83405","record_id":"<urn:uuid:061381a8-7c88-47bb-9da0-508079565c43>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00861.warc.gz"}
[SNU Number Theory Seminar 2022-04-15] A note on non-ordinary primes for some genus-zero arithmetic groups • Date : 2022-04-15 (Fri) 16:00 • Place : 27-325 (SNU) • Speaker : Seokho Jin (Chung-Ang University) • Title : A note on non-ordinary primes for some genus-zero arithmetic groups • Abstract : Suppose that O_L is the ring of integers of a number field L, and suppose that f(z) is a normalized Hecke eigenform for Γ0(N)+. We say that f is non-ordinary at p if there is a prime ideal p ⊂ OL above p for which a_f(p) ≡ 0 (mod p). In authors’ previous paper it was proved that there are infinitely many Hecke eigenforms for SL2(Z) such that are non-ordinary at any given finite set of primes. In this talk, we extend this result to some genus 0 subgroups of SL2(R), namely, the normalizers Γ0(N)+ of the congruence subgroups Γ0(N). This is joint with Wenjun Ma. • Website: https://sites.google.com/view/snunt/seminars
{"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&order_type=asc&listStyle=list&sort_index=readed_count&page=8&l=en&document_srl=2278","timestamp":"2024-11-11T17:51:53Z","content_type":"text/html","content_length":"67108","record_id":"<urn:uuid:5940225d-c7d5-46ac-ac03-de9c641edbfc>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00844.warc.gz"}
Special Relativity Optical Experiments Explained from the Perspective of a Peripherally Rotating Frame In the current paper we present an explanation of several fundamental tests of special relativity from the perspective of the frame co-moving with a rotating observer. The solution is of great interest for real time applications because Earth-bound laboratories are inertial only in approximation. We present the derivation of the Sagnac, Michelson-Morley, Kennedy-Thorndike and the Hammar experiments as viewed from the Earth-bound uniformly rotating frame, that is, the frame of the laboratory where the experiment is taking place. To our best knowledge such an attempt has never been made before, possibly due to its mathematical difficulty, so no precedents exist, this is a first. The current paper brings new information in the following areas: -new explanation of the Sagnac experiment -new explanation of the Michelson-Morley experiment -new explanation of the Hammar experiment -new explanation of the Kennedy Thorndike experiment The main thrust of the paper is to give a consistent explanation of various tests of special relativity as judged from the perspective of the rotating frame of the experimental setup. The theoretical results are shown to be consistent with the results derived for inertial frames in the specialty literature. The exact symbolic solutions are nevertheless different from the results obtained for inertial frames. This should not be surprising since the rotation actually “modulates” the proper speeds of the light wave-fronts. General coordinate transformations; Hammar experiment; Kennedy-Thorndike experiment; Michelson-Morley experiment; Sagnac experiment; Uniform rotation motion 03.30.+p, 52.20.Dq, 52.70.Nc Real life applications include accelerating and rotating frames more often than the idealized case of inertial frames. Our daily experiments happen in the laboratories attached to the rotating, continuously accelerating Earth. Usually, such experiments are explained from the perspective of an external, inertial frame because special relativity in rotating frames is viewed as more complicated. In the present paper, we will construct a straightforward explanation by applying the formalisms developed in previous work [1-6]. More exactly, we apply the formalisms derived in [1-3] to explaining the results of some of the most important tests of special relativity as viewed from the rotating frame of the lab where the experiments take place. Figure 1: Peripherally rotating frame of reference. The Sagnac experiment [8-12] is usually explained from the perspective of an inertial frame anchored to the center of rotation since the mathematical formalism is simpler from that perspective. In this section, we will use the results derived in the previous section in order to produce an equally straight forward explanation from the perspective of a frame attached to the periphery of the rotating device. Based on the prior results, the observer co-moving with the rotating frame measures the perimeter of a circle of radius r to be 2πr, both wave-fronts cover the same distance, 2πr, so the time difference between the clockwise and counterclockwise light fronts is calculated as (absolute speeds are used): We can now explain the null result of the Michelson Morley experiment [13-21] in the rotating frame of the lab co-rotating with the Earth. The elapsed time in the direction of motion is: The Kennedy-Thorndike experiment exploits the fact that the Earth bound laboratory has a variable speed due to the combined effect of Earth rotation around its axis and Earth revolution around the Sun [22]. The laboratory speed has contributions from the revolution of the Earth with respect to the Sun-centered frame, v[e] = 30km/s and Earth’s daily rotation v[d] so: Once again, while the transition times in the rotating frame is different from the ones calculated for idealistic inertial frames, the measured values of the experiment are null, exactly as predicted by the above theory. In the following, all calculations explaining the outcome of the Hammar experiment [23,24] are made from the point of view of the rotating Earth-bound frame and all employ the theory of special relativity in rotating frames. The clockwise ( t[cw] ) and counterclockwise (t[ccw]) time of light propagation are (Figure 2): Figure 2: Instrument motion with shielded arm moving parallel to the “aether wind”. The light source as well as the screen where interference occurs between the two light beams is located in point “A”. Also, a half-silvered mirror is used as a light splitter.
{"url":"https://www.heraldopenaccess.us/openaccess/special-relativity-optical-experiments-explained-from-the-perspective-of-a-peripherally-rotating-frame","timestamp":"2024-11-02T11:09:58Z","content_type":"text/html","content_length":"39666","record_id":"<urn:uuid:b8b59095-720c-4e7e-accf-11312c003131>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00712.warc.gz"}
Lévy noise versus Gaussian-noise-induced transitions in the Ghil–Sellers energy balance model Articles | Volume 29, issue 2 © Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License. Lévy noise versus Gaussian-noise-induced transitions in the Ghil–Sellers energy balance model We study the impact of applying stochastic forcing to the Ghil–Sellers energy balance climate model in the form of a fluctuating solar irradiance. Through numerical simulations, we explore the noise-induced transitions between the competing warm and snowball climate states. We consider multiplicative stochastic forcing driven by Gaussian and α-stable Lévy – $\mathit{\alpha }\in \left(\ mathrm{0},\mathrm{2}\right)$ – noise laws, examine the statistics of transition times, and estimate the most probable transition paths. While the Gaussian noise case – used here as a reference – has been carefully studied in a plethora of investigations on metastable systems, much less is known about the Lévy case, both in terms of mathematical theory and heuristics, especially in the case of high- and infinite-dimensional systems. In the weak noise limit, the expected residence time in each metastable state scales in a fundamentally different way in the Gaussian vs. Lévy noise case with respect to the intensity of the noise. In the former case, the classical Kramers-like exponential law is recovered. In the latter case, power laws are found, with the exponent equal to −α, in apparent agreement with rigorous results obtained for additive noise in a related – yet different – reaction–diffusion equation and in simpler models. This can be better understood by treating the Lévy noise as a compound Poisson process. The transition paths are studied in a projection of the state space, and remarkable differences are observed between the two different types of noise. The snowball-to-warm and the warm-to-snowball most probable transition paths cross at the single unstable edge state on the basin boundary. In the case of Lévy noise, the most probable transition paths in the two directions are wholly separated, as transitions apparently take place via the closest basin boundary region to the outgoing attractor. This property can be better elucidated by considering singular perturbations to the solar irradiance. Received: 20 Oct 2021 – Discussion started: 05 Nov 2021 – Revised: 17 Mar 2022 – Accepted: 05 Apr 2022 – Published: 11 May 2022 1.1Multistability of the Earth's climate The climate system comprises the following five interacting subdomains: the atmosphere, the hydrosphere (water in liquid form), the upper layer of the lithosphere, the cryosphere (water in solid form), and the biosphere (ecosystems and living organisms). The climate is driven by the inhomogeneous absorption of incoming solar radiation, which sets up nonequilibrium conditions. The system reaches an approximate steady state, where macroscopic fluxes of energy, momentum, and mass are present throughout its domain, and entropy is continuously generated and expelled into the outer space. The climate features variability on a vast range of spatial and temporal scales as a result of the interplay of forcing, dissipation, feedbacks, mixing, transport, chemical reactions, phase changes, and exchange processes between the subdomains (see Peixoto and Oort, 1992, Lucarini et al., 2014a, Ghil, 2015, and Ghil and Lucarini, 2020). In the late 1960s Budyko (1969) and Sellers (1969) independently proposed that, in the current astronomical and astrophysical configuration, the Earth could support two distinct climates, namely the present-day warm (W) state and a competing one characterised by global glaciation, usually referred to as the snowball (SB) state. Their analysis was performed using one-dimensional energy balance models (EBMs), which, despite their simplicity, were able to capture the essential physical mechanisms in action, i.e. the interplay between two key feedbacks. The Boltzmann feedback is associated with the fact that warmer bodies emit more radiation, and it is a negative, stabilising one. Instead, the instability of the system is due to the presence of the so-called ice–albedo feedback, whereby an increase in the ice-covered fraction of the surface leads to a further temperature reduction for the planet because ice efficiently reflects the incoming solar radiation. These mechanisms are active at all spatial scales, including the planetary one (see Budyko, 1969, and Sellers, 1969). Such pioneering investigations of the multistability of the Earth's climate were later extended by Ghil (1976) (see also the later analysis by Ghil and Childress, 1987), who provided a comprehensive mathematical framework for the problem, based on the study of the bifurcations of the system. The main control parameter defining the stability properties is the solar irradiance S^∗. Below the critical value S[W→SB], only the SB state is permitted, whereas, above the critical value S[SB→W], only the W state is permitted. Such critical values, which determine the region of bistability, are defined by bifurcations that emerge when, roughly speaking, the strength of the positive, destabilising feedbacks becomes as strong as the negative, stabilising feedbacks. Many variants of the models proposed by Budyko and Sellers have been discussed in the literature, all featuring by and large rather similar qualitative and quantitative features (Ghil, 1981; North et al., 1981; North, 1990; North and Stevens, 2006). Furthermore, these models have long been receiving a great deal of attention from the mathematical community regarding the possibility of proving the existence of solutions and evaluating their multiplicity (Hetzer, 1990; Díaz et al., 1997; Kaper and Engler, 2013; Bensid and Díaz , 2019). Only later were these predictions confirmed by actual data. Indeed, geological and paleomagnetic evidence suggests that, during the Neoproterozoic era, between 630×10^6 and 715×10^6 years ago, the Earth went, at least twice, into major long-lasting global glaciations that can be associated with the SB state (see Pierrehumbert et al., 2011, and Hoffman et al., 1998). Multicellular life emerged in our planet shortly after the final deglaciation from the last SB state (Gould, 1989). The robustness and importance of the competition between the Boltzmann feedback and the ice–albedo feedback in defining the global stability properties of the climate has been confirmed by investigations performed using higher complexity models (Lucarini et al., 2010; Pierrehumbert et al., 2011), including fully coupled climate models (Voigt and Marotzke, 2010). While the mechanisms described above are pretty robust, the concentration of greenhouse gases and the boundary conditions defined by the extent and position of the continents have an impact on the values of S[W→SB] and S[SB→W], as well as on the properties of the competing states. The presence of multistability has a key importance in terms of determining habitability conditions for Earth-like exoplanets (see Lucarini et al., 2013, and Linsenmeier et al., 2015). Additionally, several results indicate that the phase space of the climate system might well be more complex than the scenario of bistability described above. Various studies (Lewis et al., 2007; Abbot et al., 2011; Lucarini and Bódai, 2017; Margazoglou et al., 2021) performed with highly nontrivial climate models report the possible existence of additional competing states, up to a total of five (Brunetti et al., 2019; Ragon et al., 2022). In Margazoglou et al. (2021), it is argued that, in fact, one can see the climate as a multistable system where multistability is realised at different hierarchical levels. As an example, the tipping points (Lenton et al., 2008; Steffen et al., 2018) that characterise the current (W) climate state can be seen as a manifestation of a hierarchically lower multistability with respect to the one defining the dichotomy between the W and SB states. 1.2Transitions between competing metastable states: Gaussian vs. Lévy noise Clearly, in the case of autonomous systems where the phase space is partitioned in more than one basin of attraction of the corresponding attractors and the basin boundaries, the asymptotic state of the system is determined by its initial conditions. Things change dramatically when one includes time-dependent forcing which allows for transitions between competing metastable states (Ashwin et al. , 2012). In particular, following the viewpoint originally proposed by Hasselmann (1976), whereby the fast variables of the climate system act as stochastic forcings for the slow ones (Imkeller and von Storch, 2001), the relevance of studying noise-induced transitions between competing states has become apparent (Hänggi, 1986; Freidlin and Wentzell, 1984). This viewpoint, where the noise is usually assumed to be Gaussian distributed, has provided very valuable insight on the multiscale nature of climatic time series (Saltzman, 2001) and is related to the discovery of phenomena like stochastic resonance (Benzi et al., 1981; Nicolis, 1982). Metastability is ubiquitous in nature, and advancing its understanding is a key challenge in complex system science at large (Feudel et al., 2018). In general, the transitions between competing metastable states in stochastically perturbed multistable systems take place, in the weak noise limit, through special regions of the basin boundaries, which are named edge states. The edge states are saddles, and the trajectories initialised in the basin boundaries are attracted to them, but there is an extra direction of instability, so that a small perturbation sends an orbit towards one of the competing metastable states with a probability of one (Grebogi et al., 1983; Ott, 2002; Kraut and Feudel, 2002; Skufca et al., 2006; Vollmer et al., 2009). In the case the edge state supports chaotic dynamics, we refer to it as the melancholia (M) state (Lucarini and Bódai, 2017). In previous papers, we have shown that it is possible to construct M states in high-dimensional climate models (Lucarini and Bódai, 2017) and to prove that the nonequilibrium quasi-potential formalism introduced by Graham (1987) and Graham et al. (1991) provides a powerful framework for explaining the population of each metastable state and the statistics of the noise-induced transitions. In the weak noise limit, edge states act as gateways for noise-induced transitions between the metastable states (Lucarini and Bódai, 2019, 2020; Margazoglou et al., 2021); see also a recent study on a nontrivial metastable prey–predator model (Garain and Sarathi Mandal, 2022). The local minima and the saddles of the quasi-potential Φ, which generalises the classical energy landscape for non-gradient systems, correspond to competing metastable states and to edge states, respectively. In our investigation, the climate system is forced by adding a random – Gaussian-distributed – component to the solar irradiance, which impacts, in the form of multiplicative noise, only a small subset of the degrees of freedom of the system. We remark that such a choice of the stochastic forcing does not fully reflect the physical realism, as the variability of the solar irradiance has a more complex behaviour (Solanki et al., 2013). Instead, noise acts as a tool for exploring the global stability properties of the system, and injecting noise as fluctuation of the solar irradiance has the merit of impacting the Lorenz energy cycle, thus effecting all degrees of freedom of the system (Lucarini and Bódai, 2020). See also the recent detailed mathematical analysis of the stochastically perturbed one-dimensional EBMs presented in Díaz and Díaz (2021). A major limitation of this mathematical framework is the need to rigidly consider Gaussian noise laws, even if considerable freedom is left as to the choice of the spatial correlation properties of the noise. It seems natural to attempt a generalisation by considering the whole class of α-stable Lévy noise laws. Lévy processes (Applebaum, 2009; Duan, 2015), described in detail in Appendix A, are fundamentally characterised by the stability parameter $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right]$, where the α=2 case corresponds to the Gaussian case (which is, indeed, a special Lévy process). In what follows, when we discuss Lévy noise laws, we refer to $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right)$. Note that α-stable Lévy processes have played an important role in geophysics as they have provided the starting point for defining the multiplicative cascades also referred to as universal multifractal. This framework has been proposed as way to analyse and simulate, at climate scales, the ubiquitous intermittency and heavy-tailed statistics of clouds (Schertzer and Lovejoy, 1988), rain reflectivity (Tessier et al., 1993; Schertzer and Lovejoy, 1997), atmospheric turbulence (Schmitt et al., 1996), and soil moisture (Millàn et al., 2016). On longer timescales, multiplicative cascades have been used to interpret temperature records in the summit ice core Schmitt et al. (1995, see Lovejoy and Schertzer, 2013, for a summary of this viewpoint). Mathematicians, on the other hand, have defined a Lévy multiplicative chaos (Fan, 1997; Rhodes et al., 2014) as a more mathematically tractable alternative to the universal multifractal. Finally, we remark that fractional Fokker–Planck equations have been proposed by Schertzer et al. (2001) to investigate the properties of nonlinear Langevin-type equations forced by an α-stable Lévy noise with the goal of analysing and simulating anomalous diffusion. Following Ditlevsen (1999), it has become apparent that more general classes of α-stable Lévy noise laws might be useful for modelling noise-induced transitions in the climate system like Dansgaard–Oeschger events, which are sequences of periods of abrupt warming followed by slower cooling that occurred during the last glacial period (Barker et al., 2011). The viewpoint by Ditlevsen ( 1999) was particularly effective in stimulating mathematical investigations into noise-induced escapes from attractors, where, as stochastic forcing, one chooses a Lévy, rather than Gaussian, noise ( Imkeller and Pavlyukevich, 2006a, b; Chechkin et al., 2007; Debussche et al., 2013). Such analyses have clarified that a fundamental dichotomy exists with the classical Freidlin and Wentzell scenario mentioned above, even if phenomena like stochastic resonance can also be recovered in this case (Dybiec and Gudowska-Nowak, 2009; Kuhwald and Pavlyukevich, 2016). Whereas, in the Gaussian case, transitions between competing attractors occur as a result of the very unlikely combination of many steps all going in the right direction, in the Lévy case, transitions result from individual, very large and very rare jumps. Recently, Duan and collaborators have made fundamental progress in achieving a variational formulation of the Lévy noise-perturbed dynamical systems (Hu and Duan, 2020) and in developing corresponding methods for data assimilation (Gao et al., 2016) and data analysis (Lu and Duan, 2020). In terms of applications, Lévy noise is becoming a more and more a popular concept and tool for studying and interpreting complex systems (Grigoriu and Samorodnitsky, 2003; Penland and Sardeshmukh, 2012; Zheng et al., 2016; Wu et al., 2017; Serdukova et al., 2017; Cai et al., 2017; Singla and Parthasarathy, 2020; Gottwald, 2021). The contribution by Gottwald (2021) is especially worth recapitulating because of its methodological clarity. There, the idea is, following Ditlevsen (1999), to provide a conceptual deterministic climate model able to generate a Lévy-noise-like signal to describe, at least qualitatively, abrupt climate changes similar to Dansgaard–Oeschger events. A key building block is the idea proposed in Thompson et al. (2017) that a Lévy noise can be produced by integrating the so-called correlated additive and multiplicative (CAM) noise processes, which are defined by starting from standard Gaussian processes. The other key ingredient is to consider the atmosphere as the fast component in the multiscale model and deduce, using homogenisation theory (Pavliotis and Stuart, 2008; Gottwald and Melbourne, 2013), that its influence on the slower climate components can be closely represented as a Gaussian forcing. Finally, the temperature signal is cast as the integral of a CAM process. We remark that Gaussian and Lévy noise can be associated with stochastic forcings of a fundamentally different nature. One might think of Gaussian noise as being associated to the impact of very rapid unresolved scales of motion on the resolved ones Pavliotis and Stuart (2008). Instead, one might interpret α-stable Lévy noise as describing, succinctly, the impact of what in the insurance sector are called acts of God (e.g. an asteroid hitting the Earth, a massive volcanic eruption, or the sudden collapse of the West Antarctic ice sheet). 1.3Outline of the paper and main results We consider here the Ghil–Sellers Earth's EBM (Ghil, 1976), a diffusive, one-dimensional energy balance system, governed by a nonlinear reaction–diffusion parabolic partial differential equation. We stochastically perturb the system by adding random fluctuations to the solar irradiance; therefore, the noise is introduced in a multiplicative form. We study the transitions between the two competing metastable climate states and carry out a comparison of the effect of considering Lévy vs. Gaussian noise laws of weak intensity ε. The main challenges of the problem are (a) the fact that we are considering dynamical processes occurring in infinite dimensions (Doering, 1987; Duan and Wang, 2014; Alharbi, 2021) and (b) the consideration of multiplicative Lévy noise laws (Peszat and Zabczyk, 2007; Debussche et al., 2013). We characterise noise-induced transitions between the competing climate basins and quantify the effect of noise parameters on them by estimating the statistics of escape times and empirically constructing mean transition pathways called instantons. The results obtained confirm that, in the weak noise limit ε→0, the mean residence time in each metastable state driven by Gaussian vs. Lévy noise has a fundamentally different dependence on ε. Indeed, as expected, in the Gaussian case, the residence time grows exponentially with ε^−2, which is, thus, in basic agreement with the well-known Kramers (1940) law and the previous studies performed on climate models (Lucarini and Bódai, 2019, 2020). Instead, in the case of α-stable noise laws, the residence time increases with ε^−α. We perform simulations for $\mathit{\alpha }=\mathit {\left\{}\mathrm{0.5},\mathrm{1.0},\mathrm{1.5}\mathit{\right\}}.$ The obtained scaling can be explained by effectively treating the Lévy noise as a compound Poisson process, and this is in agreement with what is found for low-dimensional dynamics (Imkeller and Pavlyukevich, 2006a, b) and for the infinite dimensional stochastic Chafee–Infante reaction–diffusion equation (Debussche et al., 2013) in the case of additive noise. This might indicate that such scaling laws are more general than what has been so far assumed. Furthermore, we find clear confirmation that, in the case of Gaussian noise in the weak noise limit, the escape from either attractor's basin takes place through the edge state. Indeed, the most probable paths for both thawing and freezing processes meet at the edge state and have distinct instantonic and relaxation sections. In turn, for Lévy noise in the weak noise limit, the escapes from a given basin of attraction occur through the boundary region closest to the outgoing attractor. Hence, the paths are very different from the Gaussian case (especially so for the freezing transition) and, somewhat surprisingly, are identical regardless of the value of α considered. These properties can be better understood by studying the impact of including singular perturbations to the value of the solar irradiance. The rest of the paper is organised as follows. In Sect. 2, we present the Ghil–Sellers EBM and summarise its most important dynamical aspects and the steady-state solutions and their stability. The stochastic partial differential equation obtained by randomly perturbing the solar irradiance in the EBM is given in Sect. 3, where we also clarify the mathematical meaning of the solution of the stochastic partial differential equation. Section 3 also introduces the mean residence time and most probable transition path between the competing climate states. The numerical methods are also briefly presented. In Sect. 4, we discuss our main results. In Sect. 5, we present our conclusions and perspectives for future investigations. Finally, Appendix A presents a succinct description of α -stable Lévy processes, Appendix B sketches the derivation of the scaling laws for mean residence times presented in Debussche et al. (2013), Appendix C explores the behaviour and dynamics of singular Lévy perturbations of different duration, and Appendix D presents a tabular summary of the statistics of the problem. 2The Ghil–Sellers energy balance climate model The Ghil–Sellers EBM (Ghil, 1976) is described by a one-dimensional nonlinear, parabolic, reaction–diffusion partial differential equation (PDE) involving the surface temperature T field and the transformed space variable $x=\mathrm{2}\mathit{\varphi }/\mathit{\pi }\in \left[-\mathrm{1},\mathrm{1}\right]$, where $\mathit{\varphi }\in \left[-\mathit{\pi }/\mathrm{2},\mathit{\pi }/\mathrm{2}\ right]$ is the latitude. The model describes the processes of energy input, output, and diffusion across the domain and can be written as follows: $\begin{array}{}\text{(1)}& C\left(x\right){T}_{t}={D}_{I}\left(x,T,{T}_{x},{T}_{xx}\right)+{D}_{\text{II}}\left(x,T\right)-{D}_{\text{III}}\left(T\right),\end{array}$ where C(x) is the effective heat capacity, and $T=T\left(x,t\right)$ has boundary and initial conditions, as follows: $\begin{array}{}\text{(2)}& {T}_{x}\left(-\mathrm{1},t\right)={T}_{x}\left(\mathrm{1},t\right)=\mathrm{0},T\left(x,\mathrm{0}\right)={T}_{\mathrm{0}}\left(x\right).\end{array}$ The equation does not depend explicitly on the time t. The subscripts [t] and [x] refer to partial differentiation. The first term – D[I] – on the right-hand side of Eq. (1) can be written as $\begin{array}{}\text{(3)}& \begin{array}{rl}& {D}_{I}\left(x,T,{T}_{x},{T}_{xx}\right)\\ & =\frac{\mathrm{4}}{{\mathit{\pi }}^{\mathrm{2}}\mathrm{cos}\left(\mathit{\pi }x/\mathrm{2}\right)}\left[\ mathrm{cos}\left(\mathit{\pi }x/\mathrm{2}\right)K\left(x,T\right){T}_{x}{\right]}_{x}\phantom{\rule{0.125em}{0ex}},\end{array}\end{array}$ and describes the convergence of meridional heat transport performed by the geophysical fluids. The function K(x,T) is a combined diffusion coefficient, expressed as follows: $\begin{array}{}\text{(4)}& K\left(x,T\right)={k}_{\mathrm{1}}\left(x\right)+{k}_{\mathrm{2}}\left(x\right)g\left(T\right),\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}\text{with}\text {(5)}& g\left(T\right)=\frac{{c}_{\mathrm{4}}}{{T}^{\mathrm{2}}}\mathrm{exp}\left(-\frac{{c}_{\mathrm{5}}}{T}\right).\end{array}$ The empirical functions k[1](x) and k[2](x) are eddy diffusivities for sensible and latent heat, respectively, and g(T) is associated with the Clausius–Clapeyron relation, which describes the relationship between temperature and saturation water vapour content of the atmosphere. The second term – D[II] – on the right-hand side of Eq. (1) describes the energy input associated with the absorption of incoming solar radiation and can be written as follows: $\begin{array}{}\text{(6)}& {D}_{\text{II}}\left(x,T\right)=\mathit{\mu }\mathcal{Q}\left(x\right)\left[\mathrm{1}-{\mathit{\alpha }}_{\mathrm{a}}\left(x,T\right)\right],\end{array}$ where 𝒬(x) is the incoming solar radiation, and α[a](x,T) is the surface reflectivity (albedo), which is expressed as follows: $\begin{array}{}\text{(7)}& {\mathit{\alpha }}_{\mathrm{a}}\left(x,T\right)={\left\{b\left(x\right)-{c}_{\mathrm{1}}\left({T}_{m}+min\left[T-{c}_{\mathrm{2}}z\left(x\right)-{T}_{m},\phantom{\rule where the subscript {⋅}[c] denotes a cutoff for a generic quantity h, defined as follows: The term c[2]z(x) in Eq. (7) indicates the difference between the sea level and surface level temperatures, and b(x) is a temperature independent empirical function of the albedo. The parameterisation given in Eqs. (7)–(8) encodes the positive ice–albedo feedback. The relative intensity of the solar radiation in the model can be controlled by the parameter μ. The last term – D[III] – on the right-hand side of Eq. (1) describes the energy loss to space by outgoing thermal planetary radiation and is responsible for the negative Boltzmann feedback. It can be written as follows: $\begin{array}{}\text{(9)}& {D}_{\text{III}}\left(T\right)=\mathit{\sigma }{T}^{\mathrm{4}}\left[\mathrm{1}-m\mathrm{tanh}\left({c}_{\mathrm{3}}{T}^{\mathrm{6}}\right)\right]\phantom{\rule{0.125em} where σ is the Stefan–Boltzmann constant, and the emissivity coefficient is expressed as 1−mtanh(c[3]T^6). Such a term describes, in a simple yet effective way, the greenhouse effect by reducing infrared radiation losses. The values of the empirical functions $C\left(x\right),Q\left(x\right),b\left(x\right),z\left(x\right),{k}_{\mathrm{1}}\left(x\right),{k}_{\mathrm{2}}\left(x\right)$ at discrete latitudes and empirical parameters ${c}_{\mathrm{1}},{c}_{\mathrm{2}},{c}_{\mathrm{3}},{c}_{\mathrm{4}},{c}_{\mathrm{5}},\mathit{\sigma },m,{T}_{m}$ are taken from Ghil (1976), as modified in Bódai et al. (2015). The choice of the empirical functions and parameters are extensively discussed in Ghil (1976). Of course, one might reasonably wonder about the robustness of our modelling strategy. Indeed, a plethora of EBMs analogous to the one described here have been presented in the literature, where slightly different parameterisations for the diffusion operator, for the albedo, and for the greenhouse effect are introduced. Such models are in fundamental agreement, both in terms of physical (Ghil, 1981; North et al., 1981; North, 1990; North and Stevens, 2006) and mathematical properties (Hetzer, 1990; Díaz et al., 1997; Kaper and Engler, 2013; Bensid and Díaz, 2019). In this study, we consider μ=1.05. For this value of μ, two stable asymptotic states – the W and the SB states – co-exist (see Fig. 1b). Indeed, a codimension of one manifold separates the basins of attraction of the W and SB states. We refer to D^W (D^SB) as the basin of attraction of the W (SB state). We refer to B as the basin boundary, which includes a single edge state M. Therefore, the system has three stationary solutions T[W](x), T[SB](x), and T[M](x) for the W, SB, and M state, respectively, as shown in Fig. 1a. In Ghil (1976), the three stationary solutions were obtained by equating T[t] to 0, and it was shown, through linear stability analysis, that the stationary solutions T[W] and T[SB] are stable, while T[M] is unstable. In Bódai et al. (2015) the unstable solution T[M] was constructed using a modified version of the edge-tracking algorithm (Skufca et al., 2006). Following previous studies (Bódai et al., 2015; Lucarini and Bódai, 2019, 2020; Margazoglou et al., 2021), when visualising our results, we apply a coarse graining to the phase space of the model. In what follows, we perform a projection on the plane spanned by the spatially averaged temperature $\stackrel{\mathrm{‾}}{T}$ and the averaged Equator minus the poles' temperature difference ΔT, which is defined as follows: $\begin{array}{}\text{(10)}& \stackrel{\mathrm{‾}}{T}=\left[T\left(x,t\right){\right]}_{\mathrm{0}}^{\mathrm{1}},\text{(11)}& \mathrm{\Delta }T=\left[T\left(x,t\right){\right]}_{\mathrm{0}}^{\mathrm {1}/\mathrm{3}}-\left[T\left(x,t\right){\right]}_{\mathrm{1}/\mathrm{3}}^{\mathrm{1}},\text{ where}\text{(12)}& \left[T\left(x,t\right){\right]}_{{x}_{l}}^{{x}_{h}}=\frac{{\int }_{{x}_{l}}^{{x}_{h}}\ mathrm{cos}\left(\mathit{\pi }x/\mathrm{2}\right)T\left(x,t\right)\mathrm{d}x}{{\int }_{{x}_{l}}^{{x}_{h}}\mathrm{cos}\left(\mathit{\pi }x/\mathrm{2}\right)\mathrm{d}x}.\end{array}$ Such a representation allows for a minimal yet still physically relevant description of the system. Indeed, changes in the energy budget of the system (warming versus cooling) are, to a first approximation, related to variations in $\stackrel{\mathrm{‾}}{T}$, while the large-scale energy transport performed by the geophysical fluids is controlled by ΔT. The boundary between high and low latitude in Eq. (11) is established at $x=±\mathrm{1}/\mathrm{3}$, i.e. at 30^∘N/S. Additionally, in some visualisations, we consider, as a third coordinate, the fraction of the surface with a below-freezing temperature (therefore, we expect 1 for global glaciation and 0 for no ice). We refer to this variable as I, and it is an attempt to extract an observable that resembles the sea ice percentage of the Earth's surface. Thus, the stationary solutions T[W](x), T[SB](x), and T[M](x), in terms of ΔT and $\stackrel{\mathrm{‾}}{T}$, correspond to ΔT[W]=16K, ΔT[SB]=8.3K, ΔT[M]= 17.5K, ${\stackrel{\mathrm{‾}}{T}}_{\mathrm{W}}$=297.7K, ${\stackrel{\mathrm{‾}}{T}}_{\text{SB}}$=235.1K, ${\stackrel{\mathrm{‾}}{T}}_{\mathrm{M}}$=258K, I[W]=0.2, I[SB]=1, and I[M]=1. 3Background and methodology 3.1Stochastic energy balance model In order to analyse the influence of random perturbations on the deterministic dynamics of the climate model described in Sect. 2, we perturb the relative intensity μ of the solar irradiance by including a symmetric α-stable Lévy process and rewrite Eq. (1) in the form of the following stochastic partial differential equation (SPDE): $\begin{array}{}\text{(13)}& \begin{array}{rl}C\left(x\right){\mathcal{T}}_{t}=& \phantom{\rule{0.25em}{0ex}}{D}_{I}\left(x,\mathcal{T},{\mathcal{T}}_{x},{\mathcal{T}}_{xx}\right)\\ & \phantom{\rule {0.25em}{0ex}}+{D}_{\text{II}}\left(x,\mathcal{T}\right)\left(\mathrm{1}+\mathit{\epsilon }/\mathit{\mu }{\stackrel{\mathrm{˙}}{L}}^{\mathit{\alpha }}\left(t\right)\right)-{D}_{\text{III}}\left(\ where the boundary and initial conditions defined by Eq. (2) apply to the stochastic temperature field 𝒯. Here the parameter ε>0 controls the noise intensity, and (L^α(t)[t≥0]) is a symmetric α -stable process defined in Appendix A. We consider symmetric processes because we want to have a simple mathematical model allowing for transitions in both the SB→W and the W direction →SB direction. Instead, a strongly skewed process would have made it very hard to explore the full phase space because a lack of symmetry would invariably favour one of the two transitions. As mentioned before, we refer to the Lévy case if the stability parameter $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right)$, so that we consider a jump process. We recall that the jumps become more frequent and less intense as α increases. We define $\stackrel{\mathrm{˙}}{\mathcal{L}}\left(t\right)=\mathcal{Q}\left(x\right)\left[\mathrm{1}-{\mathit{\alpha }}_{\mathrm{a}}\left(x,\mathcal{T}\right)\right]{\stackrel{\mathrm{˙}}{L}}^{\ mathit{\alpha }}\left(t\right)$, as the generalised derivative of a stochastic process in a suitably defined functional space. Equation (13) features multiplicative noise. The research interest on this type of SPDE (Doering, 1987; Peszat and Zabczyk, 2007; Duan and Wang, 2014; Alharbi, 2021) is mainly focused on defining weak, strong, mild, and martingale solutions, in specifying under which conditions these solutions exist and are unique, and in constructing numerical approximation schemes for the solutions (Davie and Gaines, 2000; Cialenco et al., 2012; Burrage and Lythe, 2014; Jentzen and Kloeden, 2009; Kloeden and Shott, 2001), among other aspects. First, let us define the concept of a mild solution in this context. Let $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$ be a given complete probability space and $H\left(‖\cdot ‖,〈\cdot ,\ cdot 〉\right)$ a separable Hilbert space with a norm $‖\cdot ‖$ and inner product $〈\cdot ,\cdot 〉$. Equation (13) can be rewritten in the more general form, as follows: $\begin{array}{}\text{(14)}& \begin{array}{rl}& {\mathcal{T}}_{t}=A\left(x\right)\phantom{\rule{0.25em}{0ex}}\left[E\left(x,\mathcal{T}\right)\phantom{\rule{0.25em}{0ex}}{\mathcal{T}}_{x}{\right]}_ {x}+F\left(x,\mathcal{T}\right)+\mathit{\epsilon }G\left(x,\mathcal{T}\right){\stackrel{\mathrm{˙}}{L}}^{\mathit{\alpha }}\left(t\right),\\ & {\mathcal{T}}_{x}\left(-\mathrm{1},t\right)={\mathcal{T}} _{x}\left(\mathrm{1},t\right)=\mathrm{0},\\ & \mathcal{T}\left(x,\mathrm{0}\right)={T}_{\mathrm{0}}\left(x\right),\end{array}\end{array}$ where $A,E,F,G$ are Lipschitz functions defined on $\left[-\mathrm{1},\mathrm{1}\right]×H$ and $G\left(x,\mathcal{T}\right){\stackrel{\mathrm{˙}}{L}}^{\mathit{\alpha }}\left(t\right)=\stackrel{\ mathrm{˙}}{\mathcal{L}}\left(t\right)$. Under certain assumptions (Yagi, 2010), the problem (Eq. 14) is formulated as a Cauchy problem, for which a local mild solution, a progressively measurable process 𝒯(t), for all $t\in \left[\mathrm{0},{t}_{F}\right]$ and T[0]∈H has the following integral representation: $\begin{array}{}\text{(15)}& \begin{array}{rl}\mathcal{T}\left(t\right)=& \phantom{\rule{0.25em}{0ex}}\mathrm{\Psi }\left(t\right){T}_{\mathrm{0}}+\underset{\mathrm{0}}{\overset{t}{\int }}\mathrm{\ Psi }\left(t-s\right)\mathrm{Υ}\left(\mathcal{T}\left(s\right)\right)\mathrm{d}s\\ & +\mathit{\epsilon }\underset{\mathrm{0}}{\overset{t}{\int }}\mathrm{\Psi }\left(t-s\right)G\left(\mathcal{T}\left (s\right)\right)\mathrm{d}\mathit{\beta }\\ & +\mathit{\epsilon }\underset{\mathrm{0}}{\overset{t}{\int }}\mathrm{\Psi }\left(t-s\right)G\left(\mathcal{T}\left(s\right)\right)\mathrm{d}\mathit{\gamma where the dependence on x is kept implicit, and β (γ) is the Poisson random measure (compensated Poisson random measure) defined through Lévy–Itô decomposition. Instead, Ψ(t) with t⩾0 is the evolution operator having the generalised semigroup property for the family of sector operators with the bounded inverses, and $\mathrm{Υ}\left(\mathcal{T}\right)=\mathcal{T}+F\left(x,\mathcal{T}\ right)$, 𝒯∈H is a nonlinear operator, which we assume to be Lipschitz continuous. Following the abstract theory presented in Yagi (2010), under certain structural assumptions for the operators Ψ and Υ and for the functional space, one can prove that the solution Eq. (15) is the unique local mild solution of Eq. (14). As mentioned above, things are radically different for the special case α=2, which corresponds to Gaussian noise. In this case, we revisit Eq. (14), and we define ${\stackrel{\mathrm{˙}}{L}}^{\mathit {\alpha }=\mathrm{2}}\left(t\right)=\stackrel{\mathrm{˙}}{W}\left(t\right)$, where (W(t)[t≥0]) is a Wiener process. We then define $\stackrel{\mathrm{˙}}{\mathcal{W}}\left(t\right)=G\left(x,\mathcal {T}\right)\stackrel{\mathrm{˙}}{W}\left(t\right)$ as the generalised derivative of a Wiener process in a suitably defined functional space. 3.2Noise-induced transitions: mean escape times By incorporating stochastic forcing into the system, its long-time dynamics change significantly, allowing transitions between the competing basins. This dynamical behaviour is called metastability and is graphically captured by Fig. 2, where, in Fig. 2a and b, a typical spatiotemporal evolution of the temperature field is shown for the stability parameters α=0.5 and α=1.5, respectively. Instead, in Fig. 2c and d, the temporal evolution of the global temperature $\stackrel{\mathrm{‾}}{\mathcal{T}}$ and of the averaged Equator and poles' temperature difference Δ𝒯 (as defined in Eqs. 10–11) is correspondingly shown. In what follows, we investigate the time statistics and the paths of the transitions between such basins. In a complete probability space $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$ we define the first exit time τ[x] of a cádlág mild solution $\mathcal{T}\left(\cdot ;x\right)$ of Eq. (13), starting at the $x\in {D}^{\mathrm{W}/\text{SB}}$ domain of a W/SB climate stable state as follows: $\begin{array}{}\text{(16)}& {\mathit{\tau }}_{x}\left(\mathit{\omega }\right)=inf\left\{t>\mathrm{0}|{\mathcal{T}}_{t}\left(\mathit{\omega },x\right)otin {D}^{\mathrm{W}/\text{SB}}\right\},\mathit{\ omega }\in \mathrm{\Omega },\phantom{\rule{0.33em}{0ex}}\phantom{\rule{0.33em}{0ex}}x\in H.\end{array}$ The mean escape time is then expressed by 𝔼[τ[x](ω)]. In the case of the infinite dimensional multistable reaction–diffusion system described by Chafee–Infante equation under the influence of additive infinite-dimensional α-stable Lévy noise – $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right)$ – it was shown (Debussche et al., 2013) that, in the weak noise limit, ε→0 the mean escape time from one of the competing basins of attraction increases as ε^−α. In such a limit, the jump diffusion system reduces the Markov chain to a finite state, with values in the set of stable states. Details of this method are given in Appendix B. Similar results have been obtained for bistable one-dimensional stochastic differential equations (SDEs; Imkeller and Pavlyukevich, 2006a, b). The basic reason behind this result is that, in order to study the transitions between the competing basins of attraction, one can treat the Lévy noise as a compound Poisson process, where jumps arrive randomly, according to a Poisson process, and the size of the jumps x is given by a stochastic process that obeys a specified probability distribution. For a symmetric α-stable Lévy process, such a distribution asymptotically decreases as $|x{|}^{-\mathrm{1}-\mathit{\alpha }}$, as discussed in Appendix A. Let us assume that positive values of x bring the state of the system closer to the basin boundary (as in the case of positive fluctuations of the solar irradiance when studying escapes from the SB state). Assuming a simple geometry for the basin boundary, we find that a transition takes place when an event larger than a critical value x[crit]>0 is realised. The probability of such an event scales with ${x}_{\text{crit}}^{-\mathit{\alpha }}$. A similar argument applies when considering transitions triggered by negative fluctuations of the stochastic variable. Small-size events, which occur frequently and correspond to the non-occurrence of jumps, do not actually play any relevant role in determining the transitions, while they are responsible for the variability within each basin of attraction. We now consider the case α=2. While the corresponding finite dimensional problem is thoroughly documented in the literature (Freidlin and Wentzell, 1984) and has been applied in a similar context by some of the authors (Lucarini and Bódai, 2019, 2020; Ghil and Lucarini, 2020; Margazoglou et al., 2021), the treatment of infinite dimensional SDEs driven by an infinite dimensional Wiener process via the Freidlin–Wentzell theory requires further extension. In the present context, we refer to Budhiraja and Dupuis (2000) and Budhiraja et al. (2008) and references therein, where the problem of an infinite dimensional reaction–diffusion equation driven by an infinite dimensional Wiener process has been addressed. We assume that steady-state conditions and ergodicity are met, and we also assume that the analysing system is bistable and a unique edge state is present at the basin boundary, as in the case studied here. In the case of Gaussian noise, transitions between the competing basins of attraction are not determined by a single event as in the $\mathrm{0}<\mathit{\alpha }<\mathrm{2}$ case but, instead, occur as a result of very unlikely combinations of subsequent realisations of the stochastic variable acting as a forcing. In the weak noise limit, the transitions occur according to the least unlikely (yet very unlikely) chain of events, whose probability is described using a large deviation law (Varadhan et al., 1985). One finds that the mean escape time from either basin of attraction decreases exponentially with increasing noise intensity ε and is given by a generalised Kramers' law, as follows: $\begin{array}{}\text{(17)}& \mathbb{E}\left[{\mathit{\tau }}_{\mathrm{W}/\text{SB}}\left(\mathit{\epsilon }\right)\right]\approx \mathrm{exp}\left(\frac{\mathrm{2}\mathrm{\Delta }{\mathrm{\Phi }}_{\ mathrm{W}\to \mathrm{M}/\text{SB}\to \mathrm{M}}\left(\mathcal{T}\right)}{{\mathit{\epsilon }}^{\mathrm{2}}}\right),\end{array}$ where $\mathrm{\Delta }{\mathrm{\Phi }}_{\mathrm{W}\to \mathrm{M}}={\mathrm{\Phi }}_{\mathrm{M}}\left(\mathcal{T}\right)-{\mathrm{\Phi }}_{\mathrm{W}}\left(\mathcal{T}\right)$ is the height of the quasi-potential barrier in the W attractor; correspondingly, $\mathrm{\Delta }{\mathrm{\Phi }}_{\text{SB}\to \mathrm{M}}\left(\mathcal{T}\right)={\mathrm{\Phi }}_{\mathrm{M}}\left(\mathcal{T}\right)- {\mathrm{\Phi }}_{\text{SB}}\left(\mathcal{T}\right)$ is the height of the quasi-potential barrier in the SB attractor, and Φ is the Graham quasi-potential mentioned above (Graham, 1987; Graham et al., 1991). 3.3Noise-induced transitions: most probable transition paths In the weak noise limit, the most probable path to escape an attractor is defined by a class of trajectories named instantons (Grafke et al., 2015, 2017; Bouchet et al., 2016; Grafke and Vanden-Eijnden, 2019) or maximum likelihood escape paths (Lu and Duan, 2020; Dai et al., 2020; Hu and Duan, 2020; Zheng et al., 2020). However, note that different noise laws result into possibly radically different instantonic trajectories (Dai et al., 2020; Zheng et al., 2020). In our case, the theory indicates that, if the stochastic forcing is Gaussian, under a rather general hypothesis, the instanton will connect the attractor W/SB with the edge state M, which then acts as gateway for noise-induced transitions. Once the quasi-potential barrier is overcome, a free-fall relaxation trajectory links M with the competing attractor SB/W. For equilibrium systems, (e.g. for gradient flows), where a detailed balance is achieved, the relaxation and instantonic trajectories within the same basin of attraction are identical. On the contrary, for non-equilibrium systems, the relaxation and instantonic trajectories will differ and will only meet at the attractor. (See a detailed discussion of this aspect and of the dynamical interpretation of the quasi-potential Φ in Lucarini and Bódai, 2020, and Margazoglou et al., 2021). Instead, if the noise is of Lévy type, the theory formulated for simpler equations suggests that the instanton will connect the attractor with a region on the basin boundary that is the nearest, in the phase space, to the attractor, as the concept of the quasi-potential is immaterial (Imkeller and Pavlyukevich, 2006a, b). In general, the maximum likelihood transition trajectory 𝒯[M](t) can be defined (Zheng et al., 2020; Lu and Duan, 2020) as a set of system states at each time moment $t\in \left[\mathrm{0},{t}_{f}\ right]$ that maximises the conditional probability density function $p\left(\phantom{\rule{0.33em}{0ex}}.\phantom{\rule{0.33em}{0ex}}|\phantom{\rule{0.33em}{0ex}}\phantom{\rule{0.33em}{0ex}}.\phantom {\rule{0.33em}{0ex}};\phantom{\rule{0.33em}{0ex}}.\phantom{\rule{0.33em}{0ex}}\right)$ of the passage from the origin stable state ${\mathit{\varphi }}^{\mathrm{W}/\text{SB}}$ to the destination stable state ${\mathit{\varphi }}^{\text{SB}/\mathrm{W}}$ and is expressed as follows: $\begin{array}{}\text{(18)}& \begin{array}{rl}& {\mathcal{T}}_{\mathrm{M}}\left(t\right)\\ & =\mathrm{arg}\underset{x}{max}\phantom{\rule{0.33em}{0ex}}\left[\phantom{\rule{0.33em}{0ex}}p\phantom{\ \right)\phantom{\rule{0.33em}{0ex}}\right]\\ & =\frac{p\phantom{\rule{0.33em}{0ex}}\left(\mathcal{T}\left({t}_{f}\right)={x}_{f}\phantom{\rule{0.33em}{0ex}}|\phantom{\rule{0.33em}{0ex}}\mathcal{T}\ left(t\right)=x\right)\cdot p\phantom{\rule{0.33em}{0ex}}\left(\mathcal{T}\left(t\right)=x\phantom{\rule{0.33em}{0ex}}|\phantom{\rule{0.33em}{0ex}}\mathcal{T}\left(\mathrm{0}\right)={x}_{\mathrm{0}}\ where x[0] (x[f]) belongs to the basin of attraction ${D}^{\mathrm{W}/\text{SB}}$ (${D}^{\text{SB}/\mathrm{W}}$), and $p\phantom{\rule{0.33em}{0ex}}\left(\phantom{\rule{0.33em}{0ex}}.\phantom{\rule {0.33em}{0ex}}|\phantom{\rule{0.33em}{0ex}}\phantom{\rule{0.33em}{0ex}}.\phantom{\rule{0.33em}{0ex}}\right)$ is the probability density function evolving according to the Fokker–Planck equation ( Risken, 1996). This method is applicable either if efficient numerical algorithms are available to solve the Fokker–Planck equation associated to the studied stochastically driven system, or, empirically, when considering a large ensemble of simulations. Note that this is not an asymptotic approach, i.e. it does not require a weak noise limit ε→0 and is applicable for systems with either Gaussian or non-Gaussian noise. Yet, in the weak noise limit, the definition (Eq. 18) leads to constructing the optimal transition paths described above. In the following section, for practical purposes, we construct such optimal transition path in the coarse-grained 2D phase space ($\stackrel{\mathrm{‾}}{\mathcal{T}},\mathrm{\Delta }\mathcal{T}$) and 3D phase space ($\stackrel{\mathrm{‾}}{\mathcal{T}},\mathrm{\Delta }\mathcal{T},\mathcal{I}\right)$ of the variables defined in Sect. 2 by averaging the ensemble of transitions connecting the two competing states in the weak noise limit. 3.4Numerical methods We solve Eq. (13) through the MATLAB pdepe function, which is well suited for solving 1D parabolic and elliptic PDEs. We discretise the 1D space with a regular grid of 201 grid points, following Bódai et al. (2015). The time span of integration $t\in \left[\mathrm{0},{T}_{f}\right]$, varies for different cases, with ${T}_{f}\in \left({\mathrm{10}}^{\mathrm{5}},\mathrm{15}×{\mathrm{10}}^{\mathrm{5}}\right)$ years, with a time stepping of 1 year. Each year, we consider a different value for the relative solar irradiance by extracting a random number Z[j] (see Eq. 19). To simulate the stochastic noise term εL^α(t), which is added in the parameter μ in Eq. (13), we use the recursive algorithm from Duan (2015). The process values ${L}^{\mathit{\alpha }}\left({t}_{\mathrm{1}}\right),\mathrm{\dots }, {L}^{\mathit{\alpha }}\left({t}_{N}\right)$ at each moment t[j], j∈ℕ are obtained via the following: $\begin{array}{}\text{(19)}& {L}^{\mathit{\alpha }}\left({t}_{j}\right)={L}^{\mathit{\alpha }}\left({t}_{j-\mathrm{1}}\right)+\left({t}_{j}-{t}_{j-\mathrm{1}}{\right)}^{\frac{\mathrm{1}}{\mathit{\ alpha }}}{Z}_{j},j=\mathrm{1},\mathrm{\dots },N,\end{array}$ where the second term is an independent increment, and Z[j] are the independent standard symmetric α-stable random numbers generated by an algorithm in Weron and Weron (1995, see also Grafke et al., 2015, for a detailed explanation of the steps above). For illustrative reasons, some sample solutions of Eq. (13) for different values of α are shown in Fig. 2a and b. For the numerical simulations discussed below, we consider $\mathit{\alpha }=\left(\mathrm{0.5},\mathrm{1.0},\mathrm{1.5},\mathrm{2}\right)$ and $\mathit{\epsilon }\in \left(\mathrm{0.0001},\mathrm {0.3}\right)$. We select ε in such a way that the noise intensity is strong enough to induce at least an order of 10 transition, given our constraints in the time length of the simulations, and weak enough that we are not far from the weak noise limit, where the scaling laws discussed above apply and transitions paths are well organised. Our simulations are performed by taking the Itô interpretation for the stochastic equations. We remark that, when we consider Lévy noise, it does happen that, for some years, the solar irradiance has negative values. Of course these conditions bear no physical relevance, and are a necessary result of considering unbounded noise. Nonetheless, we have allowed for this to occur in our simulations in order to be able to stick to the desired mathematical framework. We remind the reader that this study does not aim at capturing, with any high degree of realism, the description of the actual evolution of climate. At any rate, as can be understood from the discussion below in Sect. 4.2.2 and from what is reported in Appendix C, were we to consider longer-lasting (e.g. 2 years vs. 1 year) fluctuations of the solar irradiance, a satisfactory exploration of the transitions between the competing W and SB states would be possible with a greatly reduced occurrence of such unphysical events, and the basic reason for this being the presence of a larger factor $\left({t}_{j}-{t}_{j-\ mathrm{1}}{\right)}^{\frac{\mathrm{1}}{\mathit{\alpha }}}$ in Eq. (19). In what follows, we aim at addressing three main questions: (1) what are the temporal statistics of the SB→W and W→SB transitions? (2) What are the typical transition pathways? (3) What are the fundamental differences between transitions caused by Gaussian vs. Lévy noise? A summary of the results of the numerical simulations is given in Table D1 in Appendix D, including the sample size, i.e. number of transitions, point estimates for mean escape time, and their 0.95 confidence intervals for exits from both the W and SB basins. See the data availability section for information on how to access the supplement (Lucarini et al., 2022), which contains the raw data produced in this study and some illustrative animations portraying noise-induced transitions between the two competing metastable states. 4.1Mean escape time Our analysis confirms that there is a fundamental dichotomy in the statistics of mean escape times between Lévy noise and Gaussian noise-induced transitions. Figure 3a shows the dependence of the mean escape time from either attractor on ε and α for the Lévy case. The red circles (blue squares) correspond to escapes from the W (SB) basin (see Lucarini et al., 2022, for additional details). The scaling $\propto {\mathit{\epsilon }}^{-\mathit{\alpha }}$ presented in Eq. (B6) is shown by the dotted black line for each value of α. We also portray the best power law fit of the mean residence time with respect to ε for each value of α; the confidence intervals of the exponent are shown in Table 1. Our empirical results seem to indicate, at least in this case, an agreement with the ε^−α scaling presented and discussed earlier in the paper. This points at the possibility that the ε^−α scaling might apply in more general conditions than what has been, as of yet, rigorously proven, and this is specifically the case when multiplicative Lévy noise is considered. The stochastically perturbed trajectories forced by Lévy noise consist of jumps, and the probability of occurrence of a high jump, which can trigger the escape from the reference basin of attraction, is polynomially small in noise intensity ε. The Gaussian case – where no jumps are present – is portrayed in Fig. 3b. We show, in semi-logarithmic scale, the mean residence time versus $\mathrm{1}/{\mathit{\epsilon }}^{\mathrm{2}}$. We perform a successful linear fit of the logarithm of the mean residence time in either attractor versus $\mathrm{1}/{\mathit{\epsilon }}^{\mathrm{2}}$, and using Eq. (17), we obtain an estimate of the local quasi-potential barrier $\mathrm{\Delta }{\mathrm{\Phi }}_{\mathrm{W}/\text{SB}\to \mathrm{M}}$, which is half of the slope of the corresponding straight lines of the linear fit (see the last column of Table 1). We conclude that, for μ=1.05, the local minimum of Φ corresponding to the W state is deeper than the one corresponding to the SB state. 4.2Escape paths for the noise-induced transitions We now explore the geometry of the transition paths associated with the metastable behaviour of the system. We first discuss the case of Gaussian noise because it is indeed more familiar and more extensively studied. 4.2.1Gaussian noise We estimate the transition paths by averaging among the escape plus relaxation trajectories using the run performed with the weakest noise (see Table D1). We first perform our analysis in the 2D-projected state space defined by $\left(\stackrel{\mathrm{‾}}{\mathcal{T}},\mathrm{\Delta }\mathcal{T}\right)$. We prescribe two small circular-shaped regions enclosing the two deterministic attractors and search the time series of the portions of the whole trajectory that leave one of these regions and reach the other one. This creates two subsets of our full dataset from which we build a 2D histogram for each of the SB→W and W→SB transitions in the projected space. We then estimate the most probable transition paths by finding, for each bin value of $\stackrel{\mathrm{‾}}{\mathcal {T}}$, the peak of the histogram in the Δ𝒯 direction. The distributions are very peaked, and almost indistinguishable estimates for the instantonic and relaxation trajectories are obtained when computing the average of Δ𝒯 according to the 2D histogram conditional on the value of $\stackrel{\mathrm{‾}}{\mathcal{T}}$. In the background of Fig. 4a, we show the empirical estimate of the invariant measure in the 2D-projected state space defined by $\left(\stackrel{\mathrm{‾}}{\mathcal{T}},\mathrm{\Delta }\mathcal{T}\ right)$. Additionally, we indicate the position of the deterministic attractors, where the blue (red) circle corresponds to the SB (W) state and of the M state (green square). In the inset of Fig. 4 a, we present the ensemble of W→SB (SB→W) transitions as deep blue (red) contours. The most probable transition paths are shown in blue for the W→SB and in red for the SB→W. The instantonic portion of the blue (red) line is the one connecting the W (SB) attractor to the M state and is portrayed as a solid line, while the relaxation portion, connecting the M state with the SB (W) attractor, is portrayed as a dashed line. Within each basin of attraction, the instantonic and relaxation trajectories do not coincide, and, instead, only meet at the corresponding attractor and at the M state. This is particularly clear for the W state. The presence of such a loop, proving the existence of non-vanishing probability currents and the breakdown of detailed balance, is a signature of non-equilibrium dynamics, which was also observed in Margazoglou et al. (2021) and has, instead, gone undetected in Lucarini and Bódai (2019) and Lucarini and Bódai (2020). See Lucarini et al. (2022) for some illustrative simulations of the transitions. Let us provide some physical interpretation of how the transitions occur. Looking at the SB→W most probable path, the escape includes a simultaneous increase in $\stackrel{\mathrm{‾}}{\mathcal{T}}$ and Δ𝒯. In practice, a SB→W transition takes place when, starting at the SB state, one has a (rare) sequence of positive anomalies in the fluctuating solar irradiance $\stackrel{\mathrm{̃}}{\mathit{\ mu }}$, i.e. $\stackrel{\mathrm{̃}}{\mathit{\mu }}>\mathit{\mu }$. While the planet is warming globally, the Equator is warming faster than the poles, resulting in a positive rate $\stackrel{\mathrm {˙}}{\mathrm{\Delta }\mathcal{T}}>\mathrm{0}$, because it receives, in relative and absolute terms, more incoming solar radiation. Considering that the Equator also in the SB state is warmer than the poles, the melting of the ice conducive to the transition occurs first at the Equator, with a subsequent decrease in the albedo in low latitudes. Once the system crosses the M state, and supposing that persistent $\stackrel{\mathrm{̃}}{\mathit{\mu }}<\mathit{\mu }$ do not appear at this stage, the system will relax towards the W state. The relaxation includes a consistent global warming of the planet but with a change in sign in the rate of $\stackrel{\mathrm{˙}}{\mathrm{\Delta }\mathcal{T}}$, and a subsequent decrease in Δ𝒯, implying that as soon as the temperature at the Equator has risen enough, the poles will then warm at a faster pace because the ice–albedo effect kicks in. The global freezing of the planet associated with the W→SB transition is qualitatively similar but not identical to the reverse SB→W process. Notice a considerable overlap of the transition paths ensembles in both basins of attraction, shown as red and blue contours in the inset of Fig. 4a. This implies the presence of less extreme non-equilibrium conditions compared to what was observed in Margazoglou et al. (2021), where the W→SB and SB→W transitions occurred through fundamentally different paths (see the discussion therein, especially regarding the role of the hydrological cycle). Figure 4b presents the optimal transition paths W→SB and SB→W in a three-dimensional projection, where we add, as a third coordinate, the variable ℐ, which indicates the fraction of the surface that has subfreezing temperatures (𝒯<273.15K). On the sides of the figure, two two-dimensional projections on the (𝒯,Δℐ) and on the (Δ𝒯,ℐ) planes are shown. Here, darker brown shadings indicate the higher density of points and the red and blue dots sample the highest probability for the SB→W and W→SB transitions paths, respectively. One could argue that the presence of an intersection between the SB→W and W→SB highest probability transition paths in Fig. 4a could have been a simple effect of 2D projection. Instead, we see here that the SB→W and W→SB most probable transition paths also cross in the 3D projection in a well-defined region, which indeed corresponds to the M state (pink square). 4.2.2Lévy noise There is scarcity of rigorous mathematical results regarding the weak noise limit of the transition paths between competing states in metastable stochastic systems forced by multiplicative Lévy noise. Indeed, the derivation of analytical results for this type of system largely remains an open problem. Recently, for stochastic partial differential equations with additive Lévy and Gaussian noise, the Onsager–Machlup action functional has been derived in Hu and Duan (2020), leading to a precise formulation of the most probable transition paths. Hence, we do not have solid mathematical results to interpret what we describe below, where, instead, we need to use heuristic arguments. As far as we know, this is the first attempt to estimate the most probable transition pathway between the metastable states in infinite stochastic systems with multiplicative pure Lévy process. A striking feature in Fig. 5 is that the invariant measure and the structure of the most probable transition paths (SB→W and W→SB), in the weak noise limit, are fundamentally different between the Lévy case and the Gaussian one. The invariant measure is highly peaked (dark red in the colour scheme) in a small region around the deterministic attractors, as, most typically, the Lévy noise fluctuations of $\stackrel{\mathrm{̃}}{\mathit{\mu }}$ are very small. Additionally, the most probable transition paths depend very weakly on the chosen value for the stability parameter α. This suggests that the geometry of most probable path of transitions does not depend on the frequency and height of the Lévy diffusion jumps but rather on the qualitative fact that we are considering jump processes. Note that each panel of Fig. 5 is computed using data coming from the weakest noise considered for the corresponding value of α (see Table D1). The W→SB most probable transition path is characterised by the simultaneous decrease in both $\stackrel{\mathrm{‾}}{\mathcal{T}}$ and Δ𝒯. This implies that the jump leads to a rapid and direct freezing of the whole planet. The stochastically averaged path crosses the basin boundary far from the M state. The most probable SB→W transition follows, instead, a path that is somewhat similar to the one found for the Gaussian case. We then argue that the closest region in the basin boundary to SB attractor is not too far from the M state. Further insight on difference between the Gaussian and Lévy case can be found by looking at the animations included in Lucarini et al. (2022). 4.2.3Lévy noise – singular perturbations Based on what is discussed in Sect. 3.2, we expect that the transitions occur through the nearest region to the outgoing attractor in the basin boundary. We now try to clarify the properties of the most probable escape paths in the Lévy noise case by considering an additional set of simulations, taking inspiration from the edge-tracking algorithm (Skufca et al., 2006). The idea is to exploit the fact that large jumps drive the transitions in the Lévy noise case. Starting from the deterministic SB state, we apply, in Eq. (6), singular perturbations of the form $\mathit{\mu }\to \mathit{\ mu }+\mathit{\kappa }\mathit{\delta }\left(t\right)$ and bracket the critical value ${\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W}}$, leading to a transition to the W state as ${\mathit{\ kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W}}\in \left[{\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{s}},{\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{u}}\ right]$, where the simulation performed with the supercritical (subcritical) value of $\mathit{\kappa }={\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{u}}$ ($\mathit{\kappa }={\ mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{s}}$) asymptotically reaches the competing (comes back to the initial) steady state. We obtain ${\mathit{\kappa }}_{\text{crit}}^{\text {SB}\to \mathrm{W},\mathrm{s}}\approx \mathrm{1.149}$y and ${\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{u}}\approx \mathrm{1.1492}$y. Starting from the W state, we follow a similar procedure and find ${\mathit{\kappa }}_{\text{crit}}^{\mathrm{W}\to \text{SB}}\in \left[{\mathit{\kappa }}_{\text{crit}}^{\mathrm{W}\to \text{SB},\mathrm{u}},{\mathit{\kappa }}_{\text{crit}}^ {\mathrm{W}\to \text{SB},\mathrm{s}}\right]$. We obtain ${\mathit{\kappa }}_{\text{crit}}^{\mathrm{W}\to \text{SB},\mathrm{s}}\approx -\mathrm{1.3458}$y and ${\mathit{\kappa }}_{\text{crit}}^{\ mathrm{W}\to \text{SB},\mathrm{u}}\approx -\mathrm{1.346}$y. In both cases, the value of κ of the supercritical and subcritical paths differs by δκ≈0.0002y. The projections on the 2D phase space spanned by (𝒯,Δ𝒯) of the supercritical and subcritical paths corresponding to the SB→W (W→SB) transition are shown in Fig. 6a (Fig. 6b), using the thick and thin black dashed lines, respectively. The basin boundary is indicated by a cyan (magenta) line for the SB→W (W→SB) transition. The steps to estimate the basin boundary are presented in Appendix C. Note that we are using, as background, the invariant measure and the subset of transitions referring to Lévy noise simulations performed using α=1.0. Nonetheless, what is discussed below would apply equally well had we chosen to consider as background, instead, data coming from the simulation performed with α=0.5 or α=1.5. Indeed, for the link we propose between transitions due to the Lévy noise and the case of singularly perturbed trajectories, what matters depends on the discontinuous nature of the Lévy noise. In Fig. 6a (Fig. 6b) the supercritical and subcritical paths are superimposed on the ensemble of the trajectories of the SB→W (W→SB) transitions due to Lévy noise. The lines are better visible in the insets. By construction, after the perturbation is applied, the supercritical and subcritical orbits are close to the basin boundary. Hence, they are attracted towards the M state before being repelled towards the final asymptotic state. For comparison, we also portray, for both the SB→W and the W→SB case, an additional pair of supercritical and subcritical paths that are constructed using values of κ that differ by δκ=0.3y, which is a much larger difference than the one mentioned previously (for the dashed lines). The paths depicted as thick (thin) solid lines cross (do not cross) the basin boundary. When looking at the W→SB transitions due to Lévy noise, we understand that, if the perturbation sends the orbit near the basin boundary, then the subsequent evolution of the system follows the supercritical paths. Of course, the Lévy perturbation often overshoots the basin boundary. In this case, after the transition, the orbit is not necessarily attracted towards the M state, whereas it converges directly to the final SB state. The signature of the attracting influence of the M state persists in the stochastically averaged transition trajectory. Note the bending towards higher values of Δ𝒯 before the eventual convergence to the SB state. In the case of the SB→W transitions, a similarly good correspondence between the supercritical path and the stochastically averaged transition trajectory is found. Further support to the viewpoint proposed here is given by Fig. 7a and b, which are constructed along the lines of Fig. 4b and portray the supercritical and subcritical paths and the stochastically averaged transition trajectory realised via Lévy noise, with α=1.0 for the SB→W and W→SB case, respectively. The transitions shown in Figs. 6 and 7 have been obtained by considering a discrete approximation of Dirac's δ, where the forcing acts at a constant value for 1y. Specifically, Dirac's δ(t) is approximated as Δ[τ](t), where ${\mathrm{\Delta }}_{\mathit{\tau }}\left(t\right)=\mathrm{1}/\mathit{\tau }$, if $\mathrm{0}<t<\mathit{\tau }$, and vanishes elsewhere. The results are virtually unchanged if one considers τ<1y because the main dynamical processes of the re-equilibration of the system act on longer timescales. The effect of the negative feedbacks of the system starts to become apparent when considering slower perturbations, lasting 2 or more years. Indeed, the resilience of the system to transitions is reduced when, ceteris paribus, faster perturbations are considered (see the analyses in this direction dealing with stability of the large-scale ocean circulation; Stocker and Schmittner, 1997; Lucarini et al., 2005, 2007; Alkhayuon et al., 2019). Nonetheless, also in this case, the agreement with the results presented in Figs. 6 and 7 is considerable. We remark that considering longer-lasting perturbations allows one to observe W→SB transitions without using, at any time, (unphysical) negative values for the solar irradiance. This is reassuring in terms of the robustness and of the physical sense of our results. Further details on the impact of considering different values of τ are reported in Appendix C. It is a well-known that, as a result of the competition between the Boltzmann stabilising feedback and the ice–albedo destabilising feedback, under current astronomical and astrophysical conditions, the climate system is multistable, as at least two competing and distinct climates are present, i.e. the W and the SB. More recent investigations indicate that the partition of the phase space of the climate system might be more complex, as more than two asymptotic states might be present, some of them, possibly, associated with small basins of attraction. For deterministic multistable systems, the asymptotic state of an orbit depends uniquely on the initial condition, and, specifically, on which basin of attraction it belongs to. The presence of stochastic forcing allows for transitions to occur between competing basins, thus giving rise to the phenomenon of metastability. Gaussian noise as a source of stochastic perturbations has been widely studied by the scientific community in recent years and provided very fruitful insight into the multiscale nature of the climatic time series. However, it has become apparent that more general classes of α-stable Lévy noise laws might also be suitable for modelling the observed climatic phenomena. In this regard, it is important to achieve a deeper understanding of the possible noise-induced transitions between competing stable climate states under α-stable Lévy perturbations and compare them with the Gaussian case. As a starting point in this direction, we have studied the influence of different noise laws on the metastability properties of the randomly forced Ghil–Sellers EBM, which is governed by a nonlinear, parabolic, reaction–diffusion PDE. In the deterministic version of the model, we have three steady-state solutions, i.e. two stable, attractive climate states and one unstable saddle, corresponding to the edge state. The stable states correspond to the well-known W and SB climates. There is a fundamental dichotomy in the properties of the noise-induced transitions determined by whether we consider a stochastic forcing of intensity ε with a Gaussian versus an α-stable Lévy noise law. Note that, instead, the spatial structure of the noise is unchanged. This indicates that the phenomenology associated with the metastable behaviour depends critically on the choice of the noise law. Not many studies have investigated, numerically or through mathematical theory, the properties of transitions in metastable systems driven by multiplicative Lévy noise, as done here. First, in the weak noise limit ε→0, the mean residence times inside either competing basin of attraction for diffusions driven by Gaussian vs. Lévy noise have a fundamentally different dependence on ε. Our results show that the logarithm of the mean residence time for Gaussian diffusions scales with ε^−2, while, instead, a much weaker dependence is found for the Lévy case. Indeed, we find that the mean residence time is proportional to ε^−α, where α is the stability parameter of the noise law. This result is in agreement with what has been proven in some special cases for additive Lévy noise and might indicate that these scaling properties are more general than usually assumed. We propose a simple argument based on approximating the Lévy noise as a composed Poisson process to support the applicability of the result in general circumstances, but, clearly, detailed mathematical investigations in this direction are needed. Second, the results obtained for the most probable transition paths confirm that, in the weak noise limit, escapes from basins of attraction driven by Gaussian noise take place through the edge state. Additionally, instantonic and relaxation portions within each basin of attraction are clearly distinct, indicating nonequilibrium conditions that are, yet, qualitatively similar. In turn, Lévy diffusions leave the basin through the boundary region closest to the outgoing attractor, which seems to be the vicinity of the edge state when the thawing transition is considered. The freezing transition, instead, proceeds along a path that is fundamentally different. Finally, the most probable transition paths for the Lévy case appear to depend very weakly on the value of the stability parameter α, but seem, instead, to be determined by the nature of the Lévy noise of being a jump process. Indeed, we suggest that these properties can be better understood by considering that, to a first approximation, the transitions due to the Lévy diffusion correspond to supercritical paths associated with Dirac's δ-like singular perturbations to the solar irradiance. This viewpoint seems of general relevance to other problems where Lévy noise is responsible for exciting transitions between competing metastable states. Our findings provide strong evidence that choosing noise laws other than Gaussian leads to fundamental changes in the metastability properties of a system, both in terms of statistics of the transitions between competing basins of attraction and most probable paths for such transitions. Leaving the door open for general noise laws might be relevant, both for interpreting observational data and for performing modelling exercises for the climate system and complex systems in general. Let us give an example of the impact of making a wrong assumption on the nature of the acting stochastic forcing. Were we to naively interpret one of the panels in Fig. 5 as resulting from the dynamics of a dynamical system perturbed by Gaussian noise, then we would have to conclude that the unperturbed deterministic system possesses at least two edge states on the basin boundary separating the competing basins of attraction (see Margazoglou et al., 2021, for a case where this situation applies). Hence, we would infer fundamentally wrong properties on the geometry of the phase space. Additionally, we would infer fundamentally different properties for the drift term. Recent developments in data-driven methods based on the formalism of the Kramers–Moyal equation allow one to test accurately whether data are compatible with the hypothesis that stochasticity in the dynamics enters as a result of Gaussian noise or more general form of random forcing (Rydin Gorjão et al., 2021b; Li and Duan, 2022). Indeed, we point the reader to the recent contribution by Rydin Gorjão et al. (2021a), which shows that the analysis of proxy climatic datasets indicates the need to go beyond Langevin equation-based modelling, as they discover that it is necessary to treat noise as the sum of continuous and discontinuous processes. This indicates the need to consider, in future modelling exercises, the possibility of investigating the properties of metastable systems where the stochastic forcing comes as the result of simultaneous Gaussian and α-stable Lévy noise perturbations. Appendix A:Stochastic perturbations of Lévy type In this section, we provide a summary of the basic properties of a symmetric α-stable Lévy process in a Hilbert space in which the solutions to SPDE (Eq. 13) are defined. We also repeat some properties in the ℝ^n space that are more familiar to a wide audience of readers. It is pertinent to refer to the distribution law of Lévy increments, its characteristic function, the Lévy–Itô decomposition, and the Lévy jump measure for a deeper study of the metastable behaviour of the stochastic climate system (Eq. 13). Let $\left(\mathrm{\Omega },\mathcal{F},\mathbb{P}\right)$ be a given complete probability space and $H\left(‖\cdot ‖,〈\cdot ,\cdot 〉\right)$ a separable Hilbert space with the norm $‖\cdot ‖$ and inner product $〈\cdot ,\cdot 〉$. A stochastic process (L^α(t)[ t≥0]) is a symmetric α-stable Lévy process in H if it satisfies the following: 1. L^α(0)=0, almost surely. 2. Independent increments – for any n∈ℕ and $\mathrm{0}\mathit{⩽}{t}_{\mathrm{1}}<{t}_{\mathrm{2}}<\mathrm{\cdots }<{t}_{n-\mathrm{1}}<{t}_{n}$, the vector is as follows: $\begin{array}{}\text{(A1)}& \left({L}^{\mathit{\alpha }}\left({t}_{\mathrm{1}}\right)-{L}^{\mathit{\alpha }}\left({t}_{\mathrm{0}}\right),\mathrm{\dots },{L}^{\mathit{\alpha }}\left({t}_{n}\ right)-{L}^{\mathit{\alpha }}\left({t}_{n-\mathrm{1}}\right)\right),\end{array}$ where there is a family of independent random vectors in H. 3. Stationary increments – for 0⩽l<t random vectors L^α(t)−L^α(l) and L^α(t−l) have the same law 𝔏(.) in H, as follows: $\begin{array}{}\text{(A2)}& \mathfrak{L}\left({L}^{\mathit{\alpha }}\left(t\right)-{L}^{\mathit{\alpha }}\left(l\right)\right)=\mathfrak{L}\left({L}^{\mathit{\alpha }}\left(t-l\right)\right).\ This law in ℝ^n is a symmetric α-stable distribution $\mathfrak{L}\left(.\right)={S}_{\mathit{\alpha }}\left(\left(t-l{\right)}^{\frac{\mathrm{1}}{\mathit{\alpha }}},\mathrm{0},\mathrm{0}\right)$ , i.e. zero skewness and shift parameters, with a stability parameter $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right]$ and a scaling parameter $\left(t-l{\right)}^{\frac{\mathrm{1}}{\ mathit{\alpha }}}$. The stable distribution by the generalised central limit theorem (Schertzer and Lovejoy, 1997) is a limit in the distribution as n→∞ of the normalised sum ${Y}_{n}=\frac{\ mathrm{1}}{{B}_{n}}\sum _{i=\mathrm{1}}^{n}\left({X}_{i}-{M}_{n}\right)$ of n independent identically distributed random variables X[i], with a common probability distribution function F(x), which does not necessarily have to have moments of both the first and second order. A necessary and sufficient condition for this is as follows (Keller and Kuske, 2000; Burnecki et al., 2015): $\begin{array}{}\text{(A3)}& \begin{array}{rl}F\left(x\right)& =\left[{c}_{\mathrm{1}}+{r}_{\mathrm{1}}\left(x\right)\right]\phantom{\rule{0.25em}{0ex}}|x{|}^{-\mathit{\alpha }},x<\mathrm{0},\\ & =\mathrm{1}-\left[{c}_{\mathrm{2}}+{r}_{\mathrm{2}}\left(x\right)\right]\phantom{\rule{0.25em}{0ex}}{x}^{-\mathit{\alpha }},x>\mathrm{0},\end{array}\end{array}$ with $\mathrm{0}<\mathit{\alpha }\le \mathrm{2}$, c[1], and c[2] positive constants, r[1](x)→0 as $x\to -\mathrm{\infty }$ and r[2](x)→0 as $x\to +\mathrm{\infty }$. When this condition holds and α=2, then we can set B[n]=h(n), where h(n) satisfies h^2=nlnh, and the stable distribution is just the Gaussian law. 4. Sample paths are continuous in probability, i.e. for any t⩾0 and η>0, as follows: $\begin{array}{}\text{(A4)}& \underset{l\to t}{lim}\mathbb{P}\left(‖{L}^{\mathit{\alpha }}\left(t\right)-{L}^{\mathit{\alpha }}\left(l\right)‖>\mathit{\eta }\right)=\mathrm{0}.\end{array}$ For $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right)$ the symmetric α-stable Lévy process in ℝ^n has a characteristic function of the following form: $\begin{array}{}\text{(A5)}& \mathbb{E}\left[{e}^{i〈u,{L}^{\mathit{\alpha }}\left(t\right)〉}\right]={e}^{-C\left(\mathit{\alpha }\right)\phantom{\rule{0.25em}{0ex}}t\phantom{\rule{0.25em}{0ex}} ||u|{|}^{\mathit{\alpha }}},u\in {\mathbb{R}}^{n},t\ge \mathrm{0},\end{array}$ where $C\left(\mathit{\alpha }\right)={\mathit{\pi }}^{-\mathrm{1}/\mathrm{2}}\phantom{\rule{0.25em}{0ex}}\frac{\mathrm{\Gamma }\left(\left(\mathrm{1}+\mathit{\alpha }\right)/\mathrm{2}\right)\ mathrm{\Gamma }\left(n/\mathrm{2}\right)}{\mathrm{\Gamma }\left(\left(n+\mathit{\alpha }\right)/\mathrm{2}\right)}$, and Γ(.) is the Gamma function. In the case where α=2, we set $C\left(\mathrm {2}\right)=\mathrm{1}/\mathrm{2}$ and (A5) becomes the characteristic function of a standard Brownian motion. However, the Brownian motion cannot be seen as a weak limit of α-stable Lévy process because of the divergence C(α)→∞ as α→2. The properties of the sample paths of L^α(t) are, in fact, quite different for α=2 and α<2. First, the α-stable Lévy process is a discontinuous, pure jump process, while the Brownian motion has continuous paths. Second, the Brownian motion has moments of all orders, whereas $\mathbb{E}\phantom{\rule{0.25em}{0ex}}|{L}^{\mathit{\alpha }}\left(t\ right){|}^{\mathit{\gamma }}<\mathrm{\infty }$ if γ<α. It can also be proved that the tails of L^α(t) are heavy, i.e. $\mathbb{P}\phantom{\rule{0.25em}{0ex}}\left({L}^{\mathit{\alpha }}\left(t\ right)>u\right)\sim {u}^{-\mathit{\alpha }},\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}u\to \mathrm{\infty }$ , which is quite the opposite of the exponentially light Gaussian tails. Moreover, for $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{1}\right)$, the path variation in L^α(t) is bounded on finite time intervals and unbounded for $\mathit{\alpha }\in \left[\mathrm{1},\ Although neither the incremental nor the marginal distributions of a Lévy process in general are representable by the elementary functions, the Lévy motion is completely determined by the Lévy–Khintchine formula, which specifies the characteristic function of the Lévy process. If L^α(t) is a symmetric α-stable Lévy process in H, then, in the following: 1. The characteristic function of the Lévy–Khintchine formula is as follows: ${\mathrm{\Lambda }}_{t}\left(h\right)=\mathbb{E}\left[{e}^{i〈h,{L}^{\mathit{\alpha }}\left(t\right)〉}\right]={e}^{t\mathit{\psi }\left(h\right)},h\in H,t\ge \mathrm{0},$ where, in the following: $\begin{array}{}\text{(A6)}& \mathit{\psi }\left(h\right)=\underset{H}{\int }\left({e}^{i〈h,y〉}-\mathrm{1}-i〈h,y〉{\mathbf{1}}_{\mathit{\left\{}\mathrm{0}<‖y‖\mathit{⩽}\mathrm{1}\mathit{\right \}}}\right)\mathit{u }\left(\mathrm{d}y\right),\end{array}$ where 1[S] is the indicator function for a set S, taking 1 on S and 0, otherwise, and ν is a Borel measure (also called the Lévy jump measure) in H, for which $\underset{H}{\int }\left(\mathrm{1} \wedge ‖y{‖}^{\mathrm{2}}\right)\mathit{u }\left(\mathrm{d}y\right)<\mathrm{\infty }$ with $\mathrm{1}\wedge ‖y{‖}^{\mathrm{2}}=min\mathit{\left\{}\mathrm{1},‖y{‖}^{\mathrm{2}}\mathit{\right\}}$. A Borel measure, in addition, can be defined as the expected value of the number of jumps of specified size Q in the unit time interval, i.e. $\mathit{u }\left(Q\right)=\mathbb{E}N\left(\mathrm {1},Q\right)\left(\mathit{\omega }\right)$, ω∈Ω. 2. In the Lévy–Itô decomposition, for any sequence of positive radii r[n]→0 and ${\mathcal{O}}_{n}=\mathit{\left\{}y\in H\phantom{\rule{0.33em}{0ex}}|\phantom{\rule{0.33em}{0ex}}{r}_{n+\mathrm{1}}<| |y||\mathit{⩽}{r}_{n}\mathit{\right\}}$, there exists a sequence of independent compensated compound Poisson processes $\left({\overline{L}}_{n}\left(t\right){\right)}_{t\mathit{⩾}\mathrm{0}}$, n ⩾0 in H, with jump measures ${\mathit{u }}_{n}\left(B\right)=\mathit{u }\left(B\cap {\mathcal{O}}_{n}\right)$ for B∈ℬ(H), the Borel σ-algebra in H, and n⩾1, which satisfy ℙ, almost surely for all t⩾0 as follows: $\begin{array}{}\text{(A7)}& & L\left(t\right)=\sum _{n=\mathrm{1}}^{\mathrm{\infty }}{\overline{L}}_{n}+{L}_{\mathrm{0}}\left(t\right),\text{(A8)}& & {\overline{L}}_{n}\left(t\right)={L}_{n}\ left(t\right)-t\underset{H}{\int }y{\mathit{u }}_{n}\left(\mathrm{d}y\right),n\mathit{⩾}\mathrm{1}.\end{array}$ If L^α(t) is a symmetric α-stable Lévy process in ℝ^n with the generating triplet $\left(\mathrm{0},\mathrm{0},{\mathit{u }}_{\mathit{\alpha }}\right)$, then there exists an independent Poisson random measure N on ${\mathbb{R}}^{+}×\left({\mathbb{R}}^{n}\setminus \mathit{\left\{}\mathrm{0}\mathit{\right\}}\right)$ (quantifying the number of jumps of L^α(t)), such that, for each t⩾0, the following applies: $\begin{array}{}\text{(A9)}& {L}^{\mathit{\alpha }}\left(t\right)=\underset{\parallel y\parallel <\mathrm{1}}{\int }y\stackrel{\mathrm{̃}}{N}\left(t,\mathrm{d}y\right)+\underset{\parallel y\ parallel \mathit{⩾}\mathrm{1}}{\int }yN\left(t,\mathrm{d}y\right),\end{array}$ where $\stackrel{\mathrm{̃}}{N}\left(\mathrm{d}t,\mathrm{d}x\right)=N\left(\mathrm{d}t,\mathrm{d}x\right)-{\mathit{u }}_{\mathit{\alpha }}\left(\mathrm{d}x\right)\phantom{\rule{0.25em}{0ex}}\ mathrm{d}t$ is the compensated Poisson random measure, and ν[α](dx) is the jump measure. The small $\parallel y\parallel <\mathrm{1}$ (large $\parallel y\parallel \mathit{⩾}\mathrm{1}$) jumps are controlled by $\stackrel{\mathrm{̃}}{N}\left(t,\mathrm{d}y\right)$ (N(t,dy)). 3. Its Lévy jump measure ν is symmetric in the sense that $\mathit{u }\left(-Q\right)=\mathit{u }\left(Q\right)$ for Q∈ℬ(H) and has the following specific geometry: $\begin{array}{}\text{(A10)}& \mathit{u }\left(Q\right)=\underset{Q}{\int }\mathit{u }\left(\mathrm{d}y\right)=\underset{Q}{\int }\frac{\mathrm{d}r}{{r}^{\mathrm{1}+\mathit{\alpha }}}\mathit{\ sigma }\left(\mathrm{d}s\right),\end{array}$ where $r=‖y‖$ and $s=y/‖y‖$ and $\mathit{\sigma }:\mathcal{B}\left(\partial {B}_{\mathrm{1}}\left(\mathrm{0}\right)\right)\to \left[\mathrm{0},\mathrm{\infty }\right)$ is an arbitrary finite Radon measure on the unit sphere of H. The jump measure for a symmetric α-stable Lévy motion L^α(t) in ℝ^n is defined by the following: $\begin{array}{}\text{(A11)}& {\mathit{u }}_{\mathit{\alpha }}\left(\mathrm{d}u\right)=c\left(n,\mathit{\alpha }\right)\frac{\mathrm{d}u}{||u|{|}^{n+\mathit{\alpha }}},\end{array}$ with the intensity constant $c\left(n,\mathit{\alpha }\right)=\frac{\mathit{\alpha }\mathrm{\Gamma }\left(\left(n+\mathit{\alpha }\right)/\mathrm{2}\right)}{{\mathrm{2}}^{\mathrm{1}-\mathit{\ alpha }}{\mathit{\pi }}^{n/\mathrm{2}}\mathrm{\Gamma }\left(\mathrm{1}-\mathit{\alpha }/\mathrm{2}\right)}$, where Γ(.) is the Gamma function (see Duan, 2015, and Applebaum, 2009). One can come to a more intuitive interpretation of the stability parameter $\mathit{\alpha }\in \left(\mathrm{0},\mathrm{2}\right)$ variation. For smaller values of α, the process is characterised by higher jumps with a lower frequency. As α increases, jumps decrease in height, and the frequency of their occurrence increases. Appendix B:Probabilistic theory for the Lévy noise-induced escape We briefly recapitulate here the main ideas behind the proof given in Debussche et al. (2013) of how the mean residence time in the competing metastable states of stochastically perturbed Chafee–Infante reaction–diffusion PDE scales with the intensity ε of the additive L(t) α-stable Lévy noise that acts as stochastic forcing. One proceeds by considering the decomposition in the driving Lévy process by regularly varying the jump measure ν into small ξ^ε and large η^ε jump components. Let ${\mathrm{\Delta }}_{t}L=L\left(t\ right)-L\left(t-\right)$ denote the jump increment of L at time t⩾0, and $\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\rho }}}$ for ε, $\mathit{\rho }\in \left(\mathrm{0},\mathrm{1}\right)$ the jump height threshold of L. The process η^ε is a compound Poisson process consisting of all jumps of height $‖{\mathrm{\Delta }}_{t}L‖>{\mathit{\epsilon }}^{-\mathit{\rho }}$, with the following $\begin{array}{}\text{(B1)}& {\mathit{\beta }}_{\mathit{\epsilon }}=\mathit{u }\left(\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\rho }}}{B}_{\mathrm{1}}^{c}\left(\mathrm{0}\right)\right)\approx {\mathit{\epsilon }}^{\mathit{\alpha }\mathit{\rho }},\end{array}$ and the jump probability measure outside the ball $\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\rho }}}{B}_{\mathrm{1}}\left(\mathrm{0}\right)$ is as follows: $\begin{array}{}\text{(B2)}& \mathit{u }\left(\cdot \cap \frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\rho }}}{B}_{\mathrm{1}}^{c}\left(\mathrm{0}\right)\right)/{\mathit{\beta }}_{\mathit{\epsilon where B[1](0) is a ball of unit radius in H centred at the origin. The occurrence time of a kth large jump is defined recursively by the following: $\begin{array}{}\text{(B3)}& {\mathcal{Z}}_{\mathrm{0}}=\mathrm{0},\phantom{\rule{0.25em}{0ex}}{\mathcal{Z}}_{k}=inf\mathit{\left\{}t>{\mathcal{Z}}_{k-\mathrm{1}}\phantom{\rule{0.25em}{0ex}}|\phantom {\rule{0.25em}{0ex}}‖{\mathrm{\Delta }}_{t}L‖>{\mathit{\epsilon }}^{-\mathit{\rho }}\mathit{\right\}},\phantom{\rule{0.25em}{0ex}}k\mathit{⩾}\mathrm{1}.\end{array}$ The waiting times between successive ${\mathit{\eta }}_{t}^{\mathit{\epsilon }}$ jumps have an exponential distribution ${\mathcal{Z}}_{k}-{\mathcal{Z}}_{k-\mathrm{1}}\sim \text{Exp}\left({\mathit{\ beta }}_{\mathit{\epsilon }}\right)$. Small jump processes ${\mathit{\xi }}^{\mathit{\epsilon }}=L-{\mathit{\eta }}^{\mathit{\epsilon }}$, due to the symmetry of Lévy measure ν, are a mean zero martingale in H with finite exponential moments. Probabilistic events causing small jumps in the stochastic solution of the system are not able to overcome the force of its deterministic stable state, and therefore, do not contribute to the exit from the basin of attraction. Formally, during the time between two large jumps ${t}_{k}={\mathcal{Z}}_{k}-{\mathcal{Z}}_{k-\mathrm{1}}$, the solution of Eq. (13), following the deterministic path (Eq. 1), returns to a small vicinity of the stable equilibria ${\mathit{\varphi }}^{\mathrm{W}/\text{SB}}$, as follows: $\begin{array}{}\text{(B4)}& \underset{x\in {D}^{\mathrm{W}/\text{SB}}}{sup}\underset{{\mathcal{Z}}_{k-\mathrm{1}}\mathit{⩽}t\mathit{⩽}{\mathcal{Z}}_{k}}{sup}‖\mathcal{T}\left(t\right)-T\left(t\ right)‖\to \mathrm{0}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}\text{for}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}\mathit{\epsilon }\to \mathrm{0}.\end{array}$ When a first large jump occurs, the solution process moves to the neighbouring domain of attraction with the following probability: $\begin{array}{}\text{(B5)}& \begin{array}{rl}& \mathbb{P}\left({\mathit{\varphi }}^{\mathrm{W}/\text{SB}}+\mathit{\epsilon }{\mathrm{\Delta }}_{{t}_{i}}Lotin {D}^{\mathrm{W}/\text{SB}}\right)\\ & =\ mathbb{P}\left({\mathrm{\Delta }}_{{t}_{i}}L\in \frac{\mathrm{1}}{\mathit{\epsilon }}\left[{\left({D}^{\mathrm{W}/\text{SB}}\right)}^{c}-{\mathit{\varphi }}^{\mathrm{W}/\text{SB}}\right]\right)\\ & = \frac{\mathit{u }\left(\frac{\mathrm{1}}{\mathit{\epsilon }}\left[\left({D}^{\mathrm{W}/\text{SB}}{\right)}^{c}-{\mathit{\varphi }}^{\mathrm{W}/\text{SB}}\right]\cap \frac{\mathrm{1}}{{\mathit{\ epsilon }}^{\mathit{\rho }}}{B}_{\mathrm{1}}^{c}\left(\mathrm{0}\right)\right)}{\mathit{u }\left(\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\rho }}}{B}_{\mathrm{1}}^{c}\left(\mathrm{0}\right)\ right)}\approx {\mathit{\epsilon }}^{\mathit{\alpha }\left(\mathrm{1}-\mathit{\rho }\right)}.\end{array}\end{array}$ This is the probability that, at time t[i], there will be a jump increment ${\mathrm{\Delta }}_{{t}_{i}}L$ that exceeds the distance between the attractor and its domain of attraction boundary, as expressed by the jump probability measure (Eq. B2). In the zero-noise limit, the mean residence time in a basin of attraction is given by the following: $\begin{array}{}\text{(B6)}& \begin{array}{rl}& \mathbb{E}\left[\mathit{\tau }\left(\mathit{\epsilon }\right)\right]\\ & \approx \sum _{i=\mathrm{1}}^{\mathrm{\infty }}\mathbb{E}\left[{\mathcal{Z}}_ {i}\right]\phantom{\rule{0.25em}{0ex}}\mathbb{P}\left[inf\left\{j:{\mathit{\varphi }}^{\mathrm{W}/\text{SB}}+\mathit{\epsilon }{\mathrm{\Delta }}_{{t}_{j}}Lotin {D}^{\mathrm{W}/\text{SB}}\right\}=i\ right]\\ & \approx \mathbb{E}\left[{t}_{\mathrm{1}}\right]\phantom{\rule{0.25em}{0ex}}\mathbb{P}\left({\mathit{\varphi }}^{\mathrm{W}/\text{SB}}+\mathit{\epsilon }{\mathrm{\Delta }}_{{t}_{\mathrm {1}}}Lotin {D}^{\mathrm{W}/\text{SB}}\right)\\ & \cdot \sum _{i=\mathrm{1}}^{\mathrm{\infty }}i\phantom{\rule{0.25em}{0ex}}{\left(\mathrm{1}-\mathbb{P}\left[{\mathit{\varphi }}^{\mathrm{W}/\text{SB}} +\mathit{\epsilon }{\mathrm{\Delta }}_{{t}_{\mathrm{1}}}Lotin {D}^{\mathrm{W}/\text{SB}}\right]\right)}^{i-\mathrm{1}}\\ & \approx \frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\alpha }\mathit{\rho }}}{\mathit{\epsilon }}^{\mathit{\alpha }\left(\mathrm{1}-\mathit{\rho }\right)}{\left(\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\alpha }\left(\mathrm{1}-\mathit{\rho }\right)}}\right)}^{\ mathrm{2}}=\frac{\mathrm{1}}{{\mathit{\epsilon }}^{\mathit{\alpha }}},\end{array}\end{array}$ that is, by the sum of all the mean values of a large jump occurrence time times the probability that the minimum of a sample of size i of jump increments is sufficiently large to reach into the neighbouring domain of attraction. Thus, at the random time instant of large jumps, the solution process transitions, in an abrupt move, from one attractor to another. Such behaviour of the random dynamical system is known as a metastability. In Debussche et al. (2013), it was proven that, in the timescale $\mathit{\lambda }\left(\mathit{\epsilon }\right)=\mathit{u }\left(\frac{\mathrm{1}}{\mathit{\epsilon }}{B}_{\mathrm{1}}^{c}\left(\ mathrm{0}\right)\right),\phantom{\rule{0.25em}{0ex}}\mathit{\epsilon }>\mathrm{0}$ the metastable shifting of the diffusion process between neighbourhoods of the two attractors represents a continuous time Markov chain in a state space {ϕ^SB,ϕ^W}, with a transition rate matrix 𝔔, as follows: $\begin{array}{}\text{(B7)}& \mathfrak{Q}=\frac{\mathrm{1}}{\mathit{\mu }\left({B}_{\mathrm{1}}^{c}\left(\mathrm{0}\right)\right)}\left(\begin{array}{cc}-\mathit{\mu }\left(\left({D}^{\text{SB}}-{\ mathit{\varphi }}^{\text{SB}}{\right)}^{c}\right)& \mathit{\mu }\left(\left({D}^{\text{SB}}-{\mathit{\varphi }}^{\text{SB}}{\right)}^{c}\right)\\ \mathit{\mu }\left(\left({D}^{\mathrm{W}}-{\mathit{\ varphi }}^{\mathrm{W}}{\right)}^{c}\right)& -\mathit{\mu }\left(\left({D}^{\mathrm{W}}-{\mathit{\varphi }}^{\mathrm{W}}{\right)}^{c}\right)\end{array}\right),\end{array}$ where μ(⋅) is the limit measure of ν. Appendix C:Transitions induced by singular perturbations In Sect. 4.2.3, and in particular Figs. 6 and 7, we have studied the effect of singular perturbations of a Lévy kick. The idea is that transitions in a system perturbed by Lévy noise are primarily driven by rare large jumps. By applying a singular perturbation of the form $\mathit{\mu }\to \mathit{\mu }+\mathit{\kappa }\mathit{\delta }\left(t\right)$ (where $\mathit{\mu }={\mathit{\mu }}_{\ mathrm{0}}=\mathrm{1.05}$ throughout), we have been able to bracket the critical values ${\mathit{\kappa }}_{\text{crit}}^{\mathrm{W}↔\text{SB},\mathrm{s}}$ allowing for transitions between the two attractors. The expression κδ(t) is approximated as κΔ[τ](t), where ${\mathrm{\Delta }}_{\mathit{\tau }}\left(t\right)=\mathrm{1}/\mathit{\tau }$ if $\mathrm{0}<t<\mathit{\tau }$ and vanishes elsewhere. In Sect. 4.2.3 the results are shown for τ=1 year. We performed additional simulations to locate the supercritical and subcritical values of κ for τ=1 month, 6 months, 2 years, and 4 years. The corresponding supercritical values of ${\mathit{\kappa }}_{\text{crit}}^{\text{SB}\to \mathrm{W},\mathrm{u}}$ and ${\mathit{\kappa }}_{\text{crit}}^{\mathrm{W}\to \text{SB},\mathrm{u}}$ are shown in Table C1. In Fig. C1, we plot the corresponding supercritical transition trajectories for the values of Table C1 at different durations. Notice that now we use coloured solid lines for the supercritical cases. To estimate the basin boundary, we record the final point of when the forcing was active, in the (𝒯,Δ𝒯) projected space, for each duration. This point for each case is particularly visible, in the insets of Fig. C1, as a rapid reflection of the trajectory, which then follows closely the basin boundary (depicted as a thick black line). The basin boundary we can explore through this procedure is then estimated by linking the points obtained when considering various values of τ. Notice that the estimated basin boundaries are slightly different when looking at the two SB→W and W→SB transitions, as the basin boundaries have folds than cannot be captured in the sampled 2-dimensional projection used in Fig. C1. Finally, as stated earlier, from the third column of Table C1, we remark that, when considering forcings with durations of, for example, 2 years and longer, transitions from the W to the SB state can be achieved while retaining, at all times, a positive value for the solar irradiance because, while the forcing is active, its value is $\mathit{\mu }+\mathit{\kappa }/\mathit{\tau }$. Appendix D:Estimates for the mean escape time We report in Table D1 a summary of the statistics of the escape times from the W state and from the SB state for various choices of the noise law. VL conceptualized the paper, developed the methodology, validated the results, conducted the data analysis. and took the lead role in writing and revising the paper. LS and GM conceptualized the paper, developed the methodology and software, and helped with the writing and revising of the paper. At least one of the (co-)authors is a member of the editorial board of Nonlinear Processes in Geophysics. The peer-review process was guided by an independent editor, and the authors also have no other competing interests to declare. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This article is part of the special issue “Centennial issue on nonlinear geophysics: accomplishments of the past, challenges of the future”. It is not associated with a conference. The authors wish to thank Peter Ashwin, Reyk Börner, Jesus I. Diaz, Jinqiao Duan, Michael, Ghil, Tobias Grafke, Serafim Kalliadasis, Alessandro Laio, Xue-Mei Li, and Greg Pavliotis, for the useful exchanges on various topics covered in this paper. Valerio Lucarini wishes to thank Myles Allen, for suggesting that we look into 3D projections of the phase space when studying transitions paths. The authors acknowledge the support provided by the EU Horizon 2020 project TiPES (grant no. 820970). Valerio Lucarini acknowledges the support provided by the EPSRC project (grant no. EP/T018178/1). This paper is dedicated to K. Hasselmann. This research has been supported by the Horizon 2020 (TiPES; grant no. 820970) and the Engineering and Physical Sciences Research Council (grant no. EP/T018178/1). This paper was edited by Daniel Schertzer and reviewed by two anonymous referees. Abbot, D. S., Voigt, A., and Koll, D.: The Jormungand global climate state and implications for Neoproterozoic glaciations, J. Geophys. Res.-Atmos., 116, D18103, https://doi.org/10.1029/2011JD015927, Alharbi, R.: Nonlinear parabolic stochastic partial differential equation with application to finance, Doctoral thesis (PhD), University of Sussex, Brighton, http://sro.sussex.ac.uk/id/eprint/96730 (last access:5 May 2022), 2021.a, b Alkhayuon, H., Ashwin, P., Jackson, L. C., Quinn, C., and Wood, R. A.: Basin bifurcations, oscillatory instability and rate-induced thresholds for Atlantic meridional overturning circulation in a global oceanic box model, P. Roy. Soc. A-Math. Phy., 475, 20190051, https://doi.org/10.1098/rspa.2019.0051, 2019.a Applebaum, D.: Lévy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, 2nd edn., Cambridge University Press, https://doi.org/10.1017/CBO9780511809781, 2009.a, b Ashwin, P., Wieczorek, S., Vitolo, R., and Cox, P.: Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Philos. T. R. Soc. A, 370, 1166–1184, https://doi.org/10.1098/rsta.2011.0306, 2012.a Barker, S., Knorr, G., Edwards, R. L., Parrenin, F., Putnam, A. E., Skinner, L. C., Wolff, E., and Ziegler, M.: 800,000 Years of Abrupt Climate Variability, Science, 334, 347–351, https://doi.org/ 10.1126/science.1203580, 2011.a Bensid, S. and Díaz, J. I.: On the exact number of monotone solutions of a simplified Budyko climate model and their different stability, Discrete Cont. Dyn.-B, 24, 1033–1047, 2019.a, b Benzi, R., Sutera, A., and Vulpiani, A.: The mechanism of stochastic resonance, J. Phys. A-Math. Gen., 14, L453–L457, https://doi.org/10.1088/0305-4470/14/11/006, 1981.a Bódai, T., Lucarini, V., Lunkeit, F., and Boschi, R.: Global instability in the Ghil–Sellers model, Clim. Dynam., 44, 3361–3381, 2015.a, b, c, d Bouchet, F., Gawedzki, K., and Nardini, C.: Perturbative calculation of quasi-potential in non-equilibrium diffusions: a mean-field example, J. Stat. Phys., 163, 1157–1210, https://doi.org/10.1007/ s10955-016-1503-2, 2016.a Brunetti, M., Kasparian, J., and Vérard, C.: Co-existing climate attractors in a coupled aquaplanet, Clim. Dynam., 53, 6293–6308, https://doi.org/10.1007/s00382-019-04926-7, 2019.a Budhiraja, A. and Dupuis, P.: A variational representation for positive functionals of infinite dimensional Brownian motion, Probability and Mathematical Statistics–Wroclaw University, 20, 39–61, Budhiraja, A., Dupuis, P., and Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems, Ann. Probab., 36, 1390–1420, https://doi.org/10.1214/07-AOP362, 2008.a Budyko, M. I.: The effect of solar radiation variations on the climate of the Earth, Tellus, 21, 611–619, https://doi.org/10.3402/tellusa.v21i5.10109, 1969.a, b Burnecki, K., Wylomanska, A., and Chechkin, A.: Discriminating between Light- And heavy-tailed distributions with limit theorem, PLoS ONE, 10, e0145604, https://doi.org/10.1371/journal.pone.0145604, Burrage, K. and Lythe, G.: Accurate stationary densities with partitioned numerical methods for stochastic partial differential equations, Stochastics and Partial Differential Equations: Analysis and Computations, 2, 262–280, https://doi.org/10.1007/s40072-014-0032-8, 2014.a Cai, R., Chen, X., Duan, J., Kurths, J., and Li, X.: Lévy noise-induced escape in an excitable system, J. Stat. Mech. Theory E., 2017, 063503, https://doi.org/10.1088/1742-5468/aa727c, 2017.a Chechkin, A., Sliusarenko, O., Metzler, R., and Klafter, J.: Barrier crossing driven by Levy noise: Universality and the Role of Noise Intensity, Phys. Rev. E, 75, 041101, https://doi.org/10.1103/ PhysRevE.75.041101, 2007.a Cialenco, I., Fasshauer, G. E., and Ye, Q.: Approximation of stochastic partial differential equations by a kernel-based collocation method, Int. J. Comput. Math., 89, 2543–2561, https://doi.org/ 10.1080/00207160.2012.688111, 2012.a Dai, M., Gao, T., Lu, Y., Zheng, Y., and Duan, J.: Detecting the maximum likelihood transition path from data of stochastic dynamical systems, Chaos, 30, 113124, https://doi.org/10.1063/5.0012858, 2020.a, b Davie, A. M. and Gaines, J. G.: Convergence of numerical schemes for the solution of parabolic stochastic partial differential equations, Math. Comput., 70, 121–134, https://doi.org/10.1090/ s0025-5718-00-01224-2, 2000.a Debussche, A., Högele, M., and Imkeller, P.: The dynamics of nonlinear reaction-diffusion equations with small lévy noise, in: Lecture Notes in Mathematics, Springer, Berlin, https://doi.org/10.1007/ 978-3-319-00828-8_1, 2013.a, b, c, d, e, f, g Díaz, G. and Díaz, J. I.: Stochastic energy balance climate models with Legendre weighted diffusion and an additive cylindrical Wiener process forcing, Discrete Cont. Dyn.-S., https://doi.org/10.3934 /dcdss.2021165, 2021.a Díaz, J. I., Hernández, J., and Tello, L.: On the Multiplicity of Equilibrium Solutions to a Nonlinear Diffusion Equation on a Manifold Arising in Climatology, J. Math. Anal. Appl., 216, 593–613, https://doi.org/10.1006/jmaa.1997.5691, 1997.a, b Ditlevsen, P. D.: Observation of α-stable noise induced millennial climate changes from an ice-core record, Geophys. Res. Lett., 26, 1441–1444, https://doi.org/10.1029/1999GL900252, 1999.a, b, c Doering, C. R.: A stochastic partial differential equation with multiplicative noise, Phys. Lett. A, 122, 133–139, https://doi.org/10.1016/0375-9601(87)90791-2, 1987.a, b Duan, J.: An introduction to stochastic dynamics, Cambridge University Press, New York, 2015.a, b, c Duan, J. and Wang, W.: Effective Dynamics of Stochastic Partial Differential Equations, Elsevier, Boston, https://doi.org/10.1016/C2013-0-15235-X, 2014.a, b Dybiec, B. and Gudowska-Nowak, E.: Lévy stable noise-induced transitions: stochastic resonance, resonant activation and dynamic hysteresis, J. Stat. Mech. Theory E., 2009, P05004, https://doi.org/ 10.1088/1742-5468/2009/05/p05004, 2009.a Fan, A. H.: Sur les chaos de Lévy stables d'indice $\mathrm{0}<\mathit{\alpha }<\mathrm{1}$, Ann. Sci. Math. Québec, 1, 53–66, 1997.a Feudel, U., Pisarchik, A. N., and Showalter, K.: Multistability and tipping: From mathematics and physics to climate and brain–Minireview and preface to the focus issue, Chaos, 28, 033501, https:// doi.org/10.1063/1.5027718, 2018.a Freidlin, M. I. and Wentzell, A. D.: Random perturbations of dynamical systems, Springer, New York, 1984.a, b Gao, T., Duan, J., Kan, X., and Cheng, Z.: Dynamical inference for transitions in stochastic systems with α-stable Lévy noise, J. Phys. A-Math. Theor., 49, 294002, https://doi.org/10.1088/1751-8113/ 49/29/294002, 2016.a Garain, K. and Sarathi Mandal, P.: Stochastic sensitivity analysis and early warning signals of critical transitions in a tri-stable prey–predator system with noise, Chaos, 32, 033115, https:// doi.org/10.1063/5.0074242, 2022.a Ghil, M.: Climate stability for a Sellers-type model, J. Atmos. Sci., 33, 3–20, https://doi.org/10.1175/1520-0469(1976)033<0003:CSFAST>2.0.CO;2, 1976.a, b, c, d, e, f Ghil, M.: Energy-Balance Models: An Introduction, in: Climatic Variations and Variability: Facts and Theories: NATO Advanced Study Institute First Course of the International School of Climatology, Ettore Majorana Center for Scientific Culture, Erice, Italy, March 9–21, 1980, edited by: Berger, A., Springer Netherlands, Dordrecht, 461–481, https://doi.org/10.1007/978-94-009-8514-8_27, 1981.a, Ghil, M.: A mathematical theory of climate sensitivity or, How to deal with both anthropogenic forcing and natural variability?, in: Climate Change: Multidecadal and Beyond, edited by Chang, P. C., Ghil, M., Latif, M., and Wallace, J. M., World Scientific/Imperial College Press, 31–51, 2015.a Ghil, M. and Childress, S.: Topics in Geophysical Fluid Dynamics: Atmospheric Dynamics, Dynamo Theory, and Climate Dynamics, Springer-Verlag, Berlin, 1987.a Ghil, M. and Lucarini, V.: The physics of climate variability and climate change, Rev. Mod. Phys., 92, 035002, https://doi.org/10.1103/RevModPhys.92.035002, 2020.a, b Gottwald, G.: A model for Dansgaard-Oeschger events and millennial-scale abrupt climate change without external forcing, Clim. Dynam., 56, 227–243, https://doi.org/10.1007/s00382-020-05476-z, 2021.a , b Gottwald, G. A. and Melbourne, I.: Homogenization for deterministic maps and multiplicative noise, P. Roy. Soc. A-Math. Phy., 469, 20130201, https://doi.org/10.1098/rspa.2013.0201, 2013.a Gould, S. J.: Wonderful Life: The Burgess shale and the Nature of History, W.W. Norton, New York, 1989.a Grafke, T. and Vanden-Eijnden, E.: Numerical computation of rare events via large deviation theory, Chaos, 29, 063118, https://doi.org/10.1063/1.5084025, 2019.a Grafke, T., Grauer, R., and Schäfer, T.: The instanton method and its numerical implementation in fluid mechanics, J. Phys. A-Math. Theor., 48 333001, https://doi.org/10.1088/1751-8113/48/33/333001, 2015.a, b Grafke, T., Schäfer, T., and Vanden-Eijnden, E.: Long Term Effects of Small Random Perturbations on Dynamical Systems: Theoretical and Computational Tools, in: Recent Progress and Modern Challenges in Applied Mathematics, Modeling and Computational Science, edited by: Melnik, R., Makarov, R., and Belair, J., Fields Institute Communications, Springer, New York, NY, https://doi.org/10.1007/ 978-1-4939-6969-2_2, pp. 17–55, 2017.a Graham, R.: Macroscopic potentials, bifurcations and noise in dissipative systems, in: Fluctuations and Stochastic Phenomena in Condensed Matter, edited by: Garrido, L., Springer Berlin Heidelberg, 1–34, ISBN',978-3-540-47401-2, 1987.a, b Graham, R., Hamm, A., and Tél, T.: Nonequilibrium potentials for dynamical systems with fractal attractors or repellers, Phys. Rev. Lett., 66, 3089–3092, https://doi.org/10.1103/PhysRevLett.66.3089, 1991.a, b Grebogi, C., Ott, E., and Yorke, J. A.: Fractal Basin Boundaries, Long-Lived Chaotic Transients, and Unstable-Unstable Pair Bifurcation, Phys. Rev. Lett., 50, 935–938, https://doi.org/10.1103/ PhysRevLett.50.935, 1983.a Grigoriu, M. and Samorodnitsky, G.: Dynamic Systems Driven by Poisson/Lévy White Noise, in: IUTAM Symposium on Nonlinear Stochastic Dynamics, edited by: Namachchivaya, N. S. and Lin, Y. K., Springer Netherlands, Dordrecht, 319–330, https://doi.org/10.1007/978-94-010-0179-3_28, 2003.a Hänggi, P.: Escape from a metastable state, J. Stat. Phys., 42, 105–148, 1986.a Hasselmann, K.: Stochastic climate models, Part I. Theory, Tellus, 28, 473–485, 1976.a Hetzer, G.: The structure of the principal component for semilinear diffusion equations from energy balance climate models, Houston J. Math., 16, 203–216, 1990.a, b Hoffman, P. F., Kaufman, A. J., Halverson, G. P., and Schrag, D. P.: A Neoproterozoic Snowball Earth, Science, 281, 1342–1346, https://doi.org/10.1126/science.281.5381.1342, 1998.a Hu, J. and Duan, J.: Onsager-Machlup action functional for stochastic partial differential equations with Levy noise, arXiv [preprint], https://doi.org/10.48550/ARXIV.2011.09690, 4 December 2020.a, b, c Imkeller, P. and Pavlyukevich, I.: First exit times of SDEs driven by stable L'evy processes, Stoch. Proc. Appl., 116, 611–642, https://doi.org/10.1016/j.spa.2005.11.006, 2006a.a, b, c, d Imkeller, P. and Pavlyukevich, I.: Lévy flights: transitions and meta-stability, J. Phys. A-Math. Gen., 39, L237–L246, https://doi.org/10.1088/0305-4470/39/15/l01, 2006b.a, b, c, d Imkeller, P. and von Storch, J. S.: Stochastic Climate Models, Birkhauser, Basel, 2001.a Jentzen, A. and Kloeden, P. E.: The numerical approximation of stochastic partial differential equations, Milan J. Math., 77, 205–244, https://doi.org/10.1007/s00032-009-0100-0, 2009.a Kaper, H. and Engler, H.: Mathematics and climate, SIAM, Philadelphia, 2013.a, b Keller, J. and Kuske, R.: Rate of convergence to a stable law, SIAM J. Appl. Math., 61, 1308–1323, https://doi.org/10.1137/s0036139998342715, 2000.a Kloeden, P. E. and Shott, S.: Linear-implicit strong schemes for itô-galkerin approximations of stochastic PDES, Journal of Applied Mathematics and Stochastic Analysis, 14, 697341, https://doi.org/ 10.1155/S1048953301000053, 2001.a Kramers, H.: Brownian motion in a field of force and the diffusion model of chemical reactions, Physica, 7, 284–304, https://doi.org/10.1016/S0031-8914(40)90098-2, 1940.a Kraut, S. and Feudel, U.: Multistability, noise, and attractor hopping: The crucial role of chaotic saddles, Phys. Rev. E, 66, 015207, https://doi.org/10.1103/PhysRevE.66.015207, 2002.a Kuhwald, I. and Pavlyukevich, I.: Stochastic Resonance in Systems Driven by α-Stable Lévy Noise, International Conference on Vibration Problems 2015, Procedia Engineer., 144, 1307–1314, https:// doi.org/10.1016/j.proeng.2016.05.129, 2016.a Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., and Schellnhuber, H. J.: Tipping elements in the Earth's climate system, P. Nat. Acad. Sci. USA, 105, 1786–1793, 2008.a Lewis, J. P., Weaver, A. J., and Eby, M.: Snowball versus slushball Earth: Dynamic versus nondynamic sea ice?, J. Geophys. Res., 112, C11014, https://doi.org/10.1029/2006JC004037, 2007.a Li, Y. and Duan, J.: Extracting Governing Laws from Sample Path Data of Non-Gaussian Stochastic Dynamical Systems, J. Stat. Phys., 186, 30, https://doi.org/10.1007/s10955-022-02873-y, 2022.a Linsenmeier, M., Pascale, S., and Lucarini, V.: Climate of Earth-like planets with high obliquity and eccentric orbits: Implications for habitability conditions, Planet. Space Sci., 105, 43–59, https://doi.org/10.1016/j.pss.2014.11.003, 2015.a Lovejoy, S. and Schertzer, D.: The Weather and Climate: Emergent Laws and Multifractal Cascades, Cambridge University Press, Cambridge, 2013.a Lu, Y. and Duan, J.: Discovering transition phenomena from data of stochastic dynamical systems with Lévy noise, Chaos, 30, 093110, https://doi.org/10.1063/5.0004450, 2020.a, b, c Lucarini, V. and Bódai, T.: Edge states in the climate system: exploring global instabilities and critical transitions, Nonlinearity, 30, R32–R66, https://doi.org/10.1088/1361-6544/aa6b11, 2017.a, b , c Lucarini, V. and Bódai, T.: Transitions across Melancholia States in a Climate Model: Reconciling the Deterministic and Stochastic Points of View, Phys. Rev. Lett., 122, 158701, https://doi.org/ 10.1103/PhysRevLett.122.158701, 2019.a, b, c, d, e Lucarini, V. and Bódai, T.: Global stability properties of the climate: Melancholia states, invariant measures, and phase transitions, Nonlinearity, 33, R59–R92, https://doi.org/10.1088/1361-6544/ ab86cc, 2020.a, b, c, d, e, f, g Lucarini, V., Calmanti, S., and Artale, V.: Destabilization of the thermohaline circulation by transient changes in the hydrological cycle, Clim. Dynam., 24, 253–262, https://doi.org/10.1007/ s00382-004-0484-z, 2005.a Lucarini, V., Calmanti, S., and Artale, V.: Experimental mathematics: Dependence of the stability properties of a two-dimensional model of the Atlantic ocean circulation on the boundary conditions, Russ. J. Math. Phys., 14, 224–231, https://doi.org/10.1134/S1061920807020124, 2007.a Lucarini, V., Fraedrich, K., and Lunkeit, F.: Thermodynamic analysis of snowball Earth hysteresis experiment: Efficiency, entropy production and irreversibility, Q. J. Roy. Meteor. Soc., 136, 2–11, https://doi.org/10.1002/qj.543, 2010.a Lucarini, V., Pascale, S., Boschi, R., Kirk, E., and Iro, N.: Habitability and Multistability in Earth-like Planets, Astron. Nachr., 334, 576–588, https://doi.org/10.1002/asna.201311903, 2013.a Lucarini, V., Blender, R., Herbert, C., Ragone, F., Pascale, S., and Wouters, J.: Mathematical and physical ideas for climate science, Rev. Geophys., 52, 809–859, https://doi.org/10.1002/2013RG000446 , 2014a.a Lucarini, V., Serdukova, L., and Margazoglou, G.: Lévy-noise versus Gaussian-noise-induced Transitions in the Ghil-Sellers Energy Balance Model, figshare, https://doi.org/10.6084/m9.figshare.16802503 , 2022.a, b, c, d, e, f Margazoglou, G., Grafke, T., Laio, A., and Lucarini, V.: Dynamical landscape and multistability of a climate model, P. Roy. Soc. A-Math. Phy., 477, 20210019, https://doi.org/10.1098/rspa.2021.0019, 2021.a, b, c, d, e, f, g, h, i Millàn, H., Cumbrera, R., and Tarquis, A. M.: Multifractal and Levy-stable statistics of soil surface moisture distribution derived from 2D image analysis, Appl. Math. Model., 40, 2384–2395, https:// doi.org/10.1016/j.apm.2015.09.063, 2016.a Nicolis, C.: Stochastic aspects of climatic transitions – response to a periodic forcing, Tellus, 34, 308–308, https://doi.org/10.3402/tellusa.v34i3.10817, 1982.a North, G. and Stevens, M.: Energy-balance climate models, in: Frontiers of Climate Modeling, edited by: Kiehl, J. T. and Ramanathan, V., Cambridge University Press, Cambridge, 52–72, https://doi.org/ 10.1017/CBO9780511535857.004, 2006.a, b North, G. R.: Multiple solutions in energy balance climate models, Global Planet. Change, 2, 225–235, https://doi.org/10.1016/0921-8181(90)90003-U, 1990.a, b North, G. R., Cahalan, R. F., and Coakley Jr., J. A.: Energy balance climate models, Rev. Geophys., 19, 91–121, https://doi.org/10.1029/RG019i001p00091, 1981.a, b Ott, E.: Chaos in dynamical systems, 2nd edn., Cambridge University Press, Cambridge, https://doi.org/10.1017/CBO9780511803260, 2002.a Pavliotis, G. and Stuart, A.: Multiscale methods, Texts in applied mathematics, Springer, New York, NY, 2008.a, b Peixoto, J. P. and Oort, A. H.: Physics of Climate, AIP Press, New York, New York, 1992.a Penland, C. and Sardeshmukh, P. D.: Alternative interpretations of power-law distributions found in nature, Chaos, 22, 023119, https://doi.org/10.1063/1.4706504, 2012.a Peszat, S. and Zabczyk, J.: Stochastic Partial Differential Equations with Levy Noise: An Evolution Equation Approach, Cambridge University Press, https://doi.org/10.1017/cbo9780511721373, 2007.a, b Pierrehumbert, R., Abbot, D., Voigt, A., and Koll, D.: Climate of the Neoproterozoic, Annu. Rev. Earth Pl. Sc., 39, 417–460, https://doi.org/10.1146/annurev-earth-040809-152447, 2011.a, b Ragon, C., Lembo, V., Lucarini, V., Vérard, C., Kasparian, J., and Brunetti, M.: Robustness of Competing Climatic States, J. Climate, 35, 2769–2784, https://doi.org/10.1175/JCLI-D-21-0148.1, 2022.a Rhodes, R., Sohier, J., and Vargas, V.: Levy multiplicative chaos and star scale invariant random measures, Ann. Probab., 42, 689–724, https://doi.org/10.1214/12-AOP810, 2014.a Risken, H.: The Fokker–Planck equation, Springer, Berlin, 1996.a Rydin Gorjão, L., Riechers, K., Hassanibesheli, F., Witthaut, D., Lind, P. G., and Boers, N.: Changes in stability and jumps in Dansgaard–Oeschger events: a data analysis aided by the Kramers–Moyal equation, Earth Syst. Dynam. Discuss. [preprint], https://doi.org/10.5194/esd-2021-95, in review, 2021a.a Rydin Gorjão, L., Witthaut, D., Lehnertz, K., and Lind, P. G.: Arbitrary-Order Finite-Time Corrections for the Kramers-Moyal Operator, Entropy, 23, 517, https://doi.org/10.3390/e23050517, 2021b.a Saltzman, B.: Dynamical Paleoclimatology: Generalized Theory of Global Climate Change, Academic Press New York, New York, 2001.a Schertzer, D. and Lovejoy, S.: Multifractal simulations and analysis of clouds by multiplicative processes, Atmos. Res., 21, 337–361, https://doi.org/10.1016/0169-8095(88)90035-X, 1988.a Schertzer, D. and Lovejoy, S.: Universal multifractals do exist!: Comments on “a statistical analysis of mesoscale rainfall as a random Cascade”, J. Appl. Meteorol., 36, 1296–1303, https://doi.org/ 10.1175/1520-0450(1997)036<1296:UMDECO>2.0.CO;2, 1997.a, b Schertzer, D., Larchevêque, M., Duan, J., Yanovsky, V. V., and Lovejoy, S.: Fractional Fokker–Planck equation for nonlinear stochastic differential equations driven by non-Gaussian Lévy stable noises, J. Math. Phys., 42, 200, https://doi.org/10.1063/1.1318734, 2001.a Schmitt, F., Lovejoy, S., and Schertzer, D.: Multifractal analysis of the Greenland Ice-Core Project climate data, Geophys. Res. Lett., 22, 1689–1692, https://doi.org/10.1029/95GL01522, 1995.a Schmitt, F., Schertzer, D., Lovejoy, S., and Brunet, Y.: Multifractal temperature and flux of temperature variance in fully developed turbulence, Europhys. Lett., 34, 195–200, https://doi.org/10.1209 /epl/i1996-00438-4, 1996.a Sellers, W. D.: A global climatic model based on the energy balance of the earth-atmosphere system, J. Appl. Meteorol., 8, 392–400, 1969.a, b Serdukova, L., Zheng, Y., Duan, J., and Kurths, J.: Metastability for discontinuous dynamical systems under Lévy noise: Case study on Amazonian Vegetation, Sci. Rep.-UK, 7, 9336, https://doi.org/ 10.1038/s41598-017-07686-8, 2017.a Singla, R. and Parthasarathy, H.: Quantum robots perturbed by Levy processes: Stochastic analysis and simulations, Commun. Nonlinear Sci., 83, 105142, https://doi.org/10.1016/j.cnsns.2019.105142, Skufca, J. D., Yorke, J. A., and Eckhardt, B.: Edge of Chaos in a Parallel Shear Flow, Phys. Rev. Lett., 96, 174101, https://doi.org/10.1103/PhysRevLett.96.174101, 2006.a, b, c Solanki, S. K., Krivova, N. A., and Haigh, J. D.: Solar Irradiance Variability and Climate, Annu. Rev. Astron. Astr., 51, 311–351, https://doi.org/10.1146/annurev-astro-082812-141007, 2013.a Steffen, W., Rockström, J., Richardson, K., Lenton, T. M., Folke, C., Liverman, D., Summerhayes, C. P., Barnosky, A. D., Cornell, S. E., Crucifix, M., Donges, J. F., Fetzer, I., Lade, S. J., Scheffer, M., Winkelmann, R., and Schellnhuber, H. J.: Trajectories of the Earth System in the Anthropocene, P. Nat. Acad. Sci. USA, 115, 8252–8259, https://doi.org/10.1073/pnas.1810141115, 2018.a Stocker, T. F. and Schmittner, A.: Influence of CO[2] emission rates on the stability of the thermohaline circulation, Nature, 388, 862–865, https://doi.org/10.1038/42224, 1997.a Tessier, Y., Lovejoy, S., and Schertzer, D.: Universal multifractals: theory and observations for rain and clouds, J. Appl. Meteorol. Clim., 32, 223–250, https://doi.org/10.1175/1520-0450(1993)032 <0223:UMTAOF>2.0.CO;2, 1993.a Thompson, W. F., Kuske, R. A., and Monahan, A. H.: Reduced α-stable dynamics for multiple time scale systems forced with correlated additive and multiplicative Gaussian white noise, Chaos, 27, 113105, https://doi.org/10.1063/1.4985675, 2017.a Varadhan, S. R. S.: Large deviations and applications, Society for Industrial and Applied Mathematics Philadelphia, 75 pp., https://doi.org/10.2307/2287939, 1985.a Voigt, A. and Marotzke, J.: The transition from the present-day climate to a modern Snowball Earth, Clim. Dynam., 35, 887–905, https://doi.org/10.1007/s00382-009-0633-5, 2010.a Vollmer, J., Schneider, T. M., and Eckhardt, B.: Basin boundary, edge of chaos and edge state in a two-dimensional model, New J. Phys., 11, 013040, https://doi.org/10.1088/1367-2630/11/1/013040, Weron, A. and Weron, R.: Computer simulation of Levy alpha-stable variables and processes, Chaos – The Interplay Between Stochastic and Deterministic Behaviour, edited by: Garbaczewski, P., Wolf, M., and Weron, A., Springer Berlin Heidelberg, Berlin, Heidelberg, 379–392, ISBN978-3-540-44722-1, 1995.a Wu, J., Xu, Y., and Ma, J.: Lévy noise improves the electrical activity in a neuron under electromagnetic radiation, PloS one, 12, e0174330–e0174330, https://doi.org/10.1371/journal.pone.0174330, 2017. a Yagi, A.: Dynamical Systems, in: Abstract Parabolic Evolution Equations and their Applications, Springer Monographs in Mathematics, Springer Berlin Heidelberg, https://doi.org/10.1007/ 978-3-642-04631-5, 2010.a, b Zheng, Y., Serdukova, L., Duan, J., and Kurths, J.: Transitions in a genetic transcriptional regulatory system under Lévy motion, Sci. Rep.-UK, 6, 29274, https://doi.org/10.1038/srep29274, 2016.a Zheng, Y., Yang, F., Duan, J., Sun, X., Fu, L., and Kurths, J.: The maximum likelihood climate change for global warming under the influence of greenhouse effect and Lévy noise, Chaos, 30, 013132, https://doi.org/10.1063/1.5129003, 2020.a, b, c
{"url":"https://npg.copernicus.org/articles/29/183/2022/","timestamp":"2024-11-02T02:14:27Z","content_type":"text/html","content_length":"659702","record_id":"<urn:uuid:773a4d27-7d46-425f-ac47-2a22e331054c>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00201.warc.gz"}
Psi Gpm Chart Psi Gpm Chart - Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web qrfs makes using a pitot gauge to calculate gallons per minute simple: Web the psi from gpm calculator is a powerful tool designed to simplify hydraulic calculations, connecting various. Convert psi to gpm manually or with this convenient online. Web the gpm to psi calculator is a tool designed to convert the flow rate of water, measured in gallons per minute, to. Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Web convert water's flow rate in gpm from the psi reading of a pressure gauge using the psi to gpm calculator. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). 1250 gpm 114 psi 1250 gpm 114 psi Fire Pump Rental Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web qrfs makes using a pitot gauge to calculate gallons per minute simple: Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web convert water's flow rate in gpm from the psi reading of. Flow Rates Psi Gpm Chart Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the gpm to psi calculator is a tool designed to convert the flow rate of water, measured in gallons per minute, to. Web the. How To Calculate Water Flow Rate With Pipe Size And Pressure Printable Online Web the gpm to psi calculator is a tool designed to convert the flow rate of water, measured in gallons per minute, to. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Convert psi to gpm manually or with this convenient online. Web convert water's flow rate in gpm from the. Psi To Gpm Chart Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the psi from gpm calculator is a powerful tool designed to simplify hydraulic calculations, connecting various. Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Convert psi to. Gpm To Psi Chart Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web convert water's flow rate in gpm from the psi reading of a pressure gauge using the psi to gpm calculator. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web the gpm to. Blast Nozzle Selection Guide (VIDEO) Convert psi to gpm manually or with this convenient online. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the gpm to psi calculator is a tool designed to convert the flow rate. Gpm Chart For Pipe Convert psi to gpm manually or with this convenient online. Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the psi to gpm calculator is designed to help. Psi To Gpm Conversion Chart Best Picture Of Chart Web qrfs makes using a pitot gauge to calculate gallons per minute simple: Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the psi from gpm calculator is a powerful tool designed to simplify hydraulic calculations, connecting various. Web convert water's flow rate in gpm from the psi reading of a. Psi To GPM Conversion Chart Convert psi to gpm manually or with this convenient online. Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web the flow rate of water in gallons per minute, or gpm, can be calculated. Calculate Gpm From Psi And Pipe Size Calculator Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web qrfs makes using a pitot gauge to calculate gallons per minute simple: Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Web the psi from gpm calculator is. Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Web convert water's flow rate in gpm from the psi reading of a pressure gauge using the psi to gpm calculator. Web qrfs makes using a pitot gauge to calculate gallons per minute simple: Web the gpm to psi calculator is a tool designed to convert the flow rate of water, measured in gallons per minute, to. Convert psi to gpm manually or with this convenient online. Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web the psi from gpm calculator is a powerful tool designed to simplify hydraulic calculations, connecting various. Web Qrfs Makes Using A Pitot Gauge To Calculate Gallons Per Minute Simple: Web the psi to gpm calculator is designed to help users convert pressure (psi) into flow rate (gpm). Web the flow rate of water in gallons per minute, or gpm, can be calculated with the help of the bernoulli equation and. Web the gpm to psi calculator is a tool designed to convert the flow rate of water, measured in gallons per minute, to. Web easily convert gpm (gallons per minute) to psi (pounds per square inch) with our online calculator. Web The Psi From Gpm Calculator Is A Powerful Tool Designed To Simplify Hydraulic Calculations, Connecting Various. Web convert water's flow rate in gpm from the psi reading of a pressure gauge using the psi to gpm calculator. Convert psi to gpm manually or with this convenient online. Related Post: Free Printable Rental Application Form Mandolin Finger Chart Printable Resistance Band Workout Roblox Shading Template 585 X 559 Shaded Soft Summer Color Palette Ppg Automotive Blue Paint Colors Harley Davidson Bike Size Chart Dot To Dot 100 Printable Middle Finger Printable Printable Cheat Sheet Chess Rules
{"url":"https://xinisfestival.edu.gr/en/Psi-Gpm-Chart.html","timestamp":"2024-11-15T00:12:35Z","content_type":"text/html","content_length":"27842","record_id":"<urn:uuid:a49fb8a9-2bcc-4938-a956-40ba38b71b98>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00044.warc.gz"}
5.1. Development of ANFIS model The ANFIS model shown and described below was developed in the Python programming language, in the PyCharm 2023.2.1 editor. (Community Edition, Jet Brains) which is open user access. ANFIS systems represent a synergy of artificial neural networks and fuzzy logic (fuzzy inference system). The advantage of these systems is reflected in the combination of using their positive features, namely the ability to learn with artificial neural networks and the use of expert knowledge with fuzzy logic. The structure of the ANFIS system is similar to the structure of artificial neural networks, where based on the input-output set of data, a corresponding fuzzy inference system is formed and the parameters of the membership functions that transform the input data are calculated. The general structure of the ANFIS model consists of five layers ( Figure 3 ). Below is a brief description of the layers. In the first layer, the input data is transformed into a system of appropriate fuzzy sets. Accordingly, the output data of the first layer is determined by: $O 1 d , i = μ d , i x , i = 1 , 2 ,$ is the input argument of the first layer, and $μ d , i$ is the membership function of the corresponding linguistic variable The second layer of the ANFIS model combines the output arguments of the different variables of the previous layer. The output data is determined by: $O 2 , d 1 , d 2 , i , j = ω 2 , d 1 , d 2 , i , j = μ d 1 , i x · μ d 2 , j y , i , j = 1 , 2 ,$ $d 1$ $d 2$ are two different variables. The next layer includes the process of normalization of the values obtained in the second layer. The normalization process is carried out as follows: $O 3 , d 1 , d 2 , i , j = ω d 1 , d 2 , i , j ¯ = ω d 1 , d 2 , i , j ∑ d 1 , d 2 ω d 1 , d 2 , i , j , i , j = 1 , 2 .$ The next layer is a layer that combines normalized values from the previous layer and first-order polynomials: $O 4 , d 1 , d 2 , i , j = ω d 1 , d 2 , i , j ¯ f d 1 , d 2 , i , j = ω ¯ d 1 , d 2 , i , j p d 1 , d 2 , i , j x + q d 1 , d 2 , i , j y + r d 1 , d 2 , i , j , i , j = 1 , 2 .$ $p d 1 , d 2 , i , j$ $q d 1 , d 2 , i , j$ $r d 1 , d 2 , i , j$ are the parameters of the fourth layer model. In the last layer, the normalized values of the previous layer are added using the following formula: $O 5 d 1 , d 2 , i , j = ∑ i , j , d 1 , d 2 , ω d 1 , d 2 , i , j ¯ f d 1 , d 2 , i , j = ∑ d 1 , d 2 , i , j ω d 1 , d 2 , i , j f d 1 , d 2 , i , j ∑ d 1 , d 2 , i , j ω d 1 , d 2 , i , j$ Figure 4 . the general architecture of the ANFIS model described in the previous part is shown. Training of a neuro-fuzzy system is best done by applying a back-propagation process that uses the $R M S E$ as the error function, which is defined by: $R M S E = 1 n ∑ i = 1 n ( y i − y ^ i ) 2 ,$ $y 1$ $y 2$ $y n$ are actual values, and $y ^ 1 , y ^ 2$ $y ^ n$ are values predicted by the ANFIS model. When the input membership function parameters are set, the output from the ANFIS model is calculated as follows: $f = w 1 w 1 + w 2 ⋅ f 1 + w 2 w 1 + w 2 ⋅ f 2 = w 1 ¯ ⋅ f 1 + w 2 ¯ ⋅ f 2$ $f 1 = x · p 1 + y · q 1 + r 1$ $f 2 = x · p 2 + y · q 2 + r 2$ , the following equality is obtained: $f = w 1 ¯ ⋅ x ⋅ p 1 + w 1 ¯ ⋅ y ⋅ q 1 + w 1 ¯ ⋅ r 1 + w 2 ¯ ⋅ x ⋅ p 2 + w 2 ¯ ⋅ y ⋅ q 2 + w 2 ¯ ⋅ r 2$ The process of training, i.e., model training, is based on the determination of parameter values, adjusted according to the training data. Back-propagation method is the basic way of training the system. This algorithm tries to minimize the error between the network and the desired output. The determination of the availability of continuous systems and its partial indicators was processed through the results obtained through questionnaires related to the expert assessment of partial indicators of availability and to historical data on downtime and work, which include the time period from 2016 to 2019. The availability of the ECC system is a function of the appropriate factors, which are most often divided into two groups - partial indicators, reliability and maintainability. These partial indicators (synthetic indicators) are further a function of a larger number of independent parameters (sub-indicators) that are considered as variables in this ANFIS model. Figure 5. Presentation of partial availability indicators [ ] (synthetic indicators, sub-indicators). Within this model, availability decomposes into partial sub-indicators that are assessed by expert assessment, in the form of a questionnaire. Each part of the I ECC system (bucket wheel excavator, beltwagon, belt conveyors and crushing plant) is evaluated. In the expert assessment, 10 experts from the field of continuous systems in surface exploitation were surveyed, who provided estimates for the sub-indicators of availability in a certain quarter and covering the period from 2016 to 2019 for each part of the ECC system. Data from 2016-2018 were used to train the ANFIS model (480 data - training data set), while data from 2019 (160 data - test data set) were used to test the obtained model. The experts were offered grades in the questionnaire ranging from (the worst grade) to (the best grade). The layout of the questionnaire is shown in Figure 6 , with the fact that in this questionnaire the expert is required to make assessments at the quarterly level in a predetermined period of time for each part I of the ECC system. The scores obtained in this way were used as input data of this model. Before creating the model, a database was created related to the duration of mechanical, electrical and other failures of the ECC system over a period of 4 years (2016, 2017, 2018, 2019). Data from this database is used to determine historical availability on a quarterly basis, and as such is used as output data of the ANFIS model. Availability per quarter was calculated based on the formula Table 1 . part of the database is shown. The data was taken from the Electric Power Company of Serbia and contained information about downtimes on the specific system in the specified time period. Based on the available data, the availability of the system was determined quarterly and the obtained values are shown in Table 2 The resulting ANFIS model received the survey results for all 9 partial sub-indicators for each part of the I ECC system as input parameters, while the output represented the corresponding availability in the quarter to which the survey results refer, which was obtained based on historical data taken from the Electric Power Company of Serbia. In the first step of the model, fuzzification was performed, which represents the transformation of partial indicator scores, using membership functions, into the corresponding $j$ -scale for $j$=10. Predefined fuzzy sets are not used for probability functions, but membership functions are used instead, the parameters of which are estimated within the model training process. The membership functions used are Bell-shaped membership function, Gaussian membership function and Sigmoid membership function. Using IF-THEN rules that are pre-defined, the synthetic indicator $R$ is determined based on the partial sub-indicators $o$, $c$, $b$ and the synthetic indicator $M$ is determined based on the partial sub-indicators $t$, $e$, $u$, $d$, $m$, $s$. In the following, we will illustrate the determination of the synthesis indicator using IF-THEN rules based on sub-indicators . Let the IF-THEN rule be defined by IF $o i$ $c j$ $b k$ $R l$ where i,j,k are in the set { $A , B , C , D , E , F$ }, and is in the set { $A , B , C , D , E$ }. Then the fuzzy sets come together: $μ o i x · μ c j y · μ b k z ,$ are the input values of grades respectively for partial indicators , assigns the value . The fuzzy set corresponding to the rating of the indicator is the sum of all fuzzy sets assigned the value . In a similar way, on the basis of sub-indicators , the synthesis indicator is calculated. In the next step, using the IF-THEN rules, as described in the previous paragraph, the availability indicator is determined by synthetic indicators . Then, the Euclidean distance of the obtained fuzzy sets from the fuzzy sets assigned to the availability indicator is determined based on the corresponding membership functions whose parameters we estimate within this ANFIS model. The distances $d 1$ $d 2$ $d 3$ $d 4$ $d 5$ determined in this way can be joined by the normalized reciprocal values of the relative distances determined by: $μ i = d m i n d i d m i n d 1 + d m i n d 2 + d m i n d 3 + d m i n d 4 + d m i n d 5 , i ∈ 1 , 2 , 3 , 4,5 .$ These values represent belonging to the appropriate set of grades that determine the indicator of availability, i.e., $A = μ 1 , E , μ 2 , D , μ 3 , C , μ 4 , B , μ 5 , A .$ Finally, the linguistic description is transformed into a numerical designation: $1 μ 1 + 2 μ 2 + 3 μ 3 + 4 μ 4 + 5 μ 5 μ 1 + μ 2 + μ 3 + μ 4 + μ 5 .$ Dividing by 5 gives the predicted value of availability, which is compared with the realized value of availability calculated on a quarterly basis. The IF-THEN rules used in this ANFIS model are shown in the following tables. So, for example, the values shown in the first type of this table are interpreted as follows: If the partial sub-indicator $o$ is $F$ (the conditions of the working environment are such that the engaged equipment generally does not meet them) and if the partial sub-indicator $c$ is $F$ (write-off machine, very high level of failure) and if the partial sub-indicator $b$ is $F$ (underdeveloped basic engineering) then indicator $R$ is unreliable $E$. $. o$ $c$ $b$ $R$ F F F E E E E D D D D C C C C B B B B A B C B A C B C B C B B A B C D B C C B B D C D C E B E C C D B A E E D D C C A A D C B B B B A A B C D B A B A A D B B A D E A B A C C A A C D B A B B A A E D B A B C A B C E B B D D B B E E C B B A A A A A A F E E D F D D C F C C B F A A A F E D D E D C C D C B B C B A A $. t$ $e$ $u$ $d$ $m$ $s$ $M$ F F F F F F E E E E E E E D D C C C B B B D C C C C C B D C A C B B A C B B B C B A B C B B B B A B D D C C C B C C D B B B B D C D C D B B E D C B B C B C B B B B B A B D C C B B B C C B B D C B B B B B B A A B A C C C B A C C B C B B A C C C C B B B C D C C D C B C C B B C D B C A B C D C B B C D D D B B D D D D D D C C C C C C C B B B B B B B A A A A A A A A E E E D D D C D D D C C C B C C C B B B A B B B A A A A F E D C B A B E E D C B A B D E D C B A B C E D C B A B B E D C B A A A E D C B A A $. R$ $M$ $A$ D D D D C C D B C D A B C D C B D C C C C B B B A A A E D D C B B B A A C A B A summary of the considered models for predicting availability is given in Table 6 5.2. Development of Simulation model During the creation of the simulation model, all failures were classified into one of three types of failure (mechanical, electrical, and others). As in the case of the ANFIS model, the simulation model was developed based on data from three years (2016., 2017. and 2018. year). Table 7 . experimental and theoretical frequencies of machine failures by intervals are given. The distribution of mechanical failure times, which was considered in the 96 percentile of the data, conforms to the Weibull distribution with parameters $γ = 5$ $β = 0.9511$ $η = 18.4311$ . More precisely, the empirical distribution function is determined by: $F x = 1 − e x p − ( x − 5 18.4311 ) 0.9511$ The number of mechanical failures on which this model was developed amounted to 1238 failures. The testing of the hypothesis about the distribution of data was performed with the help of the Kolmogornov-Smirnov test whose statistic value $n D n$ is equal to 1.7944, so with a significance level of 0.001 we cannot reject the null hypothesis that claims that the data are in accordance with the Weibull distribution. The following figure shows the experimental and theoretical function of the distribution of mechanical failures. Table 8 . experimental and theoretical frequencies of electrical failures by intervals are given. The distribution of the duration of electrical failure, which was considered in the 98.5 percentile of the data, is in accordance with the Weibull distribution with parameters $γ = 5$ $β = 0.9066$ $η = 28.6022$ . More precisely, the empirical distribution function is determined by: $F x = 1 − e x p − ( x − 5 28.6022 ) 0.9066$ The number of electrical failures on which this model was developed amounted to 908 failures. Testing of the hypothesis about data distribution was performed with the help of the Kolmogornov-Smirnov test whose statistic value $n D n$ is equal to 1.2804, so with a significance level of 0.05 we cannot reject the null hypothesis which claims that the data are in accordance with the Weibull distribution. The following figure shows the experimental and theoretical distribution function of electrical failure. Table 9 . experimental and theoretical frequencies of other failures by intervals are given. No. The lower bound of the interval The upper bound of the interval Experimental pdf Experimental cdf Theoretical pdf Theoretical cdf KS test 1 5 53 0.3803 0.3803 0.3910 0.3910 0.0107 2 53 101 0.2837 0.6640 0.2381 0.6291 0.0350 3 101 149 0.1384 0.8025 0.1450 0.7741 0.0284 4 149 198 0.0823 0.8847 0.0883 0.8624 0.0223 5 198 246 0.0433 0.9281 0.0538 0.9162 0.0119 6 246 294 0.0217 0.9498 0.0328 0.9490 0.0008 7 294 342 0.0172 0.9670 0.0200 0.9689 0.0019 8 342 390 0.0069 0.9739 0.0122 0.9811 0.0072 9 390 438 0.0113 0.9852 0.0074 0.9885 0.0032 10 438 486 0.0054 0.9906 0.0045 0.9930 0.0023 11 486 534 0.0015 0.9921 0.0027 0.9957 0.0036 12 534 583 0.0015 0.9936 0.0017 0.9974 0.0038 13 583 631 0.0015 0.9951 0.0010 0.9984 0.0033 14 631 679 0.0020 0.9970 0.0006 0.9990 0.0020 15 679 727 0.0020 0.9990 0.0004 0.9994 0.0004 16 727 775 0.0010 1.0000 0.0002 0.9996 0.0004 The distribution of the duration of other failures, which was considered in the 100 percentile of the data, is in accordance with the exponential distribution with parameters $γ = 5$ $λ = 0.0103$ . More precisely, the empirical distribution function is determined by: $F x = 1 − e x p − 0.0103 · ( x − 5 )$ The number of other failures on which this model was developed amounted to 2030 failures. The testing of the hypothesis about the data distribution was performed with the help of the Kolmogornov-Smirnov test whose value of the statistic $n D n$ is equal to 1.5761, so with a significance level of 0.01 we cannot reject the null hypothesis which claims that the data are in accordance with the exponential distribution. The following figure shows the experimental and theoretical distribution function of other failures. Table 10 . the presentation of experimental and theoretical frequencies of duration between failures by intervals is given. The distribution of the duration between failures, which was considered in the 95 percentile of the data, is in accordance with the Erlang distribution with parameters $γ = 60$ $k a = 2$ $λ = 0.0055$ . More precisely, the empirical distribution function is determined by: $F x = 1 − 1 + 0.0055 · x − 20 e x p ⁡ ( − 0.0055 · x − 20 )$ The number of times between failures on which this model was developed was 5212. The testing of the hypothesis about the distribution of data was carried out with the help of the Kolmogornov-Smirnov test whose value of the statistic $n D n$ is equal to 1.0192, so with a significance level of 0.2 we cannot reject the null hypothesis that claims that are data in accordance with the Erlang distribution. The following figure shows the experimental and theoretical time distribution function between failures. The following figure shows the frequency distributions of the considered failure types. The simulation model, whose algorithm is shown in the following figure ( Figure 12 ), based on a randomly selected number from the distribution of the type of failure, generates a type of failure, and based on the next randomly selected number, the length of the failure of the selected type of failure is generated based on the distribution functions described above. Then, based on the new randomly selected number and the time-between-failure distribution function, the duration between failures is generated. t[sim] – duration of the simulation (s), No[sim] – number of simulations, State – state ECC system: 1 – „working time“; 0 – „downtime“, r[number] – random number generated by uniform distribution in the interval [0...1], cs- current simulation, t[sim] – simulation time, TBF – time between failures (current), DT – downtime failures (current), t[ef] – failure completion time (in simulation), t[bf] – failure start time (in simulation), VR[flr] – type of failure: 1 - mechanical; 2 – electrical; 3- other, A[ECC] – system availability - ECC, A (t) – availability of the system at a given time t, k[a] – stationary availability value.
{"url":"https://www.preprints.org/manuscript/202401.1743/v1","timestamp":"2024-11-13T18:31:48Z","content_type":"text/html","content_length":"969344","record_id":"<urn:uuid:3fc1bc5b-5cad-4a76-9603-3112ef37a788>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00734.warc.gz"}
A student calculates the density of an unknown solid. The mass is 10.04 grams, and the volume is 8.21 cubic centimeters. How many significant figures should appear in the final answer? | HIX Tutor A student calculates the density of an unknown solid. The mass is 10.04 grams, and the volume is 8.21 cubic centimeters. How many significant figures should appear in the final answer? Answer 1 Density = mass #xx# volume So you are multiplying two numbers. When multiplying, the number of significant figures is determined by the number with the least number of significant digits. The volume has 3 significant figures. The mass has four significant figures (remember to count "sandwiched" zeros). Because volume has the lowest number of significant figures, that is the number of significant figures in the final answer. The answer is three. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/a-student-calculates-the-density-of-an-unknown-solid-the-mass-is-10-04-grams-and-8f9af7f745","timestamp":"2024-11-07T23:34:18Z","content_type":"text/html","content_length":"576706","record_id":"<urn:uuid:feb1823f-2b64-41c9-905f-34df88c228fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00478.warc.gz"}
Mathematical Inequalities Questions for SBI Clerk When preparing for the SBI Clerk Reasoning, practising Mathematical Inequalities questions becomes necessary. Therefore, at Mockers, we provide the Mathematical Inequalities Questions for SBI Clerk exam preparation. You can access more than 10 sets of Mathematical Inequalities Questions at Mockers in which 1 mark will be rewarded for each correct answer. There is no negative marking for Mathematical Inequalities Questions when you practise at Mockers. To begin practising the Mathematical Inequalities Questions for SBI Clerk use the above links or navigate to the SBI Clerk Chapter Wise Test for Reasoning. Mathematical Inequalities Questions and Answers for SBI Clerk Along with the Mathematical Inequalities questions to practise, our experts have provided the solutions to help you cross-check your answers. The Mathematical Inequalities Questions and Answers for SBI Clerk are also accessible for free; however, before you access the answers, you have to attempt the Mathematical Inequalities questions. Once you submit, the options to find the solutions to Mathematical Inequalities Questions will appear. Remember that the questions of Mathematical Inequalities are available here in online mode for which you have to create an account at Mockers.in-account creation is simple and free. Features of Our SBI Clerk Mathematical Inequalities Reasoning Questions At Mockers, there are a total of 5 features of SBI Clerk Mathematical Inequalities Reasoning questions you should know before you start practising questions. • Dual Language Mathematical Inequalities Test: Our SBI Clerk chapter-wise Reasoning test is in dual language due to which you can practice Mathematical Inequalities questions for SBI Clerk Exam preparation in Hindi and English. This feature helps lakhs of aspirants nationwide to do effective exam preparation. • Instant Performance Report: To give you an insight into your overall learning in Mathematical Inequalities, the Mockers platform offers you a free instant performance report in which an overall result as well as a breakdown of your time consumption is given. This feature is capable of giving you feedback on your weak and strong areas in the Mathematical Inequalities reasoning questions. • Available 24/7 in Online Mode: You can practice Mathematical Inequalities Questions for SBI Clerk Exam preparation on your own time because the test is in online mode and available 24/7. • Detailed Solutions: You may answer all the Mathematical Inequalities questions accurately, but there are many who can’t. Therefore, our experts have prepared detailed solutions to all the Mathematical Inequalities Reasoning questions that enable candidates to get instant help. However, note that the Mathematical Inequalities solutions are unavailable till you submit the test. • Re-Attempt Allowed: Many platforms restrict you to re-attempting the test but Mockers don’t. One of the best features that our tests have is that re-attempt is allowed. If you fail to score full marks in one set of Mathematical Inequalities questions, you can retry to attempt the same test as many times as you want. How to Practise Mathematical Inequalities Questions for SBI Clerk Exam? Just follow these simple steps to practise Mathematical Inequalities Questions for SBI Clerk at Mockers.in: • Search www.mockers.in using the internet browser on your smartphone or PC/Laptop • Once the official website loads, click ‘Exam Categories’; on smaller screens such as smartphones, you’ll have to click on the navigation icon - as mentioned in the image below. • Click Bank and then SBI Clerk • A new page will load - choose ‘Chapter Wise Test’ • The page will quickly reload, now, click ‘Reasoning’ to get access to the Mathematical Inequalities Questions for SBI Clerk. Benefits of Practicing Mathematical Inequalities Questions for SBI Clerk Practising Mathematical Inequalities questions for the SBI Clerk exam has several benefits such as - • A Thorough Revision: Practising the SBI Clerk Mathematical Inequalities Reasoning questions allows you to focus on the topic in depth. With the help of Mockers Chapter-wise Reasoning Questions for Mathematical Inequalities, you can do a thorough revision of the Mathematical Inequalities questions that enables you to find the sections in which you need to pay more attention. • Self-Analysis: For effective exam preparation, there is nothing better than Self-analysis. When practising, the Mathematical Inequalities Questions online at Mockers, you face easy to very challenging questions which enable you to assess your understanding of learning in Mathematical Inequalities reasoning questions. This enables you to make some changes in your preparation to improve in those areas. • Assistance in Exam Preparation: The SBI Clerk Mathematical Inequalities Questions and Answers assist you in exam preparation which guides you to manage time, practice questions, revise and review the overall preparation. Along with all of these, the Online SBI Clerk Reasoning Tests help you keep track of your learning, improvements and the areas that need your attention. Techniques to Master Mathematical Inequalities Questions for SBI Clerk Exam Mathematical Inequalities is a type of reasoning question that is asked in SBI Clerk exam, to master the Mathematical Inequalities questions no hard and fast rules exist; however, these proven techniques may help you Master Mathematical Inequalities Questions for the SBI Clerk Exam. • Understand the Mathematical Inequalities Question: Read the Mathematical Inequalities question carefully to grasp the information provided and then Identify any given conditions, constraints, or relationships that are mentioned. This is the first and the most important step towards mastering the Mathematical Inequalities questions. • Break Down the Problem: Sometimes questions of Mathematical Inequalities may appear lengthy and vague but this isn’t the reality. To better understand the questions, break the problem into smaller, manageable parts. Identify the key elements, relationships, and patterns within the question. This step helps you organize your thoughts and develop a systematic approach to answer the Mathematical Inequalities Questions for SBI Clerk. • Practice With Similar Questions: One of the best ways to master the Mathematical Inequalities reasoning questions is to practise with similar questions - you can use the SBI Clerk Mock Test, chapter-wise test or Previous year question papers for this. Remember, practice makes perfect and therefore, the more you practice a variety of Mathematical Inequalities questions, the better you become at answering them. • Review Your Mistakes: It is essential to learn from your mistakes if scoring full marks in Mathematical Inequalities questions is your goal. After completing the SBI Clerk Mathematical Inequalities reasoning questions, review your answers, whether correct or incorrect. Understand the reasoning behind both correct and incorrect answers to identify areas for improvement. Doing this will help you improve yourself to answer Mathematical Inequalities questions with confidence and accuracy. • Re-Attempt Mathematical Inequalities Questions: Once, you review your performance or mistakes, re-attempt the Mathematical Inequalities questions to learn how you have improved your grip on them Mathematical Inequalities questions. Start practising for free at Mockers to re-attempt Mathematical Inequalities questions. To solve Mathematical Inequalities in bank exams - read the question carefully, break down the problem and use your critical thinking skills. And while doing the SBI Clerk exam preparation, practise lots of Mathematical Inequalities questions. Mathematical Inequalities questions may appear to be difficult in the SBI Clerk Exam if you haven’t prepared well or don't know the correct way to answer. Use Mockers.in to master Mathematical Inequalities questions. The Mathematical Inequalities Questions for SBI Clerk exam is one of the basic topics of the Mathematical Inequalities Ability section. To score full marks in SBI Clerk Mathematical Inequalities, you must pay attention to the Verbal Mathematical Inequalities questions. To become comfortable answering SBI Clerk Mathematical Inequalities questions practice at least 5 to 10 sets of Verbal Mathematical Inequalities questions. However, more is better. Open Mockers.in to find Mathematical Inequalities Questions and Answers for SBI Clerk exam preparation. The questions are in accordance with the SBI Clerk Mathematical Inequalities Syllabus.
{"url":"https://www.mockers.in/online/sbi-clerk-chapter-wise-test-for-mathematical-inequalities","timestamp":"2024-11-13T18:43:37Z","content_type":"text/html","content_length":"340230","record_id":"<urn:uuid:f12d3359-d923-4d0e-9e12-cf76beaef696>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00486.warc.gz"}
Graph Construction Published under the CC-BY4.0 license Open reviews and editorial process: Yes Preregistration: No All supplementary files can be accessed at the OSF project page: https://doi.org/10.17605/OSF.IO/HXK2U Graph Construction: An Empirical Investigation on Setting the Range of the Y-Axis Jessica K. Witt Colorado State University Graphs are an effective and compelling way to present scientific results. With few rigid guidelines, researchers have many degrees-of-freedom regarding graph construction. One such choice is the range of the y-axis. A range set just beyond the data will bias readers to see all effects as big. Conversely, a range set to the full range of options will bias readers to see all effects as small. Researchers should maximize congruence be-tween visual size of an effect and the actual size of the effect. In the experiments psented here, participants viewed graphs with the y-axis set to the minimum range re-quired for all the data to be visible, the full range from 0 to 100, and a range of approxi-mately 1.5 standard deviations. The results showed that participants’ sensitivity to the effect depicted in the graph was better when the y-axis range was between one to two standard deviations than with either the minimum range or the full range. In addition, bias was also smaller with the standardized axis range than the minimum or full axis ranges. To achieve congruency in scientific fields for which effects are standardized, the y-axis range should be no less than 1 standard deviations, and aim to be at least 1.5 stand-ard deviations. Keywords: Graph Design, Effect size, Sensitivity, Bias One way to lie with statistics is to set the range of the y-axis to form a misleading impression of the data. A range set too narrow will exaggerate a small effect and can even make a non-significant trend appear to be a substantial effect (Pandey, Rall, Sat-terthwaite, Nov, & Bertini, 2015). Yet the default set-ting of many statistical and graphing software pack-ages automatically sets the range as narrow as the data will allow. The problem of creating misleading graphs persists even when the full range is shown instead. As shown in the studies reported below, a range set too wide also creates a misleading impres-sion of the data by making effects seem smaller than they are. Here, I argue that for scientific fields that use standardized effect sizes and adopt Cohen’s convention that an effect of d = 0.8 is big, the range of the y-axis should be approximately 1.5 standard deviations (SDs). How should the y-axis range of a graph be deter-mined? Graph construction should account for the visual experience of the people reading the graphs (Cleveland & McGill, 1985; Kosslyn, 1994; Tufte, 2001) and the strong link between perception and cogni-tion (Barsalou, 1999; Glenberg, Witt, & Metcalfe, 2013). When the visual size of the effect aligns with the actual size of the effect, the person reading the graph does not have to exert mental effort to decode effect size from the graph. Instead, the size of the effect is processed automatically. This increases graph fluency by making it easier to understand Jessica K. Witt, Department of Psychology, Colorado State University. Data, scripts, and supplementary materials available at osf.io/hw2ac. Address correspondence to JKW, Department of Psy-chology, Colorado State University, Fort Collins, CO 80523, USA. Email: [email protected] Table 1. Overview of the five experiments. Experiment N Effect sizes Graph Type Standardized condition1 1 9 0.1, 0.3, 0.5, 0.8 Bar graph 2 SDs 2 14 0.1, 0.3, 0.5, 0.8 Bar graph 1.4 SDs 3 13 0, 0.3, 0.5, 0.8 Bar graph with error bars 1.2 SDs 4 20 0, 0.3, 0.5, 0.8 Line graph 1.4 SDs 5 15 0, 0.3, 0.5, 0.8 Line graph 1 SD Notes. 1[This refers to the range depicted in the standardized condition, so a range of 1.4 SDs is when the ] graph was centered on the grand mean and extended 0.7 SDs in either direction. that an effect is big when it looks big and an effect is small when it looks small. To increase graph fluency, the range of the y-axis should be selected to maximize compatibility be-tween visual size and actual effect size (Kosslyn, 1994; Pandey et al., 2015; Tufte, 2001). However, the current literature fails to provide clear guidelines on how to achieve this compatibility. For example, some recommend displaying only the relevant range so that the axis goes from just below the lowest data point to just above the highest data point (Kosslyn, 1994). This would not achieve the recommended compatibility because small effects would look big. Others assert that the y-axis should always start from 0, particularly for bar graphs (Few, 2012; Pan-dey et al., 2015; Wong, 2010). This too could fail to achieve compatibility by making effects look too small. In the case of scientific fields for which effect size is standardized based on standard deviation, the range of the y-axis should be a function of the stand-ard deviation (SD). In behavioral sciences such as psychology and economics, for example, the mean effect size is approximately half a SD (Bosco, Aguinis, Singh, Field, & Pierce, 2015; Open Science Collabo-ration, 2015; Paterson, Harms, Steel, & Crede, 2016), and a standardized effect size of d = .8 is considered a big effect (Cohen, 1988). Consequently, an appro-priate range for the y-axis would be one to two SDs, which would be plotted as the group mean ± 0.75 SD (or ±0.5 – 1 SDs). With this range, big effects such as a Cohen’s d of .8 would look big and small effects of d = .3 would look small. In other words, this range would help achieve compatibility between the visual impression of the size of the effect and the actual size of the effect. Empirical Studies The effect of visual-conceptual size compatibility on graph fluency was empirically tested in 57 partic-ipants across 5 experiments (see Table 1). The par-ticipants were naïve college students, which serves as an appropriate sample given that scientific results should be accessible and comprehensible to this population and not just to experts in one’s field. The stimuli were bar or line graphs that had been constructed from simulated data. Data were simu-lated from two (hypothetical) groups of participants by sampling from normal distributions in R (R Core Team, 2017). For one group, the data were drawn from a normal distribution with a mean of 50 and a standard deviation of 10 (as in a memory experiment with mean performance of 50% and SD of 10%). For the other group, the data were drawn from a normal distribution with a standard deviation of 10 and the mean at 49, 47, 45, or 42. These means correspond to effect sizes of d = 0.1, 0.3, 0.5, and 0.8, respec-tively. In Experiments 3-5, the mean of 49 (d = 0.1) was replaced with the mean of 50 (d = 0). In Exper-iments 2-5, the data were re-sampled if the attained effect size differed by more than 0.1 from the in-tended effect size. Data were simulated 10 times for each of the four effect sizes to create 40 sets of data for each Experiment. In Experiments 1-3, the means of the simulated data were displayed as a bar graph depicting two groups of participants who engaged in different study strategies (spaced versus massed; see Figure 1). In Experiments 4-5, the means were used to determine the end points of a line graph, and the x-axis was labeled as “hours spent studying”. For each set of data, three graphs were constructed that varied in the range of the y-axis. The full condition showed the full range from 0 to 100 on a hypothet-ical memory test. The minimal condition showed the smallest range necessary to see the data. The standardized condition was centered on the group mean and extended by one to two SDs in either di-rection (the exact value differed across experiments, see Table 1 or the Appendix). Figure 1 shows several examples of graphs that served as stimuli. In Exper-iment 3, error bars were also included and explained to the participants. Within an experiment, the same set of 120 graphs (3 axis ranges x 4 effect sizes x 10 sets) were shown to the participants. Graphs were shown one at a time, order was randomized, and participants completed 4 blocks of 120 trials. In all experiments, the participants’ task was to indicate whether there was no effect, a small effect, a me-dium effect, or a big effect for each graph by press-ing 1, 2, 3 or 4 on the keyboard. Graph fluency was measured using linear regres-sions rather than accuracy because regression coef-ficients have the advantage that they provide two separate measures. The slope provides an estimate of sensitivity to the magnitude of the effect depicted in the plot. A steeper slope indicates better sensitiv-ity to effect size than a shallower slope. The inter-cept provides an estimate of bias. Two graphs could lead to similar levels of sensitivity but different lev-els of bias. Separate linear regressions were calcu-lated for each participant for each y-axis range con-dition (full, standardized, and minimal). Full Standardized Minimal Figure 1. Sample stimuli in the experiments on bar graphs and on line graph. The bar graphs show final test score as a function of whether study style was spaced or massed. The line graphs show final test score as a function of hours spent studying from 1 to 4. Within each experiment, the same data were plotted using the full range from 0-100, the standard-ized range (in this case, the group mean +/- 0.7 SD), or the minimal range necessary to see the data. In this example, a medium effect (Cohen’s d = 0.5) was simulated for the bar graphs (top row) and a small effect (Cohen’s d = 0.3) was simu-lated for the line graphs (bottom row). The participant’s task was to indicate whether there was no effect, a small effect, a medium effect, or a big effect. In each regression, the dependent measure was response (on the scale of 1 to 4). The effect sizes were recoded to also be on a scale from 1 to 4 then centered by subtracting 2.5 so that perfect perfor-mance would produce a regression coefficient for the slope of 1 and an intercept of 2.5. Figure 2 shows the mean slope coefficients across all 5 experiments. Sensitivity was best for the standardized graphs and worse for the full range graphs. Participants were better able to assess the size of the effect depicted in the graph for the stand-ardized graphs, than for the minimal or full graphs. Participants were also less biased when viewing the standardized graphs. Figure 3 shows the mean bias across all 5 experiments. Bias scores were calcu-lated as a percent bias based on the coefficients for the intercept. A negative score indicates a bias to respond that effects were small, and a positive score indicates a bias to respond that the effects were big. For the full graphs, there was a large bias to respond that the effects were small. When looking at graphs with the full range, participants responded that al-most all effects (86%) were null or small. For the minimal graphs, there was a large bias to respond that the effects were substantial. When looking at graphs with the minimal range for Cohen’s d = 0.10 – 0.80, participants responded that the effect was big on 49% of the trials. In contrast, there was much less bias with the standardized graphs (see Supple-mental Materials). Figure 2. Sensitivity is plotted as a function of graph axis condition for the three types of graphs across all 5 experiments. Sensitivity was measured as the coefficient for the slope from regressions of actual effect size on estimated effect size. Only trials for which the graph depicted an effect size greater than d = 0.1 are included (see supplementary materials for all the data). A higher sensitivity score corresponds to better performance, and a coefficient of 1 corresponds to perfect performance. A coefficient of 0 indicates chance performance. In the left panel, mean sensitivity across all experiments is shown. Error bars are 1 SEM calculated within-subjects, and are approximately the same size as the symbols. The y-axis range is 3 SD. The right panel shows sensitivity for each participant for each experiment. The data are color-coded by experiment (e.g. red = Experiment 1, orange = Experiment 2) and are also laterally positioned from left to right within graph type category. Each point corresponds to one participant, and each participant has one symbol for each of the three graph types. The solid horizontal line at 0 shows the point of no sensitivity and the dashed horizontal line at 1 shows the point of perfect sensitivity. The visual impression of the size of an effect has a strong influence on the judged size of an effect. When the visual impression was compatible with the actual effect size, judgments of effect size were bet-ter calibrated and less biased compared with the typical default setting of showing the minimum range to display the data and the setting of showing the full potential range. Based on the current stud-ies, the recommendation is to center the y-axis on the grand mean and extend the range 0.75 SDs in ei-ther direction so that the range of the y-axis is 1.5 SDs. The current studies show improved sensitivity to effect size and reduced bias in estimating effect size when the range of the y-axis was centered on the grand mean of the data and extended approximately 0.7 SDs in either direction. The various studies used slightly different extensions ranging from 0.5 SDs to 1 SD. There were not large detectable differences in sensitivity or bias depending on the exact range that was used, so the precise value of the y-axis range might not be critical. Rather, the key feature is that the visual size aligns with the actual size of the ef-fect. The specific range to be used might vary as a function of the size of the error bars (the range should be large enough to encompass them), the size of the effect (the range would have to be ex-tended for particularly large effects, such as was done with the current results), if doing so would make the range include nonsensical numbers (such as negative numbers for performance), and to achieve a consistent scale across multiple graphs to enhance across-graph comparisons. Given that the exact range in terms of SD could vary from plot to plot, it could be useful to indicate the range in SD units in the figure caption. This indication would be particularly useful in cases for which researchers do not include error bars. The current experiments explored graphs of stimulated data from between-subjects designs. The recommendations likely generalize to within-subject designs with the caveat that the y-axis Figure 3. Bias (as a percentage) is plotted as a function of graph axis condition for the three types of graphs across all 5 experiments. A negative bias corresponds to responding that effects are smaller than they are, and a positive bias corre-sponds to responding that effects are bigger than their actual size. In the left panel, mean bias across all experiments is shown. Error bars are 1 SEM calculated within-subjects, and are approximately the same size as the symbols. The y-axis range is 4 SD. The right panel shows bias for each participant for each experiment. The data are color-coded by experi-ment (e.g. red = Experiexperi-ment 1, orange = Experiexperi-ment 2) and are also ordered from left to right within graph type category. Each point corresponds to one participant, and each participant has one symbol for each of the three graph types. should be a function of the denominator used to cal-culate the within-subjects effect size. For example, the denominator for Cohen’s dz is the square root of the sum of the squares of the standard deviations minus the product of the standard deviations and the correlation between the two measures. Graphs plotting within-subjects data could be ± 0.75 times this denominator (or one of the other suggested measures for within-subjects effects sizes; e.g. Lakens, 2013). In cases for which there are both be-tween-subjects and within-subjects factors, the re-searchers will have to decide which denominator to use for the range depending on which effect they most want to emphasize. It is debatable whether the recommendation of-fered here should be employed with bar graphs. Some have shown that graphs that start at a position other than 0 are deceptive (e.g., Pandey et al., 2015). The idea is that bar graphs should always start at 0 because the height of the bar signifies the value of the condition being represented. When the y-axis starts at a value greater than 0, the height of the bar corresponds to the difference between the condi-tion’s value and the starting point, rather than the condition’s value itself. Consider the following ex-ample: imagine that group A scored 70% on a memory test and group B scored 60%. On a plot for which the y-axis starts at 50%, group A’s score would appear twice as big as group B’s score, even though they only scored 10% higher. The issue at hand concerns the visual impression of the data. If the graph gives the impression that the differences are big, and that aligns with the size of the effect, the graph would be produce compatibility between vi-sion and true effect size. If, however, the impresvi-sion is that one group’s performance was twice as good as the other group’s performance, this would pro-duce a misleading impression of the data. The cur-rent experiments cannot speak to which impression was experienced because participants were asked to rate the size of the effect as being no effect, small, medium, or big, rather than quantifying the size of one bar relative to another. The specific task used here did not permit measuring the spontaneous im-pression given by the graphs. One option is for re-searchers to use alternative types of graphs to avoid the issue. Alternatives include point graphs and a newly-designed type of graph called a hat graph (Witt, 2019). The recommendation to set the y-axis range to be 1.5 SDs does not generalize to fields for which the SD is unknown or irrelevant for interpreting effect size. For these fields, previous recommendations such as Tufte’s Lie Detector Ratio could be appro-priate (Tufte, 2001). But for scientific fields that rely on standard deviation to interpret effect size, this is the first empirically-based recommendation that provides clear guidelines for constructing graphs to communicate the magnitude of the effects. Maximizing compatibility between visual size and conceptual size improved comprehension of the ef-fects shown in the graphs. The data presented in the graphs were exactly the same, yet participants were less biased and were more sensitive to the size of the depicted effect when the axis range was one to two SDs. Furthermore, emphasizing SD and effect size in graph construction could help shift researchers’ focus to effect size, rather than statistical signifi-cance. Indeed, effect size (as measured with Co-hen’s d) provides a better measure for discriminat-ing real effects from null effects than p values or Bayes factors (Witt, 2019). Such a shift could help guard against practices that have contributed to re-cent failures to replicate in various scientific fields (Camerer et al., 2016; Open Science Collaboration, 2015). In his famous book on how to lie with statistics, Huff noted that as long as the y-axis is correctly la-beled, “nothing has been falsified – except the im-pression that it gives” (Huff, 1954, p. 62). The impres-sion matters. Researchers should select the range of the y-axis so that small effects look small and big effects look big (based on the field’s adopted con-ventions). A simple way to do this is to set the range to be 1.5 (or more) standard deviations of the de-pendent measure. That this improves graph com-prehension is both intuitive and is now supported by empirical evidence. Open Science Practices This article earned the Open Data and the Open Materials badge for making the data and materials openly available. It has been verified that the analy-sis reproduced the results presented in the article. The entire editorial process, including the open re-views, are published in the online Author contribution Witt is solely responsible for this manuscript. The author read and approved the final manuscript. This work was supported by grants from the Na-tional Science Foundation (1632222 and BCS-1348916). Conflict of Interest Statement The author declares there were no conflicts of interest. Barsalou, L. W. (1999). Perceptions of perceptual symbols. Behavioral and Brain Sciences, 22, 577-660. Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10(4), 389-396. doi: Bosco, F. A., Aguinis, H., Singh, K., Field, J. G., & Pierce, C. A. (2015). Correlational effect size benchmarks. Journal of Applied Psychology, 100(2), 431-449. doi: 10.1037/a0038047 Camerer, C. F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., . . . Chan, T. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433-1436. Cleveland, W. S., & McGill, R. (1985). Graphical Perception and Graphical Methods for Analyzing Scientific Data. Science, 229(4716), 828-833. doi: 10.1126/science.229.4716.828 Cohen, J. (1988). Statistical Power Analyses for the Behavioral Sciences. New York, NY: Routledge Collaboration, O. S. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi: Cumming, G., & Finch, S. (2005). Inference by eye: confidence intervals and how to read pictures of data. American Psychologist, 60(2), 170-180. doi: 10.1037/0003-066X.60.2.170 Few, S. (2012). Show Me the Numbers: Designing Tables and Graphs to Enlighten (Second Edition ed.). Burlingame, CA: Analytics Press. Glenberg, A. M., Witt, J. K., & Metcalfe, J. (2013). From revolution to embodiment: 25 years of cognitive psychology. Perspectives on Psychological Science, 8(5), 574-586. Huff, D. (1954). How to Lie with Statistics. New York, NY: W. W. Norton & Company. Kosslyn, S. M. (1994). Elements of Graph Design. New York: W. H. Freeman and Company. Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. doi: Morey, R. D., Rouder, J. N., & Jamil, T. (2014). BayesFactor: Computation of Bayes factors for common designs (Version 0.9.8), from Pandey, A. V., Rall, K., Satterthwaite, M. L., Nov, O., & Bertini, E. (2015). How Deceptive are Deceptive Visualizations?: An Empirical Analysis of Common Distortion Techniques. Paper presented at the Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea. Paterson, T. A., Harms, P. D., Steel, P., & Crede, M. (2016). An assessment of the magnitude of effect sizes: Evidence from 30 years of meta-analysis in management. Journal of Leadership & Organizational Studies, 23(1), 66-81. Revelle, W. (2018). psych: Procedures for Psychological, Psychometric, and Personality Research. Retrieved from R Core Team. (2017). R: A language and environment for statistical computing. Retrieved from Tufte, E. R. (2001). The Visual Display of Quantitative Information (Second Edition ed.). Cheshire, CT: Graphics Press. Witt, J. K. (2019). Introducing hat graphs. Retrieved from psyarxiv.com/sg37q. Witt, J. K. (2019). Insights into criteria for statistical significance from signal detection analysis. Meta-Psychology, 3, MP.2018.871. doi: Wong, D. M. (2010). The Wall Street Journal Guide to Information Graphics: The Dos & Don'ts of Presenting Data, Facts, and Figures. New York, Appendix: Experimental Details Experiment 1: Bar Graphs with Axis Range of 2 SD Participants judged the size of effects depicted in bar graphs that were constructed with three axis range options. Participants. Nine students in an introductory psychology course participated in exchange for course credit. In this and all subsequent experi-ments, the number of participants was maximized within a pre-determined time limit. Stimuli and Apparatus. Graphs were con-structed in R (R Core Team, 2017). For each graph, two means were generated. One mean was 50, and the other mean was 49, 47, 45, or 42. These equated to effect sizes of Cohen’s d = .1, .3, .5, and .8, respec-tively. To add some noise to each graph, each mean was drawn from a normal distribution centered on the desired mean with 1000 samples and a standard deviation of 10. The means were presented in bar graphs (see Figure A1). The left bar was white and labeled “Spaced” and the right bar was black and la-beled “Massed”. For each set of simulated data, three bar graphs were constructed that corre-sponded to the three y-axis range conditions. For the full graphs, the y-axis range went from 0 to 100. For the minimal graphs, the y-axis went from the smallest data value minus 1 to the largest data value plus 1. For the standardized graphs, the mean of the two groups was calculated, and 1 SD (10) was added in either direction to set the y-axis range. This pro-cess of creating 3 graphs for each set of data was re-peated 10 times for each of the 4 effect sizes for a total of 120 graphs. Graphs were 500 pixels by 500 pixels and were shown on a 19” computer monitors with 1028 x 1024 resolution. Procedure. After providing informed consent, each participant was seated at a computer. They were given the following instructions: “You will see graphs showing the effect of study style on final test performance. There were two study styles. Massed is like cramming everything at once at just before the exam. Spaced refers to studying a little bit every day for weeks before the exam. The y-axis shows final test performance, with higher value meaning better performance. Exp (SD Range) Full Standardized Minimal 1 (2) 2 (1.4) 3 (1.2) 4 (1.4) 5 (1) Figure A1. Sample stimuli for each of the 5 experiments. Each row corresponds to one experiment and shows a single set of a data plotted in the three different ways (full, standardized, and minimal). In all cases, the data show a medium effect (Cohen’s d = 0.5). The number in parentheses under the experiment number indicates the range of the standardized condition. For each graph, indicate if study style had 1. No effect, 2. A small effect, 3. A medium effect, 4. A big effect on final performance. Ready? Press ENTER”. A trial began with a fixation cross at the center of the screen for 500ms. The graph was then shown. Above the graph, text reminded participants of the four response options. The graph remained until participants made a response, at which point, the graph disappeared and a blank screen was shown for 500ms. Each block of trails consisted of the presen-tation of each of the 120 graphs (3 graph types x 4 depicted effect sizes x 10 repetitions). Order was randomized within block, and participants com-pleted 4 blocks for a total of 480 trials. Results and Discussion One participant only completed 431 trials, but their data were still included. The depicted effect size was recoded on a scale from 1 to 4 to be con-sistent with the scale of the response. The smallest effect size (d = .1) was coded as 1.5 to account for the idea that this effect is smaller than a small effect but bigger than no effect. In later experiments, these graphs were replaced with graphs for which there was no effect instead of d = .1. For each participant for each of the 3 axis range conditions, the data were submitted to separate lin-ear regressions with estimated effect size as the de-pendent factor and actual effect size (recoded on a scale from 1-4 then centered by subtracting 2.5) as the independent factor. The regressions produced two coefficients for each participant for each axis range condition. The slope indicates sensitivity to the size of the effect. A slope of 1 indicates perfect sensitivity. A slope less than 1 indicates attenuated sensitivity. The intercept indicates any bias to see effects as smaller or larger than their true size. One participant had slopes that were identified as outli-ers in the full and minimal conditions because they were greater than 1.5 times the interquartile range for each condition. This participant was excluded from the analysis (despite being the best performer in the group) because their data were not typical of the rest of the sample. Another participant had a slope less than 1.5 times the interquartile range in the full condition, and was also excluded for not be-ing typical of the rest of the sample. The coefficients were analyzed using paired-samples t-tests to compare each graph condition to the others. Analyses were done in R (R Core Team, 2017). Bayes factors were calculated using the BayesFactor package in R with a medium prior (Mo-rey, Rouder, & Jamil, 2014). A Bayes factor greater than 3 indicates moderate evidence, and a Bayes factor greater than 10 indicates substantial evidence for the alternative hypothesis over the null hypoth-esis. Conversely, a Bayes factor less than .33 and less than .10 indicates moderate and substantial evi-dence for the null hypothesis over the alternative hypothesis. Effect sizes were calculated using the recommendations of Lakens (2013), and 95% confi-dence intervals (CIs) on the effect size were calcu-lated using the cohen.d.ci function in the PSYCH package (Revelle, 2018). Figure A2. Mean response is plotted as a function of depicted effect size and graph type for Experi-ment 1. Error bars are 1 SEM calculated within-subjects. Solid lines represent linear regressions for depicted effects d ≥ .3. Dashed lines represent lin-ear regressions for depicted effects less than d ≤ .3. The standardized graphs produced significantly greater slopes than the full graphs, t(6) = 3.84, p = .009, dz = 1.45, 95% CIs [.33, 2.51], Bayes factor = 7.54 (see Figure A2). With the standardized y-axis range, participants were more sensitive to the differences in actual effect size (M = .47, SD = .11) compared with graphs that showed the full range from 0 to 100 (M = .30, SD = .07). Sensitivity was also better for the standardized graphs than the minimal graphs, t(6) = 3.61, p = .011, dz = 1.37, 95% CIs [.28, 2.40], Bayes fac-tor = 6.17. The minimal graphs (M = .28, SD = .04) produced sensitivity similar to the full graphs, p = .51, dz = .26, 95% CIs [-.50, 1.01], Bayes factor = 0.43. These data show an advantage for the standard-ized graphs because participants were more sensi-tive to differences among magnitudes of the de-picted effect sizes with the standardized graphs than with the full or minimal graphs. However, the standardized graphs led to performance that was far from perfect. The slope was .47, and perfect perfor-mance would have produced slopes of 1. Thus, even though the standardized graphs signify an improve-ment over the other two options, more work is still necessary to improve graph comprehension. Another advantage for the standardized graphs can be seen with respect to bias. Bias scores were calculated as a percentage score of underestimation (negative values) and overestimation (positive val-ues). They were calculated as the participant’s co-efficient for the intercept minus the true intercept (2.5) divided by the true intercept. There were sig-nificant differences between the bias scores across all conditions, ps < .003. The bias scores for the full graphs was negative (M = -27%, SD = 10%) and sig-nificantly below 0, t(6) = -7.01, p < .001, dz = 2.64, 95% CIs [1.00, 4.27], Bayes factor = 82. The bias scores for the minimal graphs were positive (M = 36%, SD = 19%) and significantly above 0, t(6) = 4.91, p = .003, dz = 1.86, 95% CIs [.57, 3.10], Bayes factor = 19. In contrast, the bias scores for the standardized graphs were significantly less biased than in the other con-ditions (ps < .003), and were not significantly differ-ent from 0 (M = 1%, SD = 4%), t(6) = 0.47, p = .66, dz = .18, 95% CIs [-.58, .82], Bayes Factor = 0.39. With the full graphs, most effects looked like small effects. Indeed, 91% of the trials with the full graphs were labeled as showing no effect or a small effect. With the minimal graphs, 58% of the effects were labeled as big effects and 88% were labeled as medium or big. With the standardized graphs, small effects looked small and medium effects looked medium (see Figure A3). However, the big effects only looked medium. Thus, the experiment was replicated but with a smaller range in the standardized condition to determine if that would improve detection of big effects. Experiment 2: Bar Graphs with Axis Range of 1.4 SD Standardized graphs, for which the y-axis range is a function of the standard deviation, produced better sensitivity and less bias in participants who judged the size of the depicted effect compared with graphs that showed the full range and graphs that showed only the minimal range necessary to see the data. However, sensitivity with the standardized graphs was still below perfect performance. In this experiment, the range of the standardized graphs was decreased from 2 SDs to 1.4 SDs. Fourteen students in an introductory psychology course participated in exchange for course credit. Everything was the same in Experiment 1 except for the construction of the standardized graphs, for which the y-axis range went from the group mean minus 0.7 SD to the group mean plus 0.7 SD (see Fig-ure A1). Thus, the standardized range was 1.4 SD (in-stead of 2 SD as in Experiment 1). In addition, the simulated data were evaluated to ensure that the outcomes were similar to the intended outcomes. The effect size of the simulated data were compared to the intended effect size, and if they differed by more than 0.1, the data were resampled until the dis-crepancy was less than 0.1. Participants completed 4 blocks of 120 trials, and order was randomized within block. Figure A3. Response is plotted as a function of depicted effect size for the three types of axis range condi-tions (full, minimal, and standardized) for Experiment 1. The bottom right panel shows the correct response. Response was entered as 1 (no effect), 2 (small effect), 3 (medium effect), and 4 (big effect). Each point cor-responds to one participant’s response on one trial. The data have been jittered along both axes to enable visibility. Results and Discussion The data were analyzed as before. Three partic-ipants had a slope that was deemed an outlier for being beyond at least 1.5 times the interquartile range for the full or minimal graphs. The slope, which indicates sensitivity to the size of the effect in the graph, was greater for the stand-ardized graphs (M = .54, SD = .17) than the full graphs (M = .31, SD = .08), t(10) = 3.46, p = .006, dz = 1.04, 95% CIs [.28, 1.77], Bayes factor = 9.00 (see Figure A4). Sensitivity was also greater for the standardized graphs than the minimal graphs (M = .30, SD = .06), t(10) = 4.07, p = .002, dz = 1.23, 95% CIs [.42, 2.00], Bayes factor = 20. Replicating Experiment 1, the cur-rent data show that setting the range of the y-axis to be a function of the standard deviation, rather than the full range of options or the minimal range necessary to show the data, improved graph com-prehension. Recall, participants were not asked to indicate how big the effect looked but rather how big the effect was. Full and minimal graphs both produced misleading impressions of the data that severely attenuated sensitivity to effect size. Simply setting the range of the y-axis in relation to the standard deviation improved readers’ sensitivity to the data. Figure A4. Mean response is plotted as a function of depicted effect size and graph type for Experi-ment 2. Error bars are 1 SEM calculated within-subjects. Solid lines represent linear regressions for depicted effects d ≥ .3. Dashed lines represent lin-ear regressions for depicted effects less than d ≤ .3. Bias was again found for the full and minimal graphs but not the standardized graphs. For the full graphs, the bias was to underestimate effect size by 28% (SD = 9%), t(10) = -10.51, p < .001, dz = 3.17, 95% CIs [1.67, 4.64], Bayes factor > 100. Indeed, of all the trials with the full graphs, the effect was labeled as small or no effect on 90% of responses. The bias was of a similar magnitude but in the opposite direction for the minimal graphs, t(10) = 4.91, p < .001, dz = 1.48, 95% CIs [.59, 2.33], Bayes factor = 61. With the min-imal graphs, participants overestimated the size of the effects by 31% (SD = 21%). Over half of all effects with the minimal graphs were labeled big (53%), and 81% were labeled as medium or big. In contrast, the bias was much smaller (M = 6%, SD = 9%) for the standardized graphs, and only marginally signifi-cantly different from 0, t(10) = 2.13, p = .059, dz = .64, 95% CIs [-.02, 1.28], Bayes factor = 1.50. The bias with the standardized graphs was far less than the biases observed with the full and minimal graphs, ps < .001. The evidence thus far is clear: graphs with a y-axis range that is a function of the standard devia-tion produces better sensitivity and less bias in par-ticipants when they are tasked with judging the size of an effect, compared with graphs that present the full range and with graphs that present only the minimal range necessary to view all of the data. Experiment 3: Bar Graphs with Error Bars The graphs in Experiments 1 and 2 did not contain error bars. As a result, the graphs did not contain enough information to know if an effect was null, small, medium, or big. This was a conscious decision given that introductory psychology students might not know how to interpret error bars. Yet, it is nec-essary to know if standardized graphs still produce an advantage even when there is enough infor-mation presented in the graphs to be able to accu-rately answer the question. In addition, the graphs with the smallest effects in Experiments 1 and 2 had the awkward feature of being bigger than no effect but smaller than a “small” effect, so it was unclear whether the correct answer should be 1 or 2. This ambiguity was eliminated in the current experiment. Thirteen students in an introductory psychology course participated in exchange for course credit. Graphs were constructed similarly as in Experi-ment 2 with the following exceptions. The four ef-fect sizes that were modeled were Cohen’s d = 0, .3, .5, and .8, which corresponds to no effect, a small ef-fect, a medium efef-fect, and a big efef-fect, respectively. The data were simulated as coming from two inde-pendent groups of 100 participants. The mean used to model the data for the hypothetical group that used the spaced studying strategy was always 50 (as in 50% accuracy on a memory test). The mean used to model the data for the hypothetical group that used the massed studying strategy was 50 minus 0, 3, 5, or 8 depending on the effect size being mod-eled. Using these means and a SD of 10, data were sampled from a normal distribution and summarized for the graphs. Error bars were calculated as 95% confidence intervals. In addition to the instructions given in Experiments 1 and 2, participants were also told the following: “Important! An effect is statisti-cally significant if p < .05. However, you can also as-sess statistical significance by looking at error bars. Error bars are lines that extend from the mean of each condition. The mean of each condition is shown by the top of the bar. If the error bar from one condition overlaps the mean from the other condition, the effect is NOT significant. If neither bar overlaps the mean of the other condition, then the effect is significant. The farther apart the error bars, the bigger the effect.” Note that this rule of thumb is overly simplified. There can be cases for which the error bars overlap but the effect is statis-tically significant at the p < .05 level (Cumming & Finch, 2005), but this level of nuance was not pre-sented to the participants. For each set of simulated data, 3 graphs were constructed. For the full graphs, the y-axis range went from 0 to 100. For the standardized graphs, the y-axis range went from the grand mean minus 0.6 SD to the grand mean plus 0.6 SD. For the min-imal graphs, the bottom of the y-axis range was the smallest combination of the mean minus the lower confidence interval minus 0.1, and the top of the range was the biggest combination of the mean plus the upper confidence interval plus 0.1. Participants completed 4 blocks of 120 randomized trials. Results and Discussion The data were analyzed as before. One partici-pant had a negative slope for the standardized graphs, and another participant had a high slope for the full graphs. Both were 1.5 times beyond the in-terquartile range and excluded from analyses. Figure A5. Mean response is plotted as a function of depicted effect size and graph type for Experiment 3. Er-ror bars are 1 SEM calculated within-subjects. Solid lines represent linear regressions for depicted effects d ≥ .3. Dashed lines represent linear regressions for depicted ef-fects less than d ≤ .3. The slopes were steeper, showing better sensi-tivity, for the standardized graphs (M = .62, SD = .19) compared with the full graphs (M = .24, SD = .09) and the minimal graphs (M = .55, SD = .20). The differ-ence in slopes between the standardized and full graphs was significant, t(10) = 7.76, p < .001, dz = 2.34, 95% CIs [1.16, 3.50], Bayes factor > 100. The differ-ence in slopes between the standardized versus minimal graphs was also significant, t(10) = 3.09, p = .011, dz = .93, 95% CIs [.20, 1.63], Bayes factor = 5.46. Even though all the information was the same across the three graph conditions and even though this in-formation was sufficient for determining the size of each effect, participants were better able to deter-mine effect size when the range of the y-axis was a function of the standard deviation (see Figure A5). The impression given by Figure A3 indicates that sensitivity was just as good if not better for the min-imal graphs than the standardized graphs when comparing no effect to a small effect (ds = 0 and .3), but sensitivity was better (steeper) for the standard-ized graphs when comparing across small, medium, and big effects (ds = .3, .5, and .8). This impression prompted an unplanned analysis. Linear regres-sions were again conducted for each participant for each graph condition. However, in one set of re-gressions, only effect sizes 0 and .3 were included. In another set of regressions, only effect sizes .3, .5, and .8 were included. Two additional participants were identified as outliers because the slopes for all three graphs in the latter analysis were 1.5 times be-yond the interquartile range, and were excluded from the remaining analyses. With respect to determining whether or not an effect is present (by comparing slopes for graphs de-picting ds = 0 and .3), all three graph types led to similar performance (Standardized: M = .89, SD = .40; Full: M = .49, SD = .22; Minimal: M = 1.02, SD = .45). With all three types of graphs, participants were sensitive to whether or not there was an effect, as shown by coefficients for each graph type that were positive and significantly greater than 0, ps < .001. The standardized graph produced some benefit over the full graphs, t(8) = 2.82, p = .022, dz = .85, 95% CIs [.14, 1.53], Bayes factor = 3.76. The standardized graph was no better, and marginally worse, than the minimal graphs, t(8) = -1.82, p = .11, dz = .55, 95% CIs [-.10,1.17], Bayes factor = 1.03. It should be noted that a bias to see all effects as being bigger (as found with minimal graphs) would lead to a steeper slope when comparing just the graphs that depict a null effect and a small effect. Thus, it cannot be known whether sensitivity is better with the minimal graphs or if the bias caused by the minimal graphs leads to greater estimates of sensitivity. With respect to determining the magnitude of an effect that is present (by comparing slopes for graphs depicting ds = .3, .5, and .8), the standardized graphs produced better sensitivity than the full or minimal graphs (Standardized: M = .46, SD = .11; Full: M = .09, SD = .06; Minimal: M = .25, SD = .13), ps ≤ .001. The comparison between the standardized graphs to the full graphs resulted in a Bayes factor greater than 100, dz = 2.81, 95% CIs [1.45, 4.14]. The comparison between the standardize graphs to the minimal graphs resulted in a Bayes factor of 65, dz = 1.50, 95% CIs [.60, 2.35]. In each of the three graph types, participants showed some level of sensitivity to the magnitude of the effect, as shown by the co-efficients being significantly greater than 0, ps < .003. In addition to better sensitivity with the stand-ardized graphs, the standstand-ardized graphs also pro-duced less bias compared with the other graphs, ps <= .001. For the full graphs, there was a 28% bias (SD = 12%) to underestimate effect size, which was sig-nificantly different from 0, t(10) = -7.82, p < .001, dz = 2.36, 95% CIs [1.17, 3.52], Bayes factor > 100. For the minimal graphs, there was a 14% bias (SD = 24%) to overestimate the size of the effect, which was marginally significantly from 0, t(10) = 2.03, p = .069, dz = .61, 95% CIs [-.05, 1.25], Bayes factor = 1.33. For the standardized effects, the bias was 7% (SD = 19%) and was not significantly different from 0, t(10) = 1.29, p = .227, dz = .39, 95% CIs [-.23, .99], Bayes fac-tor = .58. In summary, even with error bars, graphs with the y-axis range set as a function of the standard de-viation produced better sensitivity and less bias compared with graphs that showed the full range and graphs that showed only the minimal range nec-essary to see the data. Experiment 4: Line Graphs with Axis Range of 1.4 SD The current experiment used line graphs as stim-uli instead of bar graphs to see if the previous rec-ommendations generalized to a different kind of graph. Twenty students in an introductory psychology course participated in exchange for course credit. Stimuli were graphs that were constructed by sim-ulating data from two groups, and connecting their means with a line to create an impression of data across four groups. The four effect sizes that were modeled were Cohen’s d = 0, .3, .5, and .8, which cor-responds to no effect, a small effect, a medium ef-fect, and a big efef-fect, respectively. The y-axis range was full (0-100), minimal (smallest value minus 1 to largest value plus 1), or standardized (group mean minus 0.7 SD to the group mean plus 0.7 SD). Eve-rything else was the same as in the previous experi-ments, except the x-axis was labeled as hours spent studying on a range from 1-4. Results and Discussion The data are shown in Figure A6. The data were analyzed as before with three separate linear re-gressions for each participant for each graph type for each combination of all effect sizes, d = 0 and .3 only, and d = .3 - .8 only. One participant had slopes greater than 1.5 times the interquartile range for the full and minimal graphs, and 3 participants had slopes less than 1.5 times the interquartile range for the minimal graphs. All 4 were excluded. For regressions on all effect sizes depicted in the graphs, the standardized graphs lead to greater slopes than the full graphs, t(15) = 7.16, p < .001, dz = 1.79, 95% CIs [.98, 2.59], Bayes factor > 100 (see Table A1). The standardized graphs did not lead to signifi-cantly different slopes than the minimal graphs when calculated across the entire range, t(15) = 0.18, p = .86, dz = .05, 95% CIs [-.45, .53], Bayes factor = .26. However, this is because the minimal graphs produced superior performance with respect to de-termining whether there was an effect or not but in-ferior performance when an effect was present and the magnitude had to be determined. For regres-sions comparing d = 0 to d = .3, the slopes for the minimal graphs were higher than for the standard-ized graphs, t(15) = -4.70, p < .001, dz = 1.17, 95% CIs [.52, 1.81], Bayes factor > 100. Again, recall that the bias generated by the minimal graphs to see effects as bigger would produce greater sensitivity scores even if participants were not necessarily more sen-sitive to the effect. Indeed, the slope coefficient is 1.29, which is greater than perfect accuracy of 1, which implies some bias. For regressions comparing ds > 0, the slopes for the standardized graphs were higher than for the minimal graphs, t(15) = 3.05, p = .008, dz = .76, 95% CIs [.19, 1.31], Bayes factor = 6.46. This suggests that the standardized graphs still pro-duced better outcomes than the full or minimal graphs. Figure A6. Mean response is plotted as a function of depicted effect size and graph type for Experi-ment 4. Error bars are 1 SEM calculated within-subjects. Solid lines represent linear regressions for depicted effects d ≥ .3. Dashed lines represent lin-ear regressions for depicted effects less than d ≤ .3. Table A1. Mean (and SD) coefficients for the slopes for each graph type for each analysis from Experiment 4. Graph Type All data ds = .3-.8 ds = 0 - .3 Full .30 (.08) .15 (.08) .61 (.22) Standardized .61 (.15) .52 (.18) .86 (.32) Minimal .61 (.06) .31 (.25) 1.29 (.56) Note. The slopes indicate the linear relationship between the size of the effect depicted and the estimate of the ef-fect size, both of which were coded on a scale from 1-4. Regarding bias, similar results were found as in previous experiments. The bias was -26% (SD = 11%) with the full graphs, indicating a bias to underesti-mate the effects, t(15) = -9.52, p < .001, dz = 2.38, 95% CIs [1.39, 3.34], Bayes factor > 100. The bias was 19% (SD = 17%) with the minimal graphs, indicating a bias to overestimate the size of the effects, t(15) = 4.36, p < .001, dz = 1.09, 95% CIs [.46, 1.70], Bayes factor = 64. With the standardized graphs, the bias was 2% (SD = 10%), which was not significantly different from 0, t(14) = 0.73, p = .48, dz = .18, 95% CIs [-.31, .67], Bayes factor = .32. With the line graphs, as with the bar graphs, the standardized axis range produced better sensitivity and less bias than the full axis range or the minimal axis range. Experiment 5: Line Graphs with Axis Range of 1 SD The current experiment replicated Experiment 4 using a smaller axis range for the standardized graphs. Fifteen students in an introductory psychology course participated in exchange for course credit. The stimuli were the same as in Experiment 4 except that for the standardized graphs, the range was the group mean ± 0.5 SD. Results and Discussion The data were analyzed as before with three sep-arate linear regressions for each participant for each graph type for each combination of all effect sizes, d = 0 and .3 only, and d = .3 - .8 only. One participant had a slope that was less than 1.5 times the inter-quartile range for the minimal graphs, and one had a slope greater than 1.5 times the interquartile range for the full graphs. Both were excluded. The mean slope coefficients for the remaining participants are shown in Table A2 and the data are shown in Figure A7. Table A2. Mean (and SD) coefficients for the slopes for each graph type for each analysis from Experiment 5. Graph Type All data ds = .3-.8 ds = 0 - .3 Full .32 (.12) .20 (.13) .58 (.22) Standardized .55 (.20) .48 (.17) .76 (.49) Minimal .53 (.16) .36 (.25) .96 (.52) Figure A7. Mean response is plotted as a function of depicted effect size and graph type for Experi-ment 5. Error bars are 1 SEM calculated within-subjects. Solid lines represent linear regressions for depicted effects d ≥ .3. Dashed lines represent lin-ear regressions for depicted effects less than d ≤ .3. The patterns match those found in Experiment 4. Participants were more sensitive to the size of the effect for the standardized graphs than for the full graphs when all trials were included, t(13) = 4.41, p < .001, dz = 1.22, 95% CIs [.48, 1.94], Bayes factor = 46, and when trials for which the effect size depicted was greater than 0, t(13) = 6.69, p < .001, dz = 1.86, 95% CIs [.93, 2.76], Bayes factor > 100, but not when only trials for which the effect size depicted was null or small, t(13) = 1.41, p = .19, dz = .39, 95% CIs [-.18, .95], Bayes factor = .62. Participants were more sen-sitive to the size of the effect for the standardized graphs than for the minimal graphs but only when the depicted effect in the graph was greater than 0, t(13) = 2.59, p = .023, dz = .72, 95% CIs [.09, 1.32], Bayes factor = 2.88. There was no difference in sen-sitivity across all effect sizes, p = .65, Bayes factor = .31, and the minimal graphs produced better sensi-tivity when only data from graphs depicting a null or small effect were included, t(13) = -3.11, p = .009, dz = .86, 95% CIs [.21, 1.49], Bayes factor = 6.27. As be-fore, the bias created by the minimal graphs could account for this apparent increase in sensitivity. Regarding the bias, the full graphs produced a bias of -15% (SD = 17%), indicating a bias to underes-timate effect size, t(12) = -3.07, p = .010, dz = .85, 95% CIs [.20, 1.48], Bayes factor = 5.90. The minimal graphs produced a bias of 12%, (SD = 20%), which was marginally above 0, t(12) = 2.21, p = .047, dz = .61, 95% CIs [.01, 1.20], Bayes factor = 1.69. The stand-ardized graphs led to a small bias of 6% (SD = 14%), that was marginally close to 0, t(12) = 1.63, p =.13, dz = .45, 95% CIs [-.13, 1.01], Bayes factor = .80. Across Experiment Comparisons Sample size was not selected to achieve sufficient power to do analyses across experiments. To facili-tate preliminary exploration of the data, the coeffi-cients are reported in Tables S3, S4, and S5, and are plotted in Figure A8 and Figures 2 and 3 in the main text. It may be interesting to note that sensitivity of the size of the effect was not notably better with er-ror bars than without erer-ror bars even though erer-ror bars are necessary to understand effect size. Alt-hough this may not be surprising given the partici-pants being introductory psychology student, the pattern is consistent with previous findings that many researchers do not know how to interpret er-ror bars (Belia, Fidler, Williams, & Cumming, 2005). In addition, the lack of noticeable differences in sen-sitivity between the experiments suggests that the use of a y-axis range that is approximately 1.5 SDs could help better report the results in cases for which researchers neglect to include error bars. Table A3. Mean slopes (and standard deviations) from regressions on all trials for each of the 5 experiments. Graph Type Exp 1 Exp 2 Exp 3 Exp 4 Exp 5 Full .28 (.10) .31 (.08) .21 (.08) .30 (.08) .32 (.13) Standardized .46 (.13) .54 (.17) .58 (.17) .61 (.15) .55 (.20) Minimal .27 (.03) .30 (.06) .49 (.15) .61 (.06) .53 (.16) Note. A slope of 1 indicates perfect performance and a slope of 0 indicates chance performance. Table A4. Mean slopes (and standard deviations) from regressions on trials for which Cohen’s d > 0.1 for each of the 5 experiments. Graph Type Exp 1 Exp 2 Exp 3 Exp 4 Exp 5 Full .17 (.10) .18 (.10) .09 (.06) .15 (.08) .20 (.13) Standardized .42 (.16) .46 (.16) .46 (.11) .52 (.18) .48 (.17) Minimal .07 (.09) .13 (.14) .25 (.13) .31 (.25) .36 (.25) Table A5. Mean bias scores as a percentage (and standard deviations) for each of the 5 experiments. Graph Type Exp 1 Exp 2 Exp 3 Exp 4 Exp 5 Full -27 (5) -28 (9) -25 (10) -26 (11) -15 (17) Standardized 1 (4) 6 (9) 14 (13) 2 (10) 6 (14) Minimal 36 (21) 31 (21) 23 (16) 19 (17) 12 (20) Note. Bias scores were calculated as a percent bias based on intercepts from regressions on all trials including those for which Cohen’s d = 0.
{"url":"https://5dok.org/document/4zpn8dry-graph-construction.html","timestamp":"2024-11-09T23:46:17Z","content_type":"text/html","content_length":"214401","record_id":"<urn:uuid:61e1045d-7b22-45bb-9da8-aa5273e47960>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00169.warc.gz"}
Real-World Applications of Kinetic Energy (Transportation, Sports, etc.) in context of kinetic energy to work 31 Aug 2024 Title: Harnessing the Power of Kinetic Energy: Real-World Applications in Transportation and Beyond Kinetic energy, a fundamental concept in physics, plays a crucial role in various real-world applications across transportation, sports, and other domains. This article explores the significance of kinetic energy to work (W) in these contexts, highlighting its impact on efficiency, performance, and sustainability. Kinetic energy (KE) is the energy of motion, calculated as half the product of an object’s mass (m) and velocity squared (v^2): KE = 0.5 * m * v^2. In transportation systems, kinetic energy is a vital component in determining the efficiency and performance of vehicles. Transportation Applications: 1. Electric Vehicles: Electric cars convert electrical energy into kinetic energy through electric motors. The efficiency of these conversions depends on the motor’s power output (P) and the vehicle’s mass (m): Efficiency = P / (KE * m). 2. Hybrid Vehicles: Hybrid vehicles combine internal combustion engines with electric motors, optimizing kinetic energy usage to improve fuel efficiency. 3. High-Speed Transportation: Maglev trains and hyperloops rely on kinetic energy to propel vehicles at high speeds, reducing travel times and increasing efficiency. Sports Applications: 1. Athletic Performance: Kinetic energy is critical in sports like track and field, where athletes strive to maximize their speed and power output. 2. Golf Swing Dynamics: Golfers aim to generate optimal kinetic energy through proper swing mechanics, resulting in increased ball speed and distance. 3. Cycling Efficiency: Cyclists focus on optimizing their pedaling technique to convert kinetic energy into forward motion, reducing energy expenditure. Other Applications: 1. Wind Energy: Wind turbines harness kinetic energy from wind flows to generate electricity. 2. Hydroelectric Power: Hydroelectric power plants tap into the kinetic energy of moving water to produce electricity. 3. Aerospace Engineering: Kinetic energy plays a crucial role in spacecraft propulsion, navigation, and maneuverability. Kinetic energy is a vital component in various real-world applications, from transportation systems to sports and beyond. Understanding the relationship between kinetic energy and work (W) = ∫F * dx, where F is force and x is displacement, is essential for optimizing performance, efficiency, and sustainability. As technology continues to evolve, harnessing the power of kinetic energy will remain a crucial aspect in shaping our future. 1. Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics. 2. Halliday, D., Resnick, R., & Walker, J. (2013). Fundamentals of Physics. 3. Serway, R. A., & Jewett, J. W. (2014). Physics for Scientists and Engineers. Related articles for ‘kinetic energy to work ‘ : • Reading: **Real-World Applications of Kinetic Energy (Transportation, Sports, etc.) in context of kinetic energy to work ** Calculators for ‘kinetic energy to work ‘
{"url":"https://blog.truegeometry.com/tutorials/education/df2077d57f3e3fbe7041a1b0f674cb8c/JSON_TO_ARTCL_Real_World_Applications_of_Kinetic_Energy_Transportation_Sports_.html","timestamp":"2024-11-06T09:03:56Z","content_type":"text/html","content_length":"17585","record_id":"<urn:uuid:a5f29775-cf69-441b-ba33-5cc4b6551702>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00170.warc.gz"}
Sign and Zero Restricted VAR Add-In Authors and guest post by Davaajargal Luvsannyam and Ulziikhutag Munkhtsetseg In our previous blog entry, we discussed the sign restricted VAR (SRVAR) add-in for EViews. Here, we will discuss imposing a further zero restrictions on the impact period of the impulse response function (IRF) using the ARW and SRVAR add-ins in tandem. Table of Contents Note that it is certainly possible to impose both sign and exclusion restrictions. For example, Mountford and Uhlig (2009) are motivated by the idea that fiscal policy shocks are identified as orthogonal to both monetary policy and business cycle shocks, and use a penalty function approach (PFA) to impose zero restrictions. (For details on the PFA, please see our SRVAR blog entry.) They also considered anticipated government revenue shocks in which government revenue is restricted to rise one year following some impulse. Furthermore, Beaudry, Nam, and Wang (2011) estimate a structural VAR model including total factor productivity, stock prices, real consumption, real federal funds rate and hours worked. They use the PFA to show that a positive optimism shock causes an increase in both consumption and hours worked. Recently, Arias, Rubio-Ramirez, and Waggoner (2018), henceforth ARW, developed algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify SRVARs. They showed the dangers of using the PFA when implementing sign and zero restrictions together to identify structural VARs (SVARs). Orthogonal Reduced-Form Parameterization ARW focus on two SVAR parameterizations. In addition to the classical structural parameterization, they show that SVARs can also be written as a product of a reduced-form parameters and a set of orthogonal matrices. This is called the orthogonal reduced-form parameterization, henceforth, ORF. The algorithms ARW propose draw from a conjugate posterior distribution over the ORF and then transform said draws into a structural parameterization. In particular, they use the normal-inverse-Wishart distribution as the prior conjugate distribution, and develop a change of variable theory that characterizes the induced family of densities over the structural parameterization. This theory shows that a uniform-normal-inverse-Wishart density over the ORF parameterization induces a normal-generalized-normal density over the structural parameterization. To motivate their contribution, ARW first show that existing algorithms for SVARs identified only by sign restrictions, conditional on a sign restriction using the change of variable theory, operate on independent draws from the normal-generalized-normal distribution over the structural parameterization. These algorithms independently draw from the uniform-normal-inverse-Wishart distribution over the ORF parameterization and only accept draws that impose a sign restriction. Next, ARW generalize these algorithms to also consider zero restrictions. The key to this generalization is that, conditional on the reduced-form parameters, the class of zero restrictions on the structural parameters maps to linear restrictions on the orthogonal matrices. The resulting approach generalization independently draws from normal-inverse-Wishart over the reduced-form parameters and from the set of orthogonal matrices such that the zero restrictions hold. In this regard, conditional on the zero restrictions, they show that this generalization does not induce a distribution over the structural parameterization from the family of normal-generalized-normal distributions. Furthermore, they derive the induced distribution and write an importance sampler that, conditional on the sign and zero restrictions, independently draws from normal-generalized-normal distributions over the structural parameterization. To formalize these ideas, consider the SVAR with the general form: Y_t^{\prime} A_{0} = \sum_{i=1}^{p} Y_{t-i}^{\prime}A_{i} + c + \epsilon_t^{\prime}, \quad t=1, \ldots, T \label{eq1} where $ Y_t $ is an $ n\times 1 $ vector of endogenous variables, $ A_i $ are parameter matrices of size of $ n\times n $ with $ A_{0} $ invertible, $ c $ is a $ 1\times n $ vector of parameters, $ \epsilon_t $ is an $ n\times 1 $ vector of exogenous structural shocks, $ p $ is the lag length, and $ T $ is the sample size. We can also summarize equation \eqref{eq1} as follows: Y_{t}^{\prime}A_{0} = X_{t}^{\prime}A_{+} + \epsilon_{t}^{\prime} \label{eq2} where $ A_{+}^{\prime} = \left[A_{1}^{\prime}, \ldots, A_{p}^{\ prime}, c^{\prime}\right]$ and $ X_{t}^{\prime} = \left[Y_{t-1}^{\prime}, \ldots, Y_{t-p}^{\prime}, 1\right] $. The reduced form can now be written as: Y_{t}^{\prime} = X_{t}^{\prime}B + u_{t}^{\prime} \label{eq3} where $ B = A_{+}A_{0}^{-1}, u_{t}^{\prime} = \epsilon_{t}^{\prime}A_{0}^{-1} $, and $ E(u_{t}u_ {t}^{\prime}) = \Sigma = \left(A_{0}A_{0}^{\prime}\right)^{-1} $. Naturally, $ B $ and $ \Sigma $ are the reduced form parameters. We can further write equation \eqref{eq3} as the orthogonal reduced-form parameterization Y_{t}^{\prime} = X_{t}^{\prime}B + \epsilon_{t}^{\prime}Q^{\prime}h(\Sigma) \label{eq4} where the $ n\times n $ matrix $ h(\Sigma) $ is the Cholesky decomposition of covariance matrix $ \Sigma $. Given equations \eqref{eq2} and \eqref{eq4}, in addition to the Cholesky decomposition $ h $, we can define a mapping between $ \left(A_{0}, A_{+}\right) $ and $ (B, \Sigma, Q) $ by: f_{h}\left(A_ {0}, A_{+}\right) = \left(A_{+}A_{0}^{-1}, \left(A_{0}A_{0}^{\prime}\right)^{-1}, h\left(\left(A_{0}A_{0}^{\prime}\right)^{-1}\right)A_{0}\right) \label{eq5} where the first element of the triad on the right corresponds to $ B $, the second to $ \Sigma $, and the third to $ Q $. Note further that the function $ f_{h} $ is invertible with inverse defined by: f_{h}^{-1} (B,\Sigma, Q) = \left(h(\Sigma)^{-1}Q, Bh(\Sigma)^{-1}Q\right) \label{eq6} where the first term on the right corresponds to $ A_{0} $ and the second to $ A_{+} $. Thus, the ORF parameterization makes clear how the structural parameters depend on the reduced form parameters and orthogonal matrices. ARW Algorithms Although ARW propose three different algorithms, the most important is in fact the third. The latter draws from a distribution over the ORF parameterization conditional on the sign and zero restriction and then transforms the draws into the structural parameterization. Since Algorithm 3 also depends on Algorithm 2, we present the latter here and recommend readers to refer to the supplementary materials of ARW (2018) if they require further details. Algorithm 2 Let $ Z_j $ define the zero restriction matrix on the $ j^{\text{th}} $ structural shock, and let $ z_{j} $ denote the number of zero restrictions associated with the $ j^{\text{th}} $ structural shock. Then: 1. Draw $ (B, \Sigma) $ independently from Normal-inverse-Wishart distribution. 2. For $ j \in \{1, \ldots, n\} $ draw $ X_{j} \in \mathbf{R}^{n+1-j-Z_{j}} $ independently from a standard normal distribution and set $ W_{j} = X_{j} / ||X_{j}||$. 3. Define $ Q = [q_{1}, \ldots q_{n}] $ recursively as $ q_{j} = K_{j}W_{j} $ for any matrix $ K_{j} $ whose columns form an orthonormal basis for the null space of the $ (j-1+z_{j})\times n $ matrix M_{j} = \left[q_{1}, \ldots, q_{j-1},\left(Z_{j}F\left(f_{h}^{-1}(B, \Sigma, I_{n})\right)\right)\right] 4. Set $ (A_{0},A_{+}) = f_{h}^{-1}(B,\Sigma,Q) $. Algorithm 3 Let $ \mathcal{Z} $ denote the set of all structural parameters that satisfy the zero restrictions, and define $ v_{(g^{\circ}f_{h})|\mathcal{Z}} $ as te volume element. Then: 1. Use Algorithm 2 to independently draw $ (A_{0}, A_{+}) $. 2. If $ (A_{0}, A_{+}) $ satisfies the sign restrictions, set its importance weight to $$ \frac{|\det(A_{0})|^{-(2n+m+1)}}{v_{(g^{\circ}f_{h})|\mathcal{Z}}(A_{0}A_{+})} $$ otherwise, set its importance weight to zero. 3. Return to Step 1 until the required number of draws has been obtained. 4. Re-sample with replacement using the importance weights. ARW EViews Add-in Now we turn to the implementation of the ARW add-in. First, we need to download and install the add-in from the EViews website. The latter can be found at https://www.eviews.com/Addins/arw.aipz. We can also do this from inside EViews itself. In particular, after opening EViews, click on Add-ins from the main menu, and click on Download Add-ins.... From here, locate the ARW add-in and click on Figure 1: Add-in installation After installing, we open the data file named as data.WF1 which can be found in the installation folder, typically located in [Windows User Folder]/Documents/EViews Addins/ARW. Figure 2: ARW (2018) Data We now replicate Figures 1 and Table 3 from ARW. We can of course do this in EViews as follows. 1. Click on the Add-ins menu item in the main EViews menu, and click on Sign restricted VAR. 2. Under Endogenous variables enter tfp stock cons ffr hour. 3. Check the Include constant option. 4. Under Number of lags, enter 4. 5. In the Sign restriction vector textbox enter +2. 6. Under Sign restriction method check Penalty. 7. In the Number of horizons enter 40 8. Under Zero restriction textbox enter tfp. 9. Check the variance decomposition box. 10. Hit OK. Figure 3: SRVAR Add-in (PFA) The steps above produce the following output (Panel A of Figure 1 of ARW): Figure 4: PFA Output Next, we invoke the ARW add-in and proceed with the ARW Algorithm 3. 1. Click on the Add-ins menu item in the main EViews menu, and click on Sign and zero restricted VAR. 2. Under Endogenous variables enter tfp stock cons ffr hour. 3. Check the Include constant option. 4. Under Number of lags, enter 4. 5. In the Sign restriction vector textbox enter +stock. 6. In the Zero restrictions textbox enter tfp. 7. UnderNumber of steps enter 40. 8. Check the variance decomposition box. 9. Hit OK. Figure 5: ARW Add-in (Importance Sampler) The steps above produce the following output (Panel B of Figure 1 of ARW): Figure 6: Importance Sampler Output Figures 5 and 6 above illustrates the IRFs using the PFA and importance sampler methods, respectively. In case of the former, we can see the IRFs with probability bands for adjusted TFP, stock prices, consumption, real interest rate,and hours worked under the PFA. Examining the confidence bands around IRFs allows us to conclude that optimism shocks boost consumption and hours worked, as the corresponding IRFs do not contain a zero for at least 20 quarters. Alternatively, the IRFs of the same variables obtained using the importance sampler yield a different result. For consumption and hours worked, the confidence bands are wider and contain zero. Furthermore, the corresponding point-wise median IRFs are closer to zero compared to those obtained using the PFA. This shows that the PFA exaggerates the effects of optimism shocks on stock prices, consumption, and hours worked, by generating much narrower confidence bands and larger point-wise median IRFs. In this regard, as pointed out by Uhlig (2005), we can see that the PFA includes additional identification restrictions when implementing sign and zero restrictions. To further summarize the results, we present the table below which gives the specifics of the output figures above. Penalty Function Approach Importance Adjusted TFP 0.07 0.17 0.29 0.03 0.11 0.23 Stock Prices 0.54 0.72 0.84 0.05 0.29 0.57 Consumption 0.13 0.27 0.43 0.03 0.17 0.50 Real Interest Rate 0.07 0.14 0.23 0.08 0.20 0.39 Hours Worked 0.20 0.31 0.45 0.04 0.18 0.56 Table I: Forecast Error Variance Decomposition (FEVD) Table I shows the contribution of shocks to the Forecast Error Variance Decomposition (FEVD) using the PFA and the importance sampler for the chosen horizon of 40 periods and 68 percent equal-tailed probability intervals. Under the PFA, the share of FEVD attributable to optimism shocks of consumption and hours worked is 27 and 31 percent, respectively. However, the contribution of optimism shocks to the FEVD of stock prices is 72 percent under the PFA in contrast to 29 percent using the importance sampler. It should be noted that for most variables, when using the importance sampler, optimism shocks contribute less to the FEVD, and probability intervals for the FEVD are broader as opposed to those obtained under the PFA. In this blog entry we presented the ARW add-in for EViews. The add-in is based on the work of ARW (2018) and generates impulse response curves based on the importance sampler which accommodates both sign and zero restrictions in the VAR model. 5 comments: 1. Greeting Sir, I was trying to replicate ARW as show in figure 5 above but it was giving an error message saying "@RIWISH is illegal or reserved name". Please, what can I do to resolve the problem. Thank you. 1. Hello, I have the same error. Is there a fix about that? 2. Thanks for your work. It is very interesting to read the opinion and analysis of the professionals. 3. Hello, I would like to enquire the Same Question. When I estimated the ARW model, the notification "@IRIWISH is illegal or reserved name" always appears. What should I do then? 4. Hi and thanks for this useful add-in. However, I cant get it to work (using Eviews13): I get the error message "Error 72 in encrypted program" using ARW and "Error 247in encrypted program" using the earlier SRVAR add-in. It used to work with Eviews12 so I wonder whether this has anything to do with that?
{"url":"https://blog.eviews.com/2019/12/sign-and-zero-restricted-var-add-in.html","timestamp":"2024-11-13T01:35:40Z","content_type":"application/xhtml+xml","content_length":"112475","record_id":"<urn:uuid:e6e6eb10-6e71-44bd-a575-a6a004e29d76>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00403.warc.gz"}
Impact of spurious shear on cosmological parameter estimates from weak lensing observables Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω[m],w ,σ[8]) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ[sys]^2≈10^-7 , biases from the PS and LM would be unimportant even for a survey with the statistical power of Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈100 deg^2 , non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω[m],w ,σ[8]) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates. Physical Review D Pub Date: December 2014 □ 95.30.Sf; □ 98.62.Sb; □ Relativity and gravitation; □ Gravitational lenses and luminous arcs; □ Astrophysics - Cosmology and Nongalactic Astrophysics 17 pages, 8 figures, 7 tables
{"url":"https://ui.adsabs.harvard.edu/abs/2014PhRvD..90l3015P/abstract","timestamp":"2024-11-03T20:46:17Z","content_type":"text/html","content_length":"42993","record_id":"<urn:uuid:b17fb2d1-edce-4d57-94cc-30f857e9d84d>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00252.warc.gz"}