content
stringlengths
86
994k
meta
stringlengths
288
619
University of Pittsburgh The rapidly expanding collection and use of data is driving transformations across broad segments of industry, science, and society. These changes have sparked great demand for individuals with skills in managing and analyzing complex data sets. Such skills are interdisciplinary, involving ideas typically associated with computing, information processing, mathematics, and statistics as well as the development of new methodologies spanning these fields. Our major in Data Science (offered jointly with the Department of Statistics and SCI) offers a program specifically geared to training students to participate in this data revolution. This undergraduate major allows students to gain critical skill sets that span key areas of mathematics, computing, and statistics, with foundational training providing literacy in four areas (data, algorithmic, mathematical, and statistical) that every student needs to master data science. Students will develop expertise that connects theory to the solution of real-world problems and be able to specialize their studies towards a more specific career focuses. Completing this major will prepare students to work as a data science professional or to pursue graduate study in a direction involving data in a significant way. Choosing between Data Science and other Mathematics Majors Students who graduate with any of the Mathematics majors or the Data Science major will be well qualified for positions in data science. The Data Science major is designed for students whose main passion is working with data, including mathematical, statistical, and computing aspects. The other Mathematics majors are designed for students whose primary interest is in mathematics: its beauty, its elegance, its logical structure, and/or its utility for modeling real-world systems and solving real-world problems. Double Majors in Data Science and Math It is quite possible to complete a double major in Data Science and one of the Mathematics directions. In this case, up to five courses can be chosen to count for both majors. (This is an exception to the usual limit of 8 overlapping credits between Dietrich School majors.) Students considering this option should choose Math 1180 as their linear algebra class, as this is the required linear algebra course for all math majors. Major Requirements For full details, see the official major sheet. • CMPINF/STAT 1061 Data Science Foundations • CMPINF 0401 Intermediate Programming • CS 0445 Algorithms and Data Structures 1 • MATH 0220 Calculus 1 • MATH 0230 Calculus 2 • MATH 0280/MATH 1180 Intro to Matrices/Linear Algebra • MATH 0480/CS 0441 Applied Discrete Mathematics • STAT 1151/STAT 1631 Intro Probability • STAT 1152/STAT 1632 Intro Mathematical Statistics • CS 0590 Social Implications of Computing Technology • CS 1501 Algorithms and Data Structures 2 • CS 1656 Introduction to Data Science • CS 1675 Introduction to Machine Learning or STAT 1361 Learning and Data Science • MATH 1101 Optimization • STAT 1261 Principles of Data Science Students should take 3 courses from one of the following 4 specialty areas: Students must complete one of the following 3 courses: • CMPINF 1981 Project Studio • MATH 1103 BIG Problems • STAT 1961 Data Science in Action
{"url":"https://www.mathematics.pitt.edu/bachelor-science-data-science","timestamp":"2024-11-08T04:37:55Z","content_type":"text/html","content_length":"93924","record_id":"<urn:uuid:0d968412-0d6e-4e3c-a524-cb61372994b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00526.warc.gz"}
EViews Help: Estimating Least Squares with Breakpoints in EViews Estimating Least Squares with Breakpoints in EViews To estimate an equation using least squares with breakpoints, select or from the main EViews menu, then select in the drop-down menu, or simply type the keyword breakls in the command window: You should enter the dependent variable followed by a list of variables with breaking regressors in the top edit field, and optionally, a list of non-breaking regressors in the bottom edit field. Next, click on the tab to display additional settings for calculation of the coefficient covariance matrix, specification of the breakpoints, weighting, and the coefficient name. The weighting and coefficient name settings are common to other EViews estimators so we focus on the covariance computation and break specification. Coefficient Covariance Matrix The covariance matrix section of the page offers various computation settings. The top drop-down menu should be used to specify the estimator for the coefficient covariances. The is to use the conventional estimator for least squares regression. You may instead elect to use heteroskedasticity robust or covariance calculations. If you specify HAC covariances, EViews will display the button which you may press to bring up a dialog for customizing your calculation. By default, the covariances will be computed assuming homogeneous errors variances with a common distribution across regimes. You may use the to relax this common distribution restriction. If you specify either the White or HAC form of robust covariance, EViews will commonly display the checkbox. EViews generally follow Bai and Perron (2003a) who, with one exception, do not impose the restriction that the distribution of the Bai and Perron do impose the homogeneity data restriction when computing HAC robust variances estimators with homogeneous errors. To match the Bai-Perron common error assumptions, you will have to select the . (Note that EViews does not allow you to specify heterogeneous error distributions and robust covariances in partial breaking models.) Break Specification The section of the dialog contains a drop-down where you may specify the type of test you wish to perform. You may choose between: • Sequential L+1 breaks vs. L • Sequential tests all subsets • Global L breaks vs. none • L+1 breaks vs. global L • Global information criteria • Fixed number - sequential • Fixed number - global • User-specified The first two entries determine the optimal number of breaks based on the sequential methodology as described in “Sequential Determination” above. The methods differ in whether, for a given ), or whether we test the single added breakpoint that most reduces the sum-of-squares (). The next three methods employ the global optimizers to determine the number and identities of breaks as described in “Global Maximization” . If you select one of the global methods, you will see a second drop-down prompting you to specify a sub-method. • For the method, there are four possible sub-methods. The method chooses the last significant number of breaks, determined sequentially. Selecting chooses the number of breaks that is largest from amongst the significant tests. The latter two settings choose the number of breaks using the corresponding double max test. • Similarly, if you select the method, a drop-down offers a choice between and . • The method lets you choose between using the or the . The next two methods, and , pre-specify the number of breaks and choose the breakpoint dates using the specified method. The method allows you to specify your own break dates. Depending on your choice of method, you may be prompted to provide information on one or more additional settings: • If you specify one of the two fixed number of break methods, you will be prompted for the number of breakpoints (not depicted). • The and setting limits the number of breakpoints allowed via global testing, and in sequential or mixed • The drop-down menu should be used to choose between test size values of (0.01, 0.025, 0.05, and 0.10). This setting is not relevant for methods which do not employ testing. Additional detail on all of the methodologies outlined above is provided in “Multiple Breakpoint Tests”
{"url":"https://help.eviews.com/content/multibreak-Estimating_Least_Squares_with_Breakpoints_in_EVi.html","timestamp":"2024-11-12T05:27:48Z","content_type":"application/xhtml+xml","content_length":"19138","record_id":"<urn:uuid:75c9467a-a340-480e-9f60-3d8f62fd92e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00343.warc.gz"}
(PDF) Scalability of Network Visualisation from a Cognitive Load Perspective Author content All content in this area was uploaded by Yalong Yang on Aug 21, 2020 Content may be subject to copyright. Scalability of Network Visualisation from a Cognitive Load Perspective Vahan Yoghourdjian, Yalong Yang, Tim Dwyer, Lee Lawrence, Michael Wybrow and Kim Marriott Fig. 1: EEG topographical maps of median theta brain activity for (from left-to-right) easy tasks, hard tasks and the difference (hard-easy) for participants undertaking path-finding exercises on network diagrams. — Node-link diagrams are widely used to visualise networks. However, even the best network layout algorithms ultimately result in ‘hairball’ visualisations when the graph reaches a certain degree of complexity, requiring simplification through aggregation or interaction (such as filtering) to remain usable. Until now, there has been little data to indicate at what level of complexity node-link diagrams become ineffective or how visual complexity affects cognitive load. To this end, we conducted a controlled study to understand workload limits for a task that requires a detailed understanding of the network topology—finding the shortest path between two nodes. We tested performance on graphs with 25 to 175 nodes with varying density. We collected performance measures (accuracy and response time), subjective feedback, and physiological measures (EEG, pupil dilation, and heart rate variability). To the best of our knowledge this is the first network visualisation study to include physiological measures. Our results show that people have significant difficulty finding the shortest path in high density node-link diagrams with more than 50 nodes and even low density graphs with more than 100 nodes. From our collected EEG data we observe functional differences in brain activity between hard and easy tasks. We found that cognitive load increased up to certain level of difficulty after which it decreased, likely because participants had given up. We also explored the effects of global network layout features such as size or number of crossings, and features of the shortest path such as length or straightness on task difficulty. We found that global features generally had a greater impact than those of the shortest path. Index Terms—Data Visualisation, Network Visualisation, Cognitive Load, EEG. Visualisation helps analysts to understand and explain complex data. However, there exist factors that limit the amount of information that can be visualised. Scaling to a large number of data elements is a major issue in visualisation design. Eick and Karr [23] discuss how human perception, monitor resolution, visual metaphors, interactivity, data structures and algorithms, as well as computational infrastructure affect visual scalability. For network visualisation, the last five factors have been well explored [36]. However, scalability of human perception remains under-studied. A recent survey of 152 experimental studies of node-link visualisation techniques found that most of the networks considered in these studies were relatively small and sparse [66]. The survey authors called for studies that control for the size and complex- • Vahan Yoghourdjian, Tim Dwyer, Michael Wybrow and Kim Marriott are with the Department of Human-Centred Computing, Faculty of Information Technology, Monash University, Melbourne, Australia. E-mail: • Lee Lawrence is with the Faculty of Business and Economics, Monash University, Melbourne, Australia. E-mail: lee.lawrence@monash.edu • Yalong Yang was with the Department of Human-Centred Computing, Faculty of Information Technology, Monash University, Melbourne, Australia. He is now with the School of Engineering and Applied Sciences, Harvard University, MA, USA. E-mail: yalongyang@g.harvard.edu Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to: reprints@ieee.org. Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx ity of the network to explicitly test perceptual scalability of network visualisation techniques. Here we address perceptual scalability of node-link diagrams, which are undoubtedly the most common way of visualising networks. Sur- veys like that of Jankun et al. [36] speak about the so-called ‘hair-ball effect’, wherein, node-link diagram representations of larger small- world or scale-free graphs are no longer useful for understanding the connectivity of all but peripheral nodes in the visualisation. Previous studies suggest that, while matrix-based representations are more ef- fective than node-link diagrams for some tasks [28,41, 49], node-link diagrams are superior for connectivity tasks. For this reason we focus on the scalability of node-link diagrams for a widely used connectivity task, that of finding the shortest-path between two nodes. We conducted an experiment in which 22 participants found the length of the shortest path between two nodes on 42 small-world net- works, ranging from 25 to 175 nodes with three levels of density. In addition to task completion speed, accuracy and self-reported difficulty, we also collected physiological measures known to be associated with mental effort: brain electrical activity (EEG), heart rate, and pupil size. This is the first study that we know of to provide a holistic analysis across subjective, physiological as well as performance measurements for a network visualisation task. Our main contributions are as follows. We establish that the usefulness of node-link diagrams for finding shortest paths quickly deteriorates as the number of nodes and edges increases—as discussed in Section 4. For small-world graphs with 50 or more nodes and a density (ratio of edges to nodes) of 6, participants were unable to correctly answer in more than half of the trials. This was also the case for graphs with a arXiv:2008.07944v1 [cs.HC] 18 Aug 2020 density of 2 and more than 100 nodes. We provide an analysis of the relationship between task hardness and the physiological measures of cognitive load—see Section 5. We found that these measures of load increased with task hardness until a threshold is reached, after which it decreases, suggesting that participants give up. This analysis relied on combining accu- racy and self-reported difficulty to give a single measure of task hardness for each of the 42 stimuli. We make an initial identification of brain regions associated with a network visualisation task—in Section 5.2—and reveal functional differences in brain activity between hard and easy tasks. Here we used self-reported difficulty. We explored the effects of global network layout features (such as size or number of crossings) and features of the shortest path (such as length or straightness) on task difficulty. We found that global features generally had a greater impact on hardness than those of the shortest path—see Section 6. Furthermore, measuring cognitive load through physiological measures requires careful setup and analysis. We believe that our experience and methodology will inform future visualisa- tion researchers who also wish to use such measures to evaluate cognitive load for other kinds of visualisation tasks—Section 7. Our research adds to the understanding of the perceptual scalability of node-link diagrams. It informs visualisation designers about the size of network for which node-link diagrams are appropriate and at what point the number of nodes and links displayed to the user should be limited, e.g., through filtering or aggregation techniques. It also helps to clarify the visual features that layout algorithms should focus on to improve usability. 2 BACKG RO UN D AN D RE LATED WORK 2.1 Network Visualisation Effectiveness Studies Task performance—accuracy and/or response time—is the standard measure of visualisation efficiency used across many network visuali- sation studies. There has been a lot of research exploring the effects of layout features of node-link diagrams on task effectiveness, many of which use shortest-path finding as a task [20,32, 34,43,54, 55]. In partic- ular, a study by Ware et al. [63] explores the effects of different layout features on response time in a shortest-path finding task on node-link diagrams, which they attribute to ‘cognitive cost’. Their results indicate that the number of hops on the shortest path is the highest contributor to cognitive cost, followed by the straightness of the shortest path. A number of empirical studies compare node-link diagrams with alternate visualisation types and techniques. For example, Ghoniem et al. [28] compare the effectiveness of node-link diagrams with adjacency matrices for various tasks. The study is unusual in testing relatively dense graphs, e.g., up to 100 nodes and 3,600 edges. For such graphs they found matrices provided better support than node-link diagrams for many tasks, the exception being path finding, which remains very difficult in matrices regardless of density. A recent survey of network visualisation user studies has explored the literature in terms of number of nodes and edges used in published studies [66]. While there are many studies evaluating different repre- sentations of network data, these rarely significantly vary the size of the data, preferring one or two data sets, carefully chosen to be well within the capabilities of at least one of the techniques being tested (e.g., [29, 44, 57]). We are aware of a few studies that involve large graphs (e.g., hundreds or thousands of nodes) [8, 22, 45,47, 48,62], but they all use interactive query or aggregation techniques, allowing the user to filter the input graph, so that only a small subset of the nodes and links are actually shown to the participant. There is, therefore, a lack of evidence regarding the effectiveness of large node-link or other network visualisations for tasks that require a detailed understanding of the network structure. This work aims to determine the thresholds for node-link visualisations after which designers should limit the number of nodes and edges on display or switch to a summary representation [67] in order to best cater for human perceptual and cognitive capabilities. 2.2 Cognitive Load Cognitive Load Theory suggests that humans process information using limited working memory [59]. The theory was initially developed in the fields of education and instructional design. It distinguishes between in- trinsic,extraneous, and germane cognitive load. Intrinsic cognitive load is associated with the inherent difficulty of the instruction or task. Extra- neous cognitive load depends on how the instruction and information is presented, while germane cognitive load refers to processing, acquiring and automating schemata [16,60]. Three main types of measures can be used to assess cognitive load: subjective feedback, performance-based (accuracy and response time), and physiological [21] such as brain activity, pupil dilation or heart rate variability. Only one study (to our knowledge) has directly used cognitive load as a measure to evaluate network visualisations. Huang et al. [33] conducted a study that explored cognitive load in node-link diagrams. They utilised an efficiency measure based on the approach proposed by Paas and Van Merrienboer [51] for comparing instructional materials which combines self-reported mental effort and performance measures. Huang et al. manipulated complexities of the visual representation, data, and task, to show that cognitive load is affected by these factors. Their results confirmed that the participant feedback matched their expectations in terms of task difficulty. However, the graphs they used were fairly small and they did not consider physiological measures of cognitive load. Even though physiological measures have been frequently used to measure cognitive load in systems engineering and psychology, to our knowledge, there have been very few studies that use physiological measures to evaluate data visualisations and no studies that utilise them to measure the effectiveness of network visualisations. Instead researchers have almost totally relied on performance-based and sub- jective measures. An exception is Anderson et al. [6] who compared the cognitive load of participants when identifying the larger interquartile range on a variation of box plot types. They measured task difficulty, response time and brain activity using EEG. They used the spectral differences in the alpha and theta frequency bands of the signals acquired by EEG as an indicator of cognitive load. The results showed a correlation between these three measures, with an increase in response time and cognitive load as tasks became more difficult. In another study Peck et al. [53] used functional near-infrared spec- troscopy to compare the cognitive load imposed by pie charts versus bar charts. They also used accuracy, response time, and subjective feedback (NASA-TLX). They asked participants to estimate differences between two highlighted sections, given either a pie or bar chart. The results did not show a difference in cognitive load between the two visualisation idioms. This is perhaps attributable to the task not really involving problem solving, but relying mostly on visual perception. One contribution of this paper is our initial exploration of the appli- cability of different physiological measures to network visualisation. 2.3 EEG Measurement of Cognitive Load Quantitative EEG is the broad name given to the analysis of brain elec- trical activity with respect to its oscillations, or frequency components. Data is collected from electrodes placed in a standard configuration on the surface of the head—see Figure 2. Generally speaking, brain elec- trical oscillations occurring between 4 and 8 Hertz, called theta activity, have been associated with memory processing, such as during memory encoding [42], recognition or processing during spatial navigation [65] and other related processes including error detection [61]. Theta activity is also commonly used to measure cognitive load. Increased activity is associated with increased cognitive load process- ing and task difficulty, typically at the central frontal lobe electrodes, FZ [19]. However, the region of the brain associated with increased activity depends upon the kind of task. For instance, a linguistic (i.e., hypertext) based task highlighted electrodes F7 and P3 as being more important [7]. Briefly, whilst brain regions do not typically work in isolation, differ- ent brain areas have different roles. Figure 2 includes a schematic map of the major regions. Broadly speaking, the frontal lobe is involved in CZ T8C3 FC5 FC1 F3 FZ F4 FP1 FP2 AF3 AF4 CP1 CP2 CP6 Occipital Lobe Frontal Lobe Parietal Lobe FC2 FC6 T7 C4 Fig. 2: The arrangement of the 32 dry electrodes of the g.Nautilus EEG cap [1] used in our study, and a broad indication of the major brain regions they are associated with. The placement is based on the International 10-20 system with extra electrodes. reward and error processing, impulse control, decision making, problem solving and abstract reasoning, motivation, language production, and motor planning, control and execution. The parietal lobe is generally involved in touch sensation, as well as visuo-spatial processing and perception, including mental imagery. The temporal lobe is involved in auditory sensation, object recognition, memory, language comprehen- sion, and emotions. Finally, at the rear of the brain, the occipital cortex is involved in lower-level visual processing. As no previous study has investigated brain activity for a network visualisation task, it is difficult to predict precisely which areas of the brain and hence electrodes will be involved. For instance, finding the shortest path could involve memory encoding, decision making (including error detection), and spatial processing, within what is a predominately spatial navigation decision task. A spatial navigation task in the literature implicated right temporal and bilateral parieto- occipital theta increases, with left posterior decreases [65]. However, this task is a less than ideal comparison because it did not manipulate cognitive load. The brain imaging study with the most similar task did not use EEG analysis. Instead, it used functional Magnetic Resonance Imaging (fMRI) to detect oxygen/blood flow in the brain as a measure of activity. Kaplan and colleagues [39] sought to identify regions of brain activity during a maze processing task. Participants were required to decide what was the shortest path between their starting point and an end location. Some mazes only had one choice point, while others had two choice points when deciding the shortest path. Their results suggested that there were brain activity differences depending on the number of decisions. These results suggest that shortest path tasks could involve theta activity in the left-frontal, right-parietal, and left-temporal regions. 3 USER ST UDY Our study was designed to investigate the perceptual scalability of node-link diagrams for graph connectivity tasks, identifying the graph complexity and size beyond which they cease to be useful for such tasks. This extends previous studies such as Ware [63] by considering a much greater range of graph sizes and densities. We also explore physiological measures of cognitive load: EEG, pupil dilation and heart rate variability. Fig. 3: One of the participants during the study (face obscured for anonymity) wearing the g.Nautilus EEG cap. The eye-tracker device is mounted under the display and the heart rate monitor is worn under 3.1 Setup The study had 22 participants: 14 male, 8 female. 18 participants were in age range 20–30, while 4 were aged 35–45. All participants had a background in Computer Science. The participants were asked about their familiarity with node-link diagrams and the shortest path problem. 9 participants frequently encountered node- link diagrams and the shortest path problem, while the remaining 13 participants occasionally did. We chose a network connectivity task because this is a common high-level task and node-link diagrams have been found to be particu- larly effective for this task [28]. Specifically, participants were shown a range of graphs and instructed to identify the shortest path between two highlighted nodes and determine the number of nodes on this path, if they could. Graph corpus. The graphs shown to the participants varied in two dimensions; numbers of nodes and edge density. Of course, other as- pects of the graph structure can be expected to affect task complexity. However, to keep our study under two hours we were forced to consider only a single type of graph structure. We wanted our generated graphs to be similar to real-world graphs. We chose to use the Barab model [9] as this is known to produce graphs with small-world charac- teristics. Such graphs are common in nature and are frequently studied, e.g., in cell-biology [5], bibliography [25] and internet topology [24]. We generated our stimuli using code written in JavaScript and based on the Barab asi-Albert [9] model. We preferred not to use standard generators, since most require parameters specifying the total number of nodes and the number of edges to added at each iteration. Instead, we wanted to specify the total number of nodes and edges. The number of nodes in the generated graphs ranged from 25 to 175 nodes in increments of 25. We experimented with different nodes ranges, but our pilot studies showed that task accuracy was at a maximum at 25 nodes, while the task was too difficult at 175 nodes. To calculate the edges, we used densities of 2, 4 and 6, where density =number of edges/number of nodes . We chose these den- sities because real-world examples often have densities of less than 10 [46] and the results of our pilot studies showed that the graphs be- came unreadable beyond these values. We generated two graphs for each combination of number of nodes and density, giving 42 graphs, plus 3 graphs for training. The graphs were arranged using the force- directed layout of WebCola [4] and saved as drawings in SVG format. For each graph we computed a start and end node of the path. We first selected the furthest node from the centre of the diagram and then selected the nearest node to the opposite side of the vector, passing through the centre. Due to the small-world nature of the graphs and the force-directed layout, this led to non-trivial shortest paths. The study was conducted in a quiet office at Monash University with no natural light variation. The set up is depicted in Figure 3. The study was run on a Windows 10 Dell Latitude E7440 laptop, equipped with 2.7 GHz i7 processor and 8 GB RAM. The visual representations were displayed in a 1920 ×1080 pixel area on a 22-inch HP monitor. Mozilla Firefox 46.0 was used to to display the visualisations and collect participant responses. A Tobii Pro X3-120 eye tracker [3] was used throughout the study. This was directly linked to the laptop. A Polar H10 heart rate sensor was used to acquire heart rate in- formation. This was linked to an iPhone 4 via Bluetooth and HRV Logger [2]. Ag.Nautilus [1] electroencephalography (EEG) cap was also linked to the laptop to record the electrical activity of the brain via g.Recorder; a software provided by g.tec [1]. The cap exposes 32 data channels with dry electrodes spatially organised, based on the International 10-20 EEG placement system, with Modified Combinatorial Nomenclature as shown in Figure 2. Additional reference and ground electrodes were attached to the back of the participants’ ears. The EEG sampling was set to 250 Hz. An analogue bandpass filter was applied between 0.5 Hz and 100 Hz. A notch filter was used to suppress 48 Hz to 52 Hz power line interference. Sensitivity was set to +/- 2250 mV. 3.2 Procedure The participants were shown an explanatory statement and were asked to sign a consent form. They were then presented with a short tutorial explaining the concept of shortest path and the task requirements. For each experimental task, the start and end pair of nodes were highlighted in orange and participants were asked to find the shortest path, taking note of the number of nodes between these end nodes. The correct answers for our tasks ranged from one to six. We also allowed participants to answer with ‘unsure’ so that they did not need to guess. See Figure 10(a) for an example of the task. Both the answer and the time taken were recorded. Each participant had to perform the task 45 times, of which 3 were training. The stimuli were shown in randomised order such that no consecutive graph contained the same number of nodes or number of edges. All graphs were shown to each participant using this order but starting with a different graph, resulting in an incomplete Latin Square design. Each task was preceded by five seconds of blank screen to serve as a rest period, which also served as baseline for the physiological After each task, the participants were asked to rate its difficulty. They were given a nine-grade symmetrical category scale used by Huang et al. [33] and evaluated by Bratfisch et al. [12]. The scale uses the following terms: ‘very very easy’, ‘very easy’, ‘easy’, ‘rather easy’, ‘neither easy nor difficult’, ‘rather difficult’, ‘difficult’, ‘very difficult’, ‘very very difficult’. The participants were allowed to take breaks between each task. The breaks were not timed and the respective physiological measures were excluded from the analysis. Moreover, no fatigue was reported, or signs of fatigue observed. Unlike Ghoniem et al. [28] who allowed the participants to inter- act with the visualisation by highlighting neighbouring nodes when hovering over a specific node, we did not allow any interaction. We wanted to keep the variables of the study at a minimum in order to understand the basics of cognitive load and scalability. Even though the participants had access to a mouse, they were asked not to use it during the task, and only used it to submit their answers. Our dependent measures fall into three categories: performance (com- pletion time and accuracy), subjective (self-reported difficulty) and physiological (pupil dilation, heart rate and EEG). The overall logic of our data analysis is Determine scalability of the task in terms of performance and subjective measures. Develop a measure of task for each of the stimuli based on performance and subjective measures. Determine which regions of the brain are involved for the EEG Investigate the relationship between hardness and the physiologi- cal measures of cognitive load. 5. Explain hardness in terms of graph metrics. In this section we focus on the first two steps. Fig. 4: Self-reported difficulty rating for the different sizes of graphs and densities. Fig. 5: Correctness for the different sizes of graphs and densities. 4.1 Scalability Figures 4, 5 and 6 respectively show the self-reported difficulty, accu- racy and response time of the participants in seconds with respect to each stimulus. Stimuli are grouped by density. We used repeated measures correlation [37] to investigate the corre- lation between these measures and the number of nodes overall and for each density. Overall, there is a strong correlation between number of nodes and self-reported difficulty: rrm(901) = 0.71, 95% CI [0.67,0.74],p<0.0001. This strong correlation holds for each of the three densities: Density 2: rrm (285) = 0.76, 95% CI [0.70,0.80],p<0.0001. Density 4: rrm (285) = 0.75, 95% CI [0.70,0.80],p<0.0001. Density 6: rrm (285) = 0.72, 95% CI [0.66,0.77],p<0.0001. We weighted correct as 0, incorrect as 1. It is less clear how to weigh unsure. We cannot remove them entirely as failure to complete the task is an important signifier of task difficulty. One could argue that an unsure response indicates that the participant found it harder than those who gave an incorrect answer, as even with the wrong answer they at least felt that they could answer the question. We evaluated the effect on correlation (see supplementary materials). With a weight of 1 for unsure the correlation is 0.42. Correlation increases to 0.55 with a weight of 2 and then levels out. We therefore decided to weigh unsure as 2. Note that we redid the analysis for weighing unsure as 1 and it makes little difference (see supplementary materials). rrm(901) = 0.55, 95% CI [0.50,0.59],p<0.0001. This strong correlation also holds for each of the three densities: Density 2: rrm (285) = 0.48, 95% CI [0.39,0.57],p<0.0001. Density 4: rrm (285) = 0.68, 95% CI [0.61,0.74],p<0.0001. Density 6: rrm (285) = 0.59, 95% CI [0.51,0.66],p<0.0001. Overall, there is a correlation between number of nodes and response time: rrm(901) = 0.22, 95% CI [0.15,0.28],p<0.0001. However, while this correlation holds for density 2 and 4 it does not hold for density 6: Density 2: rrm (285) = 0.37, 95% CI [0.27,0.47],p<0.0001. Density 4: rrm (285) = 0.24, 95% CI [0.13,0.35],p<0.0001. Fig. 6: Response time for the different sizes of graphs and densities. Density 6: rrm (285) = −0.04, 95% CI [−0.16,0.07],p=0.4592. One reason for this is might be that with the larger and denser examples participants quickly realise that the task is too difficult and select ‘unsure’ actually reducing their response time. For this reason we also checked the correlation when times for unsure responses were excluded. As expected this strengthens the correlation. Overall: rrm(601) = 0.42, 95% CI [0.35,0.49],p<0.0001. Density 2: rrm (217) = 0.48, 95% CI [0.37,0.57],p<0.0001. Density 4: rrm (198) = 0.41, 95% CI [0.29,0.52],p<0.0001. Density 6: rrm (140) = 0.31, 95% CI [0.16,0.46],p=0.0001. What is striking about these results is how hard participants find the task. Participants are: wrong or unsure more than 50% of the time for graphs with 100 or more nodes for density 2; they are wrong or unsure more than 50% of the time for graphs with 75 or more nodes for density 4 and wrong or unsure more than 50% of the time for graphs with 50 or more nodes for density 6. Indeed for density 6 and graphs with 75 or more nodes participant accuracy is around 16.67%, the value we would expect by random selection. Our results strongly suggest that for path-based connectivity tasks node-link diagrams do not scale. Even for low-density graphs we find that determining shortest paths is only reasonable for graphs with less than 100 nodes and for higher-density graphs no more than 50 nodes. 4.2 Task Hardness For the subsequent analyses it is useful to have a single measure of the task hardness for each of the stimuli (graph and shortest path). Of course we don’t have a direct measure of this but accuracy, response time and self-reported difficulty are all possible proxy measures with task hardness an underlying latent variable. There are a number of ways to do latent variable analysis. Basically the observed variables are modelled as linear combinations of the potential latent variables, plus “error” terms. We used Principal Axis Factoring to extract the latent variable as it is one of the standard techniques used in psychological data analysis [18, 38]. Based on the prior discussion we chose not to use response times for unsure responses. The analysis had three steps. We first normalised the measures for each participant in order to better take account of individual differences. The normalised score was simply the -score of the measure w.r.t. all responses of the participant. For each normalised measure we calculated the mean score for all participants for each of the 42 items (questions). We then conducted Principal Axis Factoring, with the first princi- pal component giving an estimate of task hardness. This indicated that 78% of the variance in responses can be explained by task hardness and that the factor loadings for difficulty, accuracy and time were 1.04, 0.87 and 0.71 respectively. In the next part of our analysis we explored the relationship between task difficulty and the physiological measures of cognitive load. 5.1 Data Preprocessing Pupil Dilation: We used Tobii Studio to record the eye tracker data from the Tobii Pro X3-120. We used the average of the two eyes in order to reduce noise. In cases where we had pupil size information for just one eye, we used that alone. For each task, we used the five seconds pre-task resting period to extract an average baseline, then we calculated pupil dilation by subtracting the average pupil size during the inter-trial rest period from the peak pupil size during task performance. We used peak dilation instead of mean pupil dilation since the latter does not work well with tasks that vary in length across participants [10]. We used z-score to normalise the pupil dilation for each participant. Heart Rate Variability: The polar H10 heart rate monitor recorded the beats per minute (bpm) and interval for each participant. We used root mean squared successive difference (RMSSD) [31,58] which is a common measure for heart rate variability analysis [56] and used z-score to normalise this for each participant. G.tec’s g.Nautilis dry 32 channel EEG system was used to record and digitise EEG using g.Recorder. Online, left ground and right refer- ence ears were used in accordance to technical recommendations [30]. Raw EEG was converted to a format compatible with and then analysed using Fieldtrip [50] within MATLAB. Offline pre-processing consisted of re-referencing to the electrode average. Afterwards, the data was first visualised so that bad electrodes could be identified and interpolated using symmetrically chosen electrodes within a 5 cm radius. EEG data pre-processing continued after using a 1 to 30 hertz FIR band pass filter on whole data, using a hamming, 53dB/octave slope. These filters allowed a reduction of slow wave potentials whilst keeping the traditional shape of the eye blink response. Otherwise, this range was chosen to attenuate a level of noise associated with signals outside the range of frequency interest for this study whilst maintaining the ability to visualise muscle activity for later epoch rejection. After this, PCA ocular correction algorithms were performed on whole data to remove blinks and eye movements from the EEG data. Each participant’s data was epoched for 5 seconds before the point where a participant signalled they had an answer. This was decided because there was a large variation in individual response times, with some trials taking over 2 minutes. This variability meant there was no guarantee that participants were concentrating for the entire time. We felt that epoching the 5 seconds prior to indicating an answer meant that the EEG results would be more comparable across trials and participants, as, at this point in time, they were more likely to be fully engaged in the task (see supplementary materials for further We chose to analyse theta frequencies as theta power has been more consistently found to increase with cognitive load than other frequencies such as alpha whose power has been found to both increase and decrease with cognitive load [15]. Therefore, after epoching, FFT was performed on each stimuli, exporting 4 to 7.8 hertz as absolute power, using intervals of 0.2 and a 1 Hz taper. The data was converted into -scores to normalise between partici- pants. Despite the cleaning algorithms, EEG outliers were found in the data that seemingly related to the amplification of noise, which did not seem to have any particular pattern within and between participants. To overcome this, it was decided to use the non-parametric estimates (i.e., median rather than mean) in all analyses involving EEG data. Trouble was found with two participants’ EEG data—one partici- pant’s recording dropped out during online recording, and the other had reference problems—leaving EEG data from twenty participants for 5.2 EEG Analysis As discussed in Section 2, different regions of the brain are associated with different functions. As a first step in our analysis of the EEG data we wished to determine which regions were involved in the shortest path task. For each participant we split the stimuli into easy and hard tasks based on the individual participant’s subjective ranking. We used the individual subjective ranking rather than the task hardness as we felt that this would better reflect the difficulty that that individual found with the stimuli. Even if a stimuli was generally found hard it could be that some participants found it easy just because they were lucky and happened to quickly see the shortest path. We then computed the median theta power for the easy and hard stimuli at the different electrode locations, giving the EEG topographical maps shown in Figure 1. For the easy tasks the main activation is at the rear of the brain and slightly to the right in the parietal and occipital regions. There is also activation in the temporal region and little activity elsewhere. This pattern is similar to that previously found for spatial navigation [65]. It suggests that the decision making for these tasks is essentially visuo- spatial and that during their final decision, participants mostly relied on perceptually estimating the length of all possible routes. On the other hand, when we look at the activation for hard instances there is activity on both sides of the occipital and parietal cortex and the right parietal and frontal regions and the left frontal region. This pattern of activation is more similar to that found in [39]. It suggests that for these stimuli a much more systematic step-by-step process is being used to find the shortest path with participants keeping track of the best path found so far in memory for comparison with the path under consideration. These distinctions are evident in the difference EEG topographical map. Increased activation in the left parieto-occipital region for the harder instances is conjectured to reflect greater use of memory and pattern recognition (i.e., comparing the memory trace of the current path to previously considered paths [35], and/or possibly pattern recog- nition more broadly when considering the role of the posterior temporal lobe [27]. On the other hand, the centro-parietal activity on the right hand side could represent the specialisation of the right ventro-parietal cortex for non-language-based spatial tasks, reflecting attention process- ing that also includes memory processes [13, 40]. Finally, the frontal region activity on the left side is probably explained by traditional cog- nitive load theory [7,19] and reflects semantic encoding and possibly retrieval processing [14,64] and working memory [11, 14]. This analysis and the difference map suggests that electrodes in the left frontal region (F3, FC1), right centro-parietal (C4, CP2, CP6) and left parieto-occipital (PO7, P7) are the most likely electrodes to indicate increased cognitive load for our task. That said, noteworthy trace activation was also found in the right frontal region (F4) but not as strong as the other electrodes previously mentioned. We used the Wilcoxon signed rank test to evaluate the effect of easy vs. hard. We also calculated the effect size of these tests. We can interpret the effect size using Cohen’s classification of effect sizes which is 0.1 (small effect), 0.3 (moderate effect) and 0.5 and above (large effect) [17]. We note that the differences for these electrodes between the hard and easy stimuli are statistically significant and all have large effects: F3 ), FC1 ( ), F4 ( ), C4 ( ), CP2 ( ), CP6 ( ), P4 ( ), PO7 ( P7 (p=.0028,r=.67). 5.3 Correlation with Task Difficulty We next used repeated measures correlation to investigate the correla- tion between the physiological measures discussed above and the task We did not include unsure responses . For EEG data, we considered theta for the electrodes identified in the previous section (F3, FC1, F4, C4, CP2, CP6, P4, PO7, P7). Table 1 shows only pupil dilation and heart rate variability demon- strated a statistically significant positive correlation with hardness. Even then the correlations were not strong based on their correlation coefficients. This lack of correlation was surprising as we would have expected cognitive load to be highly correlated with task difficulty. In order to better understand the relationship between the physiolog- ical measures and task hardness, we binned the measurements using aquantile interval (i.e. each category contains an equal number of tasks) method to divide the range of hardnesss into five categories from easy to hard. We then plotted the 95% CI of each measure as shown in Figures 7 and 8. We see for pupil dilation and for most of the EEG measures (F3, FC1, CP2, CP6, P4, PO7, P7) that they first increase with task hardness but then decrease. This is why they exhibit only a weak correlation with task hardness. We conjecture that this is because once the task becomes very difficult participants switch off and no longer make the effort to find the right answer so cognitive load actually decreases. While we might have expected this for the unsure answers, our results suggest that this happens even if the participants do not indicate they are unsure. For heart rate the story is not so straightforward but may be because heart rate is also influenced by stress and so participants’ cognitive load decreased but stress was increased resulting in the overall increase. 6 GR AP H AN D LAYOU T FEATURES AFFECTI NG TAS K HARD- Clearly the size/complexity of the underlying graph affects the difficulty of finding the shortest path. As discussed in Section 2 a number of papers have suggested other features that impact on this: length, i.e., number of nodes on the shortest path [63], number of crossings and Degree of Freedom 95% CI p Pupil dilation 0.09 590 [0.01, 0.17] 0.0261 Heart rate variability 0.12 601 [0.04, 0.20] 0.0038 F3 0.03 459 [-0.06, 0.12] 0.5652 FC1 0.02 463 [-0.07, 0.11] 0.6529 F4 0.03 460 [-0.06, 0.12] 0.5 C4 0.03 462 [-0.06, 0.12] 0.5004 CP2 -0.02 470 [-0.11, 0.07] 0.6766 CP6 0.05 459 [-0.04, 0.14] 0.2493 P4 0.01 455 [-0.08, 0.10] 0.8345 PO7 0.04 448 [-0.06, 0.13] 0.4397 P7 0.03 458 [-0.06, 0.12] 0.5065 Table 1: Correlation between physiological measures and hardness. Fig. 7: Pupil dilation and heart rate variability as a function of task crossing angle on the shortest path [34, 63], directness of shortest path [63], and degree of nodes on the shortest path [63]. Given that we have a measure of task hardness we decided to conduct an analysis investigating how different features of the overall graph and of the shortest path(s) between the two target nodes affected task hardness. As our stimuli had not been designed to directly answer this question our analysis is necessarily limited but is still illuminating. 6.1 Graph and Layout Features One limitation of the stimuli is that most of them contained more than one shortest path. Therefore, for most features we provide both the total value over all of the shortest paths and the average value overall all shortest paths. We would expect the first value to be more highly correlated if participants examine all shortest paths but the second to be more highly correlated if they only examine one (or few) of the shortest paths. We also consider a global measure of crossings, e.g. the total number of crossings. The rational for this is that other paths in the graph must be examined to complete the task, not just the shortest path itself. In some cases it is unclear how best to measure a particular feature so we used a number of metrics. The complete list of metrics is: Measures of size/complexity: •nodes: number of nodes •edges: number of edges Measures of crossings and crossing angle: •gLLCrossingCount: total number of link-link crossings •gLNCrossingCount: total number of link-node crossings •gCrossingAngle: overall sum of the angles at which links cross gCrossingCount: total number of link-link and link-node cross- gCrossingLLAngleLNCount: the overall sum of the angles at which links cross + the number of link-node crossings sLLCrossingCount: total number of link-link crossings on the shortest paths dsLLCrossingCount: average number of link-link crossings on the shortest paths sLNCrossingCount: total number of node-link crossings on the shortest paths dsLNCrossingCount: average number of node-link crossings on the shortest paths Fig. 8: EEG measures as a function of task hardness. sCrossingAngle: overall sum of the angles at which links cross on the shortest paths dsCrossingAngle: average sum of the angles at which links cross on the shortest paths Length of shortest path: •LengthOfShortestPath: Number of nodes on the shortest path •sEuclidean: total Euclidean distance of shortest paths •dsEuclidean: average Euclidean distance of shortest paths Degrees of nodes on the shortest path: sDegrees: total sum of the degrees of nodes on the shortest paths dsDegrees: average total sum of the degrees of nodes on the shortest paths Straightness of shortest path: •sEquator: total geodesic path deviation on the shortest paths •dsEquator: avg. total geodesic path deviation on shortest paths •sTurningAngle: sum of turning angles on the shortest paths •dsTurningAngle: avg. sum of turning angles on shortest paths The penalty for small angles was calculated by measuring the angle between two link crossings and subtracting it from 90 degrees. This applies a high penalty for small angles, but a small penalty for crossings where the links are close to orthogonal. Figure 10 shows examples where some of these features are high- lighted. The graph shown in the examples is from our study corpus. It has 6 possible shortest paths (Fig. 10(a)), each with 4 intermediate nodes. The nodes on the shortest paths have an accumulated degree of 56 (Fig. 10(h), average = 28.33/path). The node-link diagram represent- ing the graph has 57 link-link crossings (Fig. 10(b)) and 6 node-link crossings (Fig. 10(c)). There are 33 link-link crossings (Fig. 10(d), average = 8.33/path) and 1 node-link crossing (Fig. 10(e), average = 0.17/path) on the shortest paths. The sum of the Euclidean distance of the links on the shortest paths is 2803.66 (Fig. 10(f), average = 1067.47/path). The sum of the distance between the nodes on the shortest paths from the Geodesic path is 874.46 (Fig. 10(g), average = 415.66/path). The sum of the penalty for small angles of link-link cross- ings on the shortest paths is 925.92 (Fig. 10(i), average = 217.08/path). Lastly, the sum of the turning angles on the shortest paths is 426.72 degrees (Fig. 10(j), average = 105.58 degrees). 6.2 Analysis As a first step we computed the correlation between these different measures and also with task hardness. See Figure 9. As one would expect the different measures of the same basic feature are often closely correlated. Thus, for instance the measures of crossings and cross- ing angle are highly correlated. Furthermore, as the graph grows the number of node-link and link-link crossings increase. We also see that measures of the Euclidean length of the shortest path(s), its straightness and the degree of the nodes on it are all highly correlated. We then built multi-level linear models to understand how the above graph features influence task hardness (a similar approach was em- ployed in [63]). This exploratory study used all-subsets methods to consider different combinations of features as predictors. Following Field et al. [26, Chap 7.9], we only considered models meeting the following assumptions: Limited multicollinearity; we used the VIF statistics to calculate VIF values and we considered VI F <5 as meet the requirement. Independence; we used the Durbin-Watson test and considered models within the range of [1.5,2.5]as meeting the requirement. Fig. 9: Correlation matrix of graph metrics. Homoscedasticity (means that residuals at each level of the pre- dictors should have the same variance): different transformations were applied to different graph metrics before building the linear models to meet this requirement: log transformation : gLLCrossingCount, gCrossingAngle, sLL- CrossingCount, sCrossingAngle, dsCrossingAngle, sEuclidean, dsEuclidean, sDegrees and dsDegrees. square root transformation : gLNCrossingCount, dsLNCross- ingCount, sEquator, dsEquator, sTurningAngle, dsTurningAngle. 4th square root transformation: gCrossingCount, sLNCross- ingCount and dsLLCrossingCount. Density was excluded from modelling as the transformed values still did not meet the homoscedasticity assumption. We also normalised each input factor to 0–1 range before modelling. We report representative models for different number of predictors. Those for one predictor are, of course, the features most highly corre- lated with task hardness. The top 12 models are given in Table 2. What we see is that the number of nodes and global measures of crossing angle or crossing count are the best predictors, followed by number of edges and measures of crossing angle and count for the shortest path. Other features associated with the shortest path such as its length or straightness are poor predictors of task hardness. The top 12 models for two predictors are given in Table 3. Here we see that the best model combines global measures of crossing angle or crossing count with the length of the shortest path. The next best combine global measures of crossing angle or crossing count with measures of the number of nodes or graph density. These are followed by predictors combining global measures of crossings with local measures of crossings on the shortest paths. We also considered models with three predictors but none had suffi- cient extra explanatory power to warrant the use of a third predictor. What we see is that the global graph features are much better predic- tors of task hardness than features of the shortest path(s). This is a little surprising, likely reflecting that the participants looked at many more (a) Shortest paths (b) Link-link crossings (c) Node-link crossings (d) Link-link crossings on the shortest paths (e) Node-link crossings on the shortest paths (f) Euclidean distance of the shortest paths (g) Geodesic path deviation (h) Degrees of nodes on the shortest paths (i) Crossing angles on the shortest paths (j) Turning angles on the shortest paths Fig. 10: Features for a sample network used in the study with 25 nodes and 50 edges (density 2). AIC BIC lm1 edges 0.70 77.6 82.8 0.84 nodes 0.70 77.9 83.1 0.84 gCrossingAngle 0.70 78.2 83.4 0.84 gLLCrossingCount 0.69 79.7 84.9 0.83 gLNCrossingCount 0.68 79.9 85.1 0.83 gCrossingCount 0.62 88.0 93.2 0.79 gCrossingLLAngleLNCount 0.58 91.8 97.0 0.77 dsCrossingAngle 0.53 96.5 101.7 0.74 sCrossingAngle 0.50 99.3 104.5 0.71 sLLCrossingCount 0.47 101.4 106.6 0.70 dsLNCrossingCount 0.47 101.38 106.60 0.70 sLNCrossingCount 0.45 102.82 108.03 0.68 Table 2: Top 12 linear models with one predictor. AIC BIC lm1 lm2 nodes + gCrossingLLAngleLNCount 0.84 51.4 58.4 0.60 0.45 gCrossingAngle + LengthOfShortestPath 0.84 52.8 59.8 0.92 0.39 gLLCrossingCount + LengthOfShortestPath 0.83 54.1 61.1 0.92 0.40 nodes + gCrossingAngle 0.82 57.4 64.4 0.49 0.49 nodes + gLLCrossingCount 0.81 58.4 65.4 0.50 0.48 LengthOfShortestPath + edges 0.80 61.4 68.3 0.32 0.89 gCrossingAngle + sLNCrossingCount 0.77 67.5 74.4 0.66 0.33 gLLCrossingCount + sLNCrossingCount 0.76 68.6 75.5 0.65 0.34 gCrossingAngle + dsLNCrossingCount 0.76 69.8 76.8 0.66 0.31 gLLCrossingCount + dsLNCrossingCount 0.75 70.9 77.8 0.65 0.32 sLNCrossingCount + gCrossingLL AngleLNCount 0.71 77.1 84.1 0.41 0.57 dsLNCrossingCount + gCrossingLLAngleLNCount 0.70 78.7 85.6 0.41 0.56 Table 3: Top 12 linear models with two predictors. paths than the shortest one or because in the stimuli the shortest path was deliberately constructed to run from one side of the graph to the other. We also see that the best combination of two predictors utilise global measures of crossings with either the number of nodes or the number of nodes on the shortest path. The most similar study is that of [63]. They also studied the effect of different graph features on the difficulty of finding the shortest path. They evaluated the effect of shortest path length (number of nodes) and its Euclidean length, shortest path straightness, degree of nodes on shortest path, number of crossings and average crossing angles on shortest path, as well as the global number of crossings. They did not vary the number of nodes in the stimuli or consider density or number of edges. They found that the two best predictors were shortest path length and straightness of the path. In particular they found that the global number of crossings was not a good predictor but that the number of crossings on the shortest path was. This contrasts with our finding that global predictors such as number of nodes or number of crossings are more influential than number of crossings or straightness of the shortest path. We believe this is because the graphs used in [63] were small—only 42 nodes—and relatively sparse with only a few crossings. Consequently the task was much easier: 93% of responses were correct. We suspect that this means that the participants quickly found the shortest path and so its features dominated, while in our harder experiment participants considered many other paths to the shortest path and so global features were more important. Our study had a number of limitations. The first is that it was restricted to scale-free graphs and that we used a particular layout algorithm. While we believe that our results apply generally to node-link diagrams, further studies are required to validate this. We considered only one task: finding the shortest path between two nodes. We believe that this complex task is representative of a wide-range of path following tasks and that it involves a variety ‘sub-tasks’, such as dis-ambiguating edges, inspecting neighbours, re- membering previously inspected nodes, browsing through paths, and so on. Nonetheless other tasks should be considered in future work. We also realised a number of limitations of the study with respect to our measurement and analysis of the physiological data. Measurements of pupil dilation are sensitive to illumination [10]. A limitation of this study was not considering the effect of the stimuli on illumination. With larger or denser graphs the screen is slightly darker, reducing illumi- nation and so increasing pupil dilation. This could potentially explain some of the increase in pupil dilation as task difficulty increased. How- ever, we believe the impact was minimal as the study was conducted in a well-lit office and we actually see that pupil dilation decreased when the stimuli became sufficiently difficult. Whilst the brain activity patterns revealed in the EEG data accord with the limited available literature, these results should be viewed cautiously. Not only was there a significant level of noise in the data but the EEG results are likely to be more nuanced. In particular, we ignored individual strategy differences or spatial abilities which are likely to significantly impact on the brain regions used in the task. While participants were instructed not to use the mouse while completing the task, a few participants began to use the mouse to trace over the path before being asked not to. This may have resulted in a greater amount of motor, and more importantly, pre-motor cortex activity on the opposite brain hemisphere to the hand being used. This could have resulted in minor differences in brain activity between the easy and hard stimuli in left frontal-central regions, if the right hand was used more with the mouse. However we see little indication of this. Future studies, should take steps to ensure that participants are not able to use the mouse. It is also important to note the limitations of EEG analysis. While it gives a broad indication of brain activity it is not possible to confidently point out detailed brain regions from our results; source localisation techniques [52] are required to allow certainty. We have explored the perceptual limitations of node-link diagrams for a representative connectivity task, finding the shortest path between two nodes. We found that the usefulness of node-link diagrams rapidly deteriorates as the number of nodes and edges increases. For small- world graphs with 50 or more nodes and a density (ratio of edges to nodes) of 6, participants were unable to correctly answer in more than half of the trials. This was also the case for graphs with a density of 2 and more than 100 nodes. To the best of our knowledge this is the first study to consider physiological measures of cognitive load (EEG, pupil dilation and heart rate variation) for a network visualisation task. We found that these measures of load initially increase with task hardness but then decrease, presumably because participants give up. The analysis of EEG data was particularly revealing, indicating that the left frontal, right centro- parietal and left parieto-occipital regions display increased cognitive load for our task. Trace activation was also found in the right frontal region. We hope that our experience will inform future visualisation researchers who also wish to use physiological measures to reveal cognitive load for other kinds of visualisation tasks. We also explored the effects of global network layout features such as size or number of crossings and features of the shortest path such as length or straightness on task difficulty. We found that the global measures such as number of crossings had a greater impact than features of the shortest path such as straightness. This is in contrast to an earlier study of Ware [63] and may reflect the harder stimuli used in our study. Our results can guide visualisation designers when creating visu- alisations that must scale to larger graph data (e.g., setting limits on neighbourhood size in overview-and-detail techniques using node-link diagrams for detail). We also hope this work stimulates development of new techniques that demonstrably scale to larger, more complex networks such as summary representations [67]. The authors wish to acknowledge the support of the Australian Research Council (ARC) through DP140100077. Yalong Yang was partially supported by a Harvard Physical Sciences and Engineering Accelerator Award. We also wish to thank all our participants for their time and our reviewers for their comments and feedback. [1] g.tec: http://gtec.at. Heart Rate Variability Logger: https://www.marcoaltini.com/blog/heart- [3] Tobii Pro: http://tobiipro.com. [4] webcola: https://ialab.it.monash.edu/webcola/. R. Albert. Scale-free networks in cell biology. Journal of cell science, 118(21):4947–4957, 2005. E. W. Anderson, K. C. Potter, L. E. Matzen, J. F. Shepherd, G. A. Preston, and C. T. Silva. A user study of visualization effectiveness using eeg and cognitive load. In Computer Graphics Forum, vol. 30, pp. 791–800. Wiley Online Library, 2011. P. Antonenko, F. Paas, R. Grabner, and T. Van Gog. Using electroen- cephalography to measure cognitive load. Educational Psychology Review, 22(4):425–438, 2010. D. Archambault, H. C. Purchase, and B. Pinaud. The readability of path- preserving clusterings of graphs. In Computer Graphics Forum, vol. 29, pp. 1173–1182. Wiley Online Library, 2010. A.-L. Barab asi and R. Albert. Emergence of scaling in random networks. science, 286(5439):509–512, 1999. J. Beatty, B. Lucero-Wagoner, et al. The pupillary system. Handbook of psychophysiology, 2:142–162, 2000. R. S. Blumenfeld, C. M. Parks, A. P. Yonelinas, and C. Ranganath. Putting the Pieces Together: The Role of Dorsolateral Prefrontal Cortex in Rela- tional Memory Encoding. Journal of Cognitive Neuroscience, 23(1):257– 265, Jan. 2011. doi: 10. 1162/jocn.2010.21459 O. Bratfisch et al. Perceived item-difficulty in three tests of intellectual performance capacity. 1972. R. Cabeza, E. Ciaramelli, and M. Moscovitch. Cognitive contributions of the ventral parietal cortex: an integrative theoretical account. Trends in Cognitive Sciences, 16(6):338–352, June 2012. doi: 10. 1016/j.tics. 2012. R. Cabeza and L. Nyberg. Imaging Cognition II: An Empirical Review of 275 PET and fMRI Studies. Journal of Cognitive Neuroscience, 12(1):1– 47, Jan. 2000. doi: 10. 1162/08989290051137585 L. J. Castro-Meneses, J.-L. Kruger, and S. Doherty. Validating theta power as an objective measure of cognitive load in educational video. Educational Technology Research and Development, 68(1):181–202, 2020. P. Chandler and J. Sweller. Cognitive load theory and the format of instruction. Cognition and instruction, 8(4):293–332, 1991. J. Cohen. Statistical power analysis for the behavioral sciences. Academic press, 2013. A. Costello and J. Osborne. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(1), Nov. 2019. doi: 10.7275/ A. Dan and M. Reiner. Real Time EEG Based Measurements of Cogni- tive Load Indicates Mental States During Learning. JEDM Journal of Educational Data Mining, 9(2):31–44, Dec. 2017. Number: 2. doi: 10. J. Q. Dawson, T. Munzner, and J. McGrenere. A search-set model of path tracing in graphs. Information Visualization, 14(4):308–338, 2015. D. De Waard. The measurement of drivers’ mental workload. Groningen University, Traffic Research Center Netherlands, 1996. C. Dunne and B. Shneiderman. Motif simplification: improving network visualization readability with fan, connector, and clique glyphs. In Pro- ceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3247–3256. ACM, 2013. S. G. Eick and A. F. Karr. Visual scalability. Journal of Computational and Graphical Statistics, 11(1):22–43, 2002. M. Faloutsos, P. Faloutsos, and C. Faloutsos. On power-law relationships of the internet topology. In ACM SIGCOMM computer communication review, vol. 29, pp. 251–262. ACM, 1999. I. Farkas, I. Der enyi, H. Jeong, Z. Neda, Z. Oltvai, E. Ravasz, A. Schubert, A.-L. Barab asi, and T. Vicsek. Networks in life: Scaling properties and eigenvalue spectra. Physica A: Statistical Mechanics and its Applications, 314(1-4):25–34, 2002. A. Field, J. Miles, and Z. Field. Discovering statistics using R. Sage publications, 2012. C. Gaser and G. Schlaug. Brain Structures Differ between Musicians and Non-Musicians. The Journal of Neuroscience, 23(27):9240–9245, Oct. 2003. doi: 10. 1523/JNEUROSCI.23-27-09240.2003 M. Ghoniem, J.-D. Fekete, and P. Castagliola. A comparison of the readability of graphs using node-link and matrix-based representations. In Information Visualization, 2004. INFOVIS 2004. IEEE Symposium on, pp. 17–24. IEEE, 2004. N. Greffard, F. Picarougne, and P. Kuntz. Visual community detection: An evaluation of 2d, 3d perspective and 3d stereoscopic displays. In International Symposium on Graph Drawing, pp. 215–225. Springer, 2011. g.tec medical engineering GmbH. g.Nautilus wireless biosignal acquisi- tion: Instructions for use, Oct 2017. V1.16.06. E. Haapalainen, S. Kim, J. F. Forlizzi, and A. K. Dey. Psycho- physiological measures for assessing cognitive load. In Proceedings of the 12th ACM international conference on Ubiquitous computing, pp. 301–310. ACM, Copenhagen Denmark, Sept. 2010. doi: 10.1145/1864349 W. Huang. Using eye tracking to investigate graph layout effects. In Visu- alization, 2007. APVIS’07. 2007 6th International Asia-Pacific Symposium on, pp. 97–100. IEEE, 2007. W. Huang, P. Eades, and S.-H. Hong. Measuring effectiveness of graph visualizations: A cognitive load perspective. Information Visualization, 8(3):139–152, 2009. W. Huang, S.-H. Hong, and P. Eades. Effects of crossing angles. In Visualization Symposium, 2008. PacificVIS’08. IEEE Pacific, pp. 41–46. IEEE, 2008. J. Jacobs, G. Hwang, T. Curran, and M. J. Kahana. EEG oscillations and recognition memory: Theta correlates of memory retrieval and de- cision making. NeuroImage, 32(2):978–987, Aug. 2006. doi: 10.1016/j. neuroimage.2006. 02.018 T. Jankun-Kelly, T. Dwyer, D. Holten, C. Hurter, M. N C. Weaver, and K. Xu. Scalability considerations for multivariate graph vi- sualization. In Multivariate Network Visualization, pp. 207–235. Springer, L. R. M. Jonathan Z. Bakdash. Repeated Measures Correlation. Frontiers in Psychology, 8:456, 2017. doi: 10.3389/fpsyg.2017. 00456 J. H. Kahn. Factor Analysis in Counseling Psychology Research, Training, and Practice: Principles, Advances, and Applications. The Counseling Psy- chologist, 34(5):684–718, Sept. 2006. doi: 10.1177/0011000006286347 R. Kaplan, J. King, R. Koster, W. D. Penny, N. Burgess, and K. J. Friston. The neural representation of prospective choice during spatial planning and decisions. PLoS biology, 15(1), 2017. R. Kaplan, J. King, R. Koster, W. D. Penny, N. Burgess, and K. J. Friston. The Neural Representation of Prospective Choice during Spatial Planning and Decisions. PLOS Biology, 15(1):e1002588, Jan. 2017. doi: 10.1371/ journal.pbio. 1002588 R. Keller, C. M. Eckert, and P. J. Clarkson. Matrices or node-link diagrams: which visual representation is better for visualising connectivity models? Information Visualization, 5(1):62–76, 2006. W. Klimesch. EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Research Reviews, 29(2-3):169–195, Apr. 1999. doi: 10.1016/S0165-0173(98)00056-3 S. G. Kobourov, S. Pupyrev, and B. Saket. Are crossings important for drawing large graphs? In International Symposium on Graph Drawing, pp. 234–245. Springer, 2014. A. Lee and D. Archambault. Communities found by users–not algorithms: Comparing human and algorithmically generated communities. In Pro- ceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 2396–2400. ACM, 2016. M. R. Marner, R. T. Smith, B. H. Thomas, K. Klein, P. Eades, and S.-H. Hong. Gion: Interactively untangling large graphs on wall-sized displays. In International Symposium on Graph Drawing, pp. 113–124. Springer, G. Melancon. Just how dense are dense graphs in the real world?: a methodological note. In Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization, pp. 1–7. ACM, 2006. T. Moscovich, F. Chevalier, N. Henry, E. Pietriga, and J.-D. Fekete. Topology-aware navigation in large networks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2319– 2328. ACM, 2009. D. Nekrasovski, A. Bodnar, J. McGrenere, F. Guimbreti ere, and T. Mun- zner. An evaluation of pan & zoom and rubber sheet navigation with and without an overview. In Proceedings of the SIGCHI conference on Human Factors in computing systems, pp. 11–20. ACM, 2006. M. Okoe, R. Jianu, and S. G. Kobourov. Node-link or adjacency matri- ces: Old question, new insights. IEEE transactions on visualization and computer graphics, 2018. R. Oostenveld, P. Fries, E. Maris, and J.-M. Schoffelen. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Computational Intelligence and Neuroscience, 2011:1–9, 2011. doi: 10. 1155/2011/156869 F. G. Paas and J. J. Van Merri enboer. The efficiency of instructional condi- tions: An approach to combine mental effort and performance measures. Human factors, 35(4):737–743, 1993. R. D. Pascual-Marqui. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods and Findings in Experimental and Clinical Pharmacology, 24 Suppl D:5–12, 2002. E. M. M. Peck, B. F. Yuksel, A. Ottley, R. J. Jacob, and R. Chang. Using fnirs brain sensing to evaluate information visualization interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 473–482. ACM, 2013. H. Purchase. Which aesthetic has the greatest effect on human under- standing? In International Symposium on Graph Drawing, pp. 248–261. Springer, 1997. H. C. Purchase, R. F. Cohen, and M. James. Validating graph drawing aesthetics. In International Symposium on Graph Drawing, pp. 435–446. Springer, 1995. D. W. Rowe, J. Sibert, and D. Irwin. Heart rate variability: indicator of user state as an aid to human-computer interaction. In Proceedings of the SIGCHI conference on Human factors in computing systems - CHI ’98, pp. 480–487. ACM Press, Los Angeles, California, United States, 1998. doi: 10.1145/274644. 274709 B. Saket, C. Scheidegger, S. G. Kobourov, and K. B orner. Map-based visualizations increase recall accuracy of data. In Computer Graphics Forum, vol. 34, pp. 441–450. Wiley Online Library, 2015. F. Shaffer and J. P. Ginsberg. An Overview of Heart Rate Variability Metrics and Norms. Frontiers in Public Health, 5:258, Sept. 2017. doi: 10.3389/fpubh. 2017.00258 J. Sweller. Cognitive load during problem solving: Effects on learning. Cognitive science, 12(2):257–285, 1988. J. Sweller, J. J. Van Merrienboer, and F. G. Paas. Cognitive architecture and instructional design. Educational psychology review, 10(3):251–296, L. T. Trujillo and J. J. B. Allen. Theta EEG dynamics of the error-related negativity. Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology, 118(3):645–668, Mar. 2007. doi: 10.1016/j. clinph.2006.11. 009 C. Ware and R. Bobrow. Supporting visual queries on medium-sized node–link diagrams. Information Visualization, 4(1):49–58, 2005. C. Ware, H. Purchase, L. Colpoys, and M. McGill. Cognitive measure- ments of graph aesthetics. Information visualization, 1(2):103–110, 2002. M. Werkle-Bergner, V. Mller, S.-C. Li, and U. Lindenberger. Cortical EEG correlates of successful memory encoding: implications for lifespan comparisons. Neuroscience and Biobehavioral Reviews, 30(6):839–854, 2006. doi: 10. 1016/j.neubiorev. 2006.06.009 D. J. White, M. Congedo, J. Ciorciari, and R. B. Silberstein. Brain oscillatory activity during spatial navigation: theta and gamma activity link medial temporal and parietal regions. Journal of cognitive neuroscience, 24(3):686–697, 2012. V. Yoghourdjian, D. Archambault, S. Diehl, T. Dwyer, K. Klein, H. C. Purchase, and H.-Y. Wu. Exploring the limits of complexity: A survey of empirical studies on graph visualisation. Visual Informatics, 2(4):264–282, 2018. doi: 10. 1016/j.visinf.2018. 12.006 V. Yoghourdjian, T. Dwyer, K. Klein, K. Marriott, and M. Wybrow. Graph thumbnails: Identifying and comparing multiple graphs at a glance. IEEE Transactions on Visualization and Computer Graphics, 2018.
{"url":"https://www.researchgate.net/publication/343735270_Scalability_of_Network_Visualisation_from_a_Cognitive_Load_Perspective","timestamp":"2024-11-13T03:16:30Z","content_type":"text/html","content_length":"792306","record_id":"<urn:uuid:7d05426d-5e00-4920-a458-86dea4043732>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00623.warc.gz"}
Simple Sverchok 04 - Apply Matrix This is part four of my introduction to Sverchok the parametric node geometry add-on for Blender. This post I'm sticking to my theme of copying one object to the structure (either the vertices or polygons) or another mesh. There is an awful lot more than just copying that can be done with Sverchok (see here for example) but copying is good for explaining the basics. The very simple node tree below takes a copy of the cube, scales it to half size and places a copy on each each of the corners of the original large cube. The scale values are accessed by clicking on the scale drop down. We could also connect a "Vector In" node here to give the three scale values. Very straight forward. How would we continue this copy in a recursive fashion to produce a fractal structure? That is copy a small cube (64 in total) to each corner of the eight mid-sized cubes. Then repeat this with 512 tiny cubes etc. This would produce a 3D version of the T-square fractal. The "Viewer Draw" node takes a list of vertices from the "Object In" node and a list of matrices defining the location of each of vertices of the original cube and a scale factor to apply to each of the copies. To do another level of copying we need a list of matrices that give the location of the corners of the eight mid-sized cubes. The "Matrix Apply" node takes a list of vertices and a list of matrices and produces a nested list of vertices for all the mid sized cubes. Take a look at the output of the "Matrix Apply" node with the "Viewer Text" node. It consists of a nested list with three levels. The outermost level (level 1) contains 8 objects, one for each cube. Each of these level 1 objects contains eight lists (level 2), one for each corner of a cube. Each of these level 2 list contains three numbers (level 3) for the x, y, z coordinates of the vertex. (8) object(s) =0= (8) (1.5, 1.5, -1.5) (1.5, 0.5, -1.5) (0.5, 0.5, -1.5) (0.5, 1.5, -1.5) (1.5, 1.5, -0.5) (1.5, 0.5, -0.5) (0.5, 0.5, -0.5) (0.5, 1.5, -0.5) =1= (8) (1.5, -0.5, -1.5) (1.5, -1.5, -1.5) (0.5, -1.5, -1.5) (0.5, -0.5, -1.5) (1.5, -0.5, -0.5) (1.5, -1.5, -0.5) (0.5, -1.5, -0.5) (0.5, -0.5, -0.5) We need to flatten this list so it contains 1 object with a list of 64 vertices. This is done using the "List Join" node with the "JoinLevelLists" set to 2. Check with the "Viewer Text" node that this is what happened. (1) object(s) =0= (64) (1.5, 1.5, -1.5) (1.5, 0.5, -1.5) (0.5, 0.5, -1.5) (0.5, 1.5, -1.5) (1.5, 1.5, -0.5) We then feed this list into another "Matrix In" node, set the scale values and send the resulting list of 64 matrices to another "Viewer Draw" node. We can repeat this process as many times as we like to get a fractal structure. Another simple 3D fractal is the Menger Sponge. The sponge (right) and its negative space (left) are shown below. The sponge and its negative would fit together to completely fill a cube. Looking at these, the negative sponge looks easier to build by copying. The basic underlying structure is shown below. The structure consists of ever smaller copies of the central cross. The cross can be made by extruding each face of a cube. The positions to copy to are given by the wire frame cube. This is made by deleting the faces from a cube and subdividing each edge. The cross should be three units across and the wire frame cube 2 units across. For the first level of copying (the bottom row on the following node tree) we just copy the Cross to the Cube vertices of the wireframe. The second level of copying is done in the top row. The scale factors for each "Matrix In" node can be calculated or found by trial and error. They are for the two nodes in the top row (from left to right) 0.33 (1/3) and 0.1666 (1/6) and 0.5 (1/2) for the "Matrix In" node in the bottom row. More levels of smaller copies can be made in a similar way. Once we understand the use of a separate mesh to define the copy to points, constructing the positive Menger Sponge is just as simple. We start by making the base unit which is the boolean difference between a cube and our previous base cross object. The wire frame object is the same as for the negative sponge. Once having set up the recursive scaled copy of the wire frame mesh we only need to copy one size of unit sponge object to each vertex position. The scale factor is 1/9 for a a basic sponge unit of dimension 3 and three levels of recursion.
{"url":"https://elfnor.com/simple-sverchok-04-apply-matrix.html","timestamp":"2024-11-05T16:40:50Z","content_type":"text/html","content_length":"51939","record_id":"<urn:uuid:4bed5cfe-1570-428e-88d3-0effc32575bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00535.warc.gz"}
Explain Pre-Algebra to me, please I'm looking through pre-algebra curriculum and I'm having a hard time figuring out what makes it "pre-algebra," as a lot of it looks similar to what my girls are doing now (5th grade) and what they'll be doing in 6th grade math. It seems to be a general review of the concepts they will have already studied. Specifically, I'm looking at the list of subjects covered in Derek Owens pre-algebra. http://www.derekowens.com/course_info_prealgebra.php By the time my girls are in 7th grade, they will have already covered at least 80% of that. Is the pre-algebra difference in application? Integration? Is there a different way of putting it all together that is preparing them for Algebra? Is it a transition class, preparing them for a larger workload or a bigger output? Honestly, all I remember about pre-algebra was that it was really easy and a review of things I had learned in elementary school. I'm having a hard time extracting the point of it all. Please enlighten me! Pre algebra is a a fancy name for the skills a student must have before starting algebra. Students need to be proficient in arithmetic with positive and negative integers and fractions (including percent and decimals.) Aside from that, there really is no such thing as "prealgebra". I used Derek Owens' prealgebra course with my son last year. Prealgebra is basically a great big review of arithmetic with a small foray into algebra. The DO course does this admirably. Most chapters review arithmetic and then take it the extra step into algebra. It's a solid course and I recommend it for students who need the extra year prior to starting algebra. That said, for a kid who has mastered arithmetic and won't benefit from review, an algebra course like Jacobs Algebra that has a gentle introduction will work just fine. I did that with my older son and it worked well. I used Derek Owens' prealgebra course with my son last year. Prealgebra is basically a great big review of arithmetic with a small foray into algebra. The DO course does this admirably. Most chapters review arithmetic and then take it the extra step into algebra. It's a solid course and I recommend it for students who need the extra year prior to starting algebra. That said, for a kid who has mastered arithmetic and won't benefit from review, an algebra course like Jacobs Algebra that has a gentle introduction will work just fine. I did that with my older son and it worked well. Yay. Thanks for this. I really love the look of Derek Owen's classes. Pre algebra is a a fancy name for the skills a student must have before starting algebra. Students need to be proficient in arithmetic with positive and negative integers and fractions (including percent and decimals.) Aside from that, there really is no such thing as "prealgebra". OK, this is what I was thinking, but I wanted to make sure there wasn't something special I was missing. Thanks. Most of us back in the day never bothered with "pre Algebra." It was offered to kids whose math skills were not strong enough to handle algebra after a standard arithmetic curriculum; it served as a bridge course between arithmetic and algebra. Theoretically it should still no longer be necessary, as it does not exist as any type of mathematics (just as pre-calc is non-existent). The thing to watch for is whether a publsher writes the scope and sequence as if everyone needs PA now that it is popular; skipping it could lead to skipping exponents or other instruction. Pre algebra is a a fancy name for the skills a student must have before starting algebra. Students need to be proficient in arithmetic with positive and negative integers and fractions (including percent and decimals.) Aside from that, there really is no such thing as "prealgebra". As much as Maria Miller insists her MM6 isn't a prealgebra course, my husband looked it over before we started is very confident it will cover all the arithmetic before we hit algebra (adding in some extra work with DragonBox and HOE for equations). He never did "prealgebra" and transitioned fine into algebra (I did take a "prealgebra" course, but I'm significantly younger than him). I think it depends on the publisher. We are doing Horizons PreAlgebra and it gets into Algebra topics. I think it depends on the publisher. We are doing Horizons PreAlgebra and it gets into Algebra topics. So does math mammoth 6. It does depend on the publisher. Definitely. Pre-Algebra is typically a big review of the 4 operations, fractions, decimals, percents, ratios/proportions, and a hodgepodge of things that typically don't get taught earlier like exponents, square and cube roots, using prime factorization to calculate the GCF and LCM, scientific notation, negative numbers, etc. If you skip straight from a 5th or 6th grade book to algebra 1 without doing pre-algebra, I would pick something like Jacob's that includes these topics. FWIW, I decided to accelerate my DD by skipping Singapore 6 and going directly into Discovering Math 1 (now renamed 7A/B). Pre algebra is a a fancy name for the skills a student must have before starting algebra. Students need to be proficient in arithmetic with positive and negative integers and fractions (including percent and decimals.) Aside from that, there really is no such thing as "prealgebra". I have seen many reviews here from people whose dc finished R&S's 8th grade text, which does not call itself "pre-algebra," and went on to successfully do algebra and above in other texts. Most of us back in the day never bothered with "pre Algebra." It was offered to kids whose math skills were not strong enough to handle algebra after a standard arithmetic curriculum; it served as a bridge course between arithmetic and algebra. What do you consider "back in the day?" I took prealgebra in 7th grade in 1984-1985, and it was an honors-level course. I think prealgebra courses typically review elementary math while getting more deeply into how and why the procedures work - for example, going over the mathematical proof for why, when dividing by a fraction, "ours not to reason why - just invert and multiply." Our course also transitioned us into using the kind of notation we'd need in higher level math, like not using x to mean "times." I think prealgebra courses typically review elementary math while getting more deeply into how and why the procedures work - for example, going over the mathematical proof for why, when dividing by a fraction, "ours not to reason why - just invert and multiply." Shouldn't a good math program teach the Why right along with the How? I can dream, right? It just would never occur to me to teach a procedure without the reasoning behind it. Shouldn't a good math program teach the Why right along with the How? I can dream, right? It just would never occur to me to teach a procedure without the reasoning behind it. I agree, but there are a lot of different ways people approach teaching math, and what I think we are seeing from this discussion is that depending on how math was taught throughout the elementary years will determine whether pre-A is necessary or not. If a child has a solid *conceptual* understanding of arithmetic, you can teach any bits and pieces like exponents as they come up, or over a couple week mini-course at the start of the year. You can also see them covered in places like Khan Academy if you find gaps. OTOH, if your child has a more skills and drills and less depth/why approach to arithmetic and math, a pre-algebra course would probably be a good bridge. In my mind, teaching my kids to count their blocks was pre-algebra and pouring milk into batter was pre-calculus, so they have been getting the WHY and depth (and their brains are naturally wired for abstraction like mine) from Day 1. :D What do you consider "back in the day?" I took prealgebra in 7th grade in 1984-1985, and it was an honors-level course. I think prealgebra courses typically review elementary math while getting more deeply into how and why the procedures work - for example, going over the mathematical proof for why, when dividing by a fraction, "ours not to reason why - just invert and multiply." Our course also transitioned us into using the kind of notation we'd need in higher level math, like not using x to mean 7th was roughly early 80's. In our district, preA was strictly remedial, and very few took it. Formal proofs were introduced in algebra; concepts were always part of math from the ground up in K-8. Most kids didn't need a transition course for parenthetical and dot notation; it was introduced, then you used it. If you were advanced, you simply walked over to the high school and took algebra in 7th, or were among the few who took algebra in 8th. You had to pass the high school mid-term and final. If you were a typical average to strong student, then you took whatever math they offered through 8th and started algebra in 9th. If you needed more time to develop, then you were offered preA in Shouldn't a good math program teach the Why right along with the How? I can dream, right? It just would never occur to me to teach a procedure without the reasoning behind it. I can tell you that for me, if you explain the why first, I'll be like :blink: But if you teach me how to do it, and how to do it multiple ways, and then we work on it until I know that I know that I know, and *then* you explain why, I'll be like :w00t: I graduated from high school in 1969. My "back in the day" was much earlier than yours. :lol: Some people have gotten so much into the importance of teaching why something works that they don't teach children how to use what they know. Understanding that addition is this group plus this group is all well and good, but you have to know your math facts if you want to balance your check book, KWIM? I can tell you that for me, if you explain the why first, I'll be like :blink: But if you teach me how to do it, and how to do it multiple ways, and then we work on it until I know that I know that I know, and *then* you explain why, I'll be like :w00t: I graduated from high school in 1969. My "back in the day" was much earlier than yours. :lol: Some people have gotten so much into the importance of teaching why something works that they don't teach children how to use what they know. Understanding that addition is this group plus this group is all well and good, but you have to know your math facts if you want to balance your check book, KWIM? I think you have to teach the kid you have, not the kid you wish you had. Not everyone processes information identically, and there is plenty of room in the world for people who just want to know how to balance the checkbook. I get equally irritated with parents who try to turn a mechanic into a mathematician and parents who try to reign in a mathematician and force him to be a mechanic, all because "this is how it ought to be" or because of their own fears and biases about mathematics (ie rocking excessive drill on a kid who doesn't need it, or excess theory on a kid who just wants to be functional). NB by mechanic I am referring the approach to mathematical problem solving, not the profession of auto repair. I am not disparaging the intelligence of auto repair workers.
{"url":"https://forums.welltrainedmind.com/topic/435763-explain-pre-algebra-to-me-please/","timestamp":"2024-11-13T15:22:50Z","content_type":"text/html","content_length":"395186","record_id":"<urn:uuid:716d989e-5c23-40a2-8944-d2a4be74751f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00039.warc.gz"}
All the maths you need for machine learning for FREE! Python Programmer 7 Sept 202003:22 TLDRDiscover a free resource for mastering the math behind machine learning. The book 'Mathematics for Machine Learning' offers a comprehensive guide covering linear algebra, calculus, and statistics, essential for understanding and building machine learning algorithms. Though not for complete beginners, it's perfect for those with a high school math foundation. The book is available for free online, complete with exercises and tutorials, making it an invaluable resource for anyone looking to enhance their machine learning knowledge without breaking the bank. • 📚 The video introduces a free resource for learning the math needed for machine learning. • 🧠 Mathematics is essential in machine learning due to the prevalence of vectors and matrices. • 📈 Linear algebra, calculus, and probability are key areas of math covered for understanding machine learning algorithms. • 📘 The book 'Mathematics for Machine Learning' is recommended but can be expensive. • 🆓 The same topics can be found for free by the same authors, suitable for those with high school level math. • 🔢 The book assumes a basic understanding of vectors, calculus, and matrices, directing beginners to free resources like Khan Academy. • 🛠️ It is designed for those without a math degree but who want to understand the math behind deploying and building machine learning algorithms. • 📚 The book is divided into two parts: mathematical foundations and applying those foundations to build machine learning algorithms. • 🈚️ The book contains no code, focusing solely on the mathematical concepts. • 📖 The explanations are clear, and the book does a good job of explaining mathematical symbols for non-mathematicians. • 🔗 The book is available as a free PDF download from the authors' website, which also includes teaching exercises and tutorials. Q & A • What is the importance of mathematics in machine learning? -Mathematics is crucial in machine learning because most concepts are based on vectors and matrices, requiring a good understanding of linear algebra, calculus, and probability and statistics. • What is the title of the book mentioned in the transcript that covers mathematics for machine learning? -The title of the book is 'Mathematics for Machine Learning'. • Is the book 'Mathematics for Machine Learning' available for free? -Yes, the book 'Mathematics for Machine Learning' is available for free from the authors. • Who is the target audience for the book 'Mathematics for Machine Learning'? -The book is aimed at people who do not have a mathematics degree but want to understand enough math to deploy and build machine learning algorithms. • What are the prerequisites for understanding the content of 'Mathematics for Machine Learning'? -The prerequisites include having the equivalent of high school mathematics, with basic knowledge of vectors, calculus, and matrices. • How is the book 'Mathematics for Machine Learning' structured? -The book is divided into two parts: the first part covers mathematical foundations like linear algebra, matrix decomposition, vector calculus, probability, and distributions. The second part shows how to use these foundations to build machine learning algorithms. • Does the book 'Mathematics for Machine Learning' include any coding examples? -No, the book does not include code; it focuses solely on teaching the mathematical concepts. • What additional resources are available on the book's website? -The book's website offers downloadable PDFs of the book, a table of contents, and teaching exercises for further learning. • How can one access the book 'Mathematics for Machine Learning' and its resources? -You can access the book and its resources by visiting the book's website, where you can download the PDF and find additional tutorials and exercises. • What is the publisher of the book 'Mathematics for Machine Learning'? -The book is published by Cambridge University Press. • How does the book 'Mathematics for Machine Learning' explain mathematical symbols? -The book is noted for explaining mathematical symbols in a clear and well-explained manner, making it accessible for non-mathematicians. 📚 Free Resource for Machine Learning Mathematics The speaker introduces the concept of a free, comprehensive resource for learning the mathematics necessary for machine learning. They emphasize the importance of understanding linear algebra, calculus, and statistics in this field. The speaker then mentions a book, 'Mathematics for Machine Learning,' which they found on Amazon but discovered is also available for free from the authors. It's noted that the book is not for complete beginners but requires a high school level of mathematics. The book is divided into two parts: the first covering mathematical foundations and the second applying these to build machine learning algorithms. The speaker appreciates the clarity and explanation of mathematical symbols in the book, which can be daunting for non-mathematicians. The video features a humorous moment with turkeys walking by, adding a light-hearted touch to the educational content. The book is published by Cambridge University Press, and the speaker highly recommends the resource, mentioning that it can be downloaded as a PDF from the book's website, which also includes teaching exercises. 💡Machine Learning Machine learning is a subset of artificial intelligence that enables computers to learn from and make predictions or decisions based on data. It is central to the video's theme as the entire discussion revolves around understanding the mathematical concepts necessary for effectively implementing machine learning algorithms. The script emphasizes the importance of math in machine learning, stating that 'practically everything is either a vector or a matrix'. 💡Linear Algebra Linear algebra is a branch of mathematics that deals with linear equations, linear transformations, and their representations in vector spaces and through matrices. In the context of the video, linear algebra is a foundational math area needed for machine learning because it provides the tools to understand and manipulate vectors and matrices, which are ubiquitous in machine learning 💡Gradient Descent Gradient descent is an optimization algorithm used in machine learning to minimize a function by iteratively moving in the direction of the steepest descent as defined by the negative of the gradient. The script mentions that gradient descent 'relies on calculus,' highlighting its importance in the optimization process of machine learning algorithms. Calculus is a branch of mathematics that deals with limits, derivatives, integrals, and infinite series. It is essential in machine learning for understanding how algorithms learn from data, particularly in the context of optimization problems like gradient descent. The video script underscores the reliance of gradient descent on calculus. Probability is the measure of the likelihood that a given event will occur. In machine learning, probability is crucial for understanding the uncertainty inherent in predictions and for developing statistical models. The script mentions probability as one of the key areas of math needed for machine learning. Statistics is the discipline that concerns the collection, analysis, interpretation, presentation, and organization of data. It plays a vital role in machine learning for model evaluation, hypothesis testing, and data analysis. The video script includes statistics as a key mathematical concept necessary for those looking to understand and build machine learning algorithms. 💡Mathematics for Machine Learning This refers to the book 'Mathematics for Machine Learning' mentioned in the script, which is a resource that covers the mathematical concepts needed for machine learning. The book is not for complete beginners and assumes a high school level of understanding of math. It is highlighted as being available for free from the authors, which is a significant point in the video's message about accessible learning resources. A vector is a mathematical object that has both magnitude and direction, or in the context of machine learning, it can be a one-dimensional array of numbers. The script notes that 'practically everything is either a vector or a matrix,' indicating the fundamental role vectors play in representing data in machine learning. A matrix is a two-dimensional array of numbers arranged in rows and columns. In the script, matrices are mentioned alongside vectors as fundamental structures in machine learning, where they are used for various operations including transformations and calculations. 💡Khan Academy Khan Academy is a non-profit educational organization that provides free online courses, lessons, and practice exercises. In the video script, it is recommended for those who do not have a high school level of math to get up to speed before diving into more advanced machine learning mathematics. 💡Cambridge University Press Cambridge University Press is a renowned publisher of academic and educational content. The script mentions that the book 'Mathematics for Machine Learning' is published by Cambridge University Press, indicating the quality and credibility associated with the book. PDF stands for Portable Document Format, a file format used to present documents in a manner independent of application software, hardware, and operating systems. The script informs viewers that they can download the 'Mathematics for Machine Learning' book as a PDF from the book's website, emphasizing the ease of access to this free resource. 💡Teaching Exercises Teaching exercises are practical activities designed to help learners understand and apply concepts. The script mentions that the book's website includes teaching exercises, which are additional resources for those learning the mathematical foundations for machine learning. Learning all the maths needed for machine learning from one free source is possible. Mathematics is essential in machine learning due to the prevalence of vectors and matrices. Linear algebra, gradient descent, and probability are fundamental areas of maths for machine learning. The book 'Mathematics for Machine Learning' is frequently recommended on Amazon. A free alternative to the book is available directly from the authors. The book is not suitable for complete beginners; a basic understanding of maths is required. For those lacking the basics, resources like Khan Academy can help prepare for the book. The book targets individuals without a maths degree who want to understand machine learning algorithms. The book is divided into two parts: mathematical foundations and application to machine learning algorithms. Part one covers linear algebra, matrix decomposition, vector calculus, probability, and distributions. Part two demonstrates the use of mathematical foundations in building machine learning algorithms. The book contains no code, focusing solely on the mathematical concepts. The explanations are clear and well-structured, making the maths accessible to non-mathematicians. The book explains mathematical symbols thoroughly, which can be challenging for those unfamiliar with them. The book is published by Cambridge University Press, ensuring high-quality content. The entire book can be downloaded as a PDF from the book's official website. Teaching exercises are available on the website to complement the book's content. The book's website also includes a table of contents and tutorials for further learning. A link to the book's website is provided in the video description for easy access.
{"url":"https://math.bot/blog-All-the-maths-you-need-for-machine-learning-for-FREE-38043","timestamp":"2024-11-07T00:44:56Z","content_type":"text/html","content_length":"121301","record_id":"<urn:uuid:69df8235-bc9d-4775-9984-4ec7d5787118>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00160.warc.gz"}
Solved: pad to square in PyTorch - SourceTrail Solved: pytorch pad to square Padding an image or matrix to make it a square is a common task in computer vision, image processing, and data science. The main objective of padding is to ensure consistent dimensions across multiple images and matrices, allowing for smoother processing and analysis. In this article, we will explore an efficient solution to the pad to square problem using Python, alongside a comprehensible explanation of the steps involved, and delve into some related libraries and functions that can aid us in solving similar problems. Solution to the Pad to Square Problem The primary solution we will be discussing is based on the popular Python library, NumPy, which provides a wide array of tools for working with arrays and matrices. Using NumPy, we will zero-pad an image or matrix to make it square. Zero-padding means adding rows and columns filled with zeros around the original image or matrix until it has equal dimensions. import numpy as np def pad_to_square(array): """Pad an array to make it square with zeros.""" height, width = array.shape size = max(height, width) padded = np.zeros((size, size), dtype=array.dtype) padded[:height, :width] = array return padded Step-by-step Explanation of the Code 1. First, we import the NumPy library with the alias ‘np’ for ease of use. 2. We define a function called ‘pad_to_square’, which takes an input array as an argument. 3. Inside the function, we retrieve the height and width of the input array using its ‘shape’ attribute. 4. We calculate the maximum value between the height and width to determine the size of our new square array. 5. Next, we create a new square array called ‘padded’ filled with zeros and the same data type as the input array. 6. We copy the contents of the input array onto the top-left corner of the ‘padded’ array. 7. Finally, we return the padded array as output. NumPy Library and its Applications NumPy stands for “Numerical Python” and is an incredibly powerful library for working with numerical data in Python. It provides fast and efficient operations on arrays and matrices, making it an essential tool for a wide range of applications, including scientific computing, data analysis, and artificial intelligence. • Efficient Array Operations: NumPy offers a variety of built-in functions to perform element-wise, linear algebra, and statistical operations on arrays, thereby allowing users to manipulate and analyze data with ease. • Broadcasting: With NumPy’s broadcasting system, users can perform arithmetic operations on arrays of different shapes and sizes, making it a versatile choice for handling multidimensional data. • Interoperability: NumPy arrays can be easily converted to and from other data structures such as Python lists, tuples, and Pandas DataFrames, providing seamless integration with other libraries and packages. Similar Libraries and Functions for Array Manipulation In addition to NumPy, there are other libraries and functions available in Python for a wide range of tasks related to array manipulation and processing. 1. SciPy: The SciPy library builds upon NumPy by providing additional functionality for scientific and technical computing, including image processing, optimization, and signal processing functions. SciPy’s `ndimage` module has a `pad` function that can be used for padding arrays with several padding modes and constant values. 2. OpenCV: OpenCV is a popular open-source computer vision library with efficient implementations of various image processing and computer vision algorithms. It can be used for a wide range of tasks, including image padding using the `copyMakeBorder` function. 3. TensorFlow and PyTorch: TensorFlow and PyTorch are popular deep learning libraries that provide different methods for padding tensors or arrays according to the requirements of specific neural network architectures. TensorFlow’s `pad` function and PyTorch’s `Pad` module can be used for customizable padding operations. Understanding and mastering these libraries and their associated functions greatly enhance a developer’s ability to tackle a broad array of data manipulation and processing problems, making them invaluable assets in contemporary programming and data science. Leave a Comment Related posts: Experts programming in Python. Our intention is to spread this language and help people with programming problems related to Python and its Frameworks.
{"url":"https://www.sourcetrail.com/python/pytorch/pytorch-pad-to-square/","timestamp":"2024-11-03T19:01:06Z","content_type":"text/html","content_length":"226169","record_id":"<urn:uuid:3bbce319-7b9b-469e-aae5-b61020fee9e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00711.warc.gz"}
Project Euler &gt; Problem 182 &gt; RSA encryption (Java Solution) The RSA encryption is based on the following procedure: Generate two distinct primes p and q. Compute n=pq and φ=(p-1)(q-1). Find an integer e, 1[<]e[<]φ, such that gcd(e,φ)=1. A message in this system is a number in the interval [0,n-1]. A text to be encrypted is then somehow converted to messages (numbers in the interval [0,n-1]). To encrypt the text, for each message, m, c=me mod n is calculated. To decrypt the text, the following procedure is needed: calculate d such that ed=1 mod φ, then for each encrypted message, c, calculate m=cd mod n. There exist values of e and m such that me mod n=m. We call messages m for which me mod n=m unconcealed messages. An issue when choosing e is that there should not be too many unconcealed messages. For instance, let p=19 and q=37. Then n=19*37=703 and φ=18*36=648. If we choose e=181, then, although gcd(181,648)=1 it turns out that all possible messages m (0[≤]m[≤]n-1) are unconcealed when calculating me mod n. For any valid choice of e there exist some unconcealed messages. It's important that the number of unconcealed messages is at a minimum. Choose p=1009 and q=3643. Find the sum of all values of e, 1[<]e[<]φ(1009,3643) and gcd(e,φ)=1, so that the number of unconcealed messages for this value of e is at a minimum. The solution may include methods that will be found here: Library.java . public interface EulerSolution{ public String run(); We don't have code for that problem yet! If you solved that out using Java, feel free to contribute it to our website, using our "Upload" form.
{"url":"http://www.javaproblems.com/2013/12/project-euler-problem-182-rsa.html","timestamp":"2024-11-03T19:22:05Z","content_type":"application/xhtml+xml","content_length":"47949","record_id":"<urn:uuid:167ed87f-5a9a-4550-81e6-4fdddad770e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00766.warc.gz"}
Blog | μF This article describes a method for the very precise measurement of a frequency difference using an oscilloscope with multiple channels. The following video shows the display of an oscilloscope whose four channels are connected to four different As seen in the above video, four different waveforms originating form the oscillators are displayed on the oscilloscope screen. The TCXOs are very accurate and are rated at In order to determine the exact frequency difference If a waveform is scrolling towards the left side relative to the reference waveform, then its frequency The difference in parts per million The TCXOs measured in the above setup have been used for increasing the accuracy of a classic Casio digital wrist watch. Hence, it is of our interest to calculate the particular oscillator’s drift Taking the purple waveform on channel 3 as an example, we can measure For the above example, I hope that you found this article useful. Please be welcome post your feedback and comments below. The Ideal Diode The following article explains the theory of operation of an ideal diode circuit implemented using a p-channel MOSFET and a matched PNP transistor pair. Typical applications for the ideal diode are devices such as solar chargers, where power efficiency is of a great importance. Table of Contents Diodes are devices that allow the electric current to flow in only one direction. As shown in the image below, the current is allowed to flow from the anode towards the cathode but not the other way Diodes have many applications ranging from simple reverse polarity protection to full bridge rectifiers. There is plenty of available material explaining the diode basics, therefore I would like to skip this part and only cover one particular aspect of diodes which makes them rather power inefficient devices. This article shall cover the forward voltage drop denoted in the datasheet as The above diagram plots the forward voltage drop As the name of this article suggests, the ideal diode is one which exhibits no (or very little) power loss. Thus, it should have Circuit Diagram As seen in the schematic below, the ideal diode consists of a p-channel MOSFET Q2 and a voltage comparator consisting of a matched PNP transistor pair Q1A and Q1B. The IRLML2244 p-channel MOSFET Q2 is driven in the reverse direction, whereas its drain pin 3 is connected to the input voltage This MOSFET exhibits a very low static drain-to-source on-resistance datasheet). The resulting measured forward voltage drop amounts to Voltage Comparator A voltage comparator circuit has been implemented around the PNP transistors Q1A and Q1B. It is important that these transistors have identical characteristics, otherwise the comparator will not have the required precision. Thus, these transistors part of a BC857BS matched transistor pair sharing the same package. Having both transistors inside one physical package ensures that they are thermally coupled and avoids diverging characteristics due to different junction temperatures. The voltage comparator compares the voltages R2. The following equations apply for the voltage where Q1A and Q1B. And: Where datasheet). Due to the properties of the base-emitter junction which is essentially a diode, the voltages Q1A or Q1B if the corresponding emitter-base voltage Forward Bias The following holds true when the ideal diode is forward biased: The larger of Consequently, current will flow through the emitter-collector path of transistor Q1A while no current will flow through the emitter-collector path of Q1B. Thus, the voltage R2 will be equal (or near equal) to 0V. This will lead to a negative gate-source voltage Q1B to turn on. Reverse Bias The following holds true when the ideal diode is reverse biased: The larger of Consequently, current will flow through the emitter-collector path of transistor Q1B while no current will flow through the emitter-collector path of Q1A. Thus, the voltage R2 will be equal (or near equal) to Q1B to turn off. PCB Layout The circuit has been implemented on a SMD prototyping board as shown in the pictures below. The surface mount MOSFET, transistor pair and 0805 resistors have been connected using jumper wires. The three terminals of the ideal diode have been connected to a pin header. The left picture shows the top side of the PCB with the visible MOSFET (3 pin package) and dual transistor (6 pin package). The right picture shows the backside of the PCB with the two 0805 resistors. Note that the pads on both sides are connected through the holes. Following are the pin assignments on the pin header, assuming pin 1 is the leftmost pin and pin 3 is the rightmost pin on the left picture: • Pin 1: • Pin 2: GND • Pin 3: Bill of Material Following is the list of parts required for building the ideal diode. Please consider supporting this website by purchasing your the required parts using the affiliate links below: • IRLML2244 logic level p-channel MOSFET • BC857BS PNP transistor pair • 10KΩ 0805 SMD resistors • 2.54mm pin header • SMD prototyping board Update (April 19, 2021) David Albert has kindly provided the following feedback to this design. With his permission, I hereby quote his emails and diagrams he has provided. Feedback on April 14, 2021 Hi Karim, I found your ideal diode circuit on your blog (https://www.microfarad.de/blog/the-ideal-diode/); thanks for sharing it; it’s a clever design! I simulated it in LTSpice and it appeared to work, so I designed it into a circuit as an ideal diode replacement for ORing USB power with a 6-7V battery. Unfortunately, I found that whenever I connected the battery, the USB port experienced a surge and would shut down. I went back to the simulation and I think I see the problem; if you agree, I think it would be good to note it on your blog so others don’t have the same problem. I’m still thinking about how to fix it and will let you know if I come up with an elegant solution (please let me know if you come up with one too). The problem is that the MOSFET requires a finite amount of time to turn off. When the battery (V2 in my circuit) turns on, the BE junction of Q2 is forward biased and current flows through it and R2, causing a larger current flow through the emitter to collector (as your circuit intends); this works as designed. This current through Q2 is what charges M1’s gate causing it to turn off. Unfortunately, the BC857BS has a maximum current of 200mA and that charges the MOSFET gate slowly (high-current MOSFET drivers exist for exactly this reason: to charge/discharge gate capacitance During the time Q2 is charging M1’s gate, M1 remains on and creates a short circuit between the two power sources and a *lot* of current will flow. This caused the protection circuitry on my USB port to kick in and shut down the port, but it could do much worse if such protection circuitry is not present. The LTSpice simulation actually shows this problem (I just wasn’t sharp enough to look for it when I simulated it initially). The blue trace below shows the current through D1 (output of battery). Notice how the current surges momentarily above 13A. You can reduce the size of the surge (and the power consumption of the circuit) by increasing R1. You can reduce the surge a little more (at the expense of wasted power) by reducing R2, but the underlying problem remains: there is a brief period when the supplies are short circuited. I’ve attached the LTSpice simulation in case you want to try it yourself…if you have any ideas for an elegant fix, I’d be grateful if you’d share them. Thanks and regards, Dave Albert Follow-up on April 14, 2021 Hi Karim, You are welcome to post the explanation and thank you for looking at it and responding so quickly! I slept on it and adding a soft-start circuit to the higher voltage source seems likely to solve the issue (and is probably a good idea anyway to reduce capacitor inrush currents). I simulated it below; the values shown below were not carefully chosen; I’ll tweak them later and then test it, but the simulation seems to work without the large reverse currents. If you have any better ideas, please let me know. Best regards, Oscilloscope Comparison I’ve been lately searching for an entry level digital sampling oscilloscope (DSO) around the 400€ price tag. Having exhaustively read through the various forums and watched the numerous product reviews on YouTube, it became pretty clear that there are currently two major candidates on the market that fulfil the current price tag and provide a decent feature set. • Rigol DS1054Z • Siglent SDS1202X-E Most of the product reviews that I have come across were clearly biased towards the one or the other device. Thus, I have ordered and tested both devices whose specs and reviews are widely available on the web. I have created the following decision matrix that I would like to share with you, hoping to provide you with a more objective opinion about the two great devices from an electronics hobbyist’s point of view. The Decision Matrix Following is the decision matrix that i have used in order to come to a final decision on the device i should keep. Important: Please bear in mind that the resulting score largely depends on your individual preferences and is by no means an absolute verdict over the goodness of the particular device. Thus, applying different grades and using different priorities might lead to a completely different outcome. Following is the Excel table that i have used for the generating the above snapshot: I have only considered the features that I found relevant for my personal use. The features have been prioritized from 1 to 4, whereas 1 has the highest priority. Each device has been assigned a grade for each of the listed features as follows: • Grade 1: feature has exceeded my expectations • Grade 0: feature is on par with my expectations • Grade -1: feature is worse than I had expected The feature score has been calculated according to the following formula: The total score being the the sum of the individual feature scores. Please free to distribute and use the above table for your own purposes. I do hope that you do find this information useful.
{"url":"https://www.microfarad.de/blog/","timestamp":"2024-11-09T22:22:10Z","content_type":"text/html","content_length":"82647","record_id":"<urn:uuid:4ac35c1c-ed10-4a3c-ac46-9fa4ae1af38d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00536.warc.gz"}
Inside the Matrix by Enno Cramer • Math • Computer Graphics • Linear Algebra Table of Contents A vector is simply a tuple of numbers, either written in a row as v = (v1, v2, ..., vn), or in a column as / v1 \ | v2 | v = | ... |. \ vn / For brevity, I'll mostly stick to the row format in this text, but they are absolutely equivalent. In computer graphics, we can use vectors to hold directions and distances in euclidean space. A vector then contains one number for each dimension of space, representing the distance along a single axis. Thus v = (x, y, z) is only a short form of writing v = x*a_x + y*a_y + z*a_z, where a_x, a_y, and a_z are the three axes of space. We can also use vectors to represent positions in space, by using a reference point O and storing the distance between a point and the reference point. However, distances and points now look the same, which is a Bad Thing, because they aren't. This is where homogeneous coordinates come into play. We can solve this ambiguity by adding another element, the homogenous coordinate, to the vector. For distances, this coordinate is 0, for points it is 1. (Actually, for points it must only be non-0, but I won't go into all the details of homogeneous coordinates. Suffice to say that a vector v = (x, y, z, w) with w != 0 represents the same point as v' = (x/w, y/w, z/w, 1).) With this addition, the vector v = (x, y, z, w) is a short form of writing v = x*a_x + y*a_y + z*a_z + w*O. Vectors are always relative to a /coordinate system/, defined by the three axes a_x, a_y, and a_z, and the reference point O, also called the /origin/. Affine transformations, such as scaling, rotation, translation, shearings, and any combination thereof, can be expressed by defining a new coordinate system. Scalings are simply changes to the length of the axes, rotations change the orientation of the axes, translations move the reference point, and shearing change the angles between the axes. For example, an object in a coordinate system whose axes all have a length of two units, is twice as big as the same object in a coordinate system whose axes all have a length of one unit. Now, if we express the axes and origin of the transformed coordinate system in terms of the original coordinate system, we can apply the transformation to any vector (point or distance) by simply evaluating the equation given above. We'll call the transformed coordinate system T, with axes T_x, T_y, and T_z, and origin T_O, and the original coordinate system O, with axes O_x, O_y, and O_z, and origin O_O. The transformed coordinate system T is defined relative to the original coordinate system O: T_x = (T_xx, T_xy, T_xz, 0) T_y = (T_yx, T_yy, T_yz, 0) T_z = (T_zx, T_zy, T_zz, 0) T_O = (T_Ox, T_Oy, T_Oz, 1) (Notice how the axes all have a homogeneous coordinate of 0, as they are distances, and the origin has a homogeneous coordinate of 1, as it is a point.) With these definitions, we get v' = (x', y', z', w') = x*T_x + y*T_y + z*T_z + w*T_O = x * (T_xx * O_x + T_xy * O_y + T_xz * O_z + 0 * O_O) + y * (T_yx * O_x + T_yy * O_y + T_yz * O_z + 0 * O_O) + z * (T_zx * O_x + T_zy * O_y + T_zz * O_z + 0 * O_O) + w * (T_Ox * O_x + T_Oy * O_y + T_Oz * O_z + 1 * O_O) = (x*T_xx + y*T_yx + z*T_zx + w*T_Ox) * O_x + (x*T_xy + y*T_yy + z*T_zy + w*T_Oy) * O_y + (x*T_xz + y*T_yz + z*T_zz + w*T_Oz) * O_z + (x* 0 + y* 0 + z* 0 + w* 1) * O_O. Now, this looks rather complicated at first sight, but if you look closely, you'll notice a certain pattern. Enter the Matrix People familiar with linear algebra will probably recognize the pattern. It look suspiciously like the product of matrizes, and indeed, it can be written as such. We have to combine the axes and origin of the transformed coordinate system into a 4x4 matrix M. This can be done in more than one way, but I'll stick to the convention used by OpenGL. The other possibilities lead to different multiplication orders (remember matrix multiplication is not commutative) and vector notations. If we combine the axes and origin such that each vector occupies one column of the final matrix, and consider the vector as a column matrix (a matrix with only a single column, much like the column notation of vectors), the transformation can be expressed as a simple v' = M * v / T_xx T_yx T_zx T_Ox \ / x \ | T_xy T_yy T_zy T_Oy | | y | = | T_xz T_yz T_zz T_Oz | * | z |. \ 0 0 0 1 / \ w / Working with Transformations Now, that we have shown how affine transformations can be expressed as matrix multiplications, it is trivial to show how to combine transformations. Suppose we want to move and scale an object. We have the scaling transformation stored in the matrix S, and the translation in the matrix T. v' = T * (S * v) = (T * S) * v = TS * v Thus, we can combine both transformations into a single matrix, simply by multiplying the two transformation matrizes. Note, however, that the order is important. T * S moves the scaled object, whereas S * T scales the already moved object, /thus amplifying the movement/. When using OpenGL, transformations are accumulated from left to right. Thus, to produce the transform T * S, one has to call the gl-functions in the order Or, put another way, the first transformation affecting an object, is the transformation last executed.
{"url":"https://memfrob.de/posts/inside-the-matrix/","timestamp":"2024-11-01T23:54:07Z","content_type":"text/html","content_length":"10634","record_id":"<urn:uuid:505c1b87-0153-4923-bade-d0ce2c0f53e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00213.warc.gz"}
Evaluation metrics - Parea AI Evaluation metrics Add evaluation metrics to the playground. You can use evaluation functions in the playground by clicking the Evaluation metrics button in a prompt session. Here, you will have the option to select an existing metric or create a new one. Registering an auto-evaluation metric Parea provides use-case-specific evaluation metrics that you can use out of the box. To get started, click Register new auto-eval metric. This will allow you to create a metric based on your specific inputs. Next, find the metric you want to use based on your use case. Each metric has its required and optional variables. Your prompt template must have a variable for any required inputs. For example, the LLM Grader metric expects your prompt to have a {{question}} variable. If your variable is named something else, you can select which variable to associate with the question field from the drop-down menu. Click Register once you are done, and that metric will be enabled. Using a custom eval metric You can select any previously created metrics you want in the Evaluation metrics modal and then click Set eval metric(s) to attach them to your current session. To create a new custom evaluation functions, click Create new custom metric. The editor will be pre-populated with a template for you to get started. You can delete all the code and retain the eval_fun signature def eval_fun(log: Log) -> float:. To ensure that your evaluation metrics are reusable across the entire Parea ecosystem, and with any LLM models or LLM use cases, we introduced the log parameter. All evaluation functions accept the log parameter, which provides all the needed information to perform an evaluation. Evaluation functions are expected to return floating point scores or booleans. If you have this function and return a float or boolean, your new metric will be valid. A simple example could be: def eval_fun(log: Log) -> float: return float(log.output == log.target) Testing function calling with evaluation functions If you are using function calling in your prompt, you can still use evaluation metrics. When LLM models use function calling, they respond with a stringified list of JSON objects. The list will have at least one dictionary with the key function, and that dictionary will always have a name field and an arguments field. To display code snippets in the UI, Parea wraps the JSON string in triple backticks (```). If you want to validate that the function call has the correct arguments in your evaluation function, you can access it by: 1. First striping the backticks 2. Then parse the JSON string 3. Then access the fields
{"url":"https://docs.parea.ai/platform/playground/evaluation_metrics","timestamp":"2024-11-03T12:31:25Z","content_type":"text/html","content_length":"246882","record_id":"<urn:uuid:e34cc0dd-8790-417a-a755-beab99064995>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00795.warc.gz"}
Priority Queue implementation using Linked List shabbir Administrator Staff Member • Linked List Introduction - What is priority Queue A priority queue is an abstract data type (ADT) supporting the following three operations: 1. Add an element to the queue with an associated priority 2. Remove the element from the queue that has the highest priority, and return it 3. (optionally) peek at the element with highest priority without removing it The Code #include <iostream.h> #include <stdlib.h> ///// Implements Priority Queue class PriorityQueue // Class Prioriry Queue struct Node // Node of Priority Queue struct Node *Previous; int Data; struct Node *Next; struct Node *head; // Pointer to Head struct Node *ptr; // Pointer for travelling through Queue static int NumOfNodes; // Keeps track of Number of nodes int Maximum(void); int Minimum(void); void Insert(int); int Delete(int); void Display(void); int Search (int); // First Nodes Created With Constructor int PriorityQueue::NumOfNodes=1; // Constructor cout<<"Enter First Element of Queue"<<endl; // Function Finding Maximum Priority Element int PriorityQueue::Maximum(void) int Temp; if(ptr->Next==NULL && ptr->Data>Temp) // Function Finding Minimum Priority Element int PriorityQueue::Minimum(void) int Temp; if(ptr->Next==NULL && ptr->Data<Temp) // Function inserting element in Priority Queue void PriorityQueue::Insert(int DT) struct Node *newnode; newnode=new Node; // Function deleting element in Priority Queue int PriorityQueue::Delete(int DataDel) struct Node *mynode,*temp; cout<<"Cannot Delete the only Node"<<endl; return FALSE; /*** Checking condition for deletion of first node ***/ //delete temp; /*** Checking condition for deletion of ***/ /*** all nodes except first and last node ***/ delete temp; if(ptr->Next->Next==NULL && ptr->Next->Data==DataDel) /*** Checking condition for deletion of last node ***/ delete temp; // Function Searching element in Priority Queue int PriorityQueue::Search(int DataSearch) return ptr->Data; if(ptr->Next==NULL && ptr->Data==DataSearch) return ptr->Data; // Function Displaying elements of Priority Queue void PriorityQueue::Display(void) cout<<"Priority Queue is as Follows:-"<<endl; // Destructor of Priority Queue struct Node *temp; /* Temporary variable */ // delete head; delete head; //Main Function void main() PriorityQueue PQ; int choice; int DT; cout<<"Enter your choice"<<endl; cout<<"1. Insert an element"<<endl; cout<<"2. Display a priorty Queue"<<endl; cout<<"3. Delete an element"<<endl; cout<<"4. Search an element"<<endl; cout<<"5. Exit"<<endl; case 1: cout<<"Enter a Data to enter Queue"<<endl; case 2: case 3: int choice; cout<<"Enter your choice"<<endl; cout<<"1. Maximum Priority Queue"<<endl; cout<<"2. Minimum Priority Queue"<<endl; case 1: case 2: cout<<"Sorry Not a correct choice"<<endl; case 4: cout<<"Enter a Data to Search in Queue"<<endl; cout<<DT<<" Is present in Queue"<<endl; cout<<DT<<" is Not present in Queue"<<endl; case 5: cout<<"Cannot process your choice"<<endl; Dont copy but you can download the file as an attachment. This is the priority Queue implementation and the Queue implementation can be refered here - Queue implementation using Linked List. Attached Files: aisha.ansari84 New Member Feb 13, 2008 Likes Received: Trophy Points: priprity queues are good but for implementing them we should be thorough with our purpose rahul.mca2001 New Member Feb 13, 2008 Likes Received: Trophy Points: i agree programming girl New Member Apr 24, 2008 Likes Received: Trophy Points: yes , I used it in my solution in homework that is great i want to ask u shabbir if i want to insert the data in order by maximum node how can i do that in this function that i have it // ------------- Insert ----------------// template <class type> void priorityqueue<type>::insert(type datain) node<type> *current; current=new node<type>; node<type> *pnew; pnew= new node<type>; NOTE: i want the nodes ordered by maximum another question your display function is good but if i want it print the all nodes from maximum to minimum node what is the modification ????? Last edited by a moderator: Apr 30, 2008 shabbir Administrator Staff Member You need to sort your linked list hello shabbir, yes i mean sort the list shabbir Administrator Staff Member Search the forum and you will find that too. siya New Member Oct 15, 2008 Likes Received: Trophy Points: hi shabbir, Thanks for your immediate reply......the code is actually a bouncer to me....i forgot to mention in my post that i dont require the code....a simple explanation of how to execute that would be enough.......it would be helpful if you can explain the whole thing in simple language......... thanks in advance, ban1414 New Member Oct 25, 2008 Likes Received: Trophy Points: data in order by maximum that is great i want to ask u shabbir if i want to insert the data in order by maximum node how can i do that in this function that i have it. shabbir Administrator Staff Member Re: data in order by maximum What you mean by order of maximum node? hkp819 New Member Dec 4, 2008 Likes Received: Trophy Points: this program is very helpful for me. But same question is for you that if i want to insert the data order by maximum to minimum node than what would be the function. saly New Member Jan 4, 2010 Likes Received: Trophy Points: thank you so much Sneha Mohan New Member Apr 25, 2010 Likes Received: Trophy Points: same here thanks hj9c New Member Oct 30, 2010 Likes Received: Trophy Points: Software Engineering very nice sushiln New Member Nov 9, 2010 Likes Received: Trophy Points: Is "Current" an integer variable? It is not initialized in the beginning of your code. Salman786 New Member Jan 11, 2011 Likes Received: Trophy Points: Salam to all , im new here , would i'll get answer for every questions on programming knowledge here? Naveed Marwat New Member Feb 16, 2011 Likes Received: Trophy Points: Great sharing
{"url":"https://www.go4expert.com/articles/priority-queue-implementation-using-t1633/","timestamp":"2024-11-05T19:25:55Z","content_type":"text/html","content_length":"95767","record_id":"<urn:uuid:379d7655-8b92-44d5-9567-eda75611433b>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00862.warc.gz"}
Here you can find a mathematical Wordle, called a Funcdle. The game offers more than just numbers and math; you can also train your brain by using graphs that are based on coordinate systems. Are you comfortable with graphs now? Let's get started! Funcdle accepts mathematical formulas and may include or not include an "x" variable. When the formula "f" is entered, the graph "y=f (x) will appear. A point is colored black if it differs in vertical direction from another point on your answer graph by less than 0.5. If it doesn't happen, the point will be colored gray. It is therefore important to be aware of the spots that are black and to adapt your mathematical formula to them. Best of luck! How To Play • Enter the math formula and press 'Enter'. • You have 6 attempts to guess Funcdle.
{"url":"https://connections-game.com/funcdle","timestamp":"2024-11-11T09:55:26Z","content_type":"text/html","content_length":"60019","record_id":"<urn:uuid:0f37b722-9c94-4643-8421-8453c162541f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00075.warc.gz"}
Deposits | Exactly Protocol Any depositor can supply assets to the pool at any time during the life of a Fixed Rate Pool. No deposits can be made once the fixed pool matures (after the maturity date). When there is a new deposit to a Fixed Rate Pool, the system determines the total amount of outstanding borrows backed by the Variable Rate Pool and calculates the interest pending payment. The deposit returns an equivalent amount of funds to the Variable Rate Pool, and its corresponding interest pending payment is assigned to the new depositor. Each fixed deposit is composed of two different amounts, principal and earnings. The principal is the initial amount deposited, whereas the earnings are the extra interest the depositor will acquire after maturity. It's important to highlight that fixed-rate deposits can't be used as collateral to borrow other assets. Early Withdrawals Depositors can withdraw anytime, even before maturity, provided enough liquidity is available in the protocol. So, early withdrawals are equivalent to requesting a fixed rate borrow for the total deposit (principal+earnings) in the same fixed rate pool. For example, Alice deposits $100 into a 1-year fixed-rate pool with an interest rate of 10% APR. Then her total deposit at maturity equals $110 ($100+$10). Now let's say that a few hours later, the utilization of the pools goes down, and the 1-year fixed rate for borrowing $110 is also 10% APR; Alice could decide to make an early withdrawal of her fixed rate deposit, and she will get back now the present value of her total deposit at maturity ($110) discounted by the current fixed borrow rate (10% APR), in this case, equal to the original $100 that she deposited ($110/1.1). Late Withdrawals Depositors can also withdraw once the maturity date is reached. There's no limit to the time they have to withdraw their funds but bear in mind that these deposited assets will not generate any extra interest rate fees.
{"url":"https://docs.exact.ly/guides/fixed-rate-operations/deposits","timestamp":"2024-11-06T21:29:03Z","content_type":"text/html","content_length":"186825","record_id":"<urn:uuid:af1db8d1-8c5a-4505-98f6-8569bef49216>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00356.warc.gz"}
Skewness and Kurtosis in Power BI with DAX - Ben's Blog Skewness and Kurtosis in Power BI with DAX Skewness and Kurtosis in Power BI with DAX In this post, I will describe what Skewness and Kurtosis are, where to use them and how to write their formula in DAX. What is Skewness Skewness is a measure of symmetry, or more precisely, the lack of symmetry. A distribution, or data set, is symmetric if it looks the same to the left and right of the centre point. For a unimodal (one mode only) distribution, negative skew commonly indicates that the tail is on the left side of the distribution, and positive skew indicates that the tail is on the right (see Figure below for an example). How to interpret the Skewness In most of the statistics books, we find that as a general rule of thumb the skewness can be interpreted as follows: • If the skewness is between -0.5 and 0.5, the data are fairly symmetrical • If the skewness is between -1 and – 0.5 or between 0.5 and 1, the data are moderately skewed • If the skewness is less than -1 or greater than 1, the data are highly skewed Postive Skewness The distribution of income usually has a positive skew with a mean greater than the median. In the USA, more people have an income lower than the average income. This shows that there is an unequal distribution of income. Here is another example: If Warren Buffet was sitting with 50 Power BI developers the average annual income of the group will be greater than 10 million dollars. Did you know that Power BI developers were making that much money? Of course, we’re not… the distribution is highly skewed to the right due to an extremely high income in that case the mean would probably be more than 100 times higher than the median. Negative Skewness Age at retirement usually has a negative skew, most people retire in their 60s, very few people work longer, but some people retire in their 50s or even earlier. Application of Skewness Skewness can be used in just about anything in real life where we need to characterize the data or distribution. • Many statistical models require the data to follow a normal distribution but in reality data rarely follows a perfect normal distribution. Therefore the measure of the Skewness becomes essential to know the shape of the distribution. • Skewness tells us about the direction of outliers. The positive skewness is a sign of the presence of larger extreme values and the negative skewness indicates the presence of lower extreme • Skewness can also tell us where most of the values are concentrated. Skewness is also widely used in finance to estimate the risk of a predictive model. Calculate Skewness in Power BI with DAX At the time of writing this post, there’s no existing DAX function to calculate the skewness, this function exists in Excel since 2013, SKEW or SKEW.P. The formula used by Excel is the “Pearson’s moment coefficient of skewness” there are other alternatives formulas but this one is the most commonly used. Calculate in DAX the Skewness of the distribution based on a Sample: Sample Skewness = -- Number of values in my sample var __N=calculate(COUNTROWS(height_data), -- sample mean var __Avg=calculate(AVERAGE(height_data[Height]), -- sample standard deviation var __Std=calculate(STDEV.S(height_data[Height]), DIVIDE(__N,(__N-1)*(__N-2)) * Sample data refers to data partially extracted from the population. Calculate in DAX the Skewness of the distribution based on a Population: SKEW.P equation Skewness = -- Number of values var __N=calculate(COUNTROWS(height_data), -- Mean var __Avg=calculate(AVERAGE(height_data[Height]), -- standard deviation var __Std=calculate(STDEV.P(height_data[Height]), DIVIDE(1,__N) * The population refers to the entire set that you are analysing. The difference between the two resides in the first coefficient factor “1/N” vs “N/((N-1)*(N-2))” so in practical use the larger the sample will be the smaller the difference will be. What is Kurtosis One of the most common pictures that we find online or in common statistics books is the below image which basically tells that a positive kurtosis will have a peaky curve while a negative kurtosis will have a flat curve, in short, it tells that kurtosis measures the peakedness of the curve. The above explanation has been proven incorrect since the publication “Kurtosis as Peakedness, 1905 – 2014. R.I.P.” of dr. Westfall. So the most correct interpretation of Kurtosis is that it helps to detect existing outliers. How to interpret Kurtosis “The logic is simple: Kurtosis is the average of the standardized data raised to the fourth power. Any standardized values that are less than 1 (i.e., data within one standard deviation of the mean, where the “peak” would be), contribute virtually nothing to kurtosis, since raising a number that is less than 1 to the fourth power makes it closer to zero. The only data values (observed or observable) that contribute to kurtosis in any meaningful way are those outside the region of the peak; i.e., the outliers. Therefore, kurtosis measures outliers only; it measures nothing about the Application of Kurtosis Similar to Skewness, kurtosis is a statistical measure that is used to describe the distribution and to measure whether there are outliers in a data set. And like Skewness Kurtosis is widely used in financial models, for investors high kurtosis could mean more extreme returns (positive or negative). Calculate Kurtosis in Power BI with DAX At the time of writing this post, there’s also no existing DAX function to calculate the Kurtosis, this function exists in Excel, the function is called Kurt. The formula used by Excel is an adjusted version of Pearson’s kurtosis called the excess kurtosis which is Kurtosis -3. It is very common to use the Excess Kurtosis measure to provide the comparison to the standard normal distribution. So in this post, I will calculate in DAX the Excess Kurtosis (Kurtosis – 3). Calculate in DAX the Excess Kurtosis of the distribution based on a Sample: Sample Kurtosis = -- Number of values in my sample var __N=calculate(COUNTROWS(height_data), -- sample mean var __Avg=calculate(AVERAGE(height_data[Height]), -- sample standard deviation var __Std=calculate(STDEV.S(height_data[Height]), DIVIDE(__N*(__N+1),(__N-1)*(__N-2)*(__N-3)) * -DIVIDE(3*(__N-1)^2,(__N-2)*(__N-3)) -- (-3 for excess kurtosis) Calculate in DAX the Excess Kurtosis of the distribution based on a Population: Kurtosis = -- Number of values var __N=calculate(COUNTROWS(height_data), -- mean var __Avg=calculate(AVERAGE(height_data[Height]), -- standard deviation var __Std=calculate(STDEV.P(height_data[Height]), DIVIDE(1,__N) * POWER(divide(height_data[Height]-__Avg,__Std),4))-3 -- (-3 for excess kurtosis) In this post, we covered the concept of skewness and kurtosis and why it is important in the statistics or data analysis fields. At the time of writing this post, there are no existing built-in functions in Power BI to calculate the Skewness or Kurtosis, however, we saw that it is pretty easy to translate a mathematic formula to a DAX formula. In one of my previous posts “AB Testing with Power BI” I’ve shown that Power BI has some great built-in functions to calculate values related to statistical distributions and probability but even if Power BI is missing some functions compared to Excel, it turns out that most of them can be easily written in DAX! 2 thoughts on “Skewness and Kurtosis in Power BI with DAX” 1. its really great website and great stuff is here i really like it if u have ur youtube channel then let me know i wanna to subrcribe it it would be great if u can share file of this topic 1. Hi Suleman, I don’t have a youtube channel maybe one day 🙂 I’ll make sure to upload the PBIX file and link it under your comment.
{"url":"https://datakuity.com/2021/08/24/skewness-kurtosis-powerbi-dax/","timestamp":"2024-11-05T13:49:33Z","content_type":"text/html","content_length":"103135","record_id":"<urn:uuid:3ea1fc62-54d9-4f5b-acd5-42e67e9d6de3>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00478.warc.gz"}
In many different fields of engineering, like automotive or civil engineering, room acoustical tasks are of interest. Sound fields have to be predicted in order to design the acoustic cavity by placing acoustic elements like reflectors or absorbers (passive absorbers or plate resonators) into the room. Hence, it is desirable to use models for the Fluid Structure Interaction (FSI) where passive absorbers or plate resonators can be considered with their specific characteristics depending on the sound wave's angle of incidence (figure 1). In order to reduce the number of degrees of freedom and thus the numerical effort, a model reduction method based on a Component Mode Synthesis (CMS) is applied. The acoustic cavity is modeled with Spectral Finite Elements (SFEM). The cavity boundary conditions, e.g. compound absorbers made of homogeneous plates and porous foams, are modeled using Integral Transform Methods (ITM). Therefore the differential equations of motion are established for the individual components, where the Lamé Equation is used for homogenous and the Theory of Porous Media (TPM) for porous materials. These equations are solved in the wavenumber-frequency domain after applying a Fourier Transformation. The results (wavenumber dependent impedances) for the absorptive structure are coupled with the acoustic cavity adding interface coupling modes for the fluid and applying Hamilton's principle, considering the velocity of both components to coincide as a constraint at the interface. Coupling Fluid and Substructures Equilibrium is established using Hamilton's principle, where the subsystems arecoupled with the Lagrange Mulitplier Method. $$\int_{t_1}^{t_2}\delta \Big( L_A(t)+L_{BC}(t,Z)+\mathbf(R)^{T}\lambda(t)\Big)+\delta W_{BC}^{nc}(t,Z) +\delta W_{Load}^{nc}(t)=0$$ With the help of a Ritz Approach the variational problem is reduced to an extremum problem. $$\mathbf{v}_A(\mathbf{x},t)=\displaystyle\sum_{m} \mathbf{v}_{m}^{N}(\mathbf{x}) \Big(\mathcal{A}_m e^{i\Omega t} +\bar{\mathcal{A}}_m e^{i\Omega t}\Big) \displaystyle\sum_n \mathbf{v}_n^{C}(\mathbf {x}) \Big(\mathcal{B}_n e^{i\Omega t} +\bar{\mathcal{B}}_n e^{i\Omega t}\Big) $$ In the scope of the CMS normal modes and coupling modes, which are computed with the SFEM, are applied (figure 2). The Lagrangian of the acoustic fluid can be computed directly from these modes: $$L_A(t)=T_A(t)-U_A(t)\ \ \ \ \ T_A(t)=\frac{\rho_A}{2} \displaystyle\int\limits_V \lvert \mathbf{V}_{A}(\mathbf{x},t)\rvert ^{2}dV \ \ \ \ U_A(t)=\frac{1}{2\rho_A c_a^{2}}\int\limits_V \lvert p_A(\ mathbf{x},t)\rvert^2 dV $$ For the absorber the Lagrangian (as well as the virtual work due to dissipation) is obtained from the wavenumber- and frequency-dependent impedances as well as from the Fourier coefficients of the trial function. $$\int \limits_0^{T} L_{BC} dt = \frac{T}{\Omega} L_y L_z \begin{bmatrix} \displaystyle\sum_n\mathcal{C}_n\bar{\mathcal{C}} \displaystyle\sum_r\displaystyle\sum_s \text{Im} \big(Z(r,s,\Omega)\big) \ lvert E_{nrs}\rvert^{2}\end{bmatrix}$$ $$\int\limits_{0}^{T}\delta W_{BC} dt =-\frac{T}{i\Omega}L_y L_z \displaystyle\sum_n\big( \bar{\mathcal{C}}_n \delta \mathcal{C}_n -\mathcal{C}_n \delta \bar{\mathcal{C}}_n\big) \displaystyle\sum_r \ sum\limits_s \text{Re}\big(Z(r,s,\Omega)\big) \lvert E_{nrs} \rvert ^{2}$$ Model for Absorptive Boundaries Compound absorbers, consisting of porous and elastic layers are modeled efficiently with the help of the ITM. The equations of motion, which are sketched for the porous foam exemplarily, $$\begin{align*} -n_G \ \text{grad}p + \big( \tilde{\lambda}_S + \mu_S \big) \text{grad} \ \text{div} \ \mathbf{u}_S+ \mu_s \ \text{div} \ \text{grad} \ {\mathbf{u}_S}+ S_G(\mathbf{v}_G-\mathbf{v}_S) &= \rho_S \mathbf{a}_S \\ -n_G \ \text{grad}\ p-S_G(\mathbf{v}_G+\mathbf{v}_S)&= \rho_G\mathbf{a}_G \end{align*} $$ $$\frac{n_G}{R\theta} \ \frac{\partial p}{\partial t} +\rho_{GR}\ n_G \ \text{div}(\mathbf{v}_G) +\rho_{GR}\ n_S \ \text{div}(\mathbf{v}_S)$$ are established for each material and solved in the Fourier domain after applying a Helmholtz decomposition considering of the boundary conditions at the interfaces between the individual layers in order to compute wavenumber and frequency-dependent impedances and absorption ratios. Numerical Results for the FSI-Problem The method is applied to a rectangular geometry (6m × 3m × 2m), which is modeled with 288 spectral finite elements. In the CMS 50 normal and 6 couplingmodes are considered. A pressure source is applied at x = 0.5 m, y = 1.3 m andz = 0.9 m. It is oscillating with a frequency of 122 Hz. The absorptive boundary condition is modeled as layer of melamine foam with a thickness of 7.2 cm. The sound pressure field is computed and visualized in figure 5.
{"url":"https://www.cee.ed.tum.de/en/bm/forschung/forschungsprojekte/acoustics/","timestamp":"2024-11-10T01:23:05Z","content_type":"text/html","content_length":"85842","record_id":"<urn:uuid:ab79ef77-129e-4351-ba1b-9828fe56054c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00221.warc.gz"}
Presenting Parallel Performance (1) Part 1: Do you really want "speedup"? Since parallelism is now an absolute requirement for anything that claims to be a “Super Computer”, we need to present performance results that demonstrate that we are using it effectively as we use more and more of the machine to run our applications. Unfortunately, presenting this scaling data in a useful and not misleading manner is not easy. The (in)famous paper “Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers” gives a number of ways of doing the opposite. Here I am going to discuss speedup curves, which are generally used for showing both inter-node scaling and in-node (shared memory or OpenMP® scaling), while in the following post I’ll dive a bit deeper into the presentation of in-node scaling. Speedup Curves There are a number of issues with speedup curves that I’ll run though one by one. 1: Always Beware of Ratios As David Bailey pointed out, we should always be careful when using any metric which is a ratio, since a ratio give us two ways to make the result go in the direction we want. For instance, if we want to show that our machine is faster than another we might present Performance Ratio = Performance(our machine)/Performance(competitor) That seems reasonable; if we can improve our performance the number gets bigger. But… we can also make the number bigger by using a poor performance on the bottom of the ratio, and, it’s much easier to make things run slowly than to make them run fast. An Example As an example, consider the performance of a trivial benchmark1 (which counts the number of primes below some limit) when compiled by two different compilers. (Here labelled “Compiler A” and “Compiler B”, since the point here is not the numbers, but the way we treat them, so knowing what the compilers are is not necessary). Running on one of the Isambard AArch64 machines, I see performance when counting the number of primes < 10,000,000 like this :- You can see that on any reasonable view, compiler A is out-performing compiler B, achieving 1.64x performance at -O3. But… if we were disreputable, and attempting to sell compiler B, we’d look for any way to show that compiler B is faster at something, and, if we compare Compiler B at -O3 with compiler A at -O0, that gives us a performance ratio of 3.233/3.187 = 1.01x, so if we don’t mention how we computed it compiler B is slightly faster. (Or, at least “has similar performance”). While this specific example is somewhat laboured, it is always important to check where the performance of a competitor’s product has been obtained from when a vendor is making comparisons. Using an old codebase compiled for an old ISA (e.g. compiling for 8087 floating point when benchmarking a 64 bit X86 machine, rather than using any of the many X86 vector FP ISAs) or compiling at -O0 and then comparing with newly revised code compiled with the latest compiler for the newest hardware is certainly not unknown… Or, comparing parallelised code on one machine with an older, serial, version on another. As before, making things slow is easy, making them fast is hard. 2: Beware of the Choice of Normalisation The particular point David Bailey was making is not quite the one above, but, rather that if we are comparing speedups between different systems we need to be very careful about how the normalisation that converts a set of times into a set of speedups is performed. If we compare the simple speedup curves that we’d use when tuning our code on a given machine, the normalisation will be the performance of the single thread/node/… code running on that machine. However, comparing those curves for different machines is highly misleading.2 As an example let’s compare two machines, where machine 2 is twice as fast as machine 1, but generates an induced serial fraction of 20%, whereas machine 1 (the slower machine) achieves a serial fraction of 10%. Using Amdahl’s law we can predict the performance of these two machines. If we compare the normalised-to-self speedups, we get a graph like this So, machine 1 is clearly better, right? Its speedup with parallelism of 20 somethings is nearly 7x, whereas machine 2 only achieves slightly over 4x. But, if we look at the actual time to solve the problem (so smaller is better), we see that although it does not scale as well, machine 2 outperforms machine 1 significantly over this whole range of parallelism which we’re looking at. Therefore, if we are going to compare speedup curves from different machines meaningfully, we must ensure that all of the raw performance data is normalised consistently (either to the serial performance of the best machine, or of the worst). When you do that you can see that machine 1 has only half the performance serially, and never catches up in this range of parallelism. Fundamentally, speedup is not a useful metric for comparing machines; for that it is better to stick to the simpler metric of time, or performance (1/time). This is both simpler to understand and harder to fake. 3: The Graph Itself The simplest plot (and the one that is easiest to produce) shows the time to solution (or, if you prefer, you can use performance (= 1/time) so that bigger is better). This has many advantages (there are no ratios involved), but it can be hard to see details at both ends of the x-axis. Therefore when investigating the scaling properties of a code on a given machine it is worth also using other Tufte3 (whom all should read) tells us that we should "maximize the data ink". But, a speedup graph does not meet this rule for three reasons:- 1. We can be almost certain that half of the area on the graph will not be used. (Everything in the top left of the graph represents super-linear speedup, so rarely contains any data). Therefore that area of the plot is wasted. 2. To understand how good the scaling is we need a “Perfect Scaling” line on the graph, and have to work out how much worse the data points are than this perfect line. That line is extra ink which is not for the data. 3. If we present scaling over a large range we can’t see what is going on in thearea where the parallelism is low. As an example, here is data about the simple question “How well can a relatively small parallel loop in which each iteration takes the same amount of time possibly execute as I change the available parallelism?” Note that this is not a question about how specific schedules (maps of iteration to execution entity [“thread”]) perform, but one about the mathematical fundamentals. When the number of threads does not divide the number of iterations the best any schedule can do is to have an imbalance between the number of iterations executed by any two threads of one iteration, which is what we model here4. We consider a loop with 50 iterations and look at performance out to 64 threads; assuming each iteration takes 1s, the expected time is like this:- The Speedup graph then looks like this :- There is clearly some drop-off even below 10 threads, but it is hard to see exactly what is going on. OK, So What Should I Plot Instead? When trying to understand scaling, it is better to plot parallel efficiency (i.e. speedup/parallelism, or, equivalently, the fraction of the available resources which are used) than speedup. The parallel efficiency shows you what proportion of the perfect speedup you are getting, so can be expressed as a percentage. It doesn’t need a “perfect speedup” line, and all levels of parallelism can easily be seen and compared. Here is the same example presented using parallel efficiency. Now we can see what is going on both when there are few threads and when there are many. This should match what you expect; you can achieve 100% efficiency only when the number of iterations is divisible by the number of threads. In all other cases there will be some threads which execute one more iteration than others. Clearly the worst case here is when we have 49 threads, since then each thread can execute one iteration, but one thread has to execute two. So 48 threads are idling for half of their execution time, therefore we’d expect (and see) an efficiency of 50/(2*49) ~= 51%. Of course, this graph is not great for a marketing presentation, since it doesn’t go “up to the right” (and it has each axis labelled, so is definitely not a marketing slide). However for engineering use, and to really understand how our applications scale, this is a better way to present the data5. What Have We Learned? • Be very careful when looking at metrics which are produced by dividing two values. • Simple plots of time to solution, or performance (1/time), remain the most important graphs for comparing machines or implementations, but when examining scaling performance of a single implementation parallel efficiency beats speedup. • Plots of parallel efficiency allow us to see more detail than speedup curves. In part 2 we’ll look at another mistake which is easy to make… Roger Shepherd for reading drafts and commenting. This work used the Isambard UK National Tier-2 HPC Service, operated by GW4 and the UK Met Office, and funded by EPSRC (EP/T022078/1). The code for this benchmark is available if you really want to run it yourself with different compilers to break the anonymisation. HIs point is actually slightly stronger even than this, since, as he says, the real comparison should be with the performance of the best available serial implementation rather than the performance of the parallel implementation being run serially. We also add no overhead for parallelism. If you were attempting to model a real system some such overhead would make sense, but here we’re keeping it simple and showing the fundamental mathematical FWIW, Wikipedia agrees with me on this. I wonder how that could possibly be!? :-)
{"url":"https://cpufun.substack.com/p/presenting-parallel-performance-1","timestamp":"2024-11-07T13:27:09Z","content_type":"text/html","content_length":"211972","record_id":"<urn:uuid:3c18500b-1ca0-4588-b6a9-54e6ac951f8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00172.warc.gz"}
Excel Factor Entry 4 INDEX and MATCH Two Criteria The need to look up and match multiple criteria is quite common; however as with most things in Excel there are many ways to ‘skin a cat’, I'll share two options with you here. Matt Duncan from Florida sent in this a cool INDEX and MATCH array formula that allows you to match two criteria from two separate columns and return the corresponding value. Thanks for sharing, Matt. I don’t recall ever seeing the MATCH function used this way. Beware, you may need a Brain-booster (a healthy fruit or vegetable snack they get my 6 year-old’s class to eat at about 9.30am to help them concentrate), or a strong coffee, whichever you prefer. I’ll show you Matt’s impressive formula, and then I’ll show you how I cheat with a non-array approach using VLOOKUP. Enter your email address below to download the sample workbook. By submitting your email address you agree that we can email you our Excel newsletter. Please enter a valid email address. Here’s the data Matt is looking up (Note: this is a simplified version for the purpose of this tutorial): The above table has the following Named Ranges because it’s quicker to build the formula and easier to follow: Range H4:H16 = GL_Account Range I04:I16 = Business_Unit Range J4:J16 = Amount Here’s the table Matt wants to populate (also simplified): So, you can see that Matt needs to match both the GL Account and Business Unit which are in separate columns in the Lookup Data table, and then return the corresponding amount from column J in the lookup data. Here is Matt’s formula from cell B4 (we’ll refer to cell B4 for the remainder of this tutorial): Note: this is an array formula so you need to enter it with CTRL+SHIFT+ENTER to get the curly brackets at each end that you see below. This is what you see in the formula bar when you enter it In English it reads: Return the Amount from column J that corresponds with GL Account 5597-10 in column H, and Business Unit B3607 in column I, if you don’t find it return a zero. Let’s take a closer look at the MATCH part of Matt’s formula because that’s where the magic is happening. How the MATCH Function Part Works First, remember the MATCH function searches for a specified item in a range of cells, and then returns the relative position of that item in the range. The twist is that Matt has two specific items he wants to match, the GL Account and Business Unit. The syntax for the MATCH function is: =MATCH(lookup_value, lookup_array, [match_type]) Lookup_value – the value you want to find Lookup_array – the column/row containing the value you want to find Match_type – optional argument represented by a -1, 0 or 1. • -1 finds the smallest value that is greater than or equal to lookup_value, • 0 finds the first value that is exactly equal to lookup_value, or • 1 finds the largest value that is less than or equal to lookup_value. Here’s the MATCH component of Matt’s formula: Note how he has two lookup_arrays: 1. GL Account column ($A4=GL_Account) 2. Business Unit (B$3=Business_Unit) And how, instead of using the lookup_value as his criteria to match, he has entered the criteria in with the lookup_array. I have never seen that before! So, How Does It All Work? Array formulas are testing for TRUE or FALSE outcomes and award them a numerical equivalent of 1 for TRUE, and 0 for FALSE. The MATCH part of the formula evaluates each cell in the GL_account column and if it matches the GL Account code in A4 (5597-10) is awards it a 1, and if it doesn’t it awards it a zero. It does the same for the Business_Unit column, matching B3607 to give you an array like this: Or in a formula it looks like this: Which is the same as doing this calculation: Since the only row resulting in a 1 is on row 9 of the array, the MATCH component of the formula evaluates to 9, which is used by the INDEX function as the row number argument. So, let's take a look at the INDEX function part of the formula now. INDEX Function Remember the INDEX function in the array form returns the value of an element in a table or an array, selected by the row and column number indexes. And the syntax for the INDEX function in the array form is: =INDEX(array, row_num, [column_num]) Array – the range of cells containing the data you want to find. Row_num – the row number your data is on. Column_num –the column number your data is in. Matt’s formula evaluates to this: The column_num argument is optional, and since there is only one column in the Amount Named Range the argument is not required. For icing on the top Matt then wraps his formula in an IFERROR function so if a match is not found a zero will be entered in the cell instead of an error. Here is Matt's formula one last time The VLOOKUP Cheat Now, if the carrot I gave you didn't help and all that array business still did your head in, I’ve got a simpler option that requires a helper column. Some would say the helper column is cheating 😉 but I say go with the path of least resistance, unless of course it is worthwhile in the long term, which it is for Matt. Remember the reason Matt couldn’t use a regular INDEX & MATCH formula or VLOOKUP is because both of these can only look up/match one value. So, an alternative solution is to create a unique value in your lookup data from the two columns you’re trying to match i.e. the GL Account and Business Unit, by joining them together. This is where the helper column comes in. In the table below I have a new column (G) that CONCATENATES (joins) the GL Account and Business Unit values together using a formula in cell G4 like this: The result is what you see below in column G. Now you have your unique GL/Business Unit identifier you can use a VLOOKUP formula like this: Which evaluates like this: For more on using VLOOKUP to find multiple criteria. To eliminate potential errors wrap it in an IFERROR function like this: Note: whilst array formulas are effective they can also be taxing on your system memory. If you have a lot of data to search through the VLOOKUP option may be more efficient. Thanks for sharing your cool trick and teaching me something new, Matt. Matt Duncan is an accountant for a global payroll processing company located in Florida. He has no formal training in Excel but loves to find new ways to use Excel’s vast functionality. “As you can imagine, a payroll company has limited time to complete all the processes that go into producing a single pay check not to mention a batch of 30,000 checks! There are numerous ways to make a mistake on a pay check and with only a few hours to produce a check and fund the hundreds of benefits, vendors, governments and garnishments, etc., there is a constant need for process improvement. One of the most time consuming processes has been funding verification. With global corporations clients sharing a payroll cycle between all its companies the verification process included manual inputs based on a variety of filtering and VLOOKUP’s. The process could take up to 2 hours depending on the number of accounts and the number of companies included in a pay cycle just to input My role is to audit all the pay cycles at the end of the month and report back to the client with any errors and, of course, improve processes that can cause errors, i.e. manual inputs!” Vote for Matt If you’d like to vote for Matt's tip (in X-factor voting style) use the buttons below to Like this on Facebook, Tweet about it on Twitter, +1 it on Google, Share it on LinkedIn, or leave a comment….or all of the above 🙂 1. Don Interesting way of doing it. If the data has more than 1 match, will it retrieve only the first number? What if you used sumifs() ? □ Mynda Treacy Hi Don, If your objective was to summarise values then yes, SUMIFS would be the tool for that, however if you wanted to lookup text values then SUMIFS wouldn’t work. 2. Precious I am trying to write an INDEX formula with four conditions, i am trying to pull DEPRECIATION amount for a specific Cost Centre, for a specific account for a specific month. I am working with about 9000 lines there are no duplicates. example (Amount, Cost Centre number, GL account and the corresponding month. I have tried to write it but i am getting either #N/A or #VALUES obviously I am doing something wrong. □ Catalin Bombea Hi Precious, Please use our Help Desk to upload a sample of your calculations, this way we can see where the problem is without guessing. Don’t forget to give all the details, even create a sample of desired result. I will gladly help you to solve this problem. Thanks for understanding 🙂 3. Paula I cannot for the life of me get an index match formula with 2 criteria to work. I am using named ranges and either get N/A or Value errors. Commas in the Match produce Value, equal signs produce N/A. What is wrong with this formula? Any help would be appreciated. □ Mynda Treacy Hi Paula, Have you entered it as an array formula with CTRL+SHIFT+ENTER? Kind regards, 4. Parda Mynda, Thanks A LOT ! Finally, I succeed in creating a right formula, thanks to your explanations and Matt formula 🙂 □ You’re welcome, Parda 🙂 Glad we could help. 5. Jude Briggs This could be done by using SUMIFS =SUMIFS(Amount,Business_Unit,B$3,GL_Account,$A4) Using the same named ranges. □ Cheers, Jude. You are correct. However if the data you wanted to find was text instead of values, SUMIFS wouldn’t work, whereas Matt’s solution and the VLOOKUP cheat would. Kind regards, 6. Ibrahim copy the above across and down, and this should do the same function. I would send the spreadsheet but not sure how! □ Carlo Estopia Hi Ibrahim, Send it here: HELP DESK 7. Ahsan Siddiqui Thank you very much Matt. Thanks a lot Mynda. □ You’re welcome, Ahsan 🙂 ☆ Muslih Mohamed Ismail please ignore my previous comment or request and concider the following. sorry for any inconvienience that this might cause. Please help regarding the following issue. I am working for the capital market regulator of maldives. I am dealing with statistics maintained by regulator regarding the Maldives Stock Market. I have got to calculate the maximum fee the brokers, stock exchange and regulator have earned on share transaction for this year. We have three level of fees approved by the regulator. The following are the three level, (MVR means Maldivian Ruffiyaa) 1. If the transaction value is > or = to MVR 50,000.00 the brokers may charge maximum 1.5% of the transaction value as Brokerage commission + maximum 0.5% of the transaction value as trade processing fee for the stock exchange. 2. If the transaction value is > MVR 50,001.00 and or = MVR 100,001.00 the brokers may charge maximum 0.5% of the transaction value as Brokerage commission + maximum 0.5% of the transaction value as trade processing fee for the stock exchange. basically I want to develop a formula which includes all the above three conditions applicable to the transaction value. For an example I have a 3 three columnar table where the first column or Column A has X number of Transaction Value(s) where each cell has one transaction value, second Column or Column B should have the brokerage commission(s) charged for each corresponding transaction value in Column A considering all the three conditions mentioned above and third Column or Column C should have the trade processing fee(s) charged for each corresponding transaction value in Column A consider three conditions mentioned above. ○ Carlo Estopia Hi Muslih, I see that your logic is incomplete, for example how much will be charge for beyond 100001 MVR? Anyways, here’s your formula (A2) Transaction Value - 50001 Brokerage commission - =IF(A2<=50000,A2*0.015,IF(AND(A2>=50001,A2<=100001),A2*0.005)) Processing Fees - =IF(A2<=50000,A2*0.005,IF(AND(A2>=50001,A2<=100001),A2*0.005)) Here’s your logic: IF Transaction value is <=50000 Brokerage commission 1.5% Processing fees .5% Elseif Transaction value is >= 50001 and <= 100001 Brokerage commission .5% Processing fees .5% Question: What if beyond 100001? Hence, your formula will return a false value. ☆ Muslih Mohamed Ismail correction of second level. 2. If the transaction value is > MVR 50,001.00 and or or = to MVR 100,001.00 the brokers may charge maximum 0.5% of the transaction value as Brokerage commission + maximum 0.5% of the transaction value as trade processing fee for the stock exchange. 8. chng william Dear Mynda, Thank for sharing, hope you wont mind I asking, what if I have both 2 account in column-H, can you advice me on how can I get the total $$ of 5250-05B7107 5251-05? ( $18+$50+$45) column-G , column-H, column-J 5250-05B7107 5250-05 $10 5250-05B7608 5250-05 $30 5250-05B7107 5251-05 $18 5250-05B7107 5251-05 $50 5250-05B7608 5250-05 $30 5250-05B7107 5251-05 $45 5250-05B7107 5250-05 $18 Best regards, □ Hi William, You need the SUMIFS function. IF you only have Excel 2003 then you can use an array formula or SUMPRODUCT. Array entered with CTRL+SHIFT+ENTER: Kind regards, ☆ chng william You are my master, Much appreciation. Big THANK YOU 9. ashkan I just have a similar question. in a excel file I have 2 colums (Column C: authors name, and column D: is the corresponding published articles). each author may have different articles for instance Carl may have 5 articles and Johana has 15 articles and so on. I want to make a list that under name of each authors we have a list of his/her articles summarized. I have used the below formula for retriveing the first Match and the formula below for second Match But it is not working for third and other articles corresponding to each author. In this file I have 2 columns (author name and corresponding projects) comprising a few rows (for simplicity.)I want to make a table that reads from these two columns ( which in reality may exceed 200 rows) and check when the authors name in cell for example I4 (Carl) matches the author name in the range of column C then write the corresponding projects. if you apply these fromulas you can see in this example Carl has 4 work that I can auto detect them by the first two formulas but not more than these two. Can you please see if you can help me to match the rest of papers for each article. I need a formula that checks the same procedure perhaps but ignores these 2 previous detected and written works but detects the new corresponding work. Many thanks in advance. an excel file is availabe to be sent if need be. Best Regards, □ Mynda Treacy Hi Ashkan, A better tool would be a PivotTable. If you want to send me your file I’ll insert a PivotTable for you as an example. Kind regards, ☆ Syed Raza I have more simpler non-array formula which I learned from one of my colleagues in Daniel’s ‘Excel Hero Academy’ class which does the same job =INDEX($J$4:$J$16,MATCH(H4&I4,INDEX ○ Syed Raza Above formula can be altered with named ranges as in the sample workbook =INDEX(Amount,MATCH(H4&I4,INDEX(GL_Account&Business_Unit,),0)) ■ Mynda Treacy Cheers, Syed 🙂 10. Jeanette Dorobek Thanks for posting all of the tips, and the training you provide. It sure is a lot of help to newcomers like me. I was just asking some one how to do this yesterday. Keep up the good work. And way to go Matt and Mynda! □ Mynda Treacy Thanks, Jeanette. Glad we could help. Kind regards, 11. chris Holy cow this is deep- thanks for the coffee warning and the post. So great to know there are other self taught individuals out there- Thanks, Matt and Mynda! □ Mynda Treacy You’re welcome, Chris 🙂 12. Miroslav Perhaps I’m missing something… but if he used Excel 2007 and up, couldn’t he just use the SUMIFS function to populate that table? No need to activate array formulas… =SUMIFS([sum range],[category range1],[value1],[category range2],[value2]) □ Mynda Treacy Hi Miroslav, Yes, you are correct. You can also use SUMIFS in this example. Note: if the data you wanted to find was text instead of values, SUMIFS wouldn’t work, whereas Matt’s solution and the VLOOKUP cheat would. Thanks for sharing. Pat on the back to you too 🙂 Kind regards, 13. Hi, Theoretically, great formula. Has lot of things to learn. However, could this not be easily obtained by using the Pivoting – as shown here under:- Sum of Amount Column Labels Row Labels B3607 B3608 B7107 B7309 B9127 Grand Total 5250-05 87495 96416 58509 103735 346155 5560-17 74519 99874 87516 88708 350617 5597-10 69115 94604 55457 72733 291909 5597-14 94035 94035 Grand Total 231129 290894 201482 265176 94035 1082716 Thanks & Regards, □ Mynda Treacy Hi Raghu, Yes, correct. You could do the same with a PivotTable. In Matt’s case he needed to populate another more complex table that wasn’t as simple as my example, and so the formula suited him Thanks for sharing. Pat on the back to you 🙂 Kind regards, 14. KM007 Thank you for the cool tricks. It really helps. □ Mynda Treacy You’re welcome, KM007 🙂 Leave a Reply Cancel reply
{"url":"https://www.myonlinetraininghub.com/excel-factor-entry-4-index-and-match-two-criteria","timestamp":"2024-11-05T04:09:40Z","content_type":"text/html","content_length":"275646","record_id":"<urn:uuid:591eaad6-d58f-4fe1-bfa5-a1cf4fffbb9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00766.warc.gz"}
A method to estimate an equivalent reservoir pressure in a given reservoir location $//<![CDATA[ \begin{array}{l}(x, \, y, \, z)\end{array} //]]>$ as if this location was translated vertically to the Datum. It allows compensating the pressure difference between various reservoir locations related to the differences in their elevation. If reservoir structure is flat then initial Reservoir Pressure at formation top is going to be constant across the field and so will be the Datum Pressure but with a constant shift accounting for the height between formation top and Datum. If reservoir structure is non-flat then initial Reservoir Pressure at formation top will be varying across the field while the Datum Pressure will be constant across the field. This particularly helpful in Reservoir Pressure analysis during production so that areal Reservoir Pressure distribution recalculated to Datum shows only those pressure variations across the field which are related to production and not to the field structure. The usual practise is to measure/assess reservoir pressure at formation tops and then recalculate it to Datum using Datum Pressure @model. If done systematically it provides a fair basement of analysis of field-areal reservoir pressure dynamics over time with account of formation tops elevation. The mathematical model of Datum Pressure is explained in Datum Pressure @model. See Also Petroleum Industry / Upstream / Petroleum Engineering / Subsurface E&P Disciplines / Reservoir Engineering / Reservoir pressure [ Datum ] [ Datum Pressure @model ]
{"url":"https://nafta.wiki/display/GLOSSARY/Datum+Pressure","timestamp":"2024-11-05T00:48:54Z","content_type":"text/html","content_length":"57371","record_id":"<urn:uuid:efa31342-84dc-49fe-8485-0d2c00cd9ef3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00288.warc.gz"}
Mathematics, BS Through an array of available concentrations ranging from pure mathematics to individualized studies to data science and beyond, the Mathematics, BS provides students with a thorough, customizable, and exciting education in mathematics. Teacher Licensure Interested students should attend an information session early in their studies. For more information, visit the School of Education's website. Students majoring in biology who wish to pursue a career teaching secondary school may consider applying for the Secondary Education - Mathematics (6-12) Undergraduate Certificate offered by the College of Education and Human Development as an option in seeking an initial Virginia teaching license. Other routes to licensure include the Mathematics, BA or BS/Curriculum and Instruction, Accelerated MEd (Secondary Education Mathematics concentration) or select traditional Master's programs. Please contact the undergraduate advisor in the College of Education and Human Development for more information. University-wide admissions policies can be found in the Undergraduate Admissions Policies section of this catalog. To apply for this program, please complete the George Mason University Admissions Application. Students must fulfill all Requirements for Bachelor's Degrees, including the Mason Core. MATH 300 Introduction to Advanced Mathematics (Mason Core) meets the writing intensive requirement for this major. For policies governing all undergraduate programs, see AP.5 Undergraduate Policies. Graduating seniors are required to have an exit interview. Language Proficiency Recommendation The department recommends proficiency in French, German, or Russian. Course Recommendations and Policies A maximum of 6 credits of grades below 2.00 in coursework designated MATH or STAT may be applied toward the major. Students intending to enter graduate school in mathematics are strongly advised to take MATH 315 Advanced Calculus I and MATH 321 Abstract Algebra. Students may not receive credit for both MATH 214 Elementary Differential Equations and MATH 216 Theory of Differential Equations; both MATH 213 Analytic Geometry and Calculus III and MATH 215 Analytic Geometry and Calculus III (Honors); both MATH 351 Probability and STAT 344 Probability and Statistics for Engineers and Scientists I; and both MATH 352 Statistics and STAT 354 Probability and Statistics for Engineers and Scientists II. After receiving a grade of 'C' or better in one of the courses listed below on the left, students may not receive credit for the corresponding course on the right: Degree Requirements Total credits: minimum 120 In addition to the Mathematics Core, Science, and Computational Skills requirements, students must select one concentration and complete the requirements therein. Mathematics Core Course List Code Title Credits MATH 113 Analytic Geometry and Calculus I (Mason Core) 4 MATH 114 Analytic Geometry and Calculus II 4 MATH 125 Discrete Mathematics I (Mason Core) 3 MATH 203 Linear Algebra 3 MATH 213 Analytic Geometry and Calculus III 3 or MATH 215 Analytic Geometry and Calculus III (Honors) MATH 214 Elementary Differential Equations 3 or MATH 216 Theory of Differential Equations MATH 300 Introduction to Advanced Mathematics (Mason Core) ^1 3 MATH 322 Advanced Linear Algebra 3 Total Credits 26 ^ 1 Fulfills the writing intensive requirement. Computational Skills Course List Code Title Credits CS 112 Introduction to Computer Programming (Mason Core) 4 Total Credits 4 Individualized Concentration (IND) Students who are looking for a flexible concentration option are able to customize their degree with the Individualized Concentration. The Individualized Concentration allows students to take coursework in a variety of fields. Students should work closely with a mathematics advisor and have their individual degree plan approved no later than their junior year. Course List Code Title Credits Required Courses MATH 315 Advanced Calculus I 3 Select two from the following: 6 MATH 316 Advanced Calculus II MATH 321 Abstract Algebra MATH 421 Abstract Algebra II MATH 431 Topology MATH 432 Differential Geometry MATH 433 Algebraic Geometry MATH 464 Linear Algebra with Data Applications MATH 465 Mathematics of Data Science Choose 12 additional upper-level MATH-prefixed credits, not taken above. ^1 12 Additional Science Select one option from the following: 4-9 1. A second sequence from the choices under "Science" above 2. 6 credits from more advanced courses in biology, chemistry, geology, or physics ^2 4. Select two courses from the following: CDS 230 Modeling and Simulation I CDS 301 Scientific Information and Data Visualization CS 211 Object-Oriented Programming CS 310 Data Structures CS 330 Formal Methods and Models CS 483 Analysis of Algorithms Total Credits 25-30 ^ 1 Excluding MATH 400 History of Math (Topic Varies) (Mason Core) ^ 2 Only refers to courses acceptable for credit toward a natural science major. Consider courses from the following: BIOL 300-499, CHEM 300-499, GEOL 300-499, PHYS 300-499. Concentration in Pure Mathematics (PURM) Pure mathematics is the study of ideas and structures that underlie all of mathematics. This concentration provides exciting opportunities for students interested in advanced coursework in the fields traditionally referred to as "pure mathematics". The concentration prepares students for a wide variety of careers involving mathematical thinking or graduate studies in pure mathematics. Course List Code Title Credits Breadth Requirements MATH 315 Advanced Calculus I 3 MATH 321 Abstract Algebra 3 MATH 411 Functions of a Complex Variable 3 Choose one from the following: 3 MATH 312 Geometry MATH 431 Topology Depth Requirements Select two from the following: 6 MATH 312 Geometry (if not chosen above) MATH 316 Advanced Calculus II MATH 325 Discrete Mathematics II MATH 421 Abstract Algebra II MATH 431 Topology (if not chosen above) MATH 432 Differential Geometry MATH 433 Algebraic Geometry Additional Mathematics Choose 3 credits of upper level MATH-prefixed credits ^1 3 Additional Science Select one option from the following: 4-9 1. A second sequence from the choices under "Science" above 2. 6 credits from more advanced courses in biology, chemistry, geology, or physics ^2 4. Select two courses from the following: CDS 230 Modeling and Simulation I CDS 301 Scientific Information and Data Visualization CS 211 Object-Oriented Programming CS 310 Data Structures CS 330 Formal Methods and Models CS 483 Analysis of Algorithms Total Credits 25-30 ^ 1 Excluding MATH 400 History of Math (Topic Varies) (Mason Core) ^ 2 Only refers to courses acceptable for credit toward a natural science major. Consider courses from the following: BIOL 300-499, CHEM 300-499, GEOL 300-499, PHYS 300-499. Concentration in Actuarial Mathematics (ACTM) This concentration provides exciting opportunities for students interested in studying actuarial mathematics. Expertise in this field leads directly into a career as a practicing actuary with an insurance company, consulting firm, or in government employment. Course List Code Title Credits ACTM Courses MATH 351 Probability 3 MATH 352 Statistics 3 MATH 551 Regression and Time Series 3 MATH 554 Financial Mathematics 3 MATH 555 Actuarial Modeling I 3 MATH 557 Financial Derivatives 3 ACCT 203 Survey of Accounting 3 ECON 103 Contemporary Microeconomic Principles (Mason Core) 3 ECON 306 Intermediate Microeconomics ^1 3 or ECON 310 Money and Banking or FNAN 321 Financial Institutions STAT 362 Introduction to Computer Statistical Packages 3 Select two from the following: 6 MATH 441 Deterministic Optimization MATH 442 Stochastic Models MATH 446 Numerical Analysis I MATH 453 Advanced Mathematical Statistics Total Credits 36 ^ 1 For mathematics majors, the Department of Economics has agreed to waive the ECON 104 prerequisite. Concentration in Applied Mathematics (AMT) This concentration provides exciting opportunities for students interested in taking additional classes in applied mathematics. The concentration prepares students to deal with real-world applications in science and engineering, or to pursue graduate studies in applied mathematics. Course List Code Title Credits AMT Courses MATH 313 Introduction to Applied Analysis 3 MATH 315 Advanced Calculus I 3 MATH 351 Probability 3 MATH 413 Modern Applied Mathematics I 3 MATH 446 Numerical Analysis I 3 Select 3 credits of MATH courses numbered above 300 ^1 3 Select two courses from the following: 6 MATH 314 Advanced Differential Equations MATH 414 Modern Applied Mathematics II MATH 478 Introduction to Partial Differential Equations with Numerical Methods Additional Science Courses Select additional science credits from one of the following options: 4-9 1. A second sequence from the choices under "Science" above 2. Select 6 credits from more advanced courses in biology, chemistry, geology, or physics ^2 4. Select two courses from the following: CDS 230 Modeling and Simulation I CDS 301 Scientific Information and Data Visualization CS 211 Object-Oriented Programming CS 310 Data Structures CS 330 Formal Methods and Models CS 483 Analysis of Algorithms Total Credits 28-33 ^ 1 Excluding MATH 400 History of Math (Topic Varies) (Mason Core) ^ 2 Only refers to courses acceptable for credit toward a natural science major. Consider courses from the following: BIOL 300-499, CHEM 300-499, GEOL 300-499, PHYS 300-499. Concentration in Data Science (DSCI) The data science concentration prepares math majors for careers in industry and academia with a focus on the rapidly developing area of mathematics of data science.Students in this program will develop analytical and computational skills that will provide a deeper understanding of machine learning and data science concepts. By mastering the theoretical foundation underlying practical algorithms and uncovering inherent connections with several branches of modern mathematics, students will hone their creativity and independent thinking skills necessary to lead the data science revolution. Course List Code Title Credits Data Science Courses MATH 315 Advanced Calculus I 3 MATH 351 Probability 3 MATH 446 Numerical Analysis I 3 MATH 464 Linear Algebra with Data Applications 3 Select two options from the following: 6-7 MATH 447 Numerical Analysis II MATH 462& MATH 463 Mathematics of Machine Learning and Industrial Applications I and Mathematics of Machine Learning and Industrial Applications II MATH 465 Mathematics of Data Science Select one course from the following: 3 MATH 352 Statistics STAT 350 Introductory Statistics II STAT 360 Introduction to Statistical Practice II STAT 356 Statistical Theory Select one course from the following: 3 CDS 301 Scientific Information and Data Visualization CDS 302 Scientific Data and Databases (Mason Core) CS 310 Data Structures Additional Science Courses Select additional science credits from one of the following options: 3-4 1. Select one course from the following: BIOL 213 Cell Structure and Function (Mason Core) CHEM 211& CHEM 213 General Chemistry I (Mason Core) and General Chemistry Laboratory I (Mason Core) GEOL 101& GEOL 103 Physical Geology (Mason Core) and Physical Geology Lab (Mason Core) PHYS 160& PHYS 161 University Physics I (Mason Core) and University Physics I Laboratory (Mason Core) 2. 3 credits from more advanced courses in biology, chemistry, geology, or physics ^1 3. The 4 credit option of PHYS 262 and PHYS 263 Total Credits 27-29 ^ 1 Only refers to courses acceptable for credit toward a natural science major. Consider courses from the following: BIOL 300-499, CHEM 300-499, GEOL 300-499, PHYS 300-499. Concentration in Mathematical Statistics (MTHS) This concentration provides exciting opportunities for students interested in taking additional classes on statistics and data analysis. The concentration prepares data analysts able to deal with real world applications in science and engineering. Course List Code Title Credits MTHS Courses MATH 315 Advanced Calculus I 3 MATH 351 Probability 3 MATH 352 Statistics 3 MATH 453 Advanced Mathematical Statistics 3 MATH 551 Regression and Time Series 3 STAT 362 Introduction to Computer Statistical Packages 3 Select one from: 3 STAT 260 Introduction to Statistical Practice I STAT 350 Introductory Statistics II STAT 360 Introduction to Statistical Practice II Select two from the following: 6 STAT 455 Experimental Design STAT 460 Introduction to Biostatistics STAT 462 Applied Multivariate Statistics STAT 463 Introduction to Exploratory Data Analysis STAT 465 Nonparametric Statistics and Categorical Data Analysis STAT 472 Introduction to Statistical Learning STAT 474 Introduction to Survey Sampling Additional Science Courses Select additional science credits from one of the following options: 3-4 1. Choose one from the following different lab sciences: BIOL 213 Cell Structure and Function (Mason Core) CHEM 211& CHEM 213 General Chemistry I (Mason Core) and General Chemistry Laboratory I (Mason Core) GEOL 101& GEOL 103 Physical Geology (Mason Core) and Physical Geology Lab (Mason Core) PHYS 160& PHYS 161 University Physics I (Mason Core) and University Physics I Laboratory (Mason Core) 2. Choose 3 credits from more advanced courses in biology, chemistry, geology, or physics ^1 4. Choose one course from the following: CDS 230 Modeling and Simulation I CDS 301 Scientific Information and Data Visualization CS 211 Object-Oriented Programming CS 310 Data Structures CS 330 Formal Methods and Models CS 483 Analysis of Algorithms Total Credits 30-31 ^ 1 Only refers to courses acceptable for credit toward a natural science major. Consider courses from the following: BIOL 300-499, CHEM 300-499, GEOL 300-499, PHYS 300-499. Mason Core and Elective Credits In order to meet a minimum of 120 credits, this degree requires additional credits (specific credit counts by concentration are shown below), which may be applied toward any remaining Mason Core requirements (outlined below), Requirements for Bachelor's Degrees, and elective courses^1. Students are strongly encouraged to consult with their advisors to ensure that they fulfill all • INDC concentration: 51-57 credits • PURM concentration: 51-57 credits • ACTM concentration: 45-46 credits • AMT concentration: 48-54 credits • DSCI concentration: 52-55 credits • MTHS concentration: 50-52 credits ^ 1 A maximum of 12 credits between MATH 490 Internship and MATH 491 Reading and Undergraduate Research in Mathematics can be applied to this degree. Mason Core Some Mason Core requirements may already be fulfilled by the major requirements listed above. Students are strongly encouraged to consult their advisors to ensure they fulfill all remaining Mason Core requirements. Students who have completed the following credentials are eligible for a waiver of the Foundation and Exploration (lower level) requirement categories. The Integration category (upper level) is not waived under this policy. See Admissions for more information. • VCCS Uniform Certificate of General Studies • VCCS or Richard Bland Associate of Science (A.S.), Associate of Arts (A.A.), Associate of Arts and Sciences (A.A.&S.), or Associate of Fine Arts (A.F.A.) ^ 1 In addition to covering content related to the designated category, Exploration level courses marked with a Just Societies "flag" are specifically designed to help students learn how to interact effectively with others from all walks of life, including those with backgrounds and beliefs that differ from their own. Courses marked with the Just Societies flag are available for students starting in Fall 2024. Students admitted prior to the Fall of 2025 are not required to take courses with a Just Societies flag but may wish to do so to increase their knowledge and skills in this important area. Students interested in this approach to completing their Mason Core Exploration Requirements should work closely wiht their advisor to identify the appropriate Just Societies-flagged courses. ^ 2 Most programs include the writing-intensive course designated for the major as part of the major requirements; this course is therefore not counted towards the total required for Mason Core. ^ 3 Minimum 3 credits required. Honors in the Major Mathematics majors who have maintained a GPA of at least 3.50 in mathematics courses and a GPA of 3.50 in all courses taken at George Mason University may apply to the departmental honors program upon completion of two MATH courses at the 300+ level (excluding MATH 400 History of Math (Topic Varies) (Mason Core)), at least one of which has MATH 300 Introduction to Advanced Mathematics (Mason Core) as a prerequisite. Admission to the program will be monitored by the undergraduate committee. Honors Requirements To graduate with honors in mathematics, a student is required to maintain a minimum GPA of 3.50 in mathematics courses and successfully complete MATH 405 Honors Thesis in Mathematics I and MATH 406 RS: Honors Thesis in Mathematics II with an average GPA of at least 3.50 in these two courses. Mathematics, BA or BS/Curriculum and Instruction, Accelerated MEd, (Secondary Education Mathematics Concentration) Highly-qualified undergraduates may be admitted to the bachelor's/accelerated master's program and obtain a BA or BS in Mathematics and an MEd in Curriculum and Instruction (Secondary Education Mathematics concentration) in an accelerated time-frame after satisfactory completion of a minimum of 143 credits. See AP.6.7 Bachelor's/Accelerated Master's Degree for policies related to this program. This accelerated option is offered jointly by the Department of Mathematical Sciences and the School of Education. Students in an accelerated degree program must fulfill all university requirements for the master's degree. For policies governing all graduate degrees, see AP.6 Graduate Policies. BAM Pathway Admission Requirements Applicants to all graduate programs at George Mason University must meet the admission standards and application requirements for graduate study as specified in Graduate Admissions Policies and Bachelor's/Accelerated Master's Degree policies. For information specific to this accelerated master's program, see Application Requirements and Deadlines. Students will be considered for admission into the BAM Pathway after completion of a minimum of 60 credits, and additional unit-specific criteria. Students who are accepted into the BAM Pathway will be allowed to register for graduate level courses after successful completion of a minimum of 75 undergraduate credits and course-specific Accelerated Master’s Admission Requirements Students already admitted in the BAM Pathway will be admitted to the MEd program, if they have met the following criteria, as verified on the Bachelor’s/Accelerated Master’s Transition form: • 3.0 overall GPA • Completion of specific undergraduate coursework • Successfully meeting Mason’s requirements for undergraduate degree conferral (graduation) and completing the application for graduation. Accelerated Pathway Requirements To maintain the integrity and quality of both the undergraduate and graduate degree programs, undergraduate students interested in taking graduate courses must choose from the following which can be taken as Advanced Standing or Reserve Graduate credit (to be determined by the student and their advisor): Course List Code Title Credits EDRD 619 Disciplinary Literacy 3 SEED 522 Foundations of Secondary Education 3 SEED 540 Human Development and Learning: Secondary Education 3 SEED 572 Teaching Mathematics in the Secondary School 3 SEED 672 Advanced Methods of Teaching Mathematics in the Secondary School 3 SEED approved elective For more detailed information on coursework and timeline requirements, see AP.6.7 Bachelor's/Accelerated Master's Degree policies. Mathematics, BA or BS/Mathematics, Accelerated MS This bachelor's/accelerated master's degree program allows academically strong undergraduates with a commitment to advance their education to obtain the Mathematics, BA or Mathematics, BS and the Mathematics, MS degrees within an accelerated timeframe. Upon completion of this 138 credit accelerated program, students will be exceptionally well prepared for entry into their careers or into a doctoral program in the field or in a related discipline. Students are eligible to apply for this accelerated program once they have earned at least 60 undergraduate credits and can enroll in up to 18 credits of graduate coursework after successfully completing 75 undergraduate credits. This flexibility makes it possible for students to complete a bachelor's and a master's in five years. For more detailed information, see AP.6.7 Bachelor's/Accelerated Master's Degrees. For policies governing all graduate degrees, see AP.6 Graduate Policies. For more information on undergraduates enrolling in graduate courses, see AP.1.4.4 Graduate Course Enrollment by Undergraduates. Application Requirements Applicants to all graduate programs at George Mason University must meet the admission standards and application requirements for graduate study as specified in the Graduate Admission Policies section of this catalog. Important application information and processes for this accelerated master's program can be found here. Students should seek out the graduate program's advisor who will aid in choosing the appropriate graduate courses and help prepare the student for graduate studies. Successful applicants will have an overall undergraduate GPA of at least 3.00. Additionally, they will have completed the following courses with a GPA of 3.00 or higher: Accelerated Option Requirements After the completion of 75 undergraduate credits, students may complete 3 to 12 credits of graduate coursework that can apply to both the undergraduate and graduate degrees. In addition to applying to graduate from the undergraduate program, students in the accelerated program must submit a bachelor's/accelerated master's transition form (available from the Office of the University Registrar) to the College of Science's Office of Academic and Student Affairs by the last day to add classes of their final undergraduate semester. Students should enroll for courses in the master's program in the fall or spring semester immediately following conferral of the bachelor's degree, but should contact an advisor if they would like to defer up to one semester. Students must maintain an overall GPA of 3.00 or higher in all graduate coursework and should consult with their faculty advisor to coordinate their academic goals. Reserve Graduate Credit Accelerated master's students may also take up to 6 graduate credits as reserve graduate credits. These credits do not apply to the undergraduate degree, but will reduce the master's degree by up to 6 credits. With 12 graduate credits counted toward the undergraduate and graduate degrees plus the maximum 6 reserve graduate credits, the credits necessary for the graduate degree can be reduced by up to 18. Graduate Course Suggestions The following list of suggested courses is provided for general reference. To ensure an efficient route to graduation and post-graduation readiness, students are strongly encouraged to meet with an advisor before registering for graduate-level courses. BS (any)/Statistical Science, Accelerated MS Highly-qualified undergraduates may be admitted to the bachelor's/accelerated master's program (BAM) and obtain an undergraduate BS degree and the Statistical Science, MS in an accelerated time-frame after satisfactory completion of a minimum of 138 credits. Admitted students are able to use up to 12 graduate credits in partial satisfaction of requirements for the undergraduate degree. Upon completion and conferral of the bachelor's degree and with satisfactory performance (grade of 'B' or better) in each of the graduate courses, students are given advanced standing in the master's program. See AP.6.7 Bachelor's/Accelerated Master's Degrees for policies related to this program. Students in an accelerated degree program must fulfill all university requirements for the master's degree. For policies governing all graduate degrees, see AP.6 Graduate Policies. BAM Pathway Admission Requirements No specific undergraduate BS degree is required. Students enrolled in any BS degree may apply to the accelerated Statistical Science, MS program if such an accelerated Statistical Science, MS pathway is allowable from the student's BS program, which will be determined by the academic advisors of both the BS and MS programs. Applicants to all graduate programs at George Mason University must meet the admission standards and application requirements for graduate study as specified in Graduate Admissions Policies and Bachelor's/Accelerated Master's Degree policies. Students will be considered for admission into the BAM Pathway after completion of a minimum of 60 credits with an overall GPA of 3.0. Students who are accepted into the BAM Pathway will be allowed to register for graduate level courses after successful completion of a minimum of 75 undergraduate credits and course-specific Accelerated Master's Admission Requirements Students already admitted in the BAM Pathway will be admitted to the Statistical Science, MS program, if they have met the following criteria, as verified on the Bachelor’s/Accelerated Master’s Transition form: • Completion of Mason’s requirements for undergraduate degree conferral (graduation) and completion of application for graduation. • An overall GPA of 3.00. • Completion of the following Mason courses each with a grade of C or better: Course List Code Title Credits MATH 213 Analytic Geometry and Calculus III 3 MATH 203 Linear Algebra 3 or MATH 321 Abstract Algebra STAT 250 Introductory Statistics I (Mason Core) 3 or STAT 344 Probability and Statistics for Engineers and Scientists I STAT 346 Probability for Engineers 3 or MATH 351 Probability STAT 362 Introduction to Computer Statistical Packages 3 Accelerated Pathway Requirements To maintain the integrity and quality of both the undergraduate and graduate degree programs, students complete all credits satisfying degree requirements for the BS and MS programs, with up to twelve credits overlap chosen from the following graduate courses: Course List Code Title Credits STAT 544 Applied Probability 3 STAT 554 Applied Statistics I 3 STAT 560 Biostatistical Methods 3 STAT 574 Survey Sampling I 3 STAT 663 Statistical Graphics and Data Visualization 3 All graduate course prerequisites must be completed prior to enrollment. Each graduate course must be completed with a grade of B or better to apply toward the MS degree. While still in undergraduate status, a maximum of 6 additional graduate credits may be taken as reserve graduate credit and applied to the master's program. Reserve graduate credits do not apply to the undergraduate degree. For more detailed information on coursework and timeline requirements, see AP.6.7 Bachelor's/Accelerated Master's Degrees policies. Degree Conferral Students must apply the semester before they expect to complete the BS requirements to have the BS degree conferred. In addition, at the beginning of the student's final undergraduate semester, students must complete a Bachelor's/Accelerated Master's Transition form that is submitted to the Office of the University Registrar and Graduate Recruitment and Enrollment Services. At the completion of MS requirements, a master's degree is conferred.
{"url":"https://catalog.gmu.edu/colleges-schools/science/mathematical-sciences/mathematics-bs/#acceleratedmasterstext","timestamp":"2024-11-07T20:35:05Z","content_type":"text/html","content_length":"123416","record_id":"<urn:uuid:ddd8a382-7071-412b-ab4a-c418dec11134>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00153.warc.gz"}
Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function Things like bank accounts, loans, investments, and mortgages are a part of life, and almost always, interest is involved. Sometimes, you need to deal with compound interest, so it would be good to know the formula for it! In this tutorial, you'll see the formula for compound interest. Take a look!
{"url":"https://virtualnerd.com/common-core/hsf-functions/HSF-IF-interpreting-functions/C/8/","timestamp":"2024-11-13T13:01:35Z","content_type":"text/html","content_length":"38354","record_id":"<urn:uuid:ad446afc-4cb3-4162-902b-bb71024f11d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00593.warc.gz"}
Depreciation Archives » Serio Consulting Considering a move to S/4HANA from ECC, or considering activating the New Depreciation Calculation Program in ECC and want to know what you're getting into? Read more! Handling US Tax Depreciation in SAP (Part 7): Tax Forms 4562 & 4797 Can SAP support specific US IRS Tax Forms such as 4562 and 4797? Yes, and no. Group Assets: What is SAP Doing in S/4HANA? If you have to deal with UOP depreciation and SAP Group Asset functionality, you'll want to read this to see what SAP is cooking up. Group Assets: What are They? Ever wondered what exactly a group asset was and how it's different from a regular asset? How to Copy/Insert a New Depreciation Area Into Existing Assets With RAFABNEW (Part 2) After reviewing the merits and situations of adding new depreciation areas into a live system in the previous blog, this one will go through the nuts-and-bolts of executing the transaction. It's demo How to Copy/Insert a New Depreciation Area Into Existing Assets With RAFABNEW (Part 1) Are all of your depreciation areas correct? Do you trust them to provide the right values (asking the tax group specifically!!!)? Are you implementing a new ledger in the GL? Is this an archiving project? Based on how you answer each of those questions it may be necessary to create... How to Reconcile Fixed Assets to the G/L (Part 4) The final blog in a series about how to identify, research, and ensure that the fixed asset subledger always ties to the GL. How to Reconcile Fixed Assets to the G/L (Part 3) The third installment in the AA-to-GL Reconciliation blog series talks about some other useful SAP programs that can help you get through this issue. How to Reconcile Fixed Assets to the G/L (Part 2) Have you found an Asset-to-GL reconciliation issue in your ECC system? How do you confirm it? More importantly, how do you fix it? SAP has some built-in program to assist but there are several more you can download that assist with this effort! How to Reconcile Fixed Assets to the G/L (Part 1) It's so important that the fixed asset subledger reconciles to the GL. But since they are designed with a subledger-to-ledger relationship, shouldn't they always be in sync? Doesn't SAP have special reconciliation accounts to ensure that they are? Read this blog to see why this isn't always the case and...
{"url":"https://serioconsulting.com/blog/category/depreciation/","timestamp":"2024-11-05T12:58:50Z","content_type":"text/html","content_length":"248973","record_id":"<urn:uuid:0ff0758c-2a61-4224-8854-f78264c72eaf>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00678.warc.gz"}
Bit sequences and Hamiltonian cycles With the coronavirus pandemic currently sweeping the globe, most of us here in Montréal are shut at home. But social distancing hasn’t kept my friends and I from discussing cool math (some school-related, some not) over videochat. The subject of today’s article is a little exercise that Prof. Luc Devroye gave some of us in his virtual office hours yesterday. After thinking on it overnight and working it out on paper this morning, I found a solution that I thought was pretty nice. Bit sequences The exercise went as follows. Starting with a sequence of $k$ bits (either $0$ or $1$), we repeatedly add a bit to the end and lop off a bit from the beginning. Each time we do so, we get a new $k$-sequence. Is it possible to cycle through all $2^k\!$ possible bit-sequences in this manner, in exactly $2^k\!$ steps? The answer is yes, as we shall soon prove. As always, it is instructive to work out a couple small examples. When $k=1$, this is very easy, because we can start with $0$ and then replace it with a $1$. For $k=2$, we might start with the sequence $00$. Adding a $1$, we obtain the sequence $01$. Now adding a $0$, we get $10$. But here we’re stuck, because whether we add $0$ or $1$, we’ll get a sequence we’ve already seen. The correct thing would have been to add a $1$, giving us $11$, and then a $0$, giving us $10$. All $2^2 = 4$ sequences of length $2$ have been obtained. So we know that the statement is true for $k = 1$ and $2$. But of course, for larger values of $k$ we don’t want to be stuck guessing and checking, so we will have to develop a proof that works for all $k$. To this end, we need to introduce a couple of concepts from graph theory. (Feel free to skip the next section if you’re already acquainted with graphs.) Graphs and cycles Graph theory is the study of networks consisting of vertices (also called nodes) and edges between them. Often we have edges that are undirected in that one can travel along them in both directions, but our graphs will be directed, so every edge can be seen as a one-way street. For this article, we are concerned with cycles in a graph, i.e. paths that start at a vertex, travel around for a bit along edges, then return to the starting vertex. Some natural questions to ask about graphs are as follows: 1. Starting at a vertex, is it possible to traverse the graph using every edge exactly once and return to the starting point? 2. Starting at a vertex, is it possible to traverse the graph hitting every node exactly once and return to the starting point? In fact, an instance of the first question is one of the earliest problems in graph theory, and was solved by Leonhard Euler in 1736. In his honour, a cycle that uses every edge exactly once is called an Euler tour, while a cycle that visits every vertex exactly once is named a Hamiltonian cycle, after the Irish mathematician William Rowan Hamilton. Although these questions about graphs seem very similar, the first is very easy and the second is very hard. It turns out that a graph has an Euler tour if and only if every vertex has as many edges coming in as going out. I won’t give the proof of this, but if you work out a couple examples, you might see why. Having an Euler tour means that it is impossible, in some sense, to “get stuck” by following an edge into a vertex and then having no way out. If every vertex has the same number of in-edges as out-edges, this can’t happen. Representing bits as graphs Graphs are relevant to our problem because we can represent the operation of adding a new bit to the end and dropping one off the beginning as a walk in a graph. For any $k\geq 1$, we can create a graph with $2^k$ vertices, representing all possible $k$-sequences of bits. Then we draw an edge from a vertex to another if it is possible to go from one sequence to the other with exactly one bit-adding operation. We let $G_k$ denote the graph representation for the $k$-sequence problem; here are $G_1$ and $G_2$: Now it is clear why we have represented the sequences in this way. The original problem asked us to cycle through all $2^k$ possible bit-sequences by performing the bit-shift operation one at a time. This reduces to the problem of finding a Hamiltonian cycle in the graph representation. However, like I said earlier, proving the existence of Hamiltonian cycles isn’t as easy as finding Euler tours. So it isn’t immediately obvious that for any $k$, the graph representing $k$-sequences should have a Hamiltonian cycle. From Euler to Hamilton However, we note that every vertex has exactly two edges leading out (corresponding to adding either a $0$ or a $1$) and two leading in from other sequences. This means that for all $k$, $G_k$ has an Euler tour! This wouldn’t help us, except that we can draw a correspondence between the edges of $G_k$ and the vertices of $G_{k+1}$. The correspondence runs as follows: If an edge, labelled with a bit $b\in \{0,1\}$ in $G_k$ connects vertices $u$ and $v$, then we create a vertex in $G_{k+1}$ with the same label as $u$, but with the extra bit $b$ added to the end. This is illustrated in the diagram above, where the four edges in $G_1$ correspond to the four nodes in $G_2$. Then we draw an edge between nodes $a$ and $b$ in $G_{k+1}$ if and only if in the original graph $G_k$, the destination of the edge corresponding to $a$ is the starting point of edge $b$. This is less complicated than it sounds when you look at the picture. As an example, the orange node connects to the green one in $G_2$ because in $G_1$, the orange arrow leads to a node that the green arrow starts from. Notice that an Euler tour in $G_k$ is exactly a Hamiltonian cycle in $G_{k+1}$! Because we know that every $G_k$ has an Euler tour for all $k\geq 1$, we know that for all $k\geq 2$, $G_k$ has a Hamiltonian cycle ($G_1$ does as well, but we can check that manually). Back to bits We now have a way to read off the bit sequences that solve the original problem. $G_1$ and $G_2$ give us the solutions for $k=2$ and $k=3$ respectively: The bit sequences at the bottom contain every possible $k$-sequence as a substring and each corresponds to an Euler tour of $G_{k-1}$. Neat! Links and references I later found out that the sequences above are called de Bruijn sequences and the graphs are called de Bruijn graphs. So the solution I found is quite well-known (it’s in the Wikipedia page I just linked). The construction of taking a graph and producing another graph whose vertices correspond to the edges of the original is detailed in Donald Knuth’s The Art of Computer Programming, Vol. 1, in an exercise on oriented trees (Section 2.3.4.2, Exercise 21). A postscript on PostScript. I drew the graphs above by hand-coding them in PostScript. Click for the source code of the first one, and of the second one.
{"url":"https://marcelgoh.ca/2020/04/08/bit-sequences.html","timestamp":"2024-11-10T20:40:14Z","content_type":"text/html","content_length":"13148","record_id":"<urn:uuid:d54ccc89-4992-4f91-ba2f-121bd11f62c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00715.warc.gz"}
Boundary value problems for higher order linear impulsive differential equations The theory of impulsive di®erential equations has become an important area of research in recent years. Linear equations, meanwhile, are fundamental in most branches of applied mathematics, science, and technology. The theory of higher order linear impulsive equations, however, has not been studied as much as the cor- responding theory of ordinary di®erential equations. In this work, higher order linear impulsive equations at xed moments of impulses together with certain boundary conditions are investigated by making use of a Green's formula, constructed for piecewise di®erentiable functions. Existence and uniqueness of solutions of such boundary value problems are also addressed. Properties of Green's functions for higher order impulsive boundary value prob- lems are introduced, showing a striking di®erence when compared to classical bound- ary value problems of ordinary di®erential equations. Necessarily, instead of an or- dinary Green's function there corresponds a sequence of Green's functions due to impulses. Finally, as a by-product of boundary value problems, eigenvalue problems for higher order linear impulsive di®erential equations are studied. The conditions for the existence of eigenvalues of linear impulsive operators are presented. Basic properties of eigensolutions of self-adjoint operators are also investigated. In particular, a necessary and su±cient condition for the self-adjointness of Sturm-Liouville opera- tors is given. The corresponding integral equations for boundary value and eigenvalue problems are also demonstrated in the present work. Ö. Uğur, “Boundary value problems for higher order linear impulsive differential equations,” Ph.D. - Doctoral Program, Middle East Technical University, 2003.
{"url":"https://open.metu.edu.tr/handle/11511/13926","timestamp":"2024-11-06T21:35:19Z","content_type":"application/xhtml+xml","content_length":"54313","record_id":"<urn:uuid:76a700ab-98cd-4b25-9fe8-5579a6f02c1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00258.warc.gz"}
Add/subtract polynomials worksheets add/subtract polynomials worksheets Related topics: solve my math problem final exam review,2 polynomial long division solver can someone give me a example on how to solve fractions layouts and fabrications converting mixed fractions into a percentage graphing calculator emulator ti pre-algebra homework helper 4th grade fractions how do you solve rational expression holt algebra 2 answers Author Message Sovcin Posted: Monday 04th of Dec 13:36 Hey , I have been trying to solve equations related to add/subtract polynomials worksheets but I don’t seem to be getting any success. Does any one know about pointers that might help Back to top ameich Posted: Wednesday 06th of Dec 07:45 Being a teacher , this is a comment I usually hear from students. add/subtract polynomials worksheets is not one of the most liked topics amongst students . I never encourage my students to get pre made solutions from the internet , however I do advise them to use Algebrator. I have developed a liking for this software over time. It helps the students learn math in an easy to understand way. From: Prague, Czech Republic Back to top thicxolmed01 Posted: Wednesday 06th of Dec 21:32 Hello, just a year ago, I was stuck in a similar scenario . I had even considered the option of leaving math and selecting some other subject. A friend of mine told me to give one last try and sent me a copy of Algebrator. I was at ease with it within few minutes . My ranks have really improved within the last year . From: Welly, NZ Back to top LifiIcPoin Posted: Friday 08th of Dec 09:36 A truly piece of math software is Algebrator. Even I faced similar difficulties while solving binomial formula, graphing parabolas and trigonometry. Just by typing in the problem workbookand clicking on Solve – and step by step solution to my math homework would be ready. I have used it through several math classes - Basic Math, Algebra 1 and Algebra 1. I highly recommend the program. From: Way Way Back to top
{"url":"https://softmath.com/algebra-software-1/addsubtract-polynomials.html","timestamp":"2024-11-10T17:39:30Z","content_type":"text/html","content_length":"39187","record_id":"<urn:uuid:f3a7886f-cfd2-48ba-979a-906728105879>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00567.warc.gz"}
Do We Really Need More Complex Models? - Image by Author | Ideogram In the current era, many machine learning model solutions and research are dominated by Generative models such as the Large Language Model (LLM). Their popularity has risen with the presence of AI products like ChatGPT and Midjourney, which allow many people to learn about deep learning models actively. Even when the AI product isn’t as prominent as it is now, the complex model is always a more popular option. Complex models such as neural networks are specifically used in many use cases, even the simplest ones. Many data people jump straight into the complex model without considering the simplest one because the allure of a complex model is always better. However, do we really need complex models in every machine learning project? Let’s explore it. What is a Complex Model? There are no exact definitions for complex models. A deep neural network is a complex model, while linear regression is a simple model. Something like Random Forest does not generally constitute a simple model, but it’s not necessarily a complex model either. So, how can the model be called complex? Many characteristics determine its complexity, which often comes from the following information: • Number of Parameters • Interpretability • Multiple Structure • Computational Efficiency Image by Author The number of parameters is the inherent model configuration parameter, the value of which was learned during the training process, not the hyperparameter that was set initially before the model training started. Complex models generally have higher parameters than simpler models. Interpretability means explaining why the model provides its prediction. Complex models have a more challenging time interpreting as the higher number of parameters contribute to the interpretability complexity, while the simpler model is more accessible to interpret. Multiple structures refer to how the models were designed. Complex models often have multiple structures, such as multiple layers like neural networks or multiple models combined like ensemble Computer efficiency for the complex model is much more significant than that for the simpler model, as the training time and resources required to train the complex model are much higher. This is also a direct effect of the parameter numbers. That was the characteristic of complex models, so do we need more complex models when simpler models work? When to Work with Complex Models I have briefly mentioned what distinguishes complex models from simpler ones and how their characteristics affect model selection. We understand that the parameter number affects the model’s complexity, whereas a higher parameter means the model is more complex. With higher parameters, the model could capture the pattern better than a simpler model, especially the non-linear pattern, which a simple model can’t capture. However, a higher number of parameters also increases the chance of overfitting risks. Overfitting is basically a condition where the model has poor generalization capability because it learns the noise from the dataset. It’s in contrast with a simpler model, where it is harder to overfit but easier to underfit, as it can’t learn much more complex patterns. A higher number of parameters and multiple structures also affect interpretability and computational efficiency. I have mentioned previously that a complex model is more challenging to interpret than a simpler model. In many business use cases, we prefer a model with higher interpretability, even with lower model performances. This is because we want to avoid bias and have confidence in the model prediction. The decision would also be affected by our production environment. Complex models require more resources compared to simpler models. The simple model would use fewer resources, which means fewer costs to deploy and maintain. All of the above were considered when you want to use a simple or complex model. So, do we really need more complex models? Well, the answer is: it depends on your situation. A simple rule of thumb you can follow: a simple model is your go-to model if it could solve your problem already. Only going to a more complex model if it’s required. A complex model always looks fancier as the complexity attracts many to use it. However, there are many characteristics that you want to understand before using the complex model. You need to understand the number of parameters, interpretability, structure, and computational efficiency. You don’t need to always use complex models for any situation. If the simple model already works, then it’s a better solution than the complex one. Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics. Our Top 3 Partner Recommendations 1. Best VPN for Engineers – Stay secure & private online with a free trial 2. Best Project Management Tool for Tech Teams – Boost team efficiency today 4. Best Network Management Tool – Best for Medium to Large Companies
{"url":"https://digitalinfowave.com/do-we-really-need-more-complex-models/","timestamp":"2024-11-14T04:22:32Z","content_type":"text/html","content_length":"95459","record_id":"<urn:uuid:68eef960-052c-438f-8d63-17ee3db7a4c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00831.warc.gz"}
Vibration-collision mechanism of dual-stabilizer bottom hole assembly system in vertical wellbore trajectory The lateral vibration of drill string causes well deviation and the collision between drill collar and sidewall, and severe lateral vibration even affects the drilling safety or reliability. In view of the problems above, the lateral vibration characteristics of drill string need to be analyzed. Therefore, a newly vibration-collision model of BHA (bottom hole assembly) with random collision characteristics is proposed in this paper. Firstly, the dynamic model of drill collar with double stabilizer is presented by utilizing the Lagrange equations. Secondly, the dynamic characteristics of drill collar in air and mud drilling is analyzed with different collision frequencies. Subsequently, the displacement and motion trajectory of drill collar under different structure parameters of BHA and mechanical parameters of system are investigated by numerical simulation. Finally, the influence of rotation speed of BHA and length of drill collar on lateral vibration of drill collar is discussed. The results indicate that the lateral vibration of drill collar in air drilling is more serious than that in mud drilling; and the higher the collision frequency, the more severe the lateral vibration. Improving rotation speed of BHA, length of drill collar and WOB (weight on bit) have promoting influence on lateral vibration of drill collar; however, increasing stabilizer diameter has suppressing influence on lateral vibration of drill collar. The research findings give reasonable guidance for structure design of BHA and selection of mechanical parameters of system. • A random collision model between stabilizers and sidewall is presented in air drilling. • A newly vibration-collision model of BHA is proposed to describe the interaction between BHA and sidewall in air drilling engineering. • The research findings give reasonable guidance for structure design of BHA and selection of mechanical parameters of system. 1. Introduction In the drilling of vertical well, dual-stabilizer BHA (bottom hole assembly) is widely applied to control the well deviation. However, well deviation is still appeared in the process of air and mud drilling [1-3]. As the transverse vibration of BHA is more acute in air drilling, the well deviation in air drilling process is more severe than that in mud drilling process, which leads to increase of collision frequency between drill collar and sidewall [4, 5]. Therefore, dynamic characteristics of BHA is a complex coupling behavior between vibration and collision in the drill string system. On the vibration analysis of BHA, some ideas can be obtained from the research of Jena et al. [6, 7]. These findings will provide guidance for the follow-up work of drill string mechanics. Due to the complexity of drilling conditions, the motion of drill string includes axial, torsional and lateral vibration in the actual drilling engineering, furthermore, the lateral vibration of BHA determining the degree of well deviation should be seriously studied [8-10]. To explore the transverse vibration of BHA, many scholars have presented some methods for studying the motion characteristics of BHA, which can give support for analyzing adverse effects caused by transverse vibration of BHA. On account of the high slenderness ratio of the BHA, the transverse vibration model of BHA can be established based on Euler-Bernoulli beam theory; in this case, the governing equation is effectively simplified. The governing equations of transverse vibration of BHA under axial load and interaction with sidewall were established by Yigit et al. by applying the assumed modes method [11, 12]. Chen et al. [13] applied an improved transfer matrix method to analyze the lateral vibration of BHA, which is suitable for drill string subjected to large axial load. Besides, added mass coefficient caused by mud-drill string interaction affecting on lateral vibration is explored. Subsequently, to consider the force caused by bit-rock interaction, a stochastic dynamic approach is introduced into investigation of lateral vibration by Spanos, which can solve the uncertainty of force on the bit [14]. For decreasing the lateral vibration of BHA, Zhu and Di [15] presented a prebent structure of drill collar between the two stabilizers, who demonstrated that the lateral vibration is obviously affected by prebent deflection, and the severe whirling motion of BHA can be effectively prevented by designing a reasonable prebent deflection. On the basis of Zhu’s research, Wang et al. [16] investigated the influence of bend angle on well deviation control mechanism, and the control ability of well deviation is evaluated by the dynamic bit lateral force. Meanwhile, the field experiment shows that the prebent structure of drill collar performs an excellent deviation control effect. However, the above-mentioned articles mainly concentrated on the research of lateral displacement of BHA, the whirling of BHA affected on lateral vibration is neglected. In view of this phenomenon, Marcin et al. [17] adopted a nonsmooth lumped parameter model to analyze the whirling of drill string, and various types of whirling motion are revealed. In the following years, to demonstrate the reliability of model for whirling of drill string, a new type of experimental rig was applied to study the forward and backward whirling of BHA by Marcin et al. [18]. The co-existing state of forward and backward whirling of BHA was firstly observed in experiments. To avoid complicated derivation and solution, the lateral vibration of drill-string was analyzed by many scholars based on finite element method. In addition, the finite element model of drill string can be established with reference to the modeling method by Jena et al. [19]. According to the beam-column theory and finite element method, Wang et al. [20] proposed a model to explore the transverse vibration of BHA, and an indoor experiment was implemented to demonstrate the accuracy of the model. In view of Wang’s research, Li et al. [21] presented a new experimental setup to investigate the lateral motion of BHA, and the motion mechanism of BHA was analyzed. However, the researches above only considered the lateral motion of BHA, and the influence of drill string above BHA was ignored. Consequently, to solve this problem, Li et al. [22] proposed a new model with full-dimensional beam elements by using the finite element method, and the static-kinetic friction model was considered. To better study the dynamic behavior of drill string, Zhu et al. [23] analyzed the buckling response of drill string when the lateral vibration is considered, which can evaluate the critical buckling and working security of drill string. Furthermore, for analyzing the transverse vibration of drill-string in curved well, a beam finite element method was presented by Cai et al. [24]; different with the previous well structure, the curved well consists of vertical section, deflecting section and horizontal section. The above-mentioned research solved many problems existed in the drilling engineering and provided theoretical guidance for the new lateral vibration model, but the vibration-collision mechanism of dual-stabilizer BHA system with random collision characteristics is less explored. Therefore, according to the findings above, a new model with random collision between stabilizers and sidewall is proposed to determine the vibration of BHA; in this case, the relationship among lateral vibration, random collision and parameter variation in the drill string system can be ascertained. 2. Dynamic model of drill collar with double stabilizer Fig. 1(a) shows the structure of BHA in a vertical well. It usually consists of stabilizers, drill bit and drill collar. Viewed from the positive z axis, the system of drill string rotates clockwise with an angular velocity $\mathrm{\Omega }$. In addition, Fig. 1(b) shows the schematic diagram of drill collar cross-section A-A. The transverse vibration of drill collar between double stabilizer is investigated in this paper. To obtain the motion equations of drill collar, the formulas for calculating the virtual work, potential energy and kinetic energy need to be given. Fig. 1The sketch of the system 2.1. Kinetic energy The total kinetic energy includes translational energy and rotational kinetic energy of drill collar, which is expressed as: $T=\frac{1}{2}{\int }_{0}^{L}\left({\rho }_{a}{A}_{a}{{V}_{a}}^{2}+{\rho }_{c}{A}_{c}{{V}_{c}}^{2}+{J}_{x}{{\omega }_{x}}^{2}+{J}_{y}{{\omega }_{y}}^{2}+{J}_{z}{{\omega }_{z}}^{2}\right)dz,$ where, $L$ is the length of drill collar between two stabilizers, ${\rho }_{a}$ and ${\rho }_{c}$ are the density of drill collar and circulation medium of drilling, respectively. ${A}_{a}$ and ${A}_ {c}$ are the cross-section area of drill collar and circulation medium inside the drill collar, respectively. ${V}_{a}$ and ${V}_{c}$ are the velocity of mass center of drill collar and circulation medium at the distance $z$, respectively, which can be given as: ${\mathbf{V}}_{a}=\left(\stackrel{˙}{u}+e\mathrm{\Omega }\mathrm{c}\mathrm{o}\mathrm{s}\mathrm{\Omega }t\right)\mathbf{i}+\left(\stackrel{˙}{v}-e\mathrm{\Omega }\mathrm{s}\mathrm{i}\mathrm{n}\mathrm {\Omega }t\right)\mathbf{j}+\stackrel{˙}{w}\mathbf{k},$ where $u$, $v$ and $w$ are the displacements in the $x$, $y$ and $z$ directions, respectively. $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit vectors in the $x$, $y$ and $z$ directions, respectively. $e$ represents the eccentricity of mass center relative to the geometrical center of drill collar. Symbol (^·) indicates derivative with respect to time $t$. ${J}_{x}$ and ${J}_{y}$ indicate the transverse inertia moments of drill-collar with unit length relative to $x$ and $y$ axes, respectively. ${J}_{z}$ indicates the rotational inertia moment of drill-collar with unit length relative to $z$ axis. Therefore, the mass inertia moment of drill-collar with unit length are written as: $\begin{array}{l}{J}_{x}={J}_{y}={\rho }_{a}I,\\ {J}_{z}=2{\rho }_{a}I,\end{array}$ where, $I$ expresses the area inertia moment of drill collar with unit length. ${\omega }_{x}$, ${\omega }_{y}$ and ${\omega }_{z}$ denote the angular velocity components of drill collar at position $z$ in the $x$, $y$ and $z$ directions, respectively. According to the previous research, the components of angular velocity are written as: $\begin{array}{l}{\omega }_{x}={\stackrel{˙}{\gamma }}_{1}\mathrm{c}\mathrm{o}\mathrm{s}{\gamma }_{2}-\mathrm{\Omega }\mathrm{c}\mathrm{o}\mathrm{s}{\gamma }_{1}\mathrm{s}\mathrm{i}\mathrm{n}{\gamma }_{2},\\ {\omega }_{y}={\stackrel{˙}{\gamma }}_{2}+\mathrm{\Omega }\mathrm{s}\mathrm{i}\mathrm{n}{\gamma }_{1},\\ {\omega }_{z}={\stackrel{˙}{\gamma }}_{1}\mathrm{s}\mathrm{i}\mathrm{n}{\gamma }_{2}+ \mathrm{\Omega }\mathrm{c}\mathrm{o}\mathrm{s}{\gamma }_{1}\mathrm{c}\mathrm{o}\mathrm{s}{\gamma }_{2},\end{array}$ where ${\gamma }_{1}$ and ${\gamma }_{2}$ represent the rotation angle generated by bending of BHA relative to $x$ and $y$ axes, respectively. Based on the small deformation hypothesis, and the torsional deformation of drill collar is ignored, the Eq. (5) can be simplified as: $\begin{array}{l}{\omega }_{x}={\stackrel{˙}{\gamma }}_{1}-\mathrm{\Omega }{\gamma }_{2},\\ {\omega }_{y}={\stackrel{˙}{\gamma }}_{2}+\mathrm{\Omega }{\gamma }_{1},\\ {\omega }_{z}={\stackrel{˙}{\ gamma }}_{1}{\gamma }_{2}+\mathrm{\Omega }.\end{array}$ The shear deformation of rotational drill collar is neglected, the rotation angles expressed by the deflection are given as: $\begin{array}{l}{\gamma }_{1}=-\frac{\partial v}{\partial z},\\ {\gamma }_{2}=\frac{\partial u}{\partial z}.\end{array}$ Substituting Eqs. (2-7) into the kinetic energy, and the axial vibration is neglected, the equation is deduced as: $T=\frac{1}{2}{\int }_{0}^{L}\left\{\begin{array}{l}{\rho }_{a}{A}_{a}\left[{\stackrel{˙}{u}}^{2}+{\stackrel{˙}{v}}^{2}+2e\mathrm{\Omega }\left(\stackrel{˙}{u}\mathrm{c}\mathrm{o}\mathrm{s}\mathrm{\ Omega }t-\stackrel{˙}{v}\mathrm{s}\mathrm{i}\mathrm{n}\mathrm{\Omega }t\right)+{e}^{2}{\mathrm{\Omega }}^{2}\right]+{\rho }_{c}{A}_{c}\left({\stackrel{˙}{u}}^{2}+{\stackrel{˙}{v}}^{2}\right)\\ +{\rho }_{a}I\left[{\left(\frac{\partial \stackrel{˙}{v}}{\partial z}\right)}^{2}+{\left(\frac{\partial \stackrel{˙}{u}}{\partial z}\right)}^{2}-2\mathrm{\Omega }\left(\frac{\partial v}{\partial z}\right)\ left(\frac{\partial \stackrel{˙}{u}}{\partial z}\right)-2\mathrm{\Omega }\left(\frac{\partial \stackrel{˙}{v}}{\partial z}\right)\left(\frac{\partial u}{\partial z}\right)+2{\mathrm{\Omega }}^{2}\ By analyzing the regularity of lateral vibration of drill collar, and utilizing the assumed-modes method, the first-order expression of lateral deflection of drill collar is given as: $\begin{array}{l}u=r\left(t\right)\mathrm{s}\mathrm{i}\mathrm{n}\frac{\pi z}{L}\mathrm{s}\mathrm{i}\mathrm{n}\theta ,\\ v=r\left(t\right)\mathrm{s}\mathrm{i}\mathrm{n}\frac{\pi z}{L}\mathrm{c}\mathrm {o}\mathrm{s}\theta ,\end{array}$ where, $r$ is the radial displacement of drill string at distance $z=L$/2, which is one of the generalized coordinates of lateral vibration of BHA. Besides, $\theta$ is expressed the rotation angle of radial displacement of drill string, which is another generalized coordinate of lateral vibration of BHA. Substituting Eq. (9) into Eq. (8), the final form of kinetic energy is obtained as: $\begin{array}{l}T=\frac{{\rho }_{a}{A}_{a}}{2}\left[\frac{{\stackrel{˙}{r}}^{2}L}{2}+\frac{{r}^{2}{\stackrel{˙}{\theta }}^{2}L}{2}+\frac{4e\mathrm{\Omega }L}{\pi }\stackrel{˙}{r}\mathrm{s}\mathrm{i} \mathrm{n}\left(\theta -\mathrm{\Omega }t\right)+\frac{4e\mathrm{\Omega }L}{\pi }r\stackrel{˙}{\theta }\mathrm{c}\mathrm{o}\mathrm{s}\left(\theta -\mathrm{\Omega }t\right)+{e}^{2}{\mathrm{\Omega }}^ {2}L\right]\\ +\frac{{\rho }_{c}{A}_{c}}{2}\left[\frac{{\stackrel{˙}{r}}^{2}L}{2}+\frac{{r}^{2}{\stackrel{˙}{\theta }}^{2}L}{2}\right]\end{array}$$+\frac{{\rho }_{a}I}{2}\left[\frac{{\pi }^{2}}{2L}{\ stackrel{˙}{r}}^{2}+\frac{{\pi }^{2}}{2L}{r}^{2}{\stackrel{˙}{\theta }}^{2}-\frac{{\pi }^{2}\mathrm{\Omega }}{L}r\stackrel{˙}{r}\mathrm{s}\mathrm{i}\mathrm{n}2\theta -\frac{{\pi }^{2}\mathrm{\Omega }}{L}{r}^{2}\stackrel{˙}{\theta }\mathrm{c}\mathrm{o}\mathrm{s}2\theta +2{\mathrm{\Omega }}^{2}L\right].$ 2.2. Potential energy For the drill collar subjected to axial force and bending deformation, the potential energy is given by: $U=\frac{1}{2}{\int }_{0}^{L}\left\{EI\left[{\left(\frac{{\partial }^{2}u}{\partial {z}^{2}}\right)}^{2}+{\left(\frac{{\partial }^{2}v}{\partial {z}^{2}}\right)}^{2}\right]-P\left[{\left(\frac{\ partial u}{\partial z}\right)}^{2}+{\left(\frac{\partial v}{\partial z}\right)}^{2}\right]\right\}dz,$ where, $P$ is the axial force; in this study, the axial force of drill collar between two stabilizers is approximately equal to WOB (weight on bit). $EI$ indicates the flexural rigidity of drill collar. Introducing Eq. (9) into Eq. (11), the potential energy is derived as: $U=\frac{EI{\pi }^{4}}{4{L}^{3}}{r}^{2}-\frac{P{\pi }^{2}}{4L}{r}^{2}.$ When $r>{s}_{0}$, the stabilizer will contact the sidewall. Thus, equivalent stiffness of the drill-collar $k$ is given by: $k=\frac{1}{r-{s}_{0}}\frac{\partial U}{\partial r},$ where, ${s}_{0}$ is the clearance between stabilizer and sidewall. The purpose for solving the stiffness of drill collar is to equivalent the potential energy to a restoring force, which is simplifying the derivation of motion equation of drill collar. 2.3. Virtual work In this study, the virtual work includes four parts, which are from equivalent restoring force ${F}_{k}$, contact force between drill collar and sidewall ${F}_{h}$, air damping ${F}_{d}$ and gyroscopic moment ${F}_{g}$, respectively. The formula of virtual work can be expressed as: $\delta W=\delta {W}_{{F}_{k}}+\delta {W}_{{F}_{h}}+\delta {W}_{{F}_{d}}+\delta {W}_{{F}_{g}}.$ Fig. 2The contact model between stabilizers and sidewall c) Projection in $x$-$y$ plane The virtual work of restoring force can be given as: $\delta {W}_{{F}_{k}}={F}_{k,r}\delta r+{F}_{k,\theta }r\delta \theta ,$ where ${F}_{k,r}$ and ${F}_{k,\theta }$ indicate the normal component and tangential component of equivalent restoring force ${F}_{k}$, respectively. Besides, the restoring force is generated by the interaction of the stabilizer with the sidewall. In previous studies, the contact model between stabilizers and sidewall shown in Fig. 2(a) was applied to calculate the restoring force, which means that the contact points between two stabilizers and sidewall are in the same plumb line. However, in the actual drilling engineering, the contact points between two stabilizers and sidewall are in different plumb line, in this case, there is an offset distance between the geometric centers of the case two stabilizers in the x direction, as displayed in Fig. 2(b). Fig. 2(c) shows the projection of the contact position between two stabilizers and sidewall in $x$-$y$ plane, where $\phi$ is the phase angle between two stabilizers and sidewall contact point. When the contact between stabilizers and sidewall is ipsilateral, the phase angle $\phi$ is 0; when the contact between stabilizers and sidewall is random, the phase angle $\phi$ is ranged from 0 to $\pi$. According to geometric relationship, the expression for the offset distance of geometric centers of the two stabilizers in terms of the phase angle is written as: $a={s}_{0}\left(1-\mathrm{c}\mathrm{o}\mathrm{s}\phi \right).$ When the friction between stabilizer and sidewall is ignored, the restoring force is written as: ${F}_{k}=\left\{\begin{array}{l}-k\left[r-\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm{s}\phi \right)}{2}\right],\\ 0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {s}_{0}.\end{array}\begin {array}{l}\\ \end{array}\begin{array}{l}r>{s}_{0},\\ \end{array}\right\$ However, the friction between stabilizer and sidewall is considered, the restoring force is indicated as: ${F}_{k}=\left\{\begin{array}{l}-k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm{s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right],r>{s}_ {0},\\ 0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {s}_{0},\end{array}\right\$ where, $\alpha$ is the angle displacement of rotation of geometrical center of drill collar relative to geometrical center of stabilizer. $\beta$ is the friction angle caused by the contact between stabilizer and sidewall. The friction angle can be confirmed based on the law of Coulomb friction. In addition, the geometric relationship of $\alpha$ and $\beta$ is obtained by: $r\mathrm{s}\mathrm{i}\mathrm{n}\alpha =\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm{s}\phi \right)}{2}\mathrm{s}\mathrm{i}\mathrm{n}\beta .$ When stabilizer contacts with the wall of well, the normal component ${F}_{k,r}$ and tangential component ${F}_{k,\theta }$ of equivalent restoring force are given as: ${F}_{k,r}=-k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm{s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{c}\mathrm{o}\mathrm{s}\ alpha ,$ ${F}_{k,\theta }=-k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm{s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{s}\mathrm{i}\ mathrm{n}\alpha .\mathrm{}$ When the lateral displacement is large enough, the drill collar will contact with sidewall. On the basis of the Hertz contact law, the normal force is written as: ${F}_{h,r}=\left\{\begin{array}{l}-{k}_{h}{\left(r-{c}_{0}\right)}^{\frac{3}{2}},r>{c}_{0},\\ 0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {c}_{0},\end{array}\right\$ where ${k}_{h}$ is the contact coefficient, ${c}_{0}$ is the clearance between drill collar and sidewall. Furthermore, the tangential force is indicated as: ${F}_{h,\theta }=\left\{\begin{array}{l}-\mathrm{s}\mathrm{i}\mathrm{g}\mathrm{n}\left(r\stackrel{˙}{\theta }+{R}_{o}\mathrm{\Omega }\right){\mu }_{h}{k}_{h}{\left(r-{c}_{0}\right)}^{\frac{3}{2}},r> {c}_{0},\\ 0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {c}_{0},\end{array}\right\$ where ${\mu }_{h}$ denotes the friction coefficient, ${R}_{o}$ is the outer radius of drill collar. Thus, the virtual work generated by contact force is written as: $\delta {W}_{{F}_{h}}={F}_{h,r}\delta r+{F}_{h,\theta }r\delta \theta .$ The damping force will be generated when the drill collar vibrates, the virtual work of the damping force is expressed as: $\delta {W}_{{F}_{d}}={F}_{d,r}\delta r+{F}_{d,\theta }r\delta \theta .$ The normal and tangential components of the damping force are written as follow: ${F}_{d,r}=-{c}_{f}\stackrel{˙}{r}\sqrt{{\stackrel{˙}{r}}^{2}+{r}^{2}{\stackrel{˙}{\theta }}^{2}},$ ${F}_{d,\theta }=-{c}_{f}r\stackrel{˙}{\theta }\sqrt{{\stackrel{˙}{r}}^{2}+{r}^{2}{\stackrel{˙}{\theta }}^{2}}.$ with ${c}_{f}=\frac{4}{3\pi }{\rho }_{c}{C}_{d}{R}_{o}L$, where ${C}_{d}$ is the damping coefficient. The drill collar rotates about its own axis; thus, gyroscopic moment will be occurred when the axis is deflected. The virtual work from gyroscopic moment can be expressed as: $\delta {W}_{{F}_{g}}=\mathrm{\Omega }{J}_{z}{\int }_{0}^{L}\left({\stackrel{˙}{\gamma }}_{1}\delta {\gamma }_{2}-{\stackrel{˙}{\gamma }}_{2}\delta {\gamma }_{1}\right)dz.$ Substituting Eqs. (4), (7) and (9) into Eq. (28), the result of virtual work from gyroscopic moment is derived as: $\delta {W}_{{F}_{g}}=\frac{{\pi }^{2}{\rho }_{a}I\mathrm{\Omega }}{L}r\left(\stackrel{˙}{r}\delta \theta -\stackrel{˙}{\theta }\delta r\right).$ Therefore, the total generalized forces of system from virtual work can be obtained as: ${Q}_{r}=\frac{\delta {W}_{{F}_{k}}+\delta {W}_{{F}_{h}}+\delta {W}_{{F}_{d}}+\delta {W}_{{F}_{g}}}{\delta r},$ ${Q}_{\theta }=\frac{\delta {W}_{{F}_{k}}+\delta {W}_{{F}_{h}}+\delta {W}_{{F}_{d}}+\delta {W}_{{F}_{g}}}{\delta \theta }.$ 2.4. Governing equations In this study, to acquire the motion equations of drill collar, the expressions of Lagrange equations are adopted as: $\frac{d}{dt}\left(\frac{\partial T}{\partial \stackrel{˙}{r}}\right)-\frac{\partial T}{\partial r}={Q}_{r},$ $\frac{d}{dt}\left(\frac{\partial T}{\partial \stackrel{˙}{\theta }}\right)-\frac{\partial T}{\partial \theta }={Q}_{\theta }.$ By applying Lagrange equations, the governing equations are obtained as follow: $m\stackrel{¨}{r}-mr{\stackrel{˙}{\theta }}^{2}+\frac{{\pi }^{2}{\rho }_{a}I\mathrm{\Omega }}{L}r\stackrel{˙}{\theta }+{c}_{f}\stackrel{˙}{r}\sqrt{{\stackrel{˙}{r}}^{2}+{r}^{2}{\stackrel{˙}{\theta }} ^{2}}=\frac{2e{\mathrm{\Omega }}^{2}{\rho }_{a}{A}_{a}L}{\pi }\mathrm{c}\mathrm{o}\mathrm{s}\left(\theta -\mathrm{\Omega }t\right)+{F}_{1},$ $mr\stackrel{¨}{\theta }+2m\stackrel{˙}{r}\stackrel{˙}{\theta }-\frac{{\pi }^{2}{\rho }_{a}I\mathrm{\Omega }}{L}\stackrel{˙}{r}+{c}_{f}r\stackrel{˙}{\theta }\sqrt{{\stackrel{˙}{r}}^{2}+{r}^{2}{\ stackrel{˙}{\theta }}^{2}}=-\frac{2e{\mathrm{\Omega }}^{2}{\rho }_{a}{A}_{a}L}{\pi }\mathrm{s}\mathrm{i}\mathrm{n}\left(\theta -\mathrm{\Omega }t\right)+{F}_{2},$ $m=\frac{1}{2}\left({\rho }_{a}{A}_{a}L+{\rho }_{c}{A}_{c}L+\frac{{\pi }^{2}{\rho }_{a}I}{L}\right),$ ${F}_{1}=\left\{\begin{array}{l}0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {s}_{0},\\ -k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm {s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{c}\mathrm{o}\mathrm{s}\alpha ,{s}_{0}<r\le {c}_{0},\\ -k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm {c}\mathrm{o}\mathrm{s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{c}\mathrm{o}\mathrm{s}\alpha -{k}_{h}{\left(r-{c}_{0}\right)}^{\frac{3}{2}},r>{c}_{0},\end{array}\right\$ ${F}_{2}=\left\{\begin{array}{l}0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}r\le {s}_{0},\\ -k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm{c}\mathrm{o}\mathrm {s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{s}\mathrm{i}\mathrm{n}\alpha ,{s}_{0}<r\le {c}_{0},\\ -k\left[r\mathrm{c}\mathrm{o}\mathrm{s}\alpha -\frac{{s}_{0}\left(1+\mathrm {c}\mathrm{o}\mathrm{s}\phi \right)\mathrm{c}\mathrm{o}\mathrm{s}\beta }{2}\right]\mathrm{s}\mathrm{i}\mathrm{n}\alpha -\mathrm{s}\mathrm{i}\mathrm{g}\mathrm{n}\left(r\stackrel{˙}{\theta }+{R}_{o}\ mathrm{\Omega }\right){\mu }_{h}{k}_{h}{\left(r-{c}_{0}\right)}^{\frac{3}{2}},r>{c}_{0}.\end{array}\right\$ 3. Numerical simulation The dynamic model of lateral vibration of drill collar has been established in the previous section. To demonstrate the applicability of the theoretical analysis and dynamic model, in this section, the simulation results of dynamic characteristics of drill collar are given. In the actual drilling engineering, the collision between stabilizers and sidewall is random, and the phase angle is ranged from 0 to $\pi$; in this case, the collision frequency between stabilizers and sidewall is regarded as $f$, i.e., the phase angle is changed in every 1/$f$ second. To better satisfy this condition, the random function is introduced to solve dynamic equations. Consequently, the phase angle can be written as: $\phi =\pi ×\mathrm{r}\mathrm{a}\mathrm{n}\mathrm{d}\left(tf,1\right),$ where, $\mathrm{r}\mathrm{a}\mathrm{n}\mathrm{d}\left(\right)$ is the random function and ranges from 0 to 1, $t$ is the integral time. The dynamic characteristics of drill collar are analyzed in air drilling and mud drilling when the collision frequency $f$ is 0.2, 1 and 5 Hz, respectively. The structural parameters of the BHA are listed in Table 1, the mechanical parameters of the system are given in Table 2. Table 1Structural parameters of BHA Property Value Units Young’s modulus of drill collar 210 GPa Density of drill collar 7860 Kg/m^3 Length of drill collar 21 m Outer diameter of drill collar 228.6 mm Inner diameter of drill collar 76.2 mm Eccentricity of drill collar 12.7 mm Diameter of stabilizer 308 mm The Runge-Kutta method, as a high-precision algorithm, is widely applied in complex nonlinear calculation. Hence, in the numerical simulation of this paper, Runge-Kutta method is applied to solve the dynamic equations. The flow chart of the numerical simulation of the dynamic model is shown in Fig. 3. Firstly, the phase angle is determined by the collision frequency. Secondly, the forces ${F}_{1} $ and ${F}_{2}$ are obtained based on the judgment conditions. Subsequently, the forces are introduced to the governing equations. Finally, the acceleration, velocity and displacement can be calculated according to the Runge-Kutta method. All data will be saved in the blue box on the right of Fig. 3, which is an observation device. Table 2Mechanical parameters of the system Property Value Units Weight on bit 20 kN Rotation speed 45 r/min Density of compressed air 50 kg/m^3 Diameter of borehole 311.2 mm Contact coefficient 6.78×10^11 Nm^-1.5 Friction angle 0.5 ° Friction coefficient 0.2 Damping coefficient 1 Fig. 3The flow chart of the numerical simulation of the dynamic model 3.1. The dynamic characteristics of drill collar under different collision frequencies 3.1.1. Air drilling The dynamic characteristics of drill collar in air drilling is shown in Fig. 4 when the collision frequency between stabilizers and sidewall are 0.2, 1 and 5 Hz, respectively. Fig. 4(a) shows the motion trajectory of geometric center of drill collar, it can be found that the trajectory is chaotic, and the collision between drill collar and sidewall is frequently occurred. Meanwhile, the final motion trajectory of drill collar under all conditions is circular. With the increase of collision frequency between stabilizers and sidewall, the trajectory curve is more chaotic, and the lateral displacement of drill collar is more severe. Fig. 4(b) and (c) show the phase trajectory of drill collar in $X$ and $Y$ directions, respectively, and the maximum velocities in two directions are consistent under the same frequency collision. Besides, in the case of low frequency collision, the maximum lateral velocity is decreased comparing with frequent frequency collision. In Fig. 4(d), the black solid line represents the whirling angle, when the slope of black line is positive, this phenomenon means that the average whirling speed is positive, i.e., the drill collar is forward whirling; when the slope of black line is negative, this suggests that the average whirling speed is negative, i.e., the drill collar is backward whirling. Meanwhile, the whirling angle increases initially and decreased afterwards under different collision frequencies, which demonstrates that the drill collar is first forward whirling, and then backward whirling. However, it takes more time when the drill collar reaches the backward whirling under high collision frequency. Furthermore, the slope of black line is changed with the variation of average whirling speed of drill collar. The red solid line represents the whirling speed of drill collar, it is seen that numerous spikes are appeared, which can be explained by Fig. 4(a). In trajectory curve diagram, the geometric center of drill collar frequently passes by the center of borehole, in this case, the whirling radius of drill collar is become small. As the whirling speed of drill collar is inversely proportional to whirling radius of drill collar, thus, the whirling speed of drill collar will appear peak value. The number of peak value of whirling speed is increased with the collision frequency, and the inflection point related to the average whirling speed is increased accordingly. When the motion state of drill collar from forward whirling to backward whirling, the backward whirling speed is maintained around 13 rad/s, which is three times of the rotation speed of BHA. Therefore, it can be concluded that lower collision frequency is beneficial to decrease lateral displacement of drill collar, however, the lower collision frequency is more likely to induce the backward whirling of drill collar. Fig. 4Dynamic characteristics of drill collar in air drilling with different collision frequency b) Phase trajectory in $X$ direction c) Phase trajectory in $Y$ direction d) Whirling speed and angle 3.1.2. Mud drilling Different from the air drilling, the friction coefficient is given as 0.1 and the density of drilling fluid is treated as 1500 kg/m^3 in mud drilling. Fig. 5 shows the dynamic characteristics of drill collar in mud drilling when the collision frequency between stabilizers and sidewall is chosen as 0.2, 1 and 5 Hz, respectively. Fig. 5Dynamic characteristics of drill collar in mud drilling with different collision frequency b) Phase trajectory in $X$ direction c) Phase trajectory in $Y$ direction d) Whirling speed and angle Fig. 5(a) displays the motion trajectory of geometric center of drill collar, and it is found that the lateral displacement of drill collar is increased with the increase of collision frequency. Furthermore, the maximum lateral displacement is less than the clearance between drill collar and sidewall, under the circumstances, the contact between drill collar and sidewall is disappeared. When the collision frequency is up to 5 Hz, the displacement of drill collar in $Y$ direction is significantly greater than the displacement in $X$ direction. In the light of Fig. 5(b) and (c), the velocities in two directions are both increased with the collision frequency, and the lateral velocity is no more than 0.2 m/s. The lateral velocity of drill collar in the mud drilling is much smaller than air drilling process. As shown in Fig. 5(d), the whirling angle is increased with the time, which means that the average whirling speed is always positive, i.e., the drill collar is always forward whirling. When the collision frequency is 0.2 Hz, there is no spikes in whirling speed curve of drill collar because the geometric center of drill collar is deviated from the center of borehole according to trajectory curve diagram. With the increase of collision frequency between stabilizers and sidewall, the number of peak value of whirling speed is increased; besides, the number of inflection points is increased, i.e., the number of average whirling speed changes is increased. Compared with air drilling process, the whirling speed of drill collar mainly fluctuates near the rotation speed in mud drilling, and the backward whirling of drill collar is accidentally happened. Therefore, to reduce the lateral displacement and lateral velocity of drill collar, the collision frequency between stabilizers and sidewall should be controlled. The previous section, the dynamic characteristics of drill collar under different collision frequencies are explored in air and mud drilling process, respectively. It is obviously found that lateral displacement and velocity in air drilling are greater than that in mud drilling, and the collision between drill collar and sidewall will be happened in air drilling. Meanwhile, the whirling angle increases initially and decreased afterwards in air drilling; however, the whirling angle is always increased in mud drilling. Hence, the backward whirling of drill collar is more likely to occur in air drilling process. Besides, to further analyze the dynamic behavior of drill collar in different structure and mechanical parameters, the lateral displacement and motion trajectory of drill collar are given. Since the influence of structure and mechanical parameters of the system on lateral vibration during mud drilling has been discussed several times in previous studies, thus, only the lateral vibration of drill collar in air drilling will be analyzed below. It can provide theoretical guidance to control the well deviation of trajectory. 3.2. The lateral displacement of drill collar under different diameters of stabilizer To explore the drill collar lateral vibration affected by the size of stabilizer, the stabilizer diameter is treated as 306, 307, 309 and 310 mm, respectively. The rotation speed of BHA, length of drill collar and WOB are given by 45 r/min, 21 m and 20 kN. As shown in Fig. 6, the displacement and the trajectory curve of geometric center of drill collar are discussed. In right part of Fig. 6 (a), the motion trajectory of drill collar represented by the black curve is overlapped with clearance between drill collar and sidewall represented by the red circle, which means that drill collar always contacts the sidewall; in this situation, the lateral displacement of drill collar is maximum, and same conclusion can be seen in time-displacement curve. In Fig. 6(b), the displacement in $X$ direction is larger than $Y$ direction, and the collision between drill collar and sidewall is occurred. When the diameter of the stabilizer is 309 mm, the lateral displacement of drill collar is displayed in Fig. 6(c), and it can be observed that the displacements in $X$ and $Y$ directions are always less than the clearance between drill collar and sidewall. As shown in Fig. 6(d), the motion trajectory of drill collar is mainly concentrated in near the center of borehole, and the motion trajectory is approximated to a circle; in this case, the variation ranges of displacement in $X$ and $Y$ directions are almost identical. Therefore, the larger the diameter of stabilizer, the smaller the lateral vibration of drill collar; and the simulation result is consistent with the actual working conditions. According to the numerical results, the lateral displacement is decreased with the diameter of stabilizer, therefore, to deduce the lateral vibration of BHA in air drilling, the larger diameter of stabilizer should be chosen. Besides, the influence of diameter of stabilizer on lateral displacement of drill collar is significant. In Fig. 6(a) and (d), the difference in stabilizer diameter is 4mm; however, the lateral displacement in 6(a) is greatly larger than 6(d). 3.3. The lateral displacement of drill collar under different rotation speeds To investigate the rotation speed affected on the lateral vibration of drill collar, the rotation speed is chosen as 35, 40, 45 and 50 r/min, respectively; and the diameter of stabilizer, length of drill collar and WOB are given by 308 mm, 21 m and 20 kN. Fig. 7 shows the motion trajectory of geometric center of drill collar under different rotation speeds. As shown in Fig. 7(a), the lateral displacement of drill collar in $X$ direction ranges from –0.01 to 0.01m, and the lateral displacement of drill collar in $Y$ direction ranges from –0.03 to 0.03m. Hence, the drill collar is not contact with the sidewall. In Fig. 7(b), it can be found that the peak values of displacement in $X$ and $Y$ direction are consistent; the drill collar contacts with sidewall, however, the collision frequency is small. When the rotation speed is 45 r/min, Fig. 7(c) shows the displacement of geometric center of drill collar, and the motion trajectory is complicated; besides, with the increase of time, the lateral displacement of drill collar will reach the maximum. Fig. 7(d) shows the motion trajectory of drill collar when the rotation speed is 50 r/min; but the red circle is only displayed in trajectory diagram, which means that the black curve is overlapped with red circle. In this case, the drill collar is always in contact with sidewall. Based on the analysis above, increasing rotation speed can promote the lateral vibration of drill collar. Therefore, to avoid the collision between drill collar and sidewall and backward whirling of drill collar, when the ROP (rate of penetration) requirement is satisfied in practical engineering, a smaller rotation speed should be chosen. Fig. 6Motion trajectory of drill collar under different diameters of stabilizer Fig. 7Motion trajectory of drill collar under different rotation speeds a)$\mathrm{\Omega }=$ 35 r/min b)$\mathrm{\Omega }=$ 40 r/min c)$\mathrm{\Omega }=$ 45 r/min d)$\mathrm{\Omega }=$ 50 r/min 3.4. The lateral displacement of drill collar under different WOB To analyze the influence of WOB on lateral vibration of drill collar, the WOB is considered as 10, 15, 20 and 25 kN, respectively; and the rotation speed of BHA, length of drill collar and diameter of stabilizer are given by 45 r/min, 21 m and 308 mm. The lateral displacement and motion trajectory of geometric center of drill collar under different WOB are shown in Fig. 8. As shown in Fig. 8, with the rising of WOB, the lateral displacement of drill collar is increased. In Fig. 8(a), the motion trajectory of drill collar represented by the black curve is completely limited in the red circle representing the clearance between drill collar and sidewall, thus, there is no collision between drill collar and sidewall; in addition, the whirling motion of drill collar is irregular. Fig. 8Motion trajectory of drill collar under different WOB In the light of Fig. 8(b), the largest displacement in $Y$ direction is equal to the value of clearance between drill collar and sidewall; besides, the chaotic motion of drill collar is more obvious, and the collision between drill collar and sidewall is happened several times. When WOB is 20 kN, the motion state of drill collar becomes more complicated; the motion of drill collar is irregular 98 seconds ago, however, the displacements in $X$ and $Y$ directions will be maximum after 98 seconds, in this case, the motion trajectory of drill collar is a circle. Fig. 8(d) shows the motion trajectory of drill collar when WOB is 25 kN, the black curve representing the motion trajectory of drill collar is covered by the red circle representing the clearance between drill collar and sidewall; in this situation, the drill collar is always in contact with sidewall. According to the above-mentioned discussion, it is found more severe lateral vibration will be appeared if large WOB is given in the actual drilling operation. Therefore, for avoiding the buckling of drill string and limiting the lateral vibration of drill collar, a reasonable WOB satisfying rock breaking efficiency should be applied in the air drilling engineering. Fig. 9Motion trajectory of drill collar under different lengths of drill collar 3.5. The lateral displacement of drill collar under different lengths of drill collar In actual drilling engineering, length of drill collar is one of the significant factors affecting the lateral vibration of drill collar. The length of drill collar is treated as 12, 16.5, 21 and 25.5 m, respectively; and the rotation speed of BHA, diameter of stabilizer and WOB are given by 45 r/min, 308 mm and 20 kN. Fig. 9 shows the displacement and motion trajectory of geometric center of drill collar under different lengths of drill collar. As shown in Fig. 9(a) and 9(b), the motion trajectory of drill collar is similar to a circle, and it is always limited in the red circle; meanwhile, the radius of motion trajectory of drill collar in Fig. 9(b) is larger than Fig. 9(a), which means that the lateral displacement of drill collar in 9(b) is severe than 9(a). When the drill collars between the two stabilizers is 21 m, the motion trajectory of drill collar is shown in Fig. 9(c); it is evident that the motion trajectory of drill collar becomes chaotic, and the displacement gradually increases to the maximum. When the length of drill collar between the stabilizers is 25.5m, the lateral displacement of drill collar is maximum, and the motion trajectory of drill collar is a circle, which indicates that drill collar is rotated and contacted with the sidewall all the time. On the basis of analysis above, the length of drill collar has promoting influence on lateral vibration of drill collar, i.e., the longer the length of drill collar, the greater the lateral displacement. It can be concluded that length of drill collar between the two stabilizers should be decreased to control severe lateral vibration of drill collar. 4. Parametric analysis To further understand the influence of rotation speed of BHA and length of drill collar on lateral vibration of drill collar, the parametric analysis will be carried out with numerical computations. The deflection capacity of BHA is positively related to the lateral displacement of drill collar, therefore, the maximum displacements of drill collar in $X$ and $Y$ directions are discussed by changing the structure parameters of the BHA and mechanical parameters of the drill string system. 4.1. Influence of rotation speed of BHA A fixed length drill collar is assumed when analyzing the influence of rotation speed on lateral vibration of drill collar under a constant WOB. The relationship between lateral displacements in $X$, $Y$ directions and rotation speed are shown in Fig. 10. Fig. 10The response amplitude of drill collar lateral vibration related to rotation speed Ω a) The maximum lateral displacement in $X$ direction b) The maximum lateral displacement in $Y$ direction To sum up, the variation tendency of displacements in the two directions are approximately identical. The figure is divided into three regions; the maximum lateral displacements in air and mud drilling are both increased with the rotation speed in region Ⅰ, but the maximum lateral displacement in air drilling is always greater than that in mud drilling. When the rotation speed is arrived in 50.5 r/min, the maximum lateral displacement in air drilling is approached to the value of clearance between drill collar and sidewall; however, the maximum lateral displacement in mud drilling is only increased to 0.01 m, which is far less than the clearance between drill collar and sidewall. In region Ⅱ, the maximum lateral displacement in air drilling remains unchanged with the rising of rotation speed of BHA; while the maximum lateral displacement in mud drilling is still increased with the rotation speed. The critical rotation speed between Ⅱ and Ⅲ regions is 67.5 r/min; when the rotation speed is continually increased and larger than critical rotation speed of BHA, the maximum lateral displacements in air and mud drilling are remained constant. According to the above-mentioned analysis, the lateral displacement of drill collar in mud drilling is smaller than that in air drilling in the same rotation speed, and the rotation speed of BHA making drill collar contacted with sidewall in mud drilling is larger than air drilling. Besides, the rotation speed of BHA is positively correlated to the lateral vibration of drill collar. Therefore, a small rotation speed of BHA is beneficial to reduce the lateral vibration of drill collar. 4.2. Influence of length of drill collar The lateral vibration of drill collar affected by length of drill collar is explored when the rotation speed and WOB are 45 r/min and 20 kN, respectively. Fig. 11 shows the relationship between lateral displacements and the length of drill collar, where the black line indicates the air drilling and the red line represents the mud drilling. Fig. 11The response amplitude of drill collar lateral vibration related to length of drill collar L a) The maximum lateral displacement in $X$ direction b) The maximum lateral displacement in $Y$ direction Besides, the variation tendency of displacements in the two directions are approximately identical, and the figure is likewise divided into three regions. In region Ⅰ, the maximum lateral displacements in air and mud drilling are both increased with the length of drill collar; however, the growth rate of maximum lateral displacement in mud drilling is less than that in air drilling. When the length of drill collar is arrived in 22.2 m, the maximum lateral displacement is increased to 0.0413 m in air drilling, in this situation, the drill collar is in contact with sidewall; while the maximum lateral displacement in mud drilling is only increased to 0.024 m, which is always less than the value of clearance between drill collar and sidewall. In region Ⅱ, the maximum lateral displacement in air drilling keeps unchanged with the increase of length of drill collar; however, the maximum lateral displacement is increased with length of drill collar in mud drilling. The critical length of drill collar between Ⅱ and Ⅲ regions is 23.3 m, the maximum lateral displacements in air and mud drilling are unaltered with the increase of length of drill collar when the length of drill collar is greater than the critical length. Based on the analysis above, the lateral vibration of drill collar in air drilling is more serious than mud drilling in same length of drill collar, and the length of drill collar making drill collar arrived in the maximum lateral displacement in air drilling is shorter than mud drilling. Furthermore, the length of drill collar is positively correlated to the lateral displacement of drill collar. Therefore, to avoid the severe lateral vibration of drill collar, the length of drill collar between two stabilizers should be 5. Conclusions For the adverse effects caused by the lateral vibration of BHA, such as excessive well deviation and collision between drill string and sidewall, a newly dynamic model of drill collar with random collision between stabilizers and sidewall is presented in this article. Lateral vibration of drill collar affected by the structure parameters of the BHA and the mechanical parameters of system are analyzed by the numerical simulation. Some conclusions are given as follow: 1) The lateral vibration of drill collar is more severe in air drilling, and the collision between drill collar and sidewall is more likely to occur. In addition, the backward whirling of drill collar will be occurred in air drilling, and the backward whirling speed is approached to the three times as much as the rotation speed of BHA. 2) The relationship between lateral vibration of drill collar and collision frequency is studied. The higher the collision frequency, the more severe the lateral vibration of drill collar, and the more the spikes of whirling speed. 3) The diameter of stabilizer is negatively correlated with the lateral vibration of drill collar; however, the rotation speed of BHA, WOB and length of drill collar are positively correlated to the lateral vibration of drill collar. 4) Research indicates that the lateral vibration of drill collar is significantly affected by the drilling parameters, and the findings will provide theoretical support for structure design of BHA and selection of mechanical parameters of the system. 5) The reliability of proposed model is proved by the numerical simulation results, thus the research findings can provide reasonable guidance for the dynamic model of BHA with double or multi span drill collar in future work. • D. Gao and D. Zheng, “Study of a mechanism for well deviation in air drilling and its control,” Petroleum Science and Technology, Vol. 29, No. 4, pp. 358–365, Jan. 2011, https://doi.org/10.1080/ • Z. Li, C. Zhang, and G. Song, “Research advances and debates on tubular mechanics in oil and gas wells,” Journal of Petroleum Science and Engineering, Vol. 151, pp. 194–212, Mar. 2017, https:// • D. Zhang, M. Wu, C. Lu, L. Chen, and W. Cao, “A deviation correction strategy based on particle filtering and improved model predictive control for vertical drilling,” ISA Transactions, Vol. 111, No. 1, pp. 265–274, May 2021, https://doi.org/10.1016/j.isatra.2020.11.023 • Z. Lian, Q. Zhang, T. Lin, and F. Wang, “Experimental and numerical study of drill string dynamics in gas drilling of horizontal wells,” Journal of Natural Gas Science and Engineering, Vol. 27, pp. 1412–1420, Nov. 2015, https://doi.org/10.1016/j.jngse.2015.10.005 • M. Sarker, D. G. Rideout, and S. D. Butt, “Dynamic model for 3D motions of a horizontal oilwell BHA with wellbore stick-slip whirl interaction,” Journal of Petroleum Science and Engineering, Vol. 157, pp. 482–506, Aug. 2017, https://doi.org/10.1016/j.petrol.2017.07.025 • P. Charan Jena, “Identification of crack in SiC composite polymer beam using vibration signature,” Materials Today: Proceedings, Vol. 5, No. 9, pp. 19693–19702, 2018, https://doi.org/10.1016/ • S. P. Parida and P. C. Jena, “Selective layer-by-layer fillering and its effect on the dynamic response of laminated composite plates using higher-order theory,” Journal of Vibration and Control, p. 107754632210811, Apr. 2022, https://doi.org/10.1177/10775463221081180 • A. Ghasemloonia, D. Geoff Rideout, and S. D. Butt, “A review of drillstring vibration modeling and suppression methods,” Journal of Petroleum Science and Engineering, Vol. 131, pp. 150–164, Jul. 2015, https://doi.org/10.1016/j.petrol.2015.04.030 • J. Tian, Y. Yang, and L. Yang, “Vibration characteristics analysis and experimental study of horizontal drill string with wellbore random friction force,” Archive of Applied Mechanics, Vol. 87, No. 9, pp. 1439–1451, Sep. 2017, https://doi.org/10.1007/s00419-017-1262-9 • P. K. Saraswati, S. Sahoo, S. P. Parida, and P. C. Jena, “Fabrication, characterization and drilling operation of natural fiber reinforced hybrid composite with filler (Fly-Ash/Graphene),” International Journal of Innovative Technology and Exploring Engineering, Vol. 8, No. 10, pp. 1653–1659, Aug. 2019, https://doi.org/10.35940/ijitee.j1253.0881019 • A. S. Yigit and A. P. Christoforou, “Coupled axial and transverse vibrations of oilwell drillstrings,” Journal of Sound and Vibration, Vol. 195, No. 4, pp. 617–627, Aug. 1996, https://doi.org/ • A. P. Christoforou and A. S. Yigit, “Dynamic modelling of rotating drillstrings with borehole interactions,” Journal of Sound and Vibration, Vol. 206, No. 2, pp. 243–260, Sep. 1997, https:// • S. L. Chen and M. Géradin, “An improved transfer matrix technique as applied to BHA lateral vibration analysis,” Journal of Sound and Vibration, Vol. 185, No. 1, pp. 93–106, Aug. 1995, https:// • P. D. Spanos, A. M. Chevallier, and N. P. Politis, “Nonlinear stochastic drill-string vibrations,” Journal of Vibration and Acoustics, Vol. 124, No. 4, pp. 512–518, Oct. 2002, https://doi.org/ • Z. Weiping and D. Qinfeng, “Effect of prebent deflection on lateral vibration of stabilized drill collars,” SPE Journal, Vol. 16, No. 1, pp. 200–216, Mar. 2011, https://doi.org/10.2118/120455-pa • W. Wang et al., “The dynamic deviation control mechanism of the prebent pendulum BHA in air drilling,” Journal of Petroleum Science and Engineering, Vol. 176, pp. 521–531, May 2019, https:// • M. Kapitaniak, V. Vaziri, J. Páez Chávez, and M. Wiercigroch, “Numerical study of forward and backward whirling of drill-string,” Journal of Computational and Nonlinear Dynamics, Vol. 12, No. 6, p. 06100, Nov. 2017, https://doi.org/10.1115/1.4037318 • M. Kapitaniak, V. Vaziri, J. Páez Chávez, and M. Wiercigroch, “Experimental studies of forward and backward whirls of drill-string,” Mechanical Systems and Signal Processing, Vol. 100, pp. 454–465, Feb. 2018, https://doi.org/10.1016/j.ymssp.2017.07.014 • S. P. Parida, P. C. Jena, S. R. Das, D. Dhupal, and R. R. Dash, “Comparative stress analysis of different suitable biomaterials for artificial hip joint and femur bone using finite element simulation,” Advances in Materials and Processing Technologies, pp. 1–16, Jul. 2021, https://doi.org/10.1080/2374068x.2021.1949541 • H. Wang et al., “Modeling and analyzing the motion state of bottom hole assembly in highly deviated wells,” Journal of Petroleum Science and Engineering, Vol. 170, pp. 763–771, Nov. 2018, https:/ • W. Li, G. Huang, H. Ni, F. Yu, B. Huang, and W. Jiang, “Experimental study and mechanism analysis of the motion states of bottom hole assembly during rotary drilling,” Journal of Petroleum Science and Engineering, Vol. 195, p. 107859, Dec. 2020, https://doi.org/10.1016/j.petrol.2020.107859 • W. Li, G. Huang, F. Yu, H. Ni, W. Jiang, and X. Zhang, “Modeling and numerical study on drillstring lateral vibration for air drilling in highly-deviated wells,” Journal of Petroleum Science and Engineering, Vol. 195, p. 107913, Dec. 2020, https://doi.org/10.1016/j.petrol.2020.107913 • X.-H. Zhu and B. Li, “Numerical simulation of dynamic buckling response considering lateral vibration behaviors in drillstring,” Journal of Petroleum Science and Engineering, Vol. 173, pp. 770–780, Feb. 2019, https://doi.org/10.1016/j.petrol.2018.09.090 • M. Cai, L. Mao, X. Xing, H. Zhang, and J. Li, “Analysis on the nonlinear lateral vibration of drillstring in curved wells with beam finite element,” Communications in Nonlinear Science and Numerical Simulation, Vol. 104, p. 106065, Jan. 2022, https://doi.org/10.1016/j.cnsns.2021.106065 About this article Mechanical vibrations and applications drill string dynamics air drilling random collision lateral vibration The research leading to these results received funding from [the PetroChina Innovation Foundation] under Grant Agreement No. [2020D-5007-0312] and [the PetroChina-Southwest Petroleum University Innovation Consortium Project] under Grant Agreement No. [2020CX040103]. Data Availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Author Contributions Pan Fang: conceptualization, writing – review and editing; Kang Yang: methodology, writing – original draft preparation; Gao Li: funding acquisition, resources; Qunfang Feng: data curation; Shujie Ding: software. Conflict of interest The authors declare that they have no conflict of interest. Copyright © 2023 Pan Fang, et al. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/22813","timestamp":"2024-11-07T16:00:35Z","content_type":"text/html","content_length":"223225","record_id":"<urn:uuid:5efe4851-e92c-4c58-905a-6ae6883f3a52>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00836.warc.gz"}
Help putting these formulas as c++ expression i need to write a program that computes an equivalent resistor value if it is in either series or parallel . (in c++). how will i write this in c++ : [TEX] R_\mathrm{eq} = R_1 \| R_2 = {R_1 R_2 \over R_1 + R_2} [/TEX] (parallel) [TEX] R_\mathrm{eq} = R_1 + R_2 + \cdots + R_n[/TEX](series) thank you Here is something to get you started : #include <iostream> using namespace std; int main(){ Now I am not sure about the specification. Do you ask the user to input a range of Resistor values and compute its parallel or series value, or is it just 2 values? All you have to do is enter them just like you see them. Just be mindful of your parentheses. For the series situation, you'll probably want to use a loop of some sort and just total up the user's input values. You're kidding me, this thread sat for an hour and I get ninja'd. :( I have to ask for a range of resistor values. i have to first ask the user whether they want to compute a resistor in series or parallel, then from there i have to either compute it in series or parallel. I also have to compute the resistor value from the color code of the resistor but that is totally something different. It is basically two programs in one. but how do i actually put in the equation? just as i see them? i am using a do while loop by the way. Post your code. From what I got, you should be doing something like this : float calculateParallelValues(float resistors[], const int numOfResistors){ float calculateSeriesValues(float resistors[], const int numOfResistors){ int main(){ const int MAX_RESISTORS = 100; //maximum number of resistors float resistors[MAX_RESISTORS ] = {}; get resistor values and store them into resistors array ask if user want series value if so call calculateSeriesValues(); else calculateParallelValues(); return 0; Reply to this Topic
{"url":"https://www.daniweb.com/programming/software-development/threads/313002/help-putting-these-formulas-as-c-expression","timestamp":"2024-11-04T11:50:33Z","content_type":"text/html","content_length":"75825","record_id":"<urn:uuid:154d616d-549b-4a6f-8c99-2ba6b481be7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00656.warc.gz"}
Allreduce Mini-Exercise Obviously, MPI_MAX isn't the only operation that may be useful in a global computation. Take a look at either of the sample codes below. Once the question mark is removed, either the C or Fortran version of the program will compile correctly. But to make the computed number match the answer calculated by the formula, you will need to substitute a different operation in the call to MPI_Allreduce. Can you deduce the correct operation? MPI_Allreduce Exercise ! Replace MPI_MAX? with the correct operation program allreduce use mpi_f08 double precision :: val, sum icomm = MPI_COMM_WORLD knt = 1 call mpi_init(ierr) call mpi_comm_rank(icomm,mype,ierr) call mpi_comm_size(icomm,npes,ierr) val = dble(mype) call mpi_allreduce(val,sum,knt,MPI_REAL8,MPI_MAX?,icomm,ierr) ncalc = ((npes-1)*npes)/2 print '(" pe#",i5," sum =",f5.0, " calc. sum =",i5)', & mype, sum, ncalc call mpi_finalize(ierr) end program If you don't recognize the formula, you can always try changing the MPI operation and testing the program with different numbers of processes until the answer always comes out right. Or, you can peek at the correct operation by hovering here. On Stampede2 or Frontera, copy and paste the sample code into a command line editor, then compile and run it using an interactive session. The Stampede2 and Frontera CVW Topics explain these steps in more detail. • Compile using a command like those shown below: % mpif90 allreduce.f90 -o allreduce_f % mpicc allreduce.c -o allreduce_c • Start an interactive session using: % idev -N 1 -n8 • Run the code using the ibrun MPI launcher wrapper. Try varying the number of processes from 2 to 8: % ibrun -np 8 allreduce_c
{"url":"https://cvw.cac.cornell.edu/mpicc/global-computing/allreduce-exercise","timestamp":"2024-11-15T01:26:28Z","content_type":"text/html","content_length":"26229","record_id":"<urn:uuid:336d9880-b366-434c-a424-f3e71d5ec795>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00797.warc.gz"}
Harmonics and Overtones | Curious Toons Table of Contents Welcome, future physicists! As we embark on this thrilling journey through the universe of physics, prepare to uncover the secrets hidden in the fabric of reality. Have you ever marveled at how a roller coaster defies gravity, or pondered what makes a light bulb glow? Physics isn’t just about equations and formulas; it’s about understanding the world around you—from the tiniest particles to the vastness of galaxies. Imagine standing on the edge of a black hole or experiencing the exhilarating rush of the speed of light. Each concept we explore this year will open new doors to questions like: What is time? Can we really bend it? How do we harness energy from the sun? Throughout this course, you’ll not only learn the fundamental laws of motion and energy, but you’ll also engage in hands-on experiments that turn theoretical concepts into tangible experiences. Get ready to ignite your curiosity, challenge your perceptions, and maybe even discover a passion that lights your path forward. Remember, physics is everywhere—it’s the magic behind everyday phenomena waiting for you to unlock! Let’s dive in and see what wonders await! 1. Introduction to Sound Waves 1.1 Nature of Sound Waves Sound waves are mechanical waves that propagate through various mediums—such as air, water, and solids—by causing particles of the medium to vibrate. These vibrations transfer energy through the medium in the form of longitudinal waves, where particle displacement occurs parallel to the direction of wave propagation. Sound can be characterized by several properties: frequency, which determines the pitch; wavelength, which is the distance between two consecutive points in phase; and amplitude, which affects the loudness of the sound. The speed of sound varies based on the medium and temperature; it travels fastest in solids and slowest in gases. For example, at room temperature, sound travels approximately 343 meters per second in air, compared to around 1,480 meters per second in water. Additionally, sound waves can be classified as harmonic waves, meaning they can be expressed as a series of sine or cosine functions. This gives rise to complex sounds made up of fundamental frequencies and overtones, enriching the auditory experience. Understanding sound waves is foundational for diving into topics like harmonics and acoustics, which describe how different sound waves interact and produce various musical tones. Key Properties of Sound Waves Property Description Frequency Determines pitch (measured in Hertz) Amplitude Determines loudness (measured in decibels) Wavelength Distance between two consecutive points in phase Speed Varies by medium: 343 m/s in air, 1,480 m/s in water 1.2 Types of Sound Waves Sound waves can be categorized into two main types: longitudinal waves and transverse waves. Longitudinal waves are the most common type in sound propagation. In these waves, the particles of the medium (like air, water, or solids) vibrate parallel to the direction of the wave’s travel. This results in regions of compression and rarefaction, allowing the energy of the sound wave to move through the medium. Conversely, transverse waves involve particle motion perpendicular to the direction of wave travel. However, while transverse waves are prevalent in electromagnetic waves (like light), they are not typical for sound waves in fluids. Table: Comparison of Sound Wave Types Feature Longitudinal Waves Transverse Waves Particle Motion Parallel to wave direction Perpendicular to wave direction Medium Required Yes (cannot travel through a vacuum) Yes (cannot travel through a vacuum) Examples Sound waves in air, water, solids Light waves, waves on a string Understanding these types helps clarify the behavior and characteristics of sound as it travels through different media, forming the foundation for further studies in harmonics and overtones. 2. Understanding Harmonics 2.1 Fundamental Frequency The fundamental frequency, often referred to as the first harmonic, is the lowest frequency produced by a vibrating object, such as a string or an air column. This frequency corresponds to the basic pitch of the sound produced and serves as the foundation upon which higher frequencies, or overtones, are built. In the case of a vibrating string fixed at both ends, the fundamental frequency is determined by the string’s length, tension, and mass per unit length. The equation that defines this relationship is: f_1 = \frac{1}{2L} \sqrt{\frac{T}{\mu}} where ( f_1 ) is the fundamental frequency, ( L ) is the length of the string, ( T ) is the tension in the string, and ( \mu ) is the mass per unit length. For sound waves in an open tube, the fundamental frequency is also determined by the length of the tube but varies based on whether the tube is open at both ends or closed at one end. Understanding the fundamental frequency is crucial as it sets the stage for exploring harmonics and how they contribute to the unique timbre of different musical instruments. 2.2 Definition of Harmonics Understanding Harmonics Harmonics are specific frequencies at which a system, such as a vibrating string or an air column, can resonate. They are integral multiples of a fundamental frequency, which is the lowest frequency at which the system vibrates. The fundamental frequency, often referred to as the first harmonic, determines the pitch of the sound produced. Subsequent harmonics, or overtones, are categorized as the second harmonic (first overtone), third harmonic (second overtone), and so on. Each harmonic corresponds to a distinctive vibrational mode, where the string or medium vibrates in segments or nodes, resulting in more complex sound waves. For instance, in a stringed instrument, the fundamental frequency is produced when the string vibrates along its entire length, while the second harmonic vibrates in two segments, producing a note that is an octave higher. Understanding harmonics is crucial in music, acoustics, and various engineering applications, as they influence the quality and characteristics of sound. Harmonic Number Frequency Ratio Description 1st Harmonic 1:1 Fundamental 2nd Harmonic 2:1 First Overtone 3rd Harmonic 3:1 Second Overtone 4th Harmonic 4:1 Third Overtone This table illustrates the relationship between harmonic numbers and their respective frequency ratios, demonstrating the foundational concept of harmonics in sound production. 3. Overtones Explained 3.1 First Overtone vs. Second Overtone In the study of harmonics, overtone refers to the frequencies of sound that are higher than the fundamental frequency of a vibrating system. The first overtone is the first harmonic above the fundamental frequency, while the second overtone is the next harmonic above the first overtone. For example, if the fundamental frequency (first harmonic) is ( f1 ), the first overtone (second harmonic) is ( f2 = 2f1 ), and the second overtone (third harmonic) is ( f3 = 3f_1 ). This hierarchy showcases how the harmonics build upon the fundamental frequency, leading to richer sound tones. In practical terms, when a musician plays a string on a guitar, the fundamental tone is the main note heard, while the overtones contribute to the instrument’s unique timbre. Understanding these relationships allows us to appreciate the complexity of sound. The table below highlights this relationship: Harmonic Frequency Relation Fundamental (1st) ( f_1 ) First Overtone (2nd) ( f2 = 2f1 ) Second Overtone (3rd) ( f3 = 3f1 ) This intricate layering of frequencies forms the foundation for many musical and physical phenomena in waves. 3.2 Relation between Overtones and Harmonics Overtones and harmonics are integral concepts in understanding sound waves and musical tones, closely related yet distinct. Harmonics refer to the specific frequencies at which a system oscillates naturally, primarily influenced by its fundamental frequency. The first harmonic, or fundamental frequency, is the lowest frequency produced by a vibrating system. Overtones, on the other hand, are higher frequencies that occur at multiples of this fundamental frequency. Specifically, the first overtone corresponds to the second harmonic, the second overtone to the third harmonic, and so forth. This relationship can be illustrated succinctly in the following table: Harmonic Number Frequency (n × Fundamental) Overtone Number 1st Harmonic 1 × Fundamental 0 (Fundamental) 2nd Harmonic 2 × Fundamental 1st Overtone 3rd Harmonic 3 × Fundamental 2nd Overtone 4th Harmonic 4 × Fundamental 3rd Overtone Thus, while harmonics define the fundamental properties of a vibrating system, overtones provide insight into the complexity and richness of the sound produced, contributing to timbre and texture in 4. Mathematical Representation 4.1 Harmonic Series The harmonic series is a crucial concept in understanding oscillations and waves, particularly in music and physics. It refers to a specific sequence of frequencies that are integer multiples of a fundamental frequency, known as the first harmonic. This series occurs when a system oscillates; for instance, a vibrating string produces a fundamental frequency along with higher frequencies called overtones. The harmonic series can be summarized as follows: • 1st Harmonic (Fundamental): ( f_1 = f ) • 2nd Harmonic (1st Overtone): ( f_2 = 2f ) • 3rd Harmonic (2nd Overtone): ( f_3 = 3f ) • 4th Harmonic (3rd Overtone): ( f_4 = 4f ) • 5th Harmonic (4th Overtone): ( f_5 = 5f ) This sequence can be represented mathematically as ( f_n = nf ), where ( n ) is the harmonic number. The harmonic series plays a vital role in various applications, such as musical theory, where different instruments produce unique timbres based on their harmonic profiles. Understanding these relationships enables a deeper comprehension of sound, resonance, and wave behavior in diverse physical systems. 4.2 Fourier Analysis Fourier Analysis is a mathematical technique used to break down complex periodic functions into simpler sine and cosine wave components, known as harmonics. Named after mathematician Jean-Baptiste Joseph Fourier, this analysis is foundational in understanding waveforms in various fields, including physics, engineering, and signal processing. According to Fourier’s theorem, any periodic function can be expressed as an infinite sum of sine and cosine terms. This allows us to analyze phenomena such as sound waves and vibrations in a structured manner. For example, a complex sound wave can be decomposed into its fundamental frequency and its overtones, revealing the underlying harmonic structure. The coefficients obtained through Fourier Analysis indicate the amplitude and phase shift of each harmonic, thus providing a comprehensive representation of the original waveform. This process is not only pivotal in acoustics but also in data compression algorithms and image processing, showcasing its versatility. In summary, Fourier Analysis serves as a powerful tool for understanding and manipulating waveforms across various scientific and engineering domains. Harmonic Frequency (Hz) Amplitude Fundamental f A1 First Overtone 2f A2 Second Overtone 3f A3 5. Applications in Music 5.1 Musical Instruments and Their Harmonics Musical instruments produce sound through vibrations, which create complex waveforms that include fundamental frequencies and overtones, also known as harmonics. The fundamental frequency is the lowest frequency produced by the instrument and determines the pitch of the sound we perceive. Overtones are higher-frequency vibrations that occur simultaneously and contribute to the timbre or quality of the sound. For instance, a guitar string vibrating not only produces the fundamental note but also harmonics at integer multiples of that frequency, such as 2f, 3f, and so on. Different instruments have unique harmonic structures based on their design and how they produce sound. String instruments, woodwinds, brass, and percussion each emphasize different overtones, creating distinctive sounds. For example, a flute predominantly produces even harmonics, resulting in a sweeter tone, while a trumpet emphasizes odd harmonics, leading to a brighter, fanfare-like Understanding harmonics allows musicians and composers to manipulate sound effectively, enhancing their musical expressions. The interplay between fundamental frequencies and harmonics is crucial in various musical applications, such as tuning, orchestration, and sound engineering, emphasizing the profound relationship between physics and music. Instrument Fundamental Frequency (f) Common Harmonics Guitar f 2f, 3f, 4f Flute f 2f Trumpet f 3f, 5f, 7f Violin f 2f, 3f, 4f 5.2 Tuning Systems and Overtones In music, tuning systems and overtones play crucial roles in how we perceive harmony and melody. A tuning system defines the arrangement of pitches used in music. The most common system is the equal temperament, where the octave is divided into 12 equal parts (semitones), allowing instruments to play in any key. Alternatively, just intonation uses ratios of small whole numbers, creating purer intervals but limiting key changes. Overtones, or harmonics, are higher frequencies produced when an instrument vibrates, arising at integer multiples of the fundamental frequency. For example, if the fundamental frequency is 100 Hz, the first few overtones would be 200 Hz (first overtone), 300 Hz (second overtone), and so on. These overtones contribute significantly to the timbre or color of the sound, distinguishing different instruments even when playing the same note. The relationship between overtones and tuning systems can deeply influence the emotional impact of music, as specific intervals resonate differently within various cultures and musical traditions. Frequency (Hz) Harmonic Number Overtone Frequency (Hz) As we reach the end of our journey through the fascinating world of physics, I want to thank each of you for your enthusiasm and curiosity. Together, we’ve explored the fundamental laws that govern the universe, from the smallest particles to the vast expanses of space. Remember, physics is not just a collection of equations and theories; it’s a lens through which we can understand the beauty of the world around us. The concepts we’ve discussed — from the elegance of Newton’s laws to the mysterious realms of quantum mechanics — are not merely academic; they’re tools for unlocking greater understanding. As you move forward, I encourage you to keep questioning, keep experimenting, and let your imaginations soar. Physics is everywhere, shaping our reality in countless ways. So, whether you’re watching the stars or playing sports, remember, you are engages in a dance of forces, energy, and motion. You have the power to see the extraordinary in the ordinary. As you leave this class, carry the spark of curiosity with you. Embrace challenges, seek knowledge, and may your passion for discovery never fade. The universe awaits, and it’s yours to explore. Thank you for an incredible year!
{"url":"https://curioustoons.in/harmonics-and-overtones/","timestamp":"2024-11-09T20:57:22Z","content_type":"text/html","content_length":"112646","record_id":"<urn:uuid:965eef62-12d6-44c1-8105-e72e3b179f12>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00712.warc.gz"}
When quoting this document, please refer to the following DOI: 10.4230/LIPIcs.CPM.2022.7 URN: urn:nbn:de:0030-drops-161348 URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2022/16134/ Lai, Wenfeng ; Liyanage, Adiesha ; Zhu, Binhai ; Zou, Peng Beyond the Longest Letter-Duplicated Subsequence Problem Motivated by computing duplication patterns in sequences, a new fundamental problem called the longest letter-duplicated subsequence (LLDS) is proposed. Given a sequence S of length n, a letter-duplicated subsequence is a subsequence of S in the form of x₁^{d₁}x₂^{d₂}⋯ x_k^{d_k} with x_i ∈ Σ, x_j≠ x_{j+1} and d_i ≥ 2 for all i in [k] and j in [k-1]. A linear time algorithm for computing the longest letter-duplicated subsequence (LLDS) of S can be easily obtained. In this paper, we focus on two variants of this problem. We first consider the constrained version when Σ is unbounded, each letter appears in S at least 6 times and all the letters in Σ must appear in the solution. We show that the problem is NP-hard (a further twist indicates that the problem does not admit any polynomial time approximation). The reduction is from possibly the simplest version of SAT that is NP-complete, (≤ 2,1, ≤ 3)-SAT, where each variable appears at most twice positively and exact once negatively, and each clause contains at most three literals and some clauses must contain exactly two literals. (We hope that this technique will serve as a general tool to help us proving the NP-hardness for some more tricky sequence problems involving only one sequence - much harder than with at least two input sequences, which we apply successfully at the end of the paper on some extra variations of the LLDS problem.) We then show that when each letter appears in S at most 3 times, then the problem admits a factor 1.5-O(1/n) approximation. Finally, we consider the weighted version, where the weight of a block x_i^{d_i} (d_i ≥ 2) could be any positive function which might not grow with d_i. We give a non-trivial O(n²) time dynamic programming algorithm for this version, i.e., computing an LD-subsequence of S whose weight is maximized. BibTeX - Entry author = {Lai, Wenfeng and Liyanage, Adiesha and Zhu, Binhai and Zou, Peng}, title = {{Beyond the Longest Letter-Duplicated Subsequence Problem}}, booktitle = {33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022)}, pages = {7:1--7:12}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-234-1}, ISSN = {1868-8969}, year = {2022}, volume = {223}, editor = {Bannai, Hideo and Holub, Jan}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/opus/volltexte/2022/16134}, URN = {urn:nbn:de:0030-drops-161348}, doi = {10.4230/LIPIcs.CPM.2022.7}, annote = {Keywords: Segmental duplications, Tandem duplications, Longest common subsequence, NP-completeness, Dynamic programming} Keywords: Segmental duplications, Tandem duplications, Longest common subsequence, NP-completeness, Dynamic programming Collection: 33rd Annual Symposium on Combinatorial Pattern Matching (CPM 2022) Issue Date: 2022 Date of publication: 22.06.2022 DROPS-Home | Fulltext Search | Imprint | Privacy
{"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=16134","timestamp":"2024-11-09T12:59:16Z","content_type":"text/html","content_length":"7777","record_id":"<urn:uuid:1e72946f-1290-4924-a49f-d90d37b7d970>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00369.warc.gz"}
RidgeRun Video Stabilization Library - Basics and Foundation&#x20;-&#x20;Stabilization Algorithms with IMU RidgeRun Video Stabilization Library RidgeRun documentation is currently under development. Table of Contents [Sticky] Algorithms with IMU Here we introduce some of the algorithms that are essential parts of the overall video stabilization process. In this context the use of an integration is done to determine the orientation of an object from its angular velocity, which is the rate of change of its orientation over time. Essentially, it transforms the angular velocity (rotation rate) into the actual orientation. However, this approach does not incorporate data from accelerometers, which measure linear acceleration. Accelerometer data can be fused with gyroscope data (which provides the angular velocity) in a process known as sensor fusion. This fusion can yield a more accurate estimate of the object’s orientation. Simple integration The simple integration algorithm takes only the gyroscope measurements which are the rate of change in rotation to obtain the given rotation, using an initial value of orientation. However, this approach does not incorporate data from accelerometers, which measure linear acceleration. Accelerometer data can be fused with gyroscope data (which provides the angular velocity) in a process known as sensor fusion. This fusion can yield a more accurate estimate of the object’s orientation. The Versatile Quaternion Based Filter or VQF is proposed in the paper Highly Accurate IMU Orientation Estimation with Bias Estimation and Magnetic Disturbance Rejection and it uses a gyroscope bias estimation algorithm and an algorithm for magnetic disturbance detection and rejection. The full version of the algorithm, includes additional features such as rest detection, gyroscope bias estimation, and magnetic disturbance rejection. Notably, the gyroscope bias estimation method in VQF avoids reliance on magnetometer corrections, which enhances its resilience against magnetic disturbances. Instead, the bias estimation is based solely on the disagreement between strapdown integration and accelerometer measurements during motion. This design choice helps maintain accuracy and robustness in challenging environments. This algorithm uses a quaternion representation, allowing accelerometer and magnetometer data to be used in an analytically derived and optimised gradient-descent algorithm and it proposed in the paper An Efficient Orientation Filter for Inertial and Inertial/Magnetic Sensor Arrays. Performance evaluation compared this filter with a proprietary Kalman-based algorithm used in orientation sensors. The results demonstrate that the filter achieves higher levels of accuracy than the Kalman-based method. Notably, the filter's low computational load and ability to operate at low sampling rates present new opportunities for real-time applications with IMU and MARG sensor arrays. The complementary algorithm is proposed in the paper A Quaternion-Based Orientation Filter for IMUs and MARGs and it uses a algebraic solution of a system to obtain a quaternion estimation. Incorporating a complementary filter, the system analyzes signals in the frequency domain to combine them effectively. By applying high-pass filtering on gyroscope data (affected by low-frequency noise) and low-pass filtering on accelerometer data (affected by high-frequency noise), the filter aims to achieve an all-pass and noise-free attitude estimation. This complementary filtering process is crucial for accurate attitude estimation from IMU readings. Sensor data and video frames times might not be synchronized, to smooth the orientation at each frame we need the orientation values for each corresponding frame. This is why we need to interpolate the values for the sensor to obtain the orientation values at each frame. Spherical linear interpolation (Slerp) Slerp is a method of interpolation on the surface of a unit sphere. Given two points on the sphere, SLERP provides a smooth curve that follows the shortest path on the sphere’s surface between these two points. The speed along this curve is constant, which makes SLERP particularly useful for creating smooth transitions. Slerp is often used to interpolate between two orientations or rotations. Slerp is often used with quaternions, allowing to create smooth rotational motion between two orientations. Smooth orientation Spherical Exponential Smoothing Exponential smoothing is a time series forecasting method that uses weighted averages of past observations. The weights decrease exponentially as the observations get older, hence the name "exponential smoothing". It’s a powerful method that can handle data with a systematic trend or seasonal component. We apply the slerp interpolation to the exponential smoothing to obtain more stable Horizon lock The Horizon Lock technique leverages data from both the gyroscope and accelerometer. Specifically, it utilizes accelerometer data to determine the direction of gravity. This information is instrumental in stabilizing footage along the horizon. In order to apply the required transformation to the original frames a rotation is computed between the unstable and stable orientations. This rotation is then transformed into a rotation matrix, serving as a rectification transformation. This transformation rectifies the footage from the unstable space (original orientation) to the stable space (desired orientation). This process is analogous to the undistortion performed on raw images captured by a camera, which corrects for lens distortions. Following this, a set of mapping functions are derived for the image. These functions account for transformations such as translations, rotations, scalings, and cropping, ultimately resulting in stabilized footage.
{"url":"https://developer.ridgerun.com/wiki/index.php/RidgeRun_Video_Stabilization_Library/Basics_and_Foundation/Stabilization_Algorithms_with_IMU","timestamp":"2024-11-11T11:07:04Z","content_type":"text/html","content_length":"71433","record_id":"<urn:uuid:68d07d43-6bc9-4e10-bb4d-5acf058217b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00088.warc.gz"}
Dimension of the State Space of an Ideal Gas There is a subtle difference between space and dimension. A space can have any any number of dimensions, but a subset of that space can have any dimension less than the space itself. In any case the dimension of a set is the number of numbers needed to define any element of the set. Consider the set of points \[\{ (x,x^2) : x \in \mathbb{R} \}\] . This set defines the line which is a curve in the plane. The plane is two dimensional but the curve is one dimensional since only one point is needed to define each point on the curve. This example shows how it is possible for a space to be embedded in a higher dimensional subspace. The example is illustrative. are coordinates on different axes. The coordinates are not independent if one coordinate is a function of the other, or if some coordinates are functions are some other coordinates. Some physical systems are defined by physics properties. One such is a confined ideal gas. A gas is defined totally by its internal heat energy , its temperature , its pressure and volume These quantities are not independent however. is directly related to by the equation \[U= \frac{3}{2} kT\] and we can define and hence in terms of by the equation is the number of mols and \[R=8.314 J/mol/K\] is the Universal molar gas constant. are not needed to describe the state of a gas. We only need - or in fact any two of \[U, \: T, |; p, V\] . The dimension of the state space of an ideal gas is 2 and we can plot any state of an ideal gas in the plane.
{"url":"https://astarmathsandphysics.com/university-physics-notes/thermal-physics/4120-dimension-of-the-state-space-of-an-ideal-gas.html","timestamp":"2024-11-09T14:05:51Z","content_type":"text/html","content_length":"33672","record_id":"<urn:uuid:d56c0dfa-e396-41b2-a98c-fb3252387a4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00527.warc.gz"}
How To Find Diameter From Circumference? | TIRLA ACADEMY We divide the Circumference by pi(𝜋) to find the diameter. Let's take an example: Q- Find the diameter of a circle if its circumference is 22 cm. Circumference = 22 cm Diameter of circle = Circumference÷𝜋 = 22÷3.14 = 7 cm (approx).
{"url":"https://www.tirlaacademy.com/2024/05/how-to-find-diameter-from-circumference.html","timestamp":"2024-11-02T23:57:02Z","content_type":"application/xhtml+xml","content_length":"310500","record_id":"<urn:uuid:6c98841a-6e2b-48e7-88e6-53c72d715dea>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00710.warc.gz"}
Algorithm Efficiency Section 8.9 Algorithm Efficiency When it comes time to put an algorithm to work or choose between competing algorithms, we need a way to measure and compare algorithms. There are many different things we could measure about an algorithm: the number of lines of code to express, how much time it takes to program and debug, the amount of memory used while running, and the time taken to run are all things we might care about. But in general, the most important metric is usually “How much work does it require for a problem of size n”? As users of a computer, what we usually care about is “How quickly do I get my answer”. A search for a file on your computer that shows results in 0.2 seconds is great; a search that takes 20 minutes would be so slow it would only be much less useful. But the time to do something depends on many factors - how fast is the computer? how many other things is it trying to do? how big is the problem (how many files are there to search)? These factors will change depending on who is running the program and when; the fact that you are running a program on a faster computer and it takes less time than when I run it does not tell us anything interesting about the algorithm the program uses. So instead of measuring time when measuring algorithms, we usually think in terms of the work required. The work required to perform a particular algorithm does not generally change when it is executed on different machines or under different conditions. So what exactly is “work”? Let’s try to come up with some descriptions of how much work two different algorithms take. First, we will consider this DrawSquare algorithm: DrawSquare of size (n) Pen Down Repeat 4 times: Move (n) Turn Clockwise (90) Pen Up We might say it requires 10 “units” of work: Pen Down + 4 Moves + 4 Turns + Pen Up. Note that it doesn’t matter what size the square is (assuming Move always takes a fixed amount of time), this algorithm always requires 10 steps of work. If we decided that the pen up and pen down happen instantly and don’t count as work, then we might say the algorithm only took 8 “units” of work (4 Moves + 4 Turns); if we decided that processing the “Repeat” took one unit of work for each loop we might say that the algorithm requires 14 units (Pen Down + 4 Moves + 4 Turns + 4 Repeats + Pen Up). But whatever we decide, the problem always takes that amount of work. Compare that to the following algorithm: DrawShape with (n) sides Pen Down Repeat (n) times: Move 100 Turn Clockwise ( 360 / (n) ) Pen Up If we use it to draw a square and stick to our original accounting method, it takes 10 “units” of work. But what if use it to draw a triangle? Now it would take 8 units of work (Pen Down + 3 Moves + 3 Turns + Pen Up). If we use it to draw a pentagon it would take 12 units of work (Pen Down + 5 Moves + 5 Turns + Pen Up). A decagon would take 22 units (Pen Down + 10 Moves + 10 Turns + Pen Up). For this algorithm, the amount of work grows as the input (number of sides) grows. If we do a little thinking, we could come up with a function relating the amount of work required f(n) to the number of sides n: f(n) = 2n + 2. Each side of the shape takes two steps, and there are two steps for putting the Pen Down and Up. If we decided the Pen Up/Down didn’t count, we might say the function for calculating work was just f(n) = 2n. If we decided that each time we “Repeat” it costs a unit of work, each side would require three steps and the function might be f(n) = 3n + 2. The graph below compares the work required for the two algorithms. The x-axis represents the value of n input to the algorithms while the y-axis represents f(n) - the work required. Figure 8.9.1. Comparison of 3 different ways of counting work for 2 different algorithms - Draw Square (Blue) and Draw Shape (Gold) What should be clear is that it does not matter which accounting system we use when comparing the two algorithms. By any counting system, if we are drawing a large number of sides, the DrawShape algorithm has to do more work than the DrawSquare algorithm. You have attempted of activities on this page.
{"url":"https://runestone.academy/ns/books/published/welcomecs2/algorithms_algorithm-efficiency.html","timestamp":"2024-11-02T20:28:49Z","content_type":"text/html","content_length":"128549","record_id":"<urn:uuid:dc792e83-6a31-414d-9719-40dec641529a>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00592.warc.gz"}
Online IB Math Tutor in Bangalore. - IGCSE IB Math Tutor Revolutionizing IB Math Learning in Bangalore: YK Reddy’s Online Tutoring Unveiling the Future of Math Education: YK Reddy’s Online Approach In the dynamic educational landscape of Bangalore, a new era in IB Mathematics tutoring is unfolding, spearheaded by the experienced and passionate educator, YK Reddy. His online tutoring service is redefining how students engage with challenging curriculums like Math AI (Applications and Interpretation) and Math AA (Analysis and Approaches) at both Standard Level (SL) and Higher Level (HL). Empowering Students through Self-Learning YK Reddy’s unique approach integrates a self-learning platform, boasting thousands of meticulously curated questions. This innovative method not only enhances student engagement but also fosters an environment where learners can take control of their education. This autonomy in learning is crucial for success in rigorous programs like the International Baccalaureate (IB). Expertise That Makes a Difference Specializing in Math AI SL, Math AI HL, Math AA SL, and Math AA HL, YK Reddy brings a wealth of knowledge and an exceptional skill set. His expertise in these courses is a beacon for students in Bangalore seeking to excel in IB Mathematics. His methods are not just about mastering content; they’re about instilling a deeper understanding and appreciation for the subject. Online Learning: A Gateway to Excellence The pivot to online tutoring has opened doors for students across Bangalore and beyond, breaking geographical barriers and making quality education accessible. YK Reddy’s online platform is a testament to the potential of digital education in creating personalized, flexible, and effective learning experiences. Why Choose YK Reddy? Choosing YK Reddy’s online IB Math tutoring service means more than just preparing for exams. It’s about embarking on a journey towards academic excellence, guided by a tutor who is committed to each student’s success. With YK Reddy, students are not just learning; they’re evolving into confident, independent thinkers ready to tackle the challenges of IB Mathematics. Conclusion: Setting New Educational Standards in Bangalore YK Reddy’s online IB Math Tutor service is more than a tutoring program; it’s a catalyst for change in the educational realm of Bangalore. By embracing innovative teaching methods and fostering self-learning, YK Reddy is not just preparing students for exams; he’s preparing them for life. As this revolutionary approach gains momentum, it promises to set new standards in math education, making YK Reddy a name synonymous with success in IB Mathematics Online IB Math Tutor in Bangalore. IB Maths tutors in Bangalore. Math tuitions in Banaglore. IGCSE Math tutor in Bangalore. About Me Kondal Reddy Hello! I am a passionate and experienced math tutor with over 10 years of dedicated teaching experience. I have had the pleasure of helping students of all ages and abilities to excel in mathematics. As a certified Cambridge IGCSE and IB Math tutor, I have specialized knowledge in these curriculums. Favourite Quotes As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality. - Albert Einstein
{"url":"https://www.igcseibmathtutor.com/general/online-ib-math-tutor-in-bangalore/","timestamp":"2024-11-02T15:20:04Z","content_type":"text/html","content_length":"109430","record_id":"<urn:uuid:437217cf-5c6e-49a3-94d6-3dc7e2c4ccca>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00844.warc.gz"}
Theo Johnson-Freyd Talks that I have given (or will give) For conferences that I organized rather than spoke at, click here. 23–27 September 2024: Scottish Talbot On Algebra and Topology: Higher tensor categories and their extensions, Cairngorm Lodge, Glenmore, Scotland. 20–24 May 2024: Atlantic TQFT Spring School, Memorial University Newfoundland. 3–8 December 2023: Subfactors and Fusion (2-)Categories, Banff Research Station. 12–16 June 2023: Dagger higher categories, Zoomland. 1–5 May 2023: Atlantic TQFT Spring School, Wolfville, NS. 6–17 June 2022: Global Categorical Symmetries, Perimeter Institute. 22–25 February 2021: Women at the Intersection of Mathematics and Theoretical Physics, Perimeter Institute (Zoomland). 25–29 May 2020: Elliptic Cohomology and Physics, Perimeter Institute (Zoomland). 13–17 August 2018: Higher Algebra and Mathematical Physics, Perimeter Institute and Max Planck Institute for Mathematics. 8–12 May 2017: Quantum Field Theory on Manifolds with Boundary and the BV Formalism, Perimeter Institute. April 2016: Representation Theory, Integrable Systems and Quantum Fields, Northwestern University. 9–17 March 2013: QFTahoe workshop for young researchers. December 16-20, Vertex algebras and related topics, Academia Sinica, Taipei. The unitary cobordism hypothesis. November 12, SCGCS Annual Meeting Satellite Workshop, New York University. (abstract.) Abstract: A dagger category is a category equipped with extra "unitarity" data: among its isomorphisms, some are marked as "unitary"; to each 1-morphism, there is an "adjoint" 1-morphism, and this assignment sends unitary (but not general!) isomorphisms to their inverses. If a monoidal higher category has duals, it is interesting to ask for a choice of duality functor for which the units and counits are adjoints; whereas duality functors are unique up to contractible choice, so-defined "unitary duality" functors are not. I will explain a higher-categorical generalization of these ideas, and explain how, for any stable tangential structure H, a construction of Freed and Hopkins makes the extended bordism category Bord[n]^H(n) into a dagger symmetric monoidal n -category with unitary duality. This category satisfies a unitary cobordism hypothesis: whereas nonunitary functors Bord[n]^H(n) → C are classified by H(n)-fixed points in C, unitary functors are classified by fixed points for the stablized group H(∞). This talk is based on joint work in preparation with Cameron Krulewski, Lukas Mueller, and Luuk Stehouwer. (hide abstract) The unitary cobordism hypothesis. November 5, Fields Geometry and Physics Seminar, University of Toronto. (abstract.) Abstract: A strict dagger category is a strict 1-category C with an antiinvolutive functor from C to C^op that is the identity on objects. I will explain a natural coherent version of dagger higher category. In the case of higher categories with lots of duals and adjoints, it is natural to ask for "unitary adjoints", and I will explain how such notion is also natural, and also selects a good definition of higher pivotality. A special case is the extended cobordism category: building on a construction of Freed and Hopkins, the cobordism category (extended or unextended) is naturally dagger (with unitary duals, if extended), but only when the tangential structure is stable. The stably-framed cobordism (∞,n)-category satisfies the unitary cobordism hypothesis: it is freely generated by the point among symmetric monoidal dagger (∞,n)-categories with unitary duals. In particular, whereas the non-dagger cobordism category with tangential structure H is a noninvertible refinement of MTH(n), the dagger structure makes it act more like a noninvertible refinement of MTH. This talk is based on arXiv:2403.01651 with many authors and on joint work in progress with Cameron Krulewski, Lukas Mueller, and Luuk Stehouwer. (hide abstract) Exact sequences of Hopf algebras. October 30, Geometry and Physics Seminar, Boston University. (abstract.) Abstract: The notion of "Hopf algebra" makes sense in any braided monoidal category. I will explain an equational version of "exact sequence of finite-dimensional Hopf algebras" that I call "BC-exactness" because it involves a funny variation of Beck-Chevalley condition. Being equational, BC-exactness is preserved by all functors; BC-exactness recovers other versions of exactness when the Hopf algebras in the sequence are separable and coseparable. There is a functor that inputs a retract in a 3-category with duals and outputs a finite dimensional Hopf algebra. A different Beck-Chevalley-type condition supplies a notion of "BC-exact sequence" of retracts, and the retract-to-Hopf-algebra functor takes BC-exact sequences of retracts to BC-exact sequences of Hopf algebras. The framed bordism category with boundary contains a BC-exact sequence of retracts and hence a BC-exact sequence of Hopf algebras that I call the "quantum Puppe sequence" because when evaluated via a sigma model with target X and boundary condition Y → X with fibre F, it recovers the Puppe sequence of homotopy groups for the fibration F → Y → X. This talk is based on joint work in preparation with David Reutter. (hide abstract) The homotopy groups of a TQFT. October 22, Representation theory and tensor categories seminar, University of California, Berkeley. (abstract.) Abstract: To any open-closed TQFT (possibly framed, possibly not fully extended, possibly not compact) I associate a sequence of Hopf algebras that I interpret as "homotopy groups" of the "quantum target space" of the TQFT — the interpretation is justified because when the TQFT does come from a sigma model into some target space, then these Hopf algebras encode the homotopy groups of the target space. To any relative open-closed TQFT, I associate a long exact sequence of Hopf algebras — I interpret this as the "Puppe sequence" for a "quantum fibre bundle". Every tensor higher category is expected to determine an open-closed TQFT; my Hopf algebras generalize the canonical Hopf algebra object in any braided fusion category. Exactness of the Puppe sequence in this case gives a version of S-matrix theory for tensor higher categories, relating (non)degeneracy of Hopf links with Morita (non)invertibility. This talk is based on joint work in preparation with David Reutter. (hide abstract) Wormholes and an exact sequence. September 10, Applications of Generalized Symmetries and Topological Defects to Quantum Matter, Simons Center for Geometry and Physics, Stony Brook. (video.) Quantum homotopy groups. June 17, Categorical Symmetries in Quantum Field Theory, International Centre for Mathematical Sciences, Edinburgh. (abstract. video.) Abstract: An open-closed tqft is a tqft with a choice of boundary condition. Example: the sigma model for a sufficiently finite space, with its Neumann boundary. Slogan: every open-closed tqft is (sigma model, Neumann boundary) for some “quantum space”. In this talk, I will construct homotopy groups for every such “quantum space” (and recover usual homotopy groups). More precisely, these “groups” are Hopf in some category. Given a “quantum fibre bundle” (a relative open-closed tqft), I will construct a Puppe long exact sequence. Retracts in 3-categories and a higher Beck-Chevalley condition will cameo appearances. This project is joint work in progress with David Reutter. (hide abstract) Quantum homotopy groups. June 12, Thematic Program in Field Theory and Topology, University of Notre Dame. (abstract. video.) Abstract: An open-closed tqft is a tqft with a choice of boundary condition. Example: the sigma model for a sufficiently finite space, with its Neumann boundary. Slogan: every open-closed tqft is (sigma model, Neumann boundary) for some “quantum space”. In this talk, I will construct homotopy groups for every such “quantum space” (and recover usual homotopy groups). More precisely, these “groups” are Hopf in some category. Given a “quantum fibre bundle” (a relative open-closed tqft), I will construct a Puppe long exact sequence. Retracts in 3-categories and a higher Beck-Chevalley condition will cameo appearances. This project is joint work in progress with David Reutter. (hide abstract) The universal target category. May 2, Workshop on Global Categorical Symmetries, Center of Mathematical Sciences and Applications, Harvard University. (abstract. slides.) Abstract: Hilbert's Nullstellensatz says that the complex numbers C satisfy a universal property among all R-algebras: every not-too-large nonzero commutative R-algebra maps to C. Deligne proved a similar statement in categorical dimension 1: every not-too-large symmetric monoidal category over R maps to the category sVec[C] of complex super vector spaces. In other words, sVec[C] (and not Vec[C]!) is "algebraically closed". These statements help explain why quantum field theory requires imaginary numbers and fermions. I will describe the universal symmetric monoidal higher category that extends the sequence C, sVec[C], .... This is joint work in progress with David Reutter, and builds on closely-related work by GCS collaborators Freed, Scheimbauer, and Teleman and Schlank et al. (hide abstract) The universal target category. April 11, Topology Seminar, Johns Hopkins University. (abstract. notes.) Abstract: Hilbert's Nullstellensatz says that the complex numbers C satisfy a universal property among all R-algebras: every not-too-large nonzero commutative R-algebra maps to C. Deligne proved a similar statement in categorical dimension 1: every not-too-large symmetric monoidal category over R maps to the category sVec of super vector spaces. These statements help explain why quantum field theory involves imaginary numbers and fermions. I will describe the universal symmetric monoidal higher category that extends the sequence C, sVec, .... In particular, I will tell you what you have to add at each categorical dimension --- in other words, I will tell you about the ``∞-categorical absolute Galois group'' of R. This computation involves a version of surgery theory for topological quantum field theories instead of manifolds, and shows that this absolute Galois group is enticingly similar to the infinite piecewise-linear group PL. This talk is based on joint work in progress with David Reutter. (hide abstract) Quantum homotopy groups. March 21, Higher Categorical Tools for Quantum Phases of Matter, Perimeter Institute for Theoretical Physics. (abstract. slides. video.) Abstract: An open-closed tqft is a tqft with a choice of boundary condition. Example: the sigma model for a sufficiently finite space, with its Neumann boundary. Slogan: every open-closed tqft is (sigma model, Neumann boundary) for some “quantum space”. In this talk, I will construct homotopy groups for every such “quantum space” (and recover usual homotopy groups). More precisely, these “groups” are Hopf in some category. Given a “quantum fibre bundle” (a relative open-closed tqft), I will construct a Puppe long exact sequence. Retracts in 3-categories and a higher Beck-Chevalley condition will make appearances. This project is joint work in progress with David Reutter. (hide abstract) Recent progress on the classification of fusion higher categories. SCGCS internal meeting, Zoom. (slides.) Dagger Categories. December 14, SCGCS Collaboration Assembly, Zoom. (video.) Higher Dagger Categories. November 8, Geometry, Topology & Physics Seminar, New York University Abu Dhabi. (abstract. slides.) Abstract: Hilbert spaces form more than a category: their morphisms maps can be composed, but also every morphism $f : X \to Y$ has a distinguished "adjoint" $f^\dagger : Y \to X$, making it into a "dagger category". This extra data is important for axiomatizing functional analysis, quantum mechanics, quantum information theory.... However, the assignment $f \mapsto f^\dagger$ is unsatisfying from a higher category theorist's perspective because it is "evil", i.e. it violates the principle of equivalence: a category equivalent to a dagger category may not admit a dagger structure. This in particular interferes with generalizing the notion of dagger category to the (non-strict) higher categories necessary for axiomatizing fully-local quantum field theory. In this talk I will propose a manifestly non-evil definition of "dagger $(\infty,n)$-category". The same machinery also produce a non-evil definitions of "pivotal $(\infty,n)$-category" and helps to clarify the relationship between reflection positivity and spin-statistics. This is based on joint work with B. Bartlett, G. Ferrar, B. Hungar, C. Krulewski, L. Müller, N. Nivedita, D. Penneys, D. Reutter, C. Scheimbauer, L. Stehouwer, and C. Vuppulury. (hide abstract) Deeper Kummer theory. September 21, Mathematical Physics Seminar, Perimeter Institute. (abstract. video.) Abstract: A tower is an infinite sequence of deloopings of symmetric monoidal ever-higher categories. Towers are places where extended functorial field theories take values. Towers are a "deeper" version of commutative rings (as opposed to "higher rings" aka E[∞]-spectra). Notably, towers have their own opinions about Galois theory, and think that usual Galois groups are merely shallow approximations of deeper homotopical objects. In this talk, I will describe some steps in the construction and calculation of the deeper Galois group of a characteristic-zero field. In particular, I'll explain a homotopical version of the Kummer description of abelian extensions. This is joint work in progress with David Reutter. (hide abstract) Quantum Homotopy Types. September 11, Researcher Presentations for New PSI Students, Perimeter Institute. (slides.) SVOAs and some exceptional groups. August 22, Universität Hamburg. (abstract.) Abstract: The goal of this talk is to present the answer to a fun classification problem: What are all N=1 SVOAs with no continuous symmetries and with bosonic part a simply connected WZW model, and what are their automorphism groups? It turns out that there are two infinite families, both related to alternating groups, and eleven exceptions, all of which are related to the "Suzuki chain" of exceptional subgroups of Conway's largest sporadic group. The Suzuki chain is "dual" to a certain chain of alternating subgroups of Conway's group, and my construction of the corresponding SVOAs implements this to a sort of "level-rank duality" inside the Conway Moonshine SCFT. Time permitting, I will also speculate that there might be an interesting family of SVOAs related to the Mathieu groups, and that this family is "dual" to a family of actions of the fusion categories SU(2)_k on the Conway Moonshine SCFT. (hide abstract) Super Duper Vector Spaces II: The higher-categorical Galois group of R. August 18, Higher Structures in Functorial Field Theory, University of Regensburg. (abstract. notes.) Abstract: A theorem of Deligne suggests that the complex numbers are not algebraically closed in a 1-categorical sense but that their 1-categorical algebraic closure is the category sVec of complex super vector spaces. In fact, this property uniquely (up to non-unique isomorphism) characterizes sVec amongst complex-linear symmetric monoidal categories. In these talks, we will outline work in progress on constructing complex-linear symmetric n-categories which are higher categorical analogues of sVec in that they are uniquely characterized by being the n-categorical separable closure of the complex numbers. We will explore the resulting higher-categorical absolute Galois group of the complex numbers, and outline a construction of that group very much akin to the surgery-theoretic description of the stable piecewise linear group PL. This is the second half of a 2-part lecture. The first part is given by David Reutter, with whom this work is joint in progress. The slides from Part I are available here. (hide abstract) Topological Umbral Moonshine. July 17-21, Topological Moonshine, UIUC. (notes.) Higher algebraic closure. May 19, OWSM Seminar, Mathematics Institute, Oxford. (abstract.) Abstract: I will describe my construction, joint with David Reutter, of the universal target category for semisimple TQFTs. (hide abstract) Higher algebraic closure. April 27, Colloquium, The Ohio State University. (abstract. slides. video.) Abstract: The fundamental theorem of algebra, as Hilbert explained, asserts that every consistent system of polynomial equations over R has a solution over C. Together with David Reutter, we have established a "fundamental theorem of higher algebra": we have constructed and analyzed the n-category in which every consistent (and semisimple) system of "n-categorical polynomial equations" has a solution. In this talk, I will explain a bit about our construction, and why a quantum physicist might care. (hide abstract) Higher algebraic closure. April 11, Higher Structures Seminar, Feza Gürsey Institute. (abstract. slides.) Abstract: Deligne's work on Tannakian duality identifies the category sVec of super vector spaces as the "algebraic closure" of the category Vec of vector spaces (over C). I will describe my construction, joint with David Reutter, of the higher-categorical analog of sVec: the algebraic closure of the n-category of "n-vector spaces". The construction mixes ideas from Galois theory, quantum physics, homotopy theory, and fusion category theory. Time permitting, I will describe the higher-categorical Galois group, which turns out to have a surgery-theoretic description through which it is almost, but not quite, the group PL. (hide abstract) Homotopy Quantum Groups. November 18, 2022 Simons Collaboration on Global Categorical Symmetries Annual Meeting, Simons Foundation. (abstract. slides. video.) Abstract: Systems of global categorical symmetry can be thought of as quantum higher groups; I will define and describe their homotopy quantum groups in which two operators represent the same class if they are related by a quantum (noninvertible) homotopy. When the categorical symmetry is a usual higher group, these homotopy quantum groups recover its usual homotopy groups. For fusion higher categories, two operators are in the same “quantum homotopy class” if and only if they are related by a condensation of operators of higher homotopical degree. Although in general the homotopy quantum groups can reflect the noninvertible nature of categorical symmetries, it happens remarkably often that they are honest groups — that all operators are “invertible up to quantum homotopy,” but that the homotopy itself is noninvertible. This provides an interesting middle ground between fully invertible and fully noninvertible symmetries. This talk is based on joint work with David Reutter. (hide abstract) SVOAs and some exceptional groups. October 25, Quantum Symmetries, Centre de Recherches Mathématics. (abstract. video. notes.) Abstract: This talk will be in two parts. In the first part of the talk, I'll present the answer to a fun classification problem: What are all N=1 SVOAs with some natural properties, and what are their automorphism groups? It turns out that there are two infinite families, both related to alternating groups, and eleven exceptions, all of which are related to the "Suzuki chain" of exceptional subgroups of Conway's largest sporadic group. The Suzuki chain is "dual" to a certain chain of alternating subgroups of Conway's group, and my construction of the corresponding SVOAs implements this to a sort of "level-rank duality" inside the Conway Moonshine SCFT. In the second half of my talk, I will say a bit more about various constructions about duality, gauging, and coset models. In particular, I will speculate that there might be an interesting family of SVOAs related to the Mathieu groups, and that this family is "dual" to a family of actions of the fusion categories $SU(2)_\ell$ on the Conway Moonshine SCFT. (hide abstract) Hypergroups and fusion higher categories. October 6, Mathematical Physics Group Meeting, Perimeter Institute. (abstract.) Abstract: Hypergroups are a piece of "lower" algebra in which rather than a well-defined group law, group elements multiply to each other in a probabilistic way. I will explain that all fusion higher categories, and all (possibly-relative, possibly-unextended) TQFTs — both very much objects of "higher" algebra — have "homotopy hypergroups". They encode the "fusion rules", they have "S-matrices", a "Verlinde formula", and all that jazz. This is based on joint work in progress with David Reutter. (hide abstract) A 4D TQFT which is not (quite) a gauge theory. October 4, Symmetry Seminar, Oxford. (abstract. slides. video.) Abstract: Some people have accused me of proving that every (nice enough) 4D TQFT is equivalent to a gauge theory. A version of this statement is true, but only if your notion of "gauge theory" allows dynamical spin structures among the "gauge fields", and some rather complicated terms in the Lagrangian that couple the spin structures to the other gauge fields. That said, many of these 4D TQFTs also admit "dual" descriptions as true gauge theories for higher-form groups. In this talk, I will explain exactly which 4D TQFTs are equivalent to true (higher-group) gauge theories theories, and present a minimal counterexample to the belief that all 4D TQFTs are true gauge theories. (hide abstract) Hypergroups and fusion higher categories. September 29, Higher categories and topological order, AIM. (notes.) Global categorical symmetry and higher fusion categories. September 19, PSI faculty presentations, Perimeter Institute. (slides.) What is a fusion higher category? September 9, Higher Symmetry and Quantum Field Theory, Aspen Center for Physics. Classification of (semisimple) TQFTs. March 29, Math QFT seminar, MPIM. (abstract. slides.) Abstract: If you ask a mathematician for a classification of (fully extended, framed) TQFTs, she will probably tell you that they are classified by fully-dualizable objects of the target n-category, and in particular the classification depends on your choice of target n-category. If you ask a physicist, on the other hand, she will tell you that "(T)QFT" is a primitive notion, and that the classification question has a well-defined answer. These two perspectives combine into the following challenge: define, construct, and analyze the "universal target category" in which all TQFTs take values. In this talk, I will describe the solution to this challenge in the case of semisimple TQFTs. This is joint work in progress with David Reutter. (hide abstract) Why the spaces of N=(0,1) susy QFTs form a spectrum? March 7, Stolz–Teichner Seminar, Oxford. (slides) Categorified algebraic closure. February 15, ATCAT, Dalhousie. (abstract. notes. video.) Abstract: A famous theorem of Deligne's says that any (abelian, C-linear) symmetric monoidal category satisfying certain mild size constraints admits a symmetric monoidal functor to the category sVec of super vector spaces. Deligne used this result to classify such symmetric monoidal categories in terms of representation theories of algebraic groups — this is the "Tannakian Duality". A few years ago, I pointed out that Deligne's theorem has a neat interpretation: it says that the symmetric monoidal category sVec is the "algebraic closure" of the symmetric monoidal category Vec of vector spaces; that the extension of Vec into sVec is "Galois"; and that the Tannakian Duality is a categorified version of the Galois correspondence. In this talk, I will explain the statement of Deligne's theorem and my interpretation, and mention some aspects of Deligne's proof. The version of the story I will present here was developed in conversation with D. Reutter. ( hide abstract) Algebraically closed higher categories. December 5, Geometry and Topology Seminar, Haifa. (abstract. slides. video.) Abstract: Super Tannakian duality can be interpreted as saying that the symmetric monoidal category Vec is not algebraically closed, but rather its algebraic closure is the category sVec of super vector spaces. In this talk, I will explain how to construct the algebraic closure of the symmetric monoidal n-category nVec. The Galois group is almost, but not quite, the stable orthogonal group. The cokernel of the J homomorphism appears in an interesting way. This is based on joint work in progress with David Reutter. (hide abstract) Algebraically closed higher categories. December 3, Mathematical Physics Seminar, Perimeter Institute. (abstract. video.) Abstract: I will report on my progress, joint with David Reutter, to construct and analyze the algebraic closure of nVec — in other words, the universal n-category of framed nD TQFTs. The invertibles are Pontryagin dual to the stable homotopy groups of spheres. The Galois group is almost, but not quite, the stable PL group. An invertible TQFT can be condensed from the vacuum if and only if it trivializes on (possibly-exceptional) spheres. (hide abstract) Classification of topological quantum field theories. November 18, CTP Seminar, QMUL. (abstract. slides.) Abstract: Modulo some vitally important ansätze, subtleties, provisos, and work in progress, all topological quantum field theories are gauge theories for higher finite groups. (hide abstract) TMF and SQFT: questions and conjectures. November 4, Generalized Cohomology and Physics, ICTP. (abstract. slides. video.) Abstract: The Monster is a particularly magical finite group, with spooky relations to both number theory and quantum physics. I will explain its significance, and hint at some of the mysteries that still remain. (hide abstract) The Monster. October 27, Honours Seminar, Dalhousie. (abstract.) Abstract: The Monster is a particularly magical finite group, with spooky relations to both number theory and quantum physics. I will explain its significance, and hint at some of the mysteries that still remain. (hide abstract) A menagerie of N=1 SVOAs. October 6, NAAP, Kavli IPMU. (abstract. slides.) Abstract: The Conway Moonshine module V^f&natural; is specific "N=1" supersymmetric vertex operator algebra; its name reflects that its automorphism group is the Conway sporadic group Co[1]. It is a supersymmetric analogue of the Monstrous Moonshine module, and a quantum analogue of the Leech lattice. I will tell you about some interesting subalgebras of V^f&natural;, which seem to correspond to some interesting subgroups of Co[1]. Some of these subalgebras fit within a theorem about WZW algebras, and others fit within a conjecture about umbral moonshine. Along the way, I will highlight some of the techniques for building and analyzing SVOAs and superconformal field theories. (hide abstract) Semisimple higher categories. July 26, Western Hemisphere Colloquium on Geometry and Physics. (abstract. slides. video.) Abstract: Semisimple higher categories are a quantum version of topological spaces (behaving sometimes like homotopy types and sometimes like manifolds) in which cells are attached along superpositions of other cells. Many operations from topology make sense for semisimple higher categories: they have homotopy sets (not groups), loop spaces, etc. For example, the extended operators in a topological sigma model form a semisimple higher category that can be thought of as a type of "cotangent bundle" of the target space. The "symplectic pairing" on this "cotangent bundle" is measured an S-matrix pairing aka Whitehead bracket defined on the homotopy sets of any (pointed connected) semisimple higher category, and the nondegeneracy of this pairing is a type of Poincare or Atiyah duality. This is joint work in progress with David Reutter. (hide abstract) Operators and (higher) categories in quantum field theory. July 19-23, Seminar on Arithmetic Geometry and Quantum Field Theory, Korea Institute for Advanced Study. (abstract. digital chalk boards: I, II, III, IV, V. videos: I, II [first five minutes] and II, III, IV, V.) I. A complete mathematical definition of quantum field theory does not yet exist. Following the example of quantum mechanics, I will indicate what a good definition in terms could look like. In this good definition, QFTs are defined in terms of their operator content (including extended operators), and the collection of all operators is required to satisfy some natural properties. II. After reviewing some classic examples, I will describe the construction of Noether currents and the corresponding extended symmetry operators. III. One way to build topological extended operators is by "condensing" lower-dimensional operators. The existence of this condensation procedure makes the collection of all topological operators into a semisimple higher category. IV. Topological operators provide "noninvertible higher-form symmetries". These symmetries assign charges to operators of complementary dimension. This assignment is a version of what fusion category theorists call an "S-matrix". V. The Tannakian formalism suggests a way to recognize higher gauge theories. It also suggests the existence of interesting higher versions of super vector spaces with more exotic tangential structures. (hide abstract) Higher S-matrices. June 18, TQFT Club, Instituto Superior Técnico. (abstract. slides. video.) Abstract: Each fusion higher category has a "framed S-matrix" which encodes the commutator of operators of complementary dimension. I will explain how to construct and interpret this pairing, and I will emphasize that it may fail to exist if you drop semisimplicity requirements. I will then outline a proof that the framed S-matrix detects (non)degeneracy of the fusion higher category. This is joint work in progress with David Reutter. (hide abstract) Minimal nondegenerate extensions. June 11, Fusion Friday, AIM. (notes. video.) Minimal nondegenerate extensions and an anomaly indicator. June 10, Quantum Matter in Mathematics and Physics, CMSA, Harvard. (abstract. slides. video.) Abstract: Braided fusion categories arise as the G-invariant (extended) observables in a 2+1D topological order, for some (generalized) symmetry group G. A minimal nondegenerate extension exists when the G-symmetry can be gauged. I will explain what this has to do with the classification of 3+1D topological orders. I will also explain a resolution to a 20-year-old question in mathematics, which required inventing an indicator for a specific particularly problematic anomaly, and a clever calculation of its value. Based on arXiv:2105.15167, joint with David Reutter. ( hide abstract) Classification of topological orders. June 9, Quantum Mathematics: Quantum Matter and Quantum Information, 75+1 CMS Summer Meeting. (abstract. slides. video.) Abstract: Topological orders have a mathematical axiomatization in terms of their higher fusion categories of extended operators; the characterizing property of these higher fusion categories is that they are satisfy a nondegeneracy condition. After overviewing some of the higher category theory that goes into this axiomatization, I will describe what we do and don't know about the classification of topological orders in various dimensions. (hide abstract) Higher S-matrices. May 20, Higher Structures & Field Theory Seminar, UniVie/Erlangen/Würzburg/TUM. (abstract. slides.) Abstract: Each fusion higher category has a "framed S-matrix" which encodes the commutator of operators of complementary dimension. I will explain how to construct and interpret this pairing, and I will emphasize that it may fail to exist if you drop semisimplicity requirements. I will then outline a proof that the framed S-matrix detects (non)degeneracy of the fusion higher category. This is joint work in progress with David Reutter. (hide abstract) The classification of topological orders. April 1, Mathematics Department Colloquium, OSU. (abstract. slides. pretalk slides.) Abstract: The Landau Paradigm classifies phases of matter by how "ordered" they are, i.e. by their symmetry groups (and symmetry breaking). The difference between liquids and solids fits into this paradigm, as does the Higgs mechanism that gives particles masses in high-energy physics. However, starting around the turn of the (21st) century, it has become clear that there are patterns of "order" or "symmetry" in quantum matter systems which cannot be described by groups. In particular, there are topological phases of matter, characterized by having no local observables whatsoever, which Landau would have thought were completely trivial but which in fact have subtle long-range topological order. To describe these topological orders requires the mathematics of fusion higher categories. In this talk, I will describe the classification of these topological orders in various dimensions and the extent to which the Landau Paradigm does or does not hold. ( hide abstract) Fusion n-category Q&A. March 12, Fusion categories and tensor networks, AIM. Orbifolds. March 4, Moonshine Learning Seminar, IAS. (prepared typed notes. live hand-written slides.) Higher Galois closures. March 3, AGQFT, University of Warwick. (abstract. slides. video.) Abstract: I will describe a mostly-conjectural picture of the higher-categorical separable closure of R. In particular, I will speculate about unitary topological field theory, higher analogues of spin-statistics, homotopy groups of spheres, and the j-homomorphism. (hide abstract) Strongly-fusion 2-categories are grouplike. March 1, Representation Theory Seminar, UMass Amherst. (abstract. slides. video.) Abstract: A fusion category is a finite semisimple monoidal category in which the unit object is indecomposable, equivalently has trivial endomorphism algebra. There are two natural categorifications of this notion: a fusion 2-category is a finite semisimple monoidal 2-category in which the unit object is indecomposable, and a strongly fusion 2-category is one in which the unit object has trivial endomorphism algebra. As I will explain in this talk, fusion 2-categories are extremely rich, with a seemingly-wild classification, whereas strongly-fusion 2-category are very simple: they are essentially just finite groups. Based on joint work with Matthew Yu. (hide abstract) Condensations and components. February 18, University Quantum Symmetries Lectures, NCSU. (abstract. slides.) Abstract: The 1-categorical Schur's lemma, which says that a nonzero morphism between simple objects is an isomorphism, fails for semisimple n-categories when n≥2. Rather, when two simple objects are related by a nonzero morphism, they each arise as a condensation descendant of the other. Because of this, for many purposes the natural n-categorical version of "set of simple objects" is the set of components: the set of simples modulo condensation descent. I will explain this phenomenon and describe some conjectures, including conjectures about "higher categorical S-matrices" and, time permitting, about the image of the j-homomorphism in the homotopy groups of spheres. (hide abstract) A topological umbral moonshine conjecture. November 17, Modularity in Quantum Systems, Kavli Institute for Theoretical Physics. (abstract. slides. video.) Abstract: I will propose a "topological" description of M[24] umbral moonshine. Specifically, I will describe a specific M[24]-equivariant SCFT, and explain that if it is M[24]-equivariantly nullhomotopic in the space of SQFTs — if it can be continuously deformed to an SQFT with spontaneous supersymmetry breaking — then that nullhomotopy would produce the mock modular forms of generalized M[24]-moonshine. I will not construct such a nullhomotopy, but I will provide some evidence of its existence: it is expected that the obstruction for and SQFT to be nullhomotopic is valued in a space of "topological modular forms", and I have calculated that the obstruction vanishes "perturbatively at odd primes". Time permitting, I will suggest that the "optimal growth condition" of umbral moonshine corresponds to working with "topological cusp forms", and I will outline a version of the construction for the umbral groups 2M[12] and 2AGL[3](2). (hide abstract) Holomorphic SCFTs of small index. November 13, Topology, Algebraic Geometry, and Dynamics Seminar, George Mason University. (abstract. slides.) Abstract: I will explain how some questions in theoretical physics and algebraic topology led to a curious result about error-correcting ternary codes. No knowledge of the terms in the title or abstract will be assumed. Based on joint work with Davide Gaiotto. (hide abstract) Pseudounitary slightly degenerate braided fusion categories admit minimal modular extensions. November 10, Wales Mathematical Physics and Physical Mathematics, Cardiff University. (abstract. Abstract: A braided fusion category is "slightly degenerate" if its Muger centre is a copy of SVec: they arise as the line operators of 3d spin-TFTs. It is a longstanding conjecture that any such braided fusion category admits a "minimal modular extension", i.e. an index-2 extension to a nondegenerate braided fusion category. I will outline a proof, which is joint work in progress with David Reutter, of this conjecture in the pseudounitary case. The proof involves traveling into four, and briefly five, dimensions. (hide abstract) Some examples in fusion 2-categories and 3+1D topological order. November 6, Global Categorical Symmetries Seminar. (abstract. slides. video.) Abstract: I have spent the last couple months computing everything I can about the (extended) operator content of a trio of closely related 3+1D bosonic topological orders (≈TFTs): Z[2] gauge theory, spin-Z[2] gauge theory, and an anomalous version of spin-Z[2] gauge theory. [Nontrivial theorem: these are precisely the three 3+1D topological orders with a unique nontrivial particle.] I will tell you some of the results of my computations. In other words, I will tell you the "global categorical symmetries" of these three topological orders. My hope is to illustrate some phenomena that can occur in fusion 2-categories. (hide abstract) 3+1d topological orders with (only) an emergent fermion. October 26, Heidelberg-Munich-Vienna Seminar on Mathematical Physics. (abstract. slides.) Abstract: There are exactly two bosonic 3+1d topological orders whose only nontrivial quasiparticle is an emergent fermion (and exactly one whose only nontrivial quasiparticle is an emergent boson). I will explain the meaning of this sentence: I will explain what a "3+1d topological order" is, and how I know that these are the complete list. Time permitting, I will tell you some details about these specific topological orders, and say what this classification has to do with "minimal modular extensions". (hide abstract) 3+1d topological orders with (only) an emergent fermion. October 20, Representation Theory and Mathematical Physics, UC Berkeley. (abstract.) Abstract: There are exactly two bosonic 3+1d topological orders whose only nontrivial quasiparticle is an emergent fermion (and exactly one whose only nontrivial quasiparticle is an emergent boson). I will explain the meaning of this sentence: I will explain what a "3+1d topological order" is, and how I know that these are the complete list. Time permitting, I will tell you some details about these specific topological orders, and say what this classification has to do with "minimal modular extensions". (hide abstract) Separable and central simple (higher) algebras. October 13, ATCAT, Dalhousie. (abstract. slides.) Abstract: Most of my talk will be a review of the famous characterization of separable algebras in terms of dualizability/adjunctibility conditions, and of central simple algebras in terms of invertibility conditions, in the bicategory of algebras and bimodules. Time permitting, I will describe my recent generalization of these results in which algebras are replaced by monoidal (higher) categories. Prerequisites: some familiarity with the bicategory of algebras and bimodules, as explained for instance in last week's talk by Robert Paré. (hide abstract) Algebraic definition of topological order. August 7, Topological Orders and Higher Structures, ESI Vienna. (abstract. slides. video.) Abstract: I will explain a fully mathematically rigorous definition of (n+1)-dimensional "topological order" in terms of its multifusion n-category of extended operators. This involves understanding "categorical condensation", which is a universal way to build extended operators as networks of lower-dimensional extended operators. It also allows a complete proof of the Lan--Kong--Wen classification of (3+1)-dimensional topological orders. (hide abstract) Mock modularity and a secondary invariant. July 31, String Math 2020, Stellenbosch University, South Africa. (abstract. slides. video.) Abstract: (1+1)d supersymmetric field theories admit a famous deformation invariant called, variously, the Witten Index or the Elliptic Genus, which is valued in integral (weak) modular forms. I will present a "secondary" variation of this invariant, which is measures the obstruction to being a shadow of an integral (weak, generalized) mock modular form. It sees beyond the Witten/ Elliptic index/genus: in particular, it sees some torsion in the space of SQFTs. Based on joint work with Davide Gaiotto. (hide abstract) SPT phases and generalized cohomology. June 23, Algebraic structures in quantum computation IV, UBC. (abstract. slides. video.) Abstract: A priori, classifying n-dimensional SPT phases requires understanding the homotopy type of the topological space I^n of n-dimensional invertible phases of matter. As I will explain, the spaces I^n, for different values of n, compile into a structure called an "Omega-spectrum". This provides a huge advantage: whereas topological spaces are flimsy, Omega-spectra are rigid, and algebraic topologists have developed many powerful techniques for computing with them. In particular, for each n, there is a finite set of groups G such that knowledge of the classification of n-dimensional G-SPTs for those groups determines the classification for all groups. Based on joint work with D. Gaiotto. (hide abstract) Phases of SQFTs. June 8, Mathematics Colloquium, Dalhousie. (abstract. slides.) Abstract: A "phase" of quantum systems (of some type) is a connected component in the space of all quantum systems (of that type). For example, phases of minimally-supersymmetric quantum mechanics models (1D QFTs) are perfectly classified by K-theory, leading to myriad applications in mathematics and physics. I will report on what we do (and don't) know about phases of minimally-supersymmetric 2D QFTs. These are expected to be classified by the generalized cohomology theory of Topological Modular Forms (TMF). In particular, I will describe a new invariant of SQFTs called the "secondary Witten genus" which, on the one hand, sees torsion in TMF, and, on the other hand, connects directly to mock modular forms, and thereby to the modern "umbral" moonshine of Niemeier lattices and K3 surfaces. (hide abstract) On the classification of topological phases. June 1, Colloquium, Perimeter Institute. (abstract. slides. video>.) Abstract: There is a rich interplay between higher algebra (category theory, algebraic topology) and condensed matter. I will describe recent mathematical results in the classification of gapped topological phases of matter. These results allow powerful techniques from stable homotopy theory and higher categories to be employed in the classification. In one direction, these techniques allow for complete a priori classifications in spacetime dimensions ≤6. In the other direction, they suggest fascinating and surprising statements in mathematics. (hide abstract) Gapped condensation in higher categories. March 17, Tensor categories and topological quantum field theories, MSRI. (abstract. video.) Abstract: Idempotent (aka Karoubi) completion is used throughout mathematics: for instance, it is a common step when building a Fukaya category. I will explain the n-category generalization of idempotent completion. We call it "condensation completion" because it answers the question of classifying the gapped phases of matter that can be reached from a given one by condensing some of the chemicals in the matter system. From the TFT side, condensation preserves full dualizability. In fact, if one starts with the n-category consisting purely of ℂ in degree n, its condensation completion is equivalent both to the n-category of n-dualizable ℂ-linear (n-1)-categories and to an n-category of lattice condensed matter systems with commuting projector Hamiltonians. This establishes an equivalence between large families of TFTs and of gapped topological phases. Based on joint work with D. Gaiotto. (hide abstract) A deformation invariant of 2D SQFTs. February 25, New High Energy Theory Center Seminar, Rutgers. (abstract. video.) Abstract: The elliptic genus is a powerful deformation invariant of 2D SQFTs: if it is nonzero, then it protects the SQFT from admitting a deformation to one with spontaneous supersymmetry breaking. I will describe a "secondary" invariant, defined in terms of mock modularity, that goes beyond the elliptic genus, protecting SQFTs with vanishing elliptic genus. The existence of this invariant supports the hypothesis that the space of minimally supersymmetric 2D SQFTs provides a geometric model for universal elliptic cohomology. (hide abstract) A deformation invariant of 2D SQFTs. January 24, Geometry, Topology, and Physics, UC Santa Barbara. (abstract. audio. notes by Dave Morrison.) Abstract: The elliptic genus is a powerful deformation invariant of 2D SQFTs: if it is nonzero, then it protects the SQFT from admitting a deformation to one with spontaneous supersymmetry breaking. I will describe a "secondary" invariant, defined in terms of mock modularity, that goes beyond the elliptic genus, protecting SQFTs with vanishing elliptic genus. The existence of this invariant supports the hypothesis that the space of minimally supersymmetric 2D SQFTs provides a geometric model for universal elliptic cohomology. Based on joint works with D. Gaiotto and E. Witten. (hide abstract) A deformation invariant of 1+1D SQFTs. January 14, Quantum Fields and Strings, Perimeter Institute. (abstract. video.) Abstract: The elliptic genus is a powerful deformation invariant of 1+1D SQFTs: if it is nonzero, then it protects the SQFT from admitting a deformation to one with spontaneous supersymmetry breaking. I will describe a "secondary" invariant, defined in terms of mock modularity, that goes beyond the elliptic genus, protecting SQFTs with vanishing elliptic genus. The existence of this invariant supports the hypothesis that the space of minimally supersymmetric 1+1D SQFTs provides a geometric model for universal elliptic cohomology. Based on joint works with D. Gaiotto and E. Witten. (hide abstract) A deformation invariant of 2D SQFTs. December 11, Diff. Geom, Math. Phys., PDE Seminar, UBC. (abstract. notes.) Abstract: The elliptic genus is a powerful deformation invariant of 2D SQFTs: if it is nonzero, then it protects the SQFT from admitting a deformation to one with spontaneous supersymmetry breaking. I will describe a "secondary" invariant, defined in terms of mock modularity, that goes beyond the elliptic genus, protecting SQFTs with vanishing elliptic genus. The existence of this invariant supports the hypothesis that the space of minimally supersymmetric 2D SQFTs provides a geometric model for universal elliptic cohomology. (hide abstract) Spaces of quantum systems. December 10, Department Colloquium, UBC. (abstract. slides.) Abstract: Physicists have long been interested in answering homotopical questions about (appropriately topologized) spaces of quantum systems --- for example, the connected components of such spaces classify phases of matter (in the solid-liquid-gas sense). Recent evidence suggests that such spaces may also be of interest to pure mathematicians, because in many cases they have the same homotopy types as objects of fundamental interest in topology. I will describe two examples of this phenomenon. First, an example from condensed matter: the classification of topological phases of matter leads to rich category theory, and, conjecturally, to a relationship between cobordism groups and a higher-categorical version of Galois theory. Second, an example from high energy physics: the space of minimally supersymmetric 2D quantum field theories provides, conjecturally, an analytic model for universal elliptic cohomology. (hide abstract) The Monstrous Moonshine Anomaly. November 27, Cohomology of Groups Seminar, Perimeter Institute. (abstract. video. notes.) Abstract: The action of the Monster sporadic group on the Moonshine CFT enjoys an 't Hooft anomaly. I will describe my (complete) calculation of its value, and my (almost complete) calculation of its home; I will also mention my joint work with Treumann on other sporadic groups. The talk will begin with elementary methods in the cohomology of finite groups, and proceed to more and more sophisticated techniques. In particular, I will interpret the Serre Spectral Sequence as providing a "finite-group T-duality" relationship between symmetries of 2d QFTs related by cyclic group orbifold, and use a series of such relationship between the Monster CFT and the Leech lattice to control the anomaly. (hide abstract) Condensation in higher categories. November 20, Higher Structures in Geometry and Physics, Fields Institute. (abstract. video.) Abstract: Idempotent (aka Karoubi) completion is used throughout mathematics: for instance, it is a common step when building a Fukaya category. I will explain the n-category generalization of idempotent completion. We call it "condensation completion" because it answers the question of classifying the gapped phases of matter that can be reached from a given one by condensing some of the chemicals in the matter system. From the TFT side, condensation preserves full dualizability. In fact, if one starts with the n-category consisting purely of ℂ in degree n, its condensation completion is equivalent both to the n-category of n-dualizable ℂ-linear (n-1)-categories and to an n-category of lattice condensed matter systems with commuting projector Hamiltonians. This establishes an equivalence between large families of TFTs and of gapped topological phases. Based on joint work with D. Gaiotto. (hide abstract) TMF and SQFT. November 18. High Energy Theory Seminar, IAS, Princeton. (abstract. video.) Abstract: I will describe my work, all joint with D. Gaiotto and some also joint with E. Witten, to understand the homotopy type of the space of (1+1)d N=(0,1) SQFTs — what a condensed matter theorist would call "phases" of SQFTs. Our motivating hypothesis (due in large part to Stolz and Teichner) is that this space models the spectrum called "topological modular forms". Our work includes many nontrivial checks of this hypothesis. First, the hypothesis implies constraints on the possible values of elliptic genera, and suggests (but does not imply) the existence of holomorphic SCFTs saturating these constraints; we have succeeded in constructing such SCFTs in low central charge. Second, the hypothesis implies the existence of torsion-valued "secondary invariants" beyond the elliptic genus that protect SQFTs from admitting deformations that spontaneously break supersymmetry. I will explain such an invariant in terms of holomorphic anomalies and mock modularity. (hide abstract) Bott periodicity from quantum Hamiltonian reduction. October 24. Colloquium, Dalhousie University, Halifax. (abstract.) Abstract: The "quantization dictionary" posits that constructions in noncommutative algebra often parallel constructions in symplectic geometry. I will explain an example of this dictionary: I will produce the 8-fold periodicity of Clifford algebras as an example of quantum Hamiltonian reduction of a free fermion quantum mechanical system. The exceptional Lie group G_2 will make a cameo appearance. (hide abstract) Condensation and idempotent completion. October 22. ATCAT Seminar, Dalhousie University, Halifax. (abstract.) Abstract: Idempotent (aka Karoubi, aka Cauchy) completion appears throughout mathematics: for instance, it converts the category of free modules to the category of projective modules. I will explain the higher-categorical generalization of idempotent completion. I call it "condensation", because, as I will explain, if you start with a category of gapped phases of matter, then its idempotent completion consists of those phases that can be condensed from the phases you already have. In particular, if you start just with the vacuum phase, and idempotent complete, you recover a very large class of gapped phases, including the Turaev--Viro--Barrett--Westbury models. Moreover, every condensable-from-the-vacuum phase of matter is fully dualizable (i.e. determines a fully-extended TQFT), and conversely every condensable-from-the-vacuum TQFT has a commuting projector Hamiltonian model, and so one finds an equivalence between large classes of TQFTs and condensed phases. Based on joint work with Davide Gaiotto. (hide abstract) Bott periodicity from quantum Hamiltonian reduction. September 16. McGill. (abstract.) Abstract: The "quantization dictionary" posits that constructions in noncommutative algebra often parallel constructions in symplectic geometry. I will explain an example of this dictionary: I will produce the 8-fold periodicity of Clifford algebras as an example of quantum Hamiltonian reduction of a free fermion quantum mechanical system. The exceptional Lie group G[2] will make a cameo appearance. (hide abstract) Phases of 2d SQFTs. August 9. Generalized Symmetries, Anomalies and Observables, Aspen Center for Physics. (notes. abstract.) Abstract: I will describe my work, all joint with D. Gaiotto and some also joint with E. Witten, to understand the homotopy type of the space of (1+1)d N=(0,1) SQFTs --- what a condensed matter theorist would call "phases" of SQFTs. Our motivating hypothesis (due in large part to Stolz and Teichner) is that this space models the spectrum called "topological modular forms"; our work includes many nontrivial checks of this hypothesis. I will warm up with a brief discussion of N=1 quantum mechanics, fermionic anomalies, and K-theory. Then I will mention constraints that the Stolz--Teichner hypothesis places on the indexes of holomorphic SCFTs, and our checks of those constrains. Time permitting, I will end by explaining our realization of the Bunke--Naumann "secondary" invariants in terms of holomorphic anomalies and mock modularity. (hide abstract) Exceptional Mathematics: from Egyptian fractions to heterotic strings. July 23. Colloquium, Canada/USA Mathcamp. (slides without animation (8MB), slides with embedded video (best viewed in Acrobat Reader, 200MB), slides in handout format. abstract.) Abstract: Most of the mathematical universe is regular and repeating, but every once in a while there is an exception, and it leads to all sort of interesting and irregular phenomena. I will explain how the exceptional solutions to a simple problem from antiquity — find all integer solutions to (1/a) + (1/b) + (1/c) > 1 — lead to 20th and 21st century highlights: topological phases of matter, heterotic string theory, and E[8], the most exceptional Lie group. (hide abstract) 0D QFT and Feynman diagrams. June 17. QFT for Mathematicians, Perimeter Institute. (notes, video.) Secondary invariants and mock modularity. May 27. Topology Seminar, Oxford. (abstract.) Abstract: A two-dimensional, minimally Supersymmetric Quantum Field Theory is "nullhomotopic" if it can be deformed to one with spontaneous supersymmetry breaking, including along deformations that are allowed to "flow up" along RG flow lines. SQFTs modulo nullhomotopic SQFTs form a graded abelian group SQFT[•]. There are many SQFTs with nonzero index; these are definitely not nullhomotopic, and indeed represent nontorision classes in SQFT[•]. But relations to topological modular forms suggests that SQFT[•] also has rich torsion. Based on an analysis of mock modularity and holomorphic anomalies, I will describe explicitly a "secondary invariant" of SQFTs and use it to show that a certain element of SQFT[3] has exact order 24. This work is joint with D. Gaiotto and E. Witten. (hide abstract) The Galois action on VOA anomalies. March 18. Higher Symmetries Conference 2019, Aspen Center for Physics. (abstract.) Abstract: An important role for higher symmetries arises when an ordinary group action suffers an 't Hooft anomaly. I will focus on the case of (anomalous) actions on (two-dimensional) holomorphic CFTs. After reviewing the construction and comparing to the 1d case, I will discuss how the anomaly transforms under Galois conjugation of the VOA — it does not transform in the most obvious way, because the relation between VOAs and MTCs is not Galois-equivariant. The actual transformation law explains many of the "24"s that arise in moonshine, and suggests connections between VOAs, algebraic K-theory, and "higher" Brauer groups. (hide abstract) Galois actions on VOA gauge anomalies. February 27. Conference on Number Theory, Geometry, Moonshine & Strings III. Simons Foundation, NYC. (abstract. video available from conference website) Abstract: Assuming a widely believed conjecture, any action of a finite group G on a holomorphic VOA V over C determines a gauge anomaly α living in H^3(G;U(1)). I will explain that this anomaly does not transform in the most obvious way under Galois conjugation. Rather, if V is conjugated by an element γ &in; Aut(C), then α transforms to γ^2(α). This explains many of the appearances of the number 24 in moonshine. (hide abstract) Bott periodicity from quantum Hamiltonian reduction. February 25. Analysis & PDE Seminar, Stanford. (abstract.) Abstract: The "quantization dictionary" posits that constructions in noncommutative algebra often parallel constructions in symplectic geometry. I will explain an example of this dictionary: I will produce the 8-fold periodicity of Clifford algebras as an example of quantum Hamiltonian reduction of a free fermion quantum mechanical system. No knowledge of the words "quantization", "Clifford algebra", "free fermion", or "Hamiltonian reduction" will be assumed. The exceptional Lie group G[2] will make a cameo appearance. (hide abstract) Galois actions on VOA gauge anomalies. February 22. Alg No Th Seminar, UCSC. (abstract.) Abstract: Symmetries of a physical system can be "anomalous"; when they are, there is a "gauge anomaly" living in the cohomology of the group of symmetries. I will explain the definition of this anomaly in the case of quantum mechanics (also called Azumaya algebra) and holomorphic conformal field theory (also called holomorphic vertex operator algebra). I will then explain an anomaly of the VOA case: if a finite group G acts on a holomorphic VOA V over C, then the anomaly lives in H^3(G; C^×), but a Galois automorphism γ does not act simply on the coefficients by α ↦ γ(α), but rather by α ↦ γ^2(α). This explains many appearances of the number 24 in moonshine and suggests many questions relating VOAs to K-theory. (hide abstract) Bott periodicity from quantum Hamiltonian reduction. February 19. Colloquium, UCSC. (abstract.) Abstract: The "quantization dictionary" posits that constructions in noncommutative algebra often parallel constructions in symplectic geometry. I will explain an example of this dictionary: I will produce the 8-fold periodicity of Clifford algebras as an example of quantum Hamiltonian reduction of a free fermion quantum mechanical system. No knowledge of the words "quantization", "Clifford algebra", "free fermion", or "Hamiltonian reduction" will be assumed. The exceptional Lie group G[2] will make a cameo appearance. (hide abstract) Poisson and coisotropic. November 21 and December 12 (three-hour lecture). Koszul Duality Seminar, PITP. (abstract. video of first hour. video of third hour (self-contained).) Abstract: I will explain some subset of the following, probably not in this order: a Poisson version of the AKSZ construction; derived-algebraic versions of the words "ideal" and "coisotropic"; Koszul duality between "Frobenius" and "Lie Bi"; the difference between trees and graphs; a nontrivial fact about Poincare duality in DeRham(S^1). (hide abstract) Holomorphic SCFTs of small index. November 27. Quantum Algebra and Quantum Topology, OSU. (abstract.) Abstract: Stolz and Teichner have conjectured that the moduli space of D=1+1, N=(0,1) QFTs provides a geometric model for Topological Modular Forms. Some important building blocks in this moduli space are the holomorphic superconformal field theories, and the conjecture leads to predictions about the possible values the supersymmetric index of such SCFTs can take. Specifically, the conjecture leads one to predict the existence of SCFTs of small nonzero index, and that the minimal possible index depends in an interesting way on the central charge of the SCFT. I will explain a construction of some SCFTs of indexes equal to the predicted minimal values. The construction leads to a new divisibility result in the seemingly unrelated field of algebraic coding theory. Based on joint work with Davide Gaiotto. (hide abstract) Holomorphic SCFTs of small index. November 8. Mathematical Physics, UIUC. (abstract.) Abstract: Stolz and Teichner have conjectured that the moduli space of D=1+1, N=(0,1) QFTs provides a geometric model for Topological Modular Forms. Some important building blocks in this moduli space are the holomorphic superconformal field theories, and the conjecture leads to predictions about the possible values the supersymmetric index of such SCFTs can take. Specifically, the conjecture leads one to predict the existence of SCFTs of small nonzero index, and that the minimal possible index depends in an interesting way on the central charge of the SCFT. I will explain a construction of some SCFTs of indexes equal to the predicted minimal values. The construction leads to a new divisibility result in the seemingly unrelated field of algebraic coding theory. Based on joint work with Davide Gaiotto. (hide abstract) Galois action on gauge anomalies. October 18. Fusion Categories and Subfactors, BIRS, Banff, Canada. (abstract. video.) Abstract: Assuming a widely-believed conjecture, any action of a finite group G on a holomorphic vertex algebra A determines a "gauge anomaly" ω ∈ H^3(G; U(1)). The construction is fusion category theoretic: any conformal inclusion W ⊂ V, with V holomorphic, determines a fusion category; W = V^G gives a pointed fusion category. I will explain a subtlety in the construction coming from Galois actions. Specifically, if γ is a Galois automorphism, then the anomaly for γ(V) is not, as might be assumed naively, γ(ω), but is rather γ^2(ω). The proof relies on a recent construction by Evans and Gannon of vertex algebras with a given gauge anomaly. (hide abstract) T-duality for finite groups. June 4. Representation Theory, Mathematical Physics and Integrable Systems, CIRM, Luminy, France. (abstract. notes.) Abstract: I will describe a version of "T-duality" in which circles are replaced by finite cyclic groups. This T-duality appears naturally in fusion category theory and in the construction of "twisted orbifolds" of conformal field theories. As an application, I will compute the 't Hooft anomaly of monstrous moonshine. (hide abstract) The fourth cohomology of some sporadic groups. April 17. Geometry, Symmetry and Physics, Yale. (abstract.) Abstract: Whenever a finite group G acts on a 2d quantum field theory, it determines a "gauge anomaly" living in the fourth integral group cohomology of G. I will describe what I know about H^4 for certain sporadic groups, focusing on the most charismatic ones: the Monster and Conway's largest group. In particular, I will suggest that often H^4(G) is cyclic and generated by the gauge anomaly of a distinguished "moonshine" representation of G. (hide abstract) Moonshine anomalies. April 9. Algebra seminar, University at Buffalo. (abstract.) Abstract: Surprisingly many finite simple groups G have cyclic fourth integral group cohomology. Of particular interest are sporadic groups, at least some of which arise, via "moonshine", as automorphism groups of conformal field theories. Any action of a group G on a conformal field theory produces an "anomaly" living in the fourth cohomology of G, and I will speculate that most sporadic groups have distinguished "moonshine" actions on conformal field theories and that the corresponding anomalies generate the cohomology in question. This speculation is supported by various examples, including O'Nan's group O'N and Conway's largest group Co0; I will explain the techniques we used to calculate their fourth cohomologies. I will also tell you what I know about the Monster: although I cannot prove the full speculation, I can calculate the anomaly of the "monster moonshine" theory; if the speculation holds, then all Monster representations have vanishing second Chern class. Time permitting, I will explain a finite-group version of T-duality that I used to compute the Monster's moonshine anomaly. This talk is based in part on joint work with D. Treumann. (hide abstract) Infinitely-categorified commutative algebra. March 17--19. Recent developments in noncommutative algebra and related areas, University of Washington. (abstract.) Abstract: I will introduce an "infinite categorification" of commutative rings that I refer to as "towers" and that are a higher-categorical version of coconnective Ω-spectra. I will describe some basic constructions, including a "suspension" type construction that turns any commutative ring R or any symmetric monoidal linear category C into a tower Σ^•R or Σ^•-1C. I will emphasize the role that finitely generated projective modules and that separable associative algebras play in these constructions. Through these, suspension towers are closely related to constructions in condensed matter and in topological field theory. I will end by suggesting an infinitely-categorified Galois theory, and in particular I will predict that the infinitely-categorified absolute Galois group of the real numbers if the stable orthogonal group. This talk is based in part on joint work with Davide Gaiotto. (hide abstract) Moonshine anomalies. February 9. QMAP seminar, UC Davis. (abstract.) Abstract: Many sporadic groups G admit distinguished "moonshine" representations on conformal field theories --- the original and most famous example being the "defining" representation of the Monster group. Any such action produces an accompanying gauge anomaly living in H^4(G;Z). I will discuss the values of these anomalies, and suggest that often the anomaly of the distinguished "moonshine" action generates the corresponding cohomology group. In particular, I will report on my calculation, joint with David Treumann, of the fourth cohomology of the largest Conway group and of the O'Nan group, and on my calculation of the order of the Monster's moonshine anomaly. The latter calculation uses a construction I call "finite group T-duality" which may be of independent interest. (hide abstract) Higher algebraic closures and phases of matter. January 22. Northeastern University. (abstract. handout.) Abstract: Algebraic closures and Galois groups have been of central importance for hundreds of years. I will present a setting for "higher" commutative algebra in which these notions can be extended. In this setting C is not algebraically closed: its algebraic closure knows about super vector spaces and Deligne's "existence of fiber functors". I will conjecture that the "higher" absolute Galois group of R is the infinite orthogonal group O(∞), and suggest that the stable homotopy groups of spheres arise naturally in "higher" algebraic closures. My setting and conjectures are based on questions coming from the classification of condensed matter systems. In particular, the "spin-statistics theorem" and the experimentally-observed role for cobordism groups in the classification of condensed matter systems both arise naturally from my conjectures. Parts of this talk are based on joint work in progress with Davide Gaiotto and with Mike Hopkins. (hide Moonshine anomalies. January 19. UT Austin. (abstract.) Abstract: Many sporadic groups G admit distinguished "moonshine" representations on conformal field theories --- the original and most famous example being the "defining" representation of the Monster group. Any such action produces an accompanying gauge anomaly living in H^4(G;Z). I will discuss the values of these anomalies, and suggest that often the anomaly of the distinguished "moonshine" action generates the corresponding cohomology group. In particular, I will report on my calculation, joint with David Treumann, of the fourth cohomology of the largest Conway group and of the O'Nan group, and on my calculation of the order of the Monster's moonshine anomaly. The latter calculation uses a construction I call "finite group T-duality" which may be of independent interest. (hide abstract) Higher categories, generalized cohomology, and condensed matter. November 15. Representation Theory and Mathematical Physics, UC Berkeley. (abstract. notes.) Abstract: I will report on joint work in progress with Davide Gaiotto on the classification of gapped phases of matter. I will explain what symmetry protected phases are and why they are classified by reduced generalized group cohomology. I will also introduce the notion of "condensable n-algebra," and the higher category thereof, as an axiomatization of the algebraic structure enjoyed by gapped phases that can be condensed from the vacuum. Finally, I will interpret the Cobordism Hypothesis as the equivalence between (condensable) topological field theories and (condensable) gapped phases. (hide abstract) 576 Fermions. October 24. Algebra Seminar, Emory. (abstract.) Abstract: The Stolz--Teichner conjectures predict that the generalized cohomology theory called Topological Modular Forms has a geometric model in terms of the space of 2-dimensional supersymmetric quantum field theories, and that holomorphic vertex operator superalgebras provide the geometric model for nontrivial degrees of TMF. Since TMF is periodic with period 576, these conjectures in particular predict an equivalence between holomorphic VOSAs of different central charge that had not been discovered by physicists. I will report on progress constructing this "periodicity" equivalence geometrically. Specifically, I will explain the solution to the warm-up problem of constructing geometrically the 8-fold periodicity of real K-theory: my solution realizes this periodicity as an example of super symplectic reduction. I will then explain why I believe the Conway group Co0 will play a role in the 576-fold periodicity problem, and why my recent computation of H^4(Co0) provides evidence for this belief. (hide abstract) Bott periodicity from Hamiltonian reduction. October 5. NT&AG, Boston College. (abstract.) Abstract: I will explain that the 8-fold periodicity of KO arises as a quantum Hamiltonian reduction of a free fermion system. The talk will be elementary: I will explain the words "8-fold periodicity", "quantum Hamiltonian reduction", and "free fermion". The exceptional Lie group G_2 will make an appearance. (hide abstract) Exceptional structures, fermions, anomalies, and Hamiltonian reduction. October 3. Research Seminar in Mathematics, Northeastern. (abstract. notes.) Abstract: My story will begin with Hamiltonian reduction and a super, quantum version thereof. It will end with the cohomology of the Monster group. Along the way, I will talk about Bott periodicity, topological modular forms, the Leech lattice, free chiral fermions, symmetry protected topological phases of matter, and a finite-group version of T-duality. I will, of course, assume no background in any of these areas: my goal will be to explain who all the characters are and why they are all characters in the same story. Parts of the story are still fantasy, and parts are joint with David Treumann. (hide abstract) The Moonshine Anomaly. July 25. Higher Structures Lisbon, Instituto Superior Técnico, Lisbon, 24–27 July 2017. (abstract.) Abstract: Whenever a finite group G acts on a holomorphic conformal field theory, there is a corresponding «anomaly» in H^3(G,U(1)) — a sort of «characteristic class» of the action — measuring the obstruction to gauging the action. After a brief review of the general story, I will describe a construction that I call «finite group T-duality», which allows for information about anomalies to be compared between different field theories. The most famous example of a finite group acting on a conformal field theory is surely the Monster group acting on its natural «moonshine» representation. I will explain how T-duality can be used to calculate the anomaly. Along the way I will also discuss the Conway group and the anomaly for its natural action, the fermionic version of anomalies, the relation to String structures, and how I hope to construct physically the 576-fold periodicity of TMF. (hide abstract) The Moonshine Anomaly. July 19. Maximals Seminar, University of Edinburgh. (abstract.) Abstract: Conformal field theories, and the fusion categories derived from them, provide classes in group cohomology that generalize characteristic classes. These classes are called "anomalies", and obstruct the existence of constructing orbifold models. I will discuss two of the most charismatic groups — the Conway group Co[0] and the Fischer–Griess Monster group M — and explain my calculation that in both cases the anomaly has order exactly 24. The Monster calculation relies on a version of "T-duality" for finite groups which in turn relies on fundamental results about fusion categories. I will try to explain everything from the beginning, and assume no knowledge of the Monster or its cousins. (hide abstract) Orbifolds of conformal field theories and cohomology of sporadic groups. April 14. RTGC Seminar, UC Berkeley. (abstract. notes.) Abstract: I will report on work in progress to understand the fourth integral cohomology of the two most famous sporadic finite groups: the Fischer–Griess Monster and Conway's group Co[0]. These cohomology classes arise when studying orbifolds of conformal field theories; in that world, they are called "anomalies". I will explain the connection between fermionic anomalies and "string structures" on representations. I will also describe how to move information about anomalies through orbifolds. (hide abstract) Advanced integration by parts: the BV formalism. Feb 9. Geometric Structures Laboratory, Fields Institute. (abstract. notes.) Abstract: The phrase "BV formalism" means many things. The first half of my talk will focus on its most basic meaning: a systematic way to organize the "integration by parts" method from undergraduate calculus into a packaging amenable to more general homological algebra (namely, into a twisted de Rham complex). Particularly useful is the Homological Perturbation Lemma. It assures that the algorithms we teach to undergraduates terminate; it produces Feynman diagrams, the ur-example of "perturbative" physics; and, as I will explain, it also applies to nonperturbative integrals, providing a nonperturbative version of "stationary phase approximation". The second half of my talk will generalize the earlier discussion. The "BV formalism" suggests that any system with algebraic properties similar to a twisted de Rham complex should be thought of as an "oscillating integral problem". I will explain one origin of such systems, called the "AKSZ construction". BV-type systems are amenable to homological perturbation lemma techniques. Time permitting, I will explain how I had hoped to prove Kontsevich formality, why my proof failed, and what that failure says about Poincare duality. (hide abstract) Fermionic hamiltonian reduction and periodicity. Feb 1. Geometry and Physics Seminar, Boston University. (abstract.) Abstract: I will describe a "geometric" origin for the famous "Bott periodicity" Morita equivalence between Cliff(8) and R. Specifically, I will explain that that equivalence arises from quantizing the symplectic reduction of fermionic 8-dimensional space by an action by Spin(7). The quaternions and the Lie group G_2 will also make an appearance. Time permitting, I will speculate about a similar "periodicity" equivalence of conformal field theories predicted by conjectures in homotopy theory. In the CFT version, sporadic finite simple groups play a starring role. (hide Ideals in derived algebra and boundary conditions in AKSZ-type field theories. Jan 27. RTGC Seminar, UC Berkeley. (abstract. notes.) Abstract: For each dg operad P, I will present a homotopically-coherent version of "P-ideal". This presentation extends without change to a many-to-many generalization of operads with tree-level compositions called "dioperads". Whereas operads describe algebras, dioperads describe bialgebras, and "P-ideals" for a dioperad P are simultaneously ideals and coideals. In the case where P describes Frobenius algberas, P-ideals show up in relative Poincare duality. In the case where P describes Lie bialgebras, P-ideals are related to coisotropic submanifolds of derived Poisson manifolds. Koszul duality and exact triangles will also appear in my talk. (hide abstract) 576 fermions, the Conway group, and tmf. Sept 27. Institute for Theoretical Physics, Stanford. Moonshine, topological modular forms, and 576 fermions. Sept 22, Mathematical Physics Seminar, Perimeter Institute. (abstract. video.) Abstract: I will report on progress understanding the 576-fold periodicity in TMF in terms of conformal field theoretic constructions. Sporadic finite groups and their cohomology will play a role. (hide abstract) Bott periodicity via quantum Hamiltonian reduction. Sept 2, Representation Theory, Geometry, and Combinatorics Seminar, UC Berkeley. (abstract) Abstract: I will describe the Morita equivalence between Cliff(8) and R in terms of the quantum Hamiltonian reduction of the spin module of Spin(7). Along the way I will mention fermions, the exceptional group G_2, the E_8 lattice, and the quaternions. Time permitting I will also speculate about Conway's group Co_0 and topological modular forms. I will aim to be elementary, at least in the non-speculative portion of the talk, and I always invite arbitrary questions and interruptions. (hide abstract) Bott periodicity via quantum Hamiltonian reduction. May 12, Factorization Algebras and Functorial Field Theories meeting, Oberwolfach. (abstract, notes) Abstract: I will describe a symplectic origin for the famous Morita equivalence between Cliff(8) and R. Specifically, I will explain that this Morita equivalence arises from quantizing the Hamiltonian reduction R^0|8//Spin(7), where Spin(7) acts on R^8 via the spin representation. I will also quantize the reductions R^0|4//Spin(3) and R^0|7//G[2]. (hide abstract) "Spin-statistics" is a categorification of "Hermitian". February 22, RTGC Seminar, Berkeley. (abstract, handout) Abstract: I will describe a "cobordism" language in which to pose requirements on a quantum field theory like being Hermitian (when complex conjugation = orientation reversal) or satisfying Spin-Statistics (when fermions = spinors). This language also a homotopy-theoretic origin for those two requirements: nature distinguished them among all possible similar requirements. Namely, Hermitian field theories arise because π[0](O(∞)) has a unique nontrivial torsor over Spec(R). Spin-statistics field theories arise for the same reason, except that one must replace π[0](O(∞)) with the fundamental groupoid of O(∞) and one must replace commutative algebras with symmetric monoidal categories. (hide abstract) Where does the Spin-Statistics Theorem come from?. November 23, Geometry, Topology and Dynamics Seminar, UIC. (abstract, handout) Abstract: The "spin-statistics theorem" is a physical phenomenon in which spinors --- (-1)-eigenstates of rotation by 360° --- are the same as fermions --- (-1)-eigenstates of switching two identical particles. Physicists usually understand this phenomenon as a fact about certain representations of the Lorentz group. In this talk I will give a very different mathematical "origin" of the spin-statistics theorem. I will explain that spin-statistics arises in precisely the same was as does the physical phenomenon of "unitarity", which in turn depends on a fundamental but nontrivial coincidence: the absolute Galois group of R happens to equal the group of connected components of the orthogonal group. This talk will assume no knowledge of physics. (hide abstract) Spin--Statistics and Categorified Galois Groups. October 23, Conference on Condensed Matter Physics and Topological Field Theory, Perimeter Institute. (abstract, handout) Abstract: I will describe two coincidences in homotopy theory, the second a categorification of the first. The first coincidence is the origin of "unitarity" in field theory. The second is a topological origin for the so-called "spin--statistics theorem". (hide abstract) A higher category theorist's take on the spin--statistics theorem. September 28, Topology Seminar, UIUC. (abstract, handout) Abstract: This talk is about a pair of coincidences in homotopy theory which related quantum field theory with commutative algebra. The first coincidence is the fact that the etale homotopy type of Spec(R) matches the homotopy 1-type of BO(∞), the classifying space of the stable orthogonal group. This coincidence, I will argue, is the reason for "unitary" phenomena in physics. The second coincidence is a categorification of this: I will describe a setting in which Spec(R) has an "etale" homotopy type that matches the homotopy 2-type of BO(∞), and explain how this provides the "spin--statistics theorem" relating spinors to fermions. (hide abstract) The Stokes groupoids of Gualtieri, Li, and Pym. May 4&6, Math 448: Topics in Geometry and Topology, Northwestern. (abstract, notes) Abstract: An overview of the paper Gualtieri, Li, and Pym, "The Stokes Groupoids", 2013, arXiv:1305.7288. No results in this talk are due to the speaker. (hide abstract) Some non-dualizable categories. April 17, Representation Theory, Geometry, and Combinatorics, UC Berkeley. (abstract, handout) Abstract: Linear cocomplete categories provide a categorification of linear algebra. In this talk, I will describe recent work with M. Brandenburg and A. Chirvasitu in which we investigate which linear cocomplete categories are "dualizable" --- if this were the module theory for a commutative ring, these would be the finitely generated projective modules. In fact, I will explain that dualizability *of the category* is closely related to whether the category has enough finitely generated projectives. Examples will come from representation theory and from algebraic geometry: in particular, non-reductive groups and projective varieties provide non-dualizable examples. Applications come from quantum field theory: dualizable linear cocomplete categories arise both in "relative" and in "extended" quantum field theories, and so our results mean that "topological gauge theory for non-reductive groups" and "topological sigma models for projective varieties" cannot be described in this framework. (hide abstract) Local Poincare duality & deformation quantization. April 2, Center for Geometry and Physics, Institute for Basic Science, Pohang, South Korea. (abstract, handout, video (follow talk link)) Abstract: Poincare duality implies, among other things, that the de Rham cohomology of a compact oriented manifold is a commutative Frobenius algebra. Then a version of "local Poincare duality" would be a "homotopy commutative Frobenius algebra" structure on the de Rham complex satisfying some locality conditions. It turns out that there are at least two inequivalent notions of "homotopy commutative Frobenius algebra", depending on whether you work at "tree level" or at "all loop order" in a certain "Feynman" diagrammatics. This choice affects whether local Poincare duality is or is not canonical. The "all loop order" version of local Poincare duality is closely related to Kontsevich-type problems in deformation quantization. In particular, "all loop order" local Poincare duality on S^1 is obstructed; the obstruction answers the question of which Poisson structures admit universal deformation quantizations that do not require taking traces. (hide Some comments on Heisenberg-picture qft. March 18, Max Planck Institute for Mathematics, Bonn, Germany. (abstract, handout) Abstract: The usual Atiyah–Segal "functorial" description of quantum field theory corresponds to the "Schrodinger picture" in quantum mechanics. I will describe a slight modification that corresponds to the "Heisenberg picture", which I will argue is better physically motivated. The example I am most interested in is a version of quantum Chern–Simons theory that does not require the level to be quantized; it provides a neat packaging of pretty much all objects of skein theory. (hide abstract) Twisted field theories and higher-categorical (op)lax transfors. March 3, Topology Seminar, University of Notre Dame, South Bend, IN. (abstract, handout) Abstract: A "Schrodinger picture" (extended) quantum field theory is a functor from some (higher) category of "spacetimes" to some (higher) category of "Hilbert spaces". This framework is powerful and well-studied. Unfortunately, it does not capture many important examples. Instead, most interesting quantum field theories are best described as "morphisms" of some sort between functors from the category of spacetimes --- these are called "twisted" or "relative" or "Heisenberg picture" quantum field theories. The most natural notion of "morphisms of functors" is "natural transformation." Unfortunately, plain (i.e. "strong") natural transformations still fail to accommodate most examples. Instead, what is needed are "lax" or "oplax" natural transformations. In this talk, based on joint work with Claudia Scheimbauer, I will describe the definition of "(op)lax natural transformation" between functors of higher categories, and discuss qualitative differences between "lax" and "oplax" twisted quantum field theories. (hide abstract) Functorial axioms for Heisenberg-picture quantum field theory. January 12, CRG Geometry and Physics Seminar, UBC, Vancouver, BC. (abstract, handout) Abstract: The usual Atiyah–Segal "functorial" description of quantum field theory corresponds to the "Schrodinger picture" in quantum mechanics. I will describe a slight modification that corresponds to the "Heisenberg picture", which I will argue is better physically motivated. The example I am most interested in is a version of quantum Chern–Simons theory that does not require the level to be quantized; it provides a neat packaging of pretty much all objects of skein theory. (hide abstract) Heisenberg-picture quantum field theory. November 17, Quantum Mondays Seminar, Center for Geometry and Physics, Institute for Basic Science, Pohang, South Korea. (abstract, video (follow talk Abstract: The usual Atiyah–Segal axioms describe quantum field theory in terms of a "Schrodinger picture" of physics. I will argue that instead a "Heisenberg picture" is needed, and describe a small modification of those axioms that accommodates this. As an example, I will describe a skein-theoretic version of quantum Chern-Simons theory as a "fully extended oriented Heisenberg-picture tqft". It has the feature that it does not require the "level" to be quantized. It provides in particular a tqft packaging of skein theory, and my hope is that it will shed light on open conjectures in quantum topology. Bits of my talk will be based on joint work with M. Brandenburg, A. Chirvasitu, and C. Scheimbauer. (hide abstract) The CS-WZW correspondence. October 30, CFT Seminar, Northwestern University, Evanston, IL. (abstract, notes) Abstract: My overall goal is to at least explain an assertion that I have tied to Freed–Teleman that chiral WZW is not an absolute theory, but instead a relative theory valued in Chern–Simons theory. To get there, I will ramble for a while about "Heisenberg-picture field theory", and at least give a definition of Chern–Simons theory. I don't really have a complete definition of (quantum) WZW, but I do know what classical WZW theory is, and I'll end by giving the classical story of the correspondence, in which chiral WZW fields are a Lagrangian inside Chern–Simons fields. I will not have time to say anything important, like that this relates the space of conformal blocks for chiral WZW to the Hilbert space for Chern–Simons, because I will instead spend too much time being a bit polemical about how to set up the category theory necessary for qft. (hide abstract) Poisson AKSZ theories. October 3, Homological Methods in Quantum Field Theory, Simons Center for Geometry and Physics, Stony Brook, NY. (abstract, outline, video, live-TeX'ed notes by Gabriel C. Abstract: I will describe a version of the AKSZ construction that applies to possibly-open source manifolds and to possibly-infinite-dimensional Poisson (as opposed to symplectic) target manifolds (the cost being that the target must be infinitesimal). Quantization of such theories has to do with the relationship between dioperads and properads, and to the fact (due to Merkulov and Vallette) that formality in one world does not imply formality in the other. In particular, universal quantization of AKSZ theories on R^d is equivalent to the formality of a certain properad which is formal as a dioperad. I will conjecture that it is also equivalent to formality of the E_d operad. (hide abstract) Lie bialgebra quantization in 2- and 3-dimensional field theory. May 28, Associators, Formality and Invariants Seminar, Northwestern, Evanston, IL. (abstract, notes) Abstract: My goal for this talk is to describe "in pictures" a connection between the Etingof--Kazhdan and Tamarkin proofs of the existence of functorial quantization of Lie bialgebras. As I will explain, a Lie bialgebra provides the data for a 2- and 3-dimensional perturbative topological field theory --- a 3-dimensional field theory "of AKSZ type", a 2-dimensional field theory "of Poisson AKSZ type", and a way for the 2-dimensional theory to live as a "boundary field theory" for the 3-dimensional one. I will argue that Tamarkin's proof involves directly quantizing (the factorization algebra associated to) the 2-d theory (with certain boundary conditions), whereas the Etingof--Kazhdan proof involves quantizing (the Wilson lines in) the 3-d theory (again with appropriate boundary conditions). Joint with Owen Gwilliam. (hide abstract) The Jones polynomial and the Temperley--Lieb TQFT. May 9, Graduate Student Seminar, Northwestern, Evanston, IL. (abstract, notes) Abstract: The Jones polynomial was the first of by now many connections between low-dimensional topology and quantum groups. This talk will only barely touch the latter of these; instead, I will focus on the Jones polynomial and its close cousin, the Temperley--Lieb category. Indeed, Jones originally discovered his polynomial by investigating the algebras that Temperley and Lieb had written down, and I will give an ahistorical version of that discovery. My goal for the talk will be to put these objects, as well as many related objects from low-dimensional topology (going by names like "skein algebra" and "space of SL(2) local systems" and "quantum A polynomial"), into their natural packaging: a structure I call the "Temperley--Lieb TQFT". (hide abstract) Heisenberg-picture TQFTs. May 2, Representation Theory, Geometry, and Combinatorics, UC Berkeley. (abstract, notes) Abstract: The Atiyah–Segal axioms for quantum field theory generalize the "Schrödinger picture" of quantum mechanics (Hilbert spaces of states, partition functions, etc.). I will describe a small modification that corresponds instead to the "Heisenberg picture" (algebras of observables). As examples, I will describe some versions of "fully-extended" quantum Chern–Simons Theory: one built from the category of comodules of a quantized function algebra, and another built from the Temperley–Lieb category. The latter is defined over ℤ and fully extended, and (perhaps most importantly) "at generic level." (hide abstract) Poisson AKSZ theory and homotopy actions of properads. February 21, Modern trends in topological quantum field theory, Erwin Schrödinger International Institute for Mathematical Physics. ( abstract, handout) Abstract: I describe a generalization of the AKSZ construction of topological field theories to allow targets with possibly-degenerate up-to-homotopy Poisson structure. The construction requires investigating in what sense the chains on an oriented manifold carry a chain-level homotopy Frobenius structure. There are two versions of the construction: a "classical field theory" tree-level version, and a "quantum field theory" graph-level version. The tree-level version is well-behaved for all possible spacetimes and targets. The graph-level version is much more subtle, and intimately connected to the "formality" or "quantization" problem for the operad of little n-dimensional disks. (hide abstract) Up-to-homotopy Frobenius structures on manifolds, and how they relate to perturbative QFT. January 22, Topology Seminar, UC Berkeley. (abstract, handout) Abstract: The de Rham homology of an oriented manifold carries a well-known graded-commutative Frobenius algebra structure. Does this structure lift in a geometrically meaningful up-to-homotopy way to de Rham chains? The answer depends on the meanings of "geometrically meaningful" and "up-to-homotopy". I will describe two potential choices for the meanings of these words. Using the first choice, the answer to the question is always Yes. Using the second gives a more subtle situation, in which the answer is No in dimension 1, and related to the formality of the E_n operad in dimension n>1. To explain this relationship (and my interest in the problem) requires a short sojourn in the world of perturbative topological quantum field theory. (hide abstract) Poisson AKSZ theories and quantization. October 24, Geometry Seminar, UT Austin. (abstract, handout) Abstract: In the late 1990s, Alexandrov, Kontsevich, Schwartz, and Zaboronsky introduced a very general construction of classical field theories of "topological sigma model" or "Chern--Simons" type, which is well-adapted to quantization in the Batalin--Vilkovisky formalism. I will describe a generalization, which is to the usual AKSZ construction as "Poisson" is to "symplectic". The perturbative quantization problem for such field theories includes the problem of wheel-free universal deformation quantization and the Etingof--Kazhdan quantization of Lie bialgebras; more generally, it has to do with the formality problem for the E_n operads. The terms "properad" and "Koszul duality" will also make appearances in my talk. (hide abstract) Poisson AKSZ theories and quantization. October 12, Higher Structures in Algebra, Geometry and Physics, Fall Eastern Sectional Meeting of the AMS, Temple University, Philadelphia. (abstract, Abstract: I will describe a Poisson generalization of the AKSZ construction of topological field theories. This version of "classical" AKSZ theory exists for all oriented spacetimes, and resides in the world of dioperads and "quasilocal" factorization algebras. The quantization problem is generically obstructed; as I will discuss, "quantum" AKSZ theories are from the world of properads. The quantization problem is closely related to the formality problem for the En operad. It is also closely related to the question of finding a geometrically-meaningful properadic homotopy-Frobenius structure at the chain level, lifting the Frobenius-algebra structure on the homology of spacetime. (hide abstract) Poisson AKSZ theory, properads, and quantization. October 10, Geometry and Physics, Northwestern, Evanston. (abstract, handout) Abstract: In the late 1990s, Alexandrov, Kontsevich, Schwartz, and Zaboronsky introduced a very general construction of classical field theories of "topological sigma model" or "Chern--Simons" type, which is well-adapted to quantization in the Batalin--Vilkovisky formalism. I will describe a generalization, which is to the usual AKSZ construction as "Poisson" is to "symplectic". The perturbative quantization problem for such field theories includes the problem of wheel-free universal deformation quantization and the Etingof--Kazhdan quantization of Lie bialgebras; more generally, it has to do with the formality problem for the E_n operads. The technical tool needed to pose the quantum construction is the theory of properads (the classical construction corresponds to their genus-zero part, namely dioperads). This leads to a conjectured properadic description of the space of formality quasiisomorphisms for E_n. (hide abstract) A properad action on homology that fails to lift to the chain level. October 10, Geometry and Physics Pre-talk, Northwestern, Evanston. (abstract, handout) Abstract: A tenet of algebraic topology is that algebraic structures on the homology of a space should correspond to structures at the chain level, such that the axioms that hold on homology are weakened to coherent homotopies. For example, the homology of an oriented manifold is a Frobenius algebra --- what about the chains? In this talk, I will explain that for one-dimensional manifolds, the answer is No. I will save comenting on higher-dimensional manifolds for the 4pm talk. To make this precise, I will spend some time discussing the notion of "properad", which generalizes the notion of "operad" to allow many-to-many operations. I will recall the Koszul duality for properads, and how to compute cofibrant replacements. I will not assume that the words "Koszul duality" or "cofibrant replacement" are particularly familiar. (hide abstract) A properadic approach to the deformation quantization of topological field theories. September 25, Algebra and Combinatorics, Loyola University, Chicago. (abstract, notes) Abstract: I will describe how Koszul duality and the bar construction for properads is related to the path integral quantization of topological field theories. As an application, I will give a class of Poisson structures that admit a canonical wheel-free deformation quantization. (hide abstract) A salad of BV integrals and AKSZ field theories, served over a bed of properads; it comes spiced with chain-level Poincare duality and just a pinch of Poisson geometry. September 10, Research Seminar in Mathematics, Northeastern University, Boston. (abstract, notes) Abstract: The "Batalin–Vilkovisky formalism" is a collection of algebraic structures that largely subsume the theory of oscillating integrals. As an appetizer, my talk will begin by motivating this formalism through an investigation of finite-dimensional integrals and integration by parts, in the "semiclassical" (or "rapidly oscillating") limit. Along the way, we will be led to invent Feynman diagrams, and we will find ourselves with a totally algebraic understanding of the (in)dependence of Feynman diagrams on the choice of coordinates, the choice of "gauge fixing", etc. From here, my story moves into two branches, and how much of each branch I explain will depend on audience appetite. Of course, time permitting I will explain both, as they combine to give new results into deformation quantization problems. The antipasto has to do with "dioperads" and "properads", which are algebraic structures for which the multiplication is controlled by certain graphs (just like multiplication in associative algebras is controlled by putting beads on a string). In particular, Koszul duality for properads provides a host of examples of BV / Feynman diagrmamatic "integrals." The main entree has to do with topological field theory. Factorization algebras are a "quantum" version of sheaves --- what makes them "quantum" is that a version of the Heisenberg uncertainty principle disallowing simultaneous measurements is build into the axioms. They provide a framework for understanding a deep construction of topological field theories due to Alexandrov, Kontsevich, Schwarz, and Zaboronsky. (A quantum field theory is "topological" if the classical equations of motion are "the fields are constant.") The AKSZ construction realizes important models, including topological quantum mechanics and Chern--Simons Theory, within the BV formalism. The properadic story from the second part provides new examples, and relates the classical and quantum AKSZ constructions to questions from algebraic topology about lifting Poincare duality to the chain level. We will surely be too full for dessert, but time permitting I would love to describe an example using all of the above machinery: topological quantum mechanics valued in a Poisson manifold. The quantization problem for this theory is generically obstructed --- it is essentially equivalent to the problem of finding a "wheel free universal deformation quantization," and these do not exist. The properadic / BV / AKSZ story identifies (modulo combinatorial calculations that are too hard to do by hand, but should be trivial on a correctly-programmed computer) exactly which Poisson structures do admit a wheel-free deformation quantization. (hide abstract) Star quantization via lattice topological field theory. June 18, String-Math, Simons Center for Geometry and Physics, Stony Brook. (abstract, slides) Abstract: The deformation quantization problem for Poisson manifolds is well-known, and famously answered by Kontsevich more than a decade ago. I will describe a new, purely combinatorial, construction of deformation quantizations of infinitesimal Poisson manifolds. It is closely related to the "factorization algebra" perspective on effective quantum field theory recently introduced by Costello and Gwilliam, and also to a new "lattice" version of topological field theories of AKSZ type — time permitting, I will try to describe these connections. (hide abstract) Lattice Poisson AKSZ Theory. February 4, Algebraic Geometry Seminar, University of British Columbia. (abstract, handout) Abstract: AKSZ Theory is a topological version of the Sigma Model in quantum field theory, and includes many of the most important topological field theories. I will present two generalizations of the usual AKSZ construction. The first is closely related to the generalization from symplectic to Poisson geometry. (AKSZ theory has already incorporated an analogous step from the geometry of cotangent bundles to the geometry of symplectic manifolds.) The second generalization is to phrase the construction in an algebrotopological language (rather than the usual language of infinite-dimensional smooth manifolds), which allows in particular for lattice versions of the theory to be proposed. From this new point of view, renormalization theory is easily recognized as the way one constructs strongly homotopy algebraic objects when their strict versions are unavailable. Time permitting, I will end by discussing an application of lattice Poisson AKSZ theory to the deformation quantization problem for Poisson manifolds: a _one_-dimensional version of the theory leads to a universal star-product in which all coefficients are rational numbers. (hide Feynman diagrams for quantum mechanics. October 10–12, Topics in Applied Mathematics, UC Berkeley. (abstract, notes) Abstract: In this two-day guest-lecture in a semester-long class on quantum field theory, I describe some of my work on the Feynman-diagram expansion of the path integral in quantum mechanics. I begin by motivating the path integral, and then spend most of the time recalling the diagrammatic description of the asymptotics of finite-dimensional oscillating integrals. (hide abstract) Nonperturbative integrals, imaginary critical points, and homological perturbation theory. August 28, New Perspectives in Topological Field Theories, Center for Mathematical Physics, Hamburg. ( abstract, notes) Abstract: The method of Feynman diagrams is a well-known example of algebraization of integration. Specifically, Feynman diagrams algebraize the asymptotics of integrals of the form ∫ f exp(s/h) in the limit as h→0 along the pure imaginary axis, supposing that s has only nondegenerate critical points. (In quantum field theory, s is the "action," and f is an "observable.") In this talk, I will describe an analogous algebraization when h=1 --- no formal power series will appear --- and s is allowed degenerate critical points. Nevertheless, some features from Feynman diagrams remain: I will explain how to algebraically "integrate out the higher modes" and reduce any such integral to the critical locus of s; the primary tool will be a homological form of perturbation theory (itself almost as old as Feynman's diagrams). One of the main new features in nonperturbative integration is that the critical locus of s must be interpreted in the scheme-theoretic sense, and in particular imaginary critical points do contribute. Perhaps this will shed light on questions like the Volume Conjecture, in which an integral over SU(2) connections is dominated by a critical point in SL(2,ℝ). (hide abstract) Nonperturbative integrals, imaginary critical points, and homological perturbation theory. August 24, QGM Lunch Seminar, Aarhus. (abstract, notes) Abstract: The method of Feynman diagrams is a well-known example of algebraization of integration. Specifically, Feynman diagrams algebraize the asymptotics of integrals of the form ∫ f exp(s/h) in the limit as h→0 along the pure imaginary axis, supposing that s has only nondegenerate critical points. (In quantum field theory, s is the "action," and f is an "observable.") In this talk, I will describe an analogous algebraization when h=1 --- no formal power series will appear --- and s is allowed degenerate critical points. Nevertheless, some features from Feynman diagrams remain: I will explain how to algebraically "integrate out the higher modes" and reduce any such integral to the critical locus of s; the primary tool will be a homological form of perturbation theory (itself almost as old as Feynman's diagrams). One of the main new features in nonperturbative integration is that the critical locus of s must be interpreted in the scheme-theoretic sense, and in particular imaginary critical points do contribute. Perhaps this will shed light on questions like the Volume Conjecture, in which an integral over SU(2) connections is dominated by a critical point in SL(2,ℝ). (hide abstract) Wick-type theorems beyond the Gaussian. March 2, Representation Theory (and related topics) seminar, Northeastern. (abstract, handout, notes) Abstract: Wick's theorem, proven by Isserlis in 1918, provides simple algebraic relations describing the moments (i.e. correlation functions, expectation values) of a Gaussian probability measure in terms of the quadratic moments. One can ask for similar explicit relations for probability measures of the form exp(cubic)dx or even higher-degree homogeneous polynomials in the exponent. In this talk I will present a homological-algebraic approach to finding such relations, based ultimately on a derived-geometry interpretation of Batalin--Vilkovisky integration. This is joint work with Owen Gwilliam and joint work in progress with Shamil Shakirov. (hide abstract) Twisted N=1 and N=2 supersymmetry on R^4. February 10, GRASP seminar, UC Berkeley. (abstract, notes) Abstract: The goal of this talk is to explain the title. In a little more detail, I will define the N=N super-translation and super-Poincare groups for R^4, including what is an "R-symmetry". I will then define what is "twisting data" for a supersymmetric theory, and why "twisting" a theory makes it simpler. Generic "twists" for N=2 supersymmetric theories on R^4 make it topological, but the most interesting twists make it holomorphic. This talk is an attempt to understand some talks by Kevin Costello, and contains no material due to me. (hide abstract) • Notes on Floer / Gromov–Witten TQFT, based on conversations with Zack Sylvan. November 15, Witten in the 80s, UC Berkeley. (notes) • Gauge-fixed integrals for Lie algebroids. November 1, Talks in Mathematical Physics, Universität Zürich & Eidgenössische Technische Hochschule Zürich. (abstract, handout) Abstract: We describe the "BRST / Faddeev–Popov gauge-fixing" definition of integrals on (the quotient stack of) a Lie algebroid. As a central example, we compute the volume of the de Rham stack of a compact manifold. In the process, we find a new proof of the Chern–Gauss–Bonnet theorem. This is joint work with Dan Berwick-Evans. (hide abstract) • Introduction to BV Integrals. November 1, Selected topics in classical and quantum geometry, Universität Zürich. (abstract, handout) Abstract: The BV method describes integrals (and in particular asymptotics of expectation values against rapidly oscillating measures) purely in terms of (homological) algebra, with the goal being to use this algebraic description as a definition of "integral" for generalized manifolds (stacks, infinite-dimensional spaces, etc.). In the first part of this talk, I will describe the translation of expectation values into homological algebra, and (somewhat telegraphically) mention the connections with super (Gerstenhaber and derived) geometry. In the second part of the talk, I will discuss some combinatorial and algebraic methods for carrying out the actual computations: one can directly derive the usual Feynman diagrams, or one can apply more general homological perturbation theory. The material in this talk is essentially "well-known" (the first part of the talk is based a paper of Witten's from 1990), and the Feynman diagrammatics I learned in joint work with Owen Gwilliam. (hide abstract) • Asymptotics of oscillating integrals via homological perturbation theory. October 19, GRASP seminar, UC Berkeley. (abstract, handwritten notes, too-long typed notes) Abstract: The Batalin-Vilkovisky approach to integration converts the question of computing expectation values into a question in homological algebra, and reinterprets the asymptotics of oscillating integrals in terms of (quantum) deformations of (derived) intersections. The move to homological algebra makes these computations tractable by combinatorial means — a special case includes the Feynman-diagrammatic description of Gaussian integration. In this talk, I will try to explain both the derived geometry and the homological perturbation theory. Most of this story is known to experts, and a little of it is joint work with Owen Gwilliam. (hide abstract) • BRST Gauge Fixing: I. Introduction to Q-manifolds and BRST. II. Chern-Gauss-Bonnet, Morse Theory, and topological sigma models. October 11-20, Witten in the 80s, UC Berkeley. (abstract, notes I, notes II) Oct 11: Let X be a manifold equipped with a Lie algebra (or Lie algebroid) action. The derived quotient of X can be realized as a Q-manifold over X; Q-manifolds are (Z-graded) supermanifolds equipped with "cohomological" vector fields, and are a piece of derived geometry. I will recall the motation and definition. This talk is essentially contained within R. Mehta's thesis. Oct 13: I will discuss what it would mean to "integrate" over a Q-manifold. The BRST argument explains how to improve a priori ill-defined integrals. The talk will conclude with a discussion of the "Faddeev-Popov construction" for Q-manifolds that arise from Lie algebroids. The description of the Faddeev-Popov construction is joint work with Dan Berwick-Evans (in prep). Oct 18: Let X be a manifold. Denote the derived quotient of X modulo its tangent bundle by X[dR], as it is "spec" of the ring of de Rham forms on X. This derived manifold is formally zero-dimensional (if X is contractible, then X[dR] is equivalent to a point), and so ought to be equipped with a canonical "counting measure". We will compute this measure by BRST gauge fixing. Along the way we will come up with a slick proof of the Chern–Gauss–Bonnet formula. A version of this argument will appear in the thesis by Dan Berwick-Evans; the version I will present is our joint work. Oct 20: BRST gauge fixing ideas can be applied to topological field theories (with degenerate actions). In one dimension, BRST gauge fixing gives a heuristic proof of the Morse–de Rham equivalence. In two dimensions, BRST gauge fixing should give Witten's topological sigma model and Gromov-Witten theory. This talk will mostly follow papers by Rogers and Baulieu and Singer. Note: because of a schedule conflict, I did not end up giving this talk, and do not have completed notes. (hide abstract) • (Topological) duality of Hopf algebras. June 13, Cluster Algebras and Lusztig's Semicanonical Basis, University of Oregon. (abstract, notes) Abstract: I will begin by telling you what a "group" is, in a language that makes it easy to think about Lie groups, algebraic groups, universal enveloping algebras, etc., all at the same time. I will then tell you that the universal enveloping algebra of the Lie algebra T[e]G of a Lie group G "is" the subgroup of G consisting of "the points infinitely close to the identity e&in;G. To make this inclusion precise, I will describe the corresponding pairing between the universal enveloping algebra and the algebra of smooth functions. Replacing "smooth" with "polynomial" or "analytic" and forcing G to be commutative, we get a perfect pairing. A perfect pairing isn't quite as good as you really want, because the structures involved are infinite-dimensional. In the case when G is the group of upper-triangular matrices (with 1s on the diagonal) you can do better: there are natural gradings on the universal enveloping algebra and on the algebra of polynomial functions, and each graded piece is finite-dimensional, and then the two Hopf algebras are precise graded duals of each other. (hide abstract) • Homological perturbation and factorization algebras. May 26, Geometry/Physics Seminar, Northwestern University. (abstract, notes) Abstract: Much of quantum field theory concerns questions with the following flavor: you have some "classical" data, and you make a "small perturbation" to some part of it; how can you compatibly perturb the rest of the data to preserve some structure? One version of this question was solved in the 60s: given a homotopy equivalence of chain complexes and a small perturbation to the differential on the large complex, the homotopy perturbation lemma provides formulae that compute compatible perturbations to the differential on the small complex and to all the maps in the homotopy equivalence. In this talk I will recall this lemma, and then illustrate it with some examples from low-dimensional "topological" factorization algebras, where the homological perturbation lemma can be used to: compute asymptotics of oscillating integrals ("Feynman diagrams"); construct Weyl, Clifford, and Universal Enveloping algebras; explain how a topological quantum field theory on the bulk of a manifold can induce a tqft on the boundary. (hide abstract) • On Atoms, Mountains, and Rain. April 20, NUMS Seminar, Northwestern University. (abstract) Abstract: This talk consists almost entirely of lies. A few lies we will tell: rocks are made of rock atoms, liquid water is a perfect cubic crystal lattice, and 1 = 2. Using these lies, we will derive from first principles the radius of an atom, the height of a mountain, and the volume of a raindrop. Doing so honestly, even if we knew all the fundamental equations of the universe, would be impossible; lying makes everything work out nicely. The talk is based on P. Goldreich, S. Mahajan, and S. Phinney, Order-of-Magnitude Physics: Understanding the World with Dimensional Analysis, Educated Guesswork, and White Lies, 1999. (hide abstract) • Feynman Diagrams for Schrodinger's Equation. Feb 15, GADGET Seminar, UT Austin. (abstract, handout) Abstract: Feynman's path integral, an important formalism for quantum mechanics, lacks a completely satisfactory analytic definition. One possible definition is as a formal power series whose coefficients are given by sums of finite-dimensional integrals indexed by Feynman diagrams. This ``formal'' path integral is used extensively in every-day physics, but is not usually compared against (mathematically rigorous, nonperturbative) quantum mechanics. In this talk, I will explain the definition of the quantum-mechanical formal path integral, and point out many of its features --- it has ultraviolet divergences unless certain compatibility conditions are met, it is coordinate-independent, it solves Schrodinger's equation --- none of which are obvious from the definitions, but rather require the combinatorics of Feynman diagrams. These results provide justification for the formal path integrals in quantum field theory. (hide abstract) • E[2] operad, Gerstenhaber and BV, and Formality. Feb 10, Student String Topology Seminar, UC Berkeley. (abstract, notes, handout) Abstract: I will briefly recall the notion of an operad, and then focus on the E[2] or "little 2-disks" operad (in spaces), and its framed cousin. Calculating its homology recovers the Gerstenhaber operad (in graded vector spaces), with the correct signs — most descriptions of Gerstenhaber have unfortunate sign conventions — or with framing the BV operad. I will then prove the Formality Theorem for (framed) E[2]: as operads of dg vector spaces, the operad of simplicial chains in (framed) E[2] is quasiisomorphic to its homology. I will follow the Tamarkin/Severa proof, which requires developing some of the very rich theory of Drinfel'd associators. (hide abstract) • Crash course in Tannaka-Krein theory. Dec 3, Student Subfactors Seminar, UC Berkeley. (abstract, notes) Abstract: Tannaka-Krein theory asks two main questions: (Reconstruction) What about an algebraic object can you determine based on knowledge about its representation theory? (Recognition) Which alleged "representation theories" actually arise as the representation theories of algebraic objects? In this talk I'll mention some answers to the second question, but I'll focus more on the first. The punchline: essentially everything, provided you remember the underlying spaces of your representations --- there is an almost perfect dictionary between algebraic structures and categorical structures. My goal is to explain the results in as elementary and pared-down a way as possible, so the talk will be more or less reverse-chronological. The only prerequisite is some brief acquaintance with the following two-categories: (Category, Functor, Natural Transformation) and (Algebra, Bimodule, Intertwiner). The main Tannaka-Krein story that I will present is ``twentieth century'' and by now well known, but time permitting I will also mention some joint work in progress with Alex Chirvasitu. (hide abstract) • Formal calculus, with applications to quantum mechanics. Sept 10, GRASP seminar, UC Berkeley. (abstract, notes). Abstract: "Formal" or "Feynman diagrammatic" calculus is nothing more nor less than the differential and integral calculus of formal power series. The latter name is because Feynman's diagrams provide a convenient notation for manipulating formal power series and for understanding their combinatorics. In this talk, I will outline the formal calculus, and then use it to write out the "path integral" description of the asymptotics of the time-evolution operator in quantum mechanics. The diagrammatics make it much easier to prove that the "path integral" is well-defined and satisfies the necessary requirements. (hide abstract) • Introduction to Vassiliev invariants. April 2, GRASP seminar, UC Berkeley. (abstract). Abstract: "Vassiliev" or "finite-type" knot invariants include (up to a change of coordinates) most of the popular knot invariants (HOMFLYPT, ...). But they are also closely related to Lie algebraic questions. I will give an introduction to this story. (hide abstract) • How to quantize infinitesimally-braided symmetric monoidal categories. March 19, Subfactors Seminar, UC Berkeley. (abstract, notes). Abstract: An infinitesimal braiding on a symmetric monoidal category is analogous to a Poisson structure on a commutative algebra: both tell you a "direction" in which to "quantize". In this expository talk, I will tell a story that was completed by the end of the 1990s, concerning the quantization problem for infinitesimally-braided symmetric monoidal categories. Along the way, other main characters will include: a Lie algebra, a quadratic Casimir, and a classical R-matrix; braided monoidal categories, associators, and pentagons and hexagons; Tannakian reconstructions theorems and Hopf and quasiHopf algebras; and everyone's favorite knot invariants. I'll explain all these words, and try to explain how they're all part of a single story. (hide abstract) • The Formal Path Integral in Quantum Mechanics. Feb 26, Subfactors Seminar, UC Berkeley. (abstract, slides). Abstract: In his thesis (first published in 1948), Richard Feynman suggested a new formalism for quantum mechanics, now called the "Feynman Path Integral." Feynman knew that defining his path integral analytically would be difficult: modern analytic definitions generally start with a Wiener measure and place restrictions on the corresponding classical mechanical system. But within a few years Feynman and Freeman Dyson had defined a "perturbative" path integral: they declared the value of the integral to be a formal power series whose coefficients were given by sums of finite-dimensional integrals indexed by "Feynman diagrams." These days, this "formal" path integral is used extensively in every-day physics, and provided some of the first "quantum" knot invariants. However, it has not been compared carefully against (mathematically rigorous, nonperturbative) quantum mechanics. In this talk, I will explain the definition of the quantum-mechanical formal path integral, and point out many of its features — it has ultraviolet divergences unless certain compatibility conditions are met, it is coordinate-independent, it solves Schrödinger's equation — none of which are obvious from the definitions, but rather require the combinatorics of Feynman diagrams. These results provide justification for the formal path integral. (hide abstract) • What the Hell is a Feynman Diagram? Sept 29, Ph.d. seminar, Institut for Matematiske Fag, Aarhus Universitet. (abstract, notes). Abstract: The goal of the talk is to introduce the notion of "Feynman Diagram" in a reasonably rigorous way, and to state some theorems proving that it is a good notion. I will organize the talk more-or-less via a "mathematician's history of mathematics," which is to say a false history, one that gives the impression that all ideas inevitably lead up to what we now know is the true and complete story. Thus, I will begin by describing why you might invent Feynman Diagrams. I'll then tell you about what the mathematicians have said about them. Time permitting, I'll finish with some speculation of my own. (hide abstract) • On Atoms, Mountains, and Rain. March 31, Many Cheerful Facts, UC Berkeley. (abstract, notes). Abstract: This talk won't include very many facts, but it will include many almost facts, aka "lies". A few lies we will tell: rocks are made of rock atoms, liquid water is a perfect cubic crystal lattice, and 1 = 2. Using these and similar "facts", we will derive from first principles the radius of an atom, the height of a mountain, and the volume of a raindrop. Doing so honestly, even if we knew all the fundamental equations of the universe, would be impossible; lying makes everything work out nicely. The material is almost entirely from to P. Goldreich, S. Mahajan, and S. Phinney, Order-of-Magnitude Physics: Understanding the World with Dimensional Analysis, Educated Guesswork, and White Lies , 1999. Available at http://www.inference.phy.cam.ac.uk/sanjoy/oom/. (hide abstract) • Enriching Yoneda. December 11, QFT Mini Conference, UC Berkeley. (abstract, notes). Abstract: The goal of this expository talk is to formulate and prove the Yoneda embedding theorem for categories enriched over a closed monoidal category. The material for this talk is almost entirely from G.M. Kelly, Basic Concepts of Enriched Category Theory, Cambridge University Press, 2005. (hide abstract) • Divergent Series. October 18, Many Cheerful Facts, UC Berkeley. (abstract, notes). Abstract: Mathematicians through the ages have varied from terrified of divergent sums to only mildly scared of them: Euler, most famously, made great use of divergent series, whereas Abel called them "the invention of the devil". In this talk, I will survey the most important methods of summing divergent series, and make general vague remarks about them. I will quote many results, but will studiously avoid proving anything. The material is almost entirely from G.H. Hardy, Divergent Series, 1949. (hide abstract)
{"url":"http://categorified.net/talks.html","timestamp":"2024-11-09T06:09:33Z","content_type":"text/html","content_length":"221567","record_id":"<urn:uuid:438b60b3-b2c5-4dc7-ad10-3c376535fc9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00253.warc.gz"}
Data Structures & Algorithms Matrix Multiplication - November 1, 2022 Introduction Matrix multiplication is a fundamental operation in computer science and has widespread applications in various domains, including data processing, machine learning, computer graphics, and scientific computing. In this tech blog, we will explore the importance of matrix multiplication, its basic concepts, different methods for performing matrix multiplication, and its relevance in data-driven applications. Why […]
{"url":"https://serhatgiydiren.com/category/data-structures-algorithms/","timestamp":"2024-11-12T19:00:46Z","content_type":"text/html","content_length":"53220","record_id":"<urn:uuid:e53e392e-eff5-464d-87ca-af95506f7765>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00810.warc.gz"}
Testing and interpreting uncovered interest parity in Russia • Adolfson, M., Laseen, S., Linde, J., & Villani, M. (2007). Bayesian estimation of an open economy DSGE model with incomplete pass-through. Journal of International Economics, 72(2), 481-511. • Alfaro, L., & Kanczuk, F. (2013). Debt redemption and reserve accumulation. Harvard Business School Working Paper. • Baba, N., & Packer, F. (2009). Interpreting deviations from covered interest parity during the financial market turmoil of 2007-2008. Journal of Banking & Finance, 33(11), 1953-1962. • Backus, D.K., Foresi, S., & Telmer, C.I. (2001). Affine term structure models and the forward premium anomaly. Journal of Finance, 56(1), 279-304. • Bansal, R., & Dahlquist, M. (2000). The forward premium puzzle: Different tales from developed and emerging economies. Journal of International Economics, 51(1), 115-144. • Bilson, J.F.O. (1981). The “speculative efficiency” hypothesis. Journal of Business, 54(3), 435-451. • Brandt, M.W., Cochrane, J.H., & Santa-Clara, P. (2006). International risk sharing is better than you think, or exchange rates are too smooth. Journal of Monetary Economics, 53(4), 671-698. • Burnside, C. (2015). The carry trade in industrialized and emerging markets. In C. RaddatzD. Saravia & J. Ventura, Global liquidity, spillovers to emerging markets and policy responses, (pp. 245-280). Santiago, Chile: Central Bank of Chile. • Burnside, C., Eichenbaum, M., & Rebelo, S. (2012). Understanding the profitability of currency- trading strategies. NBER Reporter, 3. • Chinn, M., & Meredith, G. (2004). Monetary policy and long-horizon uncovered interest parity. IMF Staff Papers, 51(3), 409-430. • Chinn, M.D. (2006). The (partial) rehabilitation of interest rate parity in the floating rate era: Longer horizons, alternative expectations, and emerging markets. Journal of International Money and Finance, 25(1), 7-21. • Chinn, M.D., & Quayyum, S. (2012). Long horizon uncovered interest parity re-assessed. NBER Working Paper No. 18482. • Clarida, R., Davis, J., & Pedersen, N. (2009). Currency carry trade regimes: Beyond the Fama regression. Journal of International Money and Finance, 28(8), 1375-1389. • Engel, C. (2015). Exchange rates and interest parity. • Evans, M. (2011). Exchange-rate dynamics. Princeton: Princeton University Press. • Frankel, J., & Poonawala, J. (2010). The forward market in emerging currencies: Less biased than in major currencies. Journal of International Money and Finance, 29(3), 585-598. • Galati, G., Heath, A., & Mcguire, P. (2007). Evidence of carry trade activity. BIS Qaurterly Review, 3, 27-41. • Gurvich, E.T., Sokolov, V.N., & Ulyukaev, A.V. (2009). The impact of the exchange rate policy on the interest rates: Uncovered and covered interest rate parity. Zhurnal Novoy Ekonomicheskoy Assotsiatsii, 1-2, 104-126. • Hansen, L.P., & Hodrick, R.J. (1980). Forward exchange rates as optimal predictors of future spot rates: An econometric analysis. Journal of Political Economy, 88(5), 829-853. • Hassan, T., & Mano, R. (2014). Forward and spot exchange rates in a multi-currency world. NBER Working Paper No. 20294. • IMF (2009). Russian Federation: 2009 Article IV consultation: Staff report. Washington, DC: International Monetary Fund (IMF Country Report No. 09/246). • IMF (2012 a). Russian Federation: Staff report for the 2015 Article IV consultation. Washington, DC: International Monetary Fund (IMF Country Report No. 15/211). • IMF (2012 b). Russian Federation: Selected issues: Banking sector and financial market conditions. Washington, DC: International Monetary Fund (IMF Country Report No. 12/218). • IMF (2015). 2015 external sector report. Washington, DC: International Monetary Fund. • Kollmann, R. (2004). Welfare effects of a monetary union. Journal of the European Economic Association, 2(2–3), 289-301. • Menkhoff, L., Sarno, L., Schmeling, M., & Schrimpf, A. (2012). Carry trades and global foreign exchange volatility. Journal of Finance, 67(2), 681-718. • Rodrik, D. (2006). The social cost of foreign exchange reserves. International Economic Journal, 20(3), 253-266. • Wang, J. (2010). Home bias, exchange rate disconnect, and optimal exchange rate policy. Journal of International Money and Finance, 29(1), 55-78. • Burnside, C. (2012). Carry trades and risk. In J. JamesI.W. Marsh & L. Sarno, Handbook of exchange rates, (pp. 283-312). Hoboken: NJ: Wiley. • Della Corte, P., Riddiough, S., & Sarno, L. (2015). Currency premia and global imbalances. Review of Financial Studies, 29(8), 2161-2193. • Lewis, K.K. (1995). Puzzles in international financial markets.
{"url":"https://rujec.org/article_preview.php?id=27987","timestamp":"2024-11-11T10:15:57Z","content_type":"application/xhtml+xml","content_length":"170598","record_id":"<urn:uuid:1600eecb-8c77-4a60-96ec-887c2c85925b>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00035.warc.gz"}
Future value compounded monthly excel Being able to calculate out the future value of an investment after years of compounding will help you to make goals and measure your progress toward them. Fortunately, calculating compound interest is as easy as opening up excel and using a simple function- the future value formula. Thankfully there is an easy way to calculate this with Excel’s FV formula! FV stands for Future Value. In our example below, we have the table of values that we need to get the compound interest or Future Value from: There are two important concepts we need to use since we are using monthly contributions: The FV function can calculate compound interest and return the future value of an investment. To configure the function, we need to provide a rate, the number of periods, the periodic payment, the present value. To get the rate (which is the period rate) we use the annual rate / periods, or C6/C8. Here's how to use Excel to calculate any of the five key unknowns for any So if the annual interest rate is 6% and you make monthly loan payments, the argument would be 10 times 12, or 120 periods. pv is the present value of the loan. 31 Dec 2019 Future value is the value of a sum of cash to be paid on a specific date in the future. As another example, what if the interest on the investment compounded monthly instead of annually, and Excel Formulas and Functions 21 Sep 2018 with the time. Use the Excel calculator to get the time value of money in Excel. Compounded Monthly : r = Annual rate/12, n=no of yrs. X 12. The calculation of the effective rate on the loan in Excel For calculating to the effective monthly rate, we need use the IRR function (return to the internal rate of return FV function. This is the monetary value of accrued compound interest. 6 Jun 2019 Keep reading to understand the importance of future value and how it can Future value with compounded interest is calculated in the following manner: How to Calculate a Monthly Loan Payment in Excel (Mortgage, Car This function allows you to calculate the present value of a simple annuity. £ 10,000 in 4 years time if the annual discount rate is 10% (compounded monthly). The future value formula also looks at the effect of compounding. Earning .5% per month is not the same as earning 6% per year, assuming that the monthly The future value calculations on this page are applied to investments for which interest is compounded in each period of the investment. However if you are supplied with a stated annual interest rate, and told that the interest is compounded monthly, you will need to convert the annual interest rate to a monthly interest rate and the number of periods into months: Thankfully there is an easy way to calculate this with Excel’s FV formula! FV stands for Future Value. In our example below, we have the table of values that we need to get the compound interest or Future Value from: There are two important concepts we need to use since we are using monthly contributions: The FV function can calculate compound interest and return the future value of an investment. To configure the function, we need to provide a rate, the number of periods, the periodic payment, the present value. To get the rate (which is the period rate) we use the annual rate / periods, or C6/C8. The future value calculations on this page are applied to investments for which interest is compounded in each period of the investment. However if you are supplied with a stated annual interest rate, and told that the interest is compounded monthly, you will need to convert the annual interest rate to a monthly interest rate and the number of periods into months: The following picture shows the future value of an original investment of $100 for different years, invested at an annual interest rate of 5%. Compound Interest Formula with Monthly Contributions in Excel. If the interest is paid monthly then the formula for future value becomes, Future Value = P*(1+r/12)^(n*12). General Compound Interest Formula (for Daily, Weekly, Monthly, and Yearly Compounding) A more efficient way of calculating compound interest in Excel is applying the general interest formula: FV = PV(1+r)n, where FV is future value, PV is present value, r is the interest rate per period, and n is the number of compounding periods. To determine future value using compound interest: The second six-month period returns more than the first six months Excel Compound Interest Formula - How to Calculate Compound Interest in Excel. In Excel, you can calculate the future value of an investment, earning a I.e. the annual interest rate is divided by 12 to give a monthly interest rate, and the The general formula for compound interest is: FV = PV(1+r)n, where FV is future value, 31 Mar 2019 For compound interest, you most likely know the rate already; you are just calculating what the future value of the return might be. 1:52. WATCH: FV is a financial function in Excel that In the case of monthly compounding, How to Calculate Compound Interest in Excel. In Excel and Google Sheets, you can use the FV function to calculate a future value using the compound interest formula. The following three examples show how the FV function is related to the basic compound interest formula. Covers the compound-interest formula, and gives an example of how to use it. For instance, let the interest rate r be 3%, compounded monthly, and let the initial all the values plugged in properly, you can solve for whichever variable is left. Suppose that you plan to need $10,000 in thirty-six months' time when your 1 Apr 2019 How to calculate interest rate with compounding using MS-Excel For monthly compounding, the Npery value will in the EFFECT function will be 12. Cutting interest rates at this time does not make much sense: Keki Mistry. 12 Jan 2020 Time value of money results from the concept of interest. For instance, to find the future value of $100 at 5% compound interest, look up You borrow $50,000 and will make monthly payments for 2 years at 12% interest. Microsoft Excel is a popular program, and included is an Excel workbook which 1 Nov 2019 Pv is the present value; also known as the principal. Fv is have the interest compounded bi-annually, even if the payments are made monthly. of the month and you earn interest each month (i.e. monthly compounding), then you may estimate the future value after 30 years using: =FV(AnnualInterest/ 12 FV is a financial function in Excel that In the case of monthly compounding, 29 Jul 2019 Example 2: What is the future value of an initial investment of $5,000 that earns 5 % compounded monthly for 10 years? Answer: F = 5000*(1+0.05 Excel Compound Interest Formula - How to Calculate Compound Interest in Excel. In Excel, you can calculate the future value of an investment, earning a I.e. the annual interest rate is divided by 12 to give a monthly interest rate, and the The general formula for compound interest is: FV = PV(1+r)n, where FV is future value, 31 Mar 2019 For compound interest, you most likely know the rate already; you are just calculating what the future value of the return might be. 1:52. WATCH: FV is a financial function in Excel that In the case of monthly compounding, Under the assumption that the 7% interest rate is a nominal rate of interest compounded monthly in the first case and semiannually in the second, we see that The present value calculations on this page are applied to investments for which interest is compounded in each period of the investment. However if you are supplied with a stated annual interest rate, and told that the interest is compounded monthly, you will need to convert the annual interest rate to a monthly interest rate and the number of periods into months:
{"url":"https://cryptopkzdtm.netlify.app/basua13763men/future-value-compounded-monthly-excel-cas.html","timestamp":"2024-11-04T20:46:05Z","content_type":"text/html","content_length":"35241","record_id":"<urn:uuid:9b2f666b-b6b7-4dae-9449-bc940526bf86>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00068.warc.gz"}
Understanding Mathematical Functions: How To Use Sum Function In Googl Introduction to Mathematical Functions in Google Sheets Mathematical functions are a crucial aspect of data analysis, allowing users to perform various numerical computations to derive insights and make informed decisions. In Google Sheets, these functions play a vital role in processing data and performing complex calculations. Explanation of what mathematical functions are and their significance in data analysis Mathematical functions in the context of spreadsheet software such as Google Sheets refer to predefined formulas that perform specific operations on one or multiple values. These functions are essential in data analysis as they enable users to manipulate and analyze numerical data efficiently. By using mathematical functions, users can perform tasks such as adding up values, finding averages, calculating percentages, and more. Brief overview of Google Sheets as a tool for numerical computations Google Sheets is a web-based spreadsheet application that allows users to create, edit, and share spreadsheets online. It provides a wide range of functions and tools for numerical computations, making it a popular choice for data analysis and financial modeling. With its collaborative features and integration with other Google Workspace apps, Google Sheets is widely used in both professional and personal settings for various numerical tasks. Introduction to the SUM function as one of the most frequently used mathematical functions in spreadsheets One of the most frequently used mathematical functions in Google Sheets is the SUM function. The SUM function is used to add up a range of numbers and provide the total sum. It simplifies the process of adding multiple values together, making complex calculations more manageable. Understanding how to use the SUM function is essential for anyone working with numerical data in Google Sheets. Key Takeaways • Sum function adds up a range of numbers. • Start by selecting the cell where you want the sum. • Then type '=SUM(' and select the range of cells. • Close the parentheses and press Enter. • Your sum will appear in the selected cell. Understanding Mathematical Functions: How to use sum function in Google Sheets Google Sheets offers a variety of mathematical functions to help users perform calculations and analyze data. One of the most commonly used functions is the SUM function, which allows users to add up a range of numbers or cells. In this chapter, we will explore the basic usage of the SUM function in Google Sheets, including step-by-step instructions, variations in syntax, and a practical example of summing up a month's expenses. A. Step-by-step instructions on entering the SUM function in a Google Sheets cell 1. Open your Google Sheets document and navigate to the cell where you want the sum to appear. 2. Type =SUM( into the cell. This will initiate the SUM function. 3. Enter the range of cells or individual values that you want to add together. For example, if you want to sum the values in cells A1 to A5, you would enter A1:A5. If you want to sum specific values, you would enter them separated by commas, such as 5, 10, 15. 4. Close the parentheses and press Enter. The sum of the specified values or range will appear in the cell. B. Variations in syntax: SUM(value1, [value2, ]) vs SUM(range) Google Sheets provides two main syntax variations for the SUM function. The first syntax allows users to input individual values separated by commas, while the second syntax allows users to input a range of cells. SUM(value1, [value2, ]): This syntax is used to add up specific values. For example, =SUM(5, 10, 15) will return the sum of 5, 10, and 15. SUM(range): This syntax is used to add up a range of cells. For example, =SUM(A1:A5) will return the sum of the values in cells A1 to A5. C. Practical example: Summing up a month's expenses Let's consider a practical example of using the SUM function in Google Sheets. Suppose you have a spreadsheet that tracks your monthly expenses, with each expense listed in a separate column. To calculate the total expenses for the month, you can use the SUM function. 1. Navigate to a blank cell where you want the total expenses to appear. 2. Enter the SUM function and specify the range of cells that contain your expenses. For example, if your expenses are listed in cells B2 to B30, you would enter =SUM(B2:B30). 3. Press Enter, and the total sum of your monthly expenses will be displayed in the cell. By following these steps, you can easily use the SUM function in Google Sheets to perform calculations and gain insights into your data. Advantages of Using the SUM Function When it comes to data analysis and financial reporting, the SUM function in Google Sheets is an invaluable tool. It offers several advantages that streamline the process and enhance accuracy. A Streamlining data analysis by quickly totaling numbers One of the primary advantages of using the SUM function is its ability to streamline data analysis. Instead of manually adding up numbers in a dataset, the SUM function can quickly total them with a simple formula. This saves time and allows for more efficient analysis of large sets of data. B Enhancing the accuracy of computations by reducing manual entry errors Manual entry of numbers can lead to errors, especially when dealing with a large amount of data. By using the SUM function, the risk of manual entry errors is significantly reduced. This enhances the accuracy of computations and ensures that the final totals are correct. C Case study: How businesses use the SUM function for financial reporting Businesses rely on the SUM function in Google Sheets for financial reporting purposes. Whether it's calculating total sales for a specific period, tallying expenses, or summarizing revenue streams, the SUM function simplifies the process and provides accurate results. This case study highlights the practical application of the SUM function in real-world scenarios. Adjusting the SUM Function with Cell References When working with mathematical functions in Google Sheets, the SUM function is a powerful tool for adding up a range of numbers. However, instead of using hard-coded numbers within the function, you can also utilize cell references to make your formulas more dynamic and adaptable to changes in your data. A. Demonstrating the use of cell references instead of hard-coded numbers By using cell references in the SUM function, you can easily update the values in the referenced cells without having to modify the formula itself. This provides flexibility and efficiency in managing your data and calculations. For example, instead of entering =SUM(A1:A5) to add up the values in cells A1 to A5, you can use =SUM(A:A) to sum up all the values in column A, or =SUM(A1, B1, C1) to add specific cells together. B. The importance of relative versus absolute cell references When using cell references in the SUM function, it's essential to understand the difference between relative and absolute cell references. Relative references adjust when copied to other cells, while absolute references remain constant. For instance, if you use =SUM(A1:A5) and then copy the formula to another cell, the reference will adjust accordingly (e.g., to =SUM(B1:B5)). On the other hand, if you use =SUM($A$1:$A$5) with absolute references, the formula will always refer to cells A1 to A5, regardless of where it's copied. C. Scenario: Updating a budget with changing values without altering formulas Imagine you have a budget spreadsheet with various expense categories, and you've used the SUM function to calculate the total expenses for each category. As the month progresses, the values in the expense cells change, but you don't want to constantly update the formulas. By using cell references in the SUM function, you can simply update the values in the expense cells, and the total expenses will automatically recalculate without any need to modify the formulas. This saves time and reduces the risk of errors in your budget calculations. Expanding the SUM Function with Other Operations While the SUM function in Google Sheets is a powerful tool for adding up a range of numbers, it can be further expanded to perform more complex calculations and conditional sums. By combining the SUM function with other functions and incorporating text and date criteria, you can customize the summing process to meet specific requirements. A Combining the SUM function with other functions like SUMIF and SUMIFS for conditional sums The SUMIF and SUMIFS functions in Google Sheets allow you to sum values based on specific criteria. By using these functions in combination with the SUM function, you can perform conditional sums. For example, you can sum the sales of a particular product or from a specific region. B Incorporating text and date criteria within the summing process When working with data that includes text and date criteria, you can incorporate these into the summing process using the SUMIF and SUMIFS functions. This allows you to sum values based on specific text or date conditions, providing more flexibility in your calculations. C Example: Summing sales only for a particular product or date range For example, if you have a sales dataset that includes product names and dates, you can use the SUMIFS function to sum the sales for a particular product within a specific date range. This allows you to extract specific subsets of data and perform targeted sums based on your criteria. Troubleshooting Common Issues with the SUM Function When using the SUM function in Google Sheets, it's important to be aware of common issues that may arise. Understanding how to troubleshoot these issues can help ensure accurate results when working with mathematical functions. Solving problems with non-numeric values that cause errors One common issue when using the SUM function is encountering non-numeric values within the range of cells being summed. This can cause errors and lead to unexpected results. To solve this problem, it's important to identify and remove any non-numeric values from the range before using the SUM function. Tip: Use the ISNUMBER function to check for non-numeric values within the range. You can then use the FILTER function to exclude these non-numeric values from the range before applying the SUM Identifying and fixing issues with cell references, ranges, or closed parenthesis Another common issue that can occur when using the SUM function is errors in cell references, ranges, or closed parenthesis. These errors can lead to incorrect results or cause the function to return an error message. Tip: Double-check the cell references, ranges, and parenthesis in your SUM function to ensure they are entered correctly. Pay close attention to any typos or missing characters that may be causing the error. Using the FORMULATEXT function can help you identify any issues with the syntax of your SUM function. Tips for correcting errors when the SUM function returns unexpected results If the SUM function is returning unexpected results, there are a few tips you can use to correct the errors and ensure accurate calculations. • Check for hidden or filtered cells within the range that may be affecting the results. • Verify that the range being summed includes all the necessary cells and does not inadvertently exclude any values. • Consider using the SUMIF or SUMIFS functions if you need to apply specific criteria to the cells being summed. By following these tips and techniques, you can effectively troubleshoot common issues with the SUM function in Google Sheets and ensure accurate mathematical calculations in your spreadsheets. Conclusion & Best Practices for Using the SUM Function in Google Sheets After exploring the various aspects of the SUM function in Google Sheets, it is evident that this mathematical function offers a high level of versatility and utility for users. Whether you are working with a small dataset or a large one, the SUM function can efficiently calculate the total of a range of numbers, making it an essential tool for data analysis and manipulation. A Recap of the versatility and utility of the SUM function The SUM function in Google Sheets provides a simple and effective way to add up a range of numbers. It can be used in a variety of scenarios, from basic arithmetic operations to more complex calculations involving multiple data ranges. The ability to easily sum up values in a spreadsheet makes the SUM function a valuable asset for professionals working with financial data, statistical analysis, and other numerical applications. Best practices: Keeping your data range neat, making sure numeric values are not formatted as text, and double-checking cell references When using the SUM function in Google Sheets, it is important to maintain a clean and organized data range to ensure accurate results. This includes avoiding empty cells within the range and ensuring that all numeric values are formatted as numbers, not text. Additionally, double-checking cell references and ensuring that the correct range is selected can help prevent errors in calculations. Best practices for using the SUM function: • Keep the data range neat and organized • Ensure numeric values are not formatted as text • Double-check cell references to avoid errors Encouragement to experiment with the SUM function to better understand its potential in various contexts As with any tool or function, the best way to fully understand its potential is to experiment with it in different contexts. By exploring the various ways in which the SUM function can be used, users can gain a deeper understanding of its capabilities and how it can be applied to their specific needs. Whether it's calculating the total sales for a business quarter or summing up expenses for a project, experimenting with the SUM function can lead to valuable insights and improved proficiency in using Google Sheets.
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-how-to-use-sum-function-in-google-sheets","timestamp":"2024-11-14T17:37:52Z","content_type":"text/html","content_length":"222339","record_id":"<urn:uuid:bda03c29-989e-49ba-b678-eea29c45f760>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00699.warc.gz"}
A Free Trial That Lets You Build Big! Start building with 50+ products and up to 12 months usage for Elastic Compute Service • Sales Support 1 on 1 presale consultation • After-Sales Support 24/7 Technical Support 6 Free Tickets per Quarter Faster Response • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.
{"url":"https://topic.alibabacloud.com/zqpop/javascript-multiplication_168734.html","timestamp":"2024-11-02T21:23:47Z","content_type":"text/html","content_length":"101862","record_id":"<urn:uuid:043ba7a4-94a8-4b4e-a5e5-58f3dd390c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00798.warc.gz"}
Area of circle calculator The formula to calculate the area of a circle is: Area = π × r^2 Where π (pi) is a mathematical constant approximately equal to 3.14159, and r is the radius of the circle. So, if the radius of a circle is given as 5 cm, the area of the circle can be calculated as: Area = π × r^2 Area = π × 5^2 Area = 3.14159 × 25 Area = 78.5398 square centimeters Therefore, the area of the circle with radius 5 cm is 78.5398 square centimeters. Area of circle step by step Sure, here are the steps to calculate the area of a circle: 1. Determine the radius of the circle. This is the distance from the center of the circle to any point on its perimeter. 2. Use the formula A = πr² to find the area of the circle. “A” represents the area and “r” represents the radius of the circle. 3. Substitute the value of the radius into the formula. For example, if the radius of the circle is 5 cm, the formula becomes A = π(5)². 4. Simplify the formula using the order of operations. First, square the value of the radius to get 25. Then, multiply 25 by π to get the exact area of the circle. Round the answer to the desired number of decimal places, if necessary. The area of a circle is typically expressed in terms of square units, such as square centimeters or square inches. For example, if the radius of a circle is 5 cm, the area of the circle can be calculated as follows: 1. A = πr² 2. A = π(5)² 3. A = 25π 4. A ≈ 78.54 (rounded to two decimal places) Therefore, the area of the circle with a radius of 5 cm is approximately 78.54 square centimeters.
{"url":"https://calculator3.com/area-of-circle-calculator/","timestamp":"2024-11-05T15:51:35Z","content_type":"text/html","content_length":"58066","record_id":"<urn:uuid:88ecd32e-cc6e-4fd1-8755-7468122a023c>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00725.warc.gz"}
Diablo 4 Damage Buckets and Formula Explained - Diablo4.gg Diablo 4 has a damage system that not everyone can understand at a first glance. You may have heard of the Diablo 4 'Damage Buckets' theory, but how exactly does it work? In this article we aim to explain the damage formula for players so they may better understand how to efficiently deal more damage to the otherworldly forces. Sources of Damage To understand how damage and damage buckets work in Diablo 4, first we need to understand where our base damage originates from. Base Damage • Weapons are the initial source of the damage. The higher your weapon's item power, the higher the damage it can do. • When you score a damage hit, the weapon damage is then multiplied by skill damage percentage value (found in the skill tooltip). • Some classes use combined total damage from all of their equipped weapons to scale their skills damage (ex. Sorcerer). Other classes use certain weapons for certain skills (ex. Barbarian). Their weapon as a source of damage changes depending on that. You can check which weapon is used by viewing skill information. Flat Damage Some skills and aspects don't scale from weapon damage - we call these sources of flat damage. Examples of it are Thorns and the Aspect of Burning Rage. Aspects that give flat damage can be scaled and increased via player level and rarity of equipment it is found on (Normal, Sacred, Ancestral). Diablo 4 Damage Buckets Explained There are several damage multipliers (or so-called Diablo 4 'damage buckets') in the game that further increase our base damage. These multipliers are: Additive Modifiers, Vulnerable Damage Bonus, Critical Damage, and Global Multipliers. All of these can be acquired from equipment sub-stats, Aspects, skill passives, and Paragon Glyphs. 1. Additive Modifiers Damage Bonus All damage bonuses that belong in the additive category will have a (+) symbol in their tooltip descriptions. Examples of Additives are as follows: • Damage Vs Bleeding • Damage Vs CC • Damage Vs Close • Damage Vs Distant • Damage Vs Elites • Damage Vs Healthy • Damage Vs Injured • Damage Vs Knocked Down • Damage Vs Slowed • Damage Vs Stunned • Damage while Fortified • Damage while Healthy • Damage with Core • etc. The sum of all these values will equate to one multiplier applied to your Base Damage (Total Additive Bonus = Additive 1+Additive 2+Additive 3...). This means stacking multiple of these stats will result in less effectiveness, the diminishing returns are much more apparent as you unlock these bonuses from the paragon board and glyphs. Formula so far: Base Damage * Total Additive Bonus 2. Vulnerable Damage Bonus Vulnerable is a status effect that can be inflicted on enemy units via skills for a limited time. This results in the enemy taking additional damage based on the baseline 20% damage bonus + Total Vulnerable Damage Bonus found in your stats tab. A Vulnerable enemy is shown to have a purple HP bar. Formula so far: Base damage *Total Additive Bonus *Vulnerable Damage Bonus 3. Critical Strike Damage Critical Strikes have a base chance of 5%. This can be further increased by skill passives, Aspects, class stats and sub-stats from gloves and rings. Be aware that some skills can not benefit from Critical Strikes - namely, any Damage over Time skill (unless stated otherwise). When Critical strikes occur, you will see the damage numbers shown in yellow, and the damage multiplier will be based on your Critical Strike Damage. Critical Strike damage may be increased by affixes on weapons, rings, Paragon Glyphs, and Gems. Additionally, there are modifiers such as "Critical Strike damage against Vulnerable enemies" "Critical Strike Damage with Core skills" etc. These stats increase overall Critical Strike damage when conditions are fulfilled. These modifiers would apply as follows: Total Critical Strike damage = Critical Strike Damage + (Critical Strike Damage against Vulnerable enemies + Critical Strike Damage with Core skills). It is notably harder to take advantage of this multiplier as two stats need to be effectively raised to produce optimal results. However, it gets easier to get both these stats up further into the game because of increased sub-stat values from higher item power equipment. Therefore aiming for these stats at mid-game to late-game progressions is more recommended. Formula so far: Base damage *Total Additive Bonus *Vulnerable Damage Bonus *Total Critical Strike Damage 4. Main Stat Damage Bonus The most straightforward multiplier out of the bunch. Each class has a main stat associated with them (ex. Rogue - Dexterity, Sorcerer - Intelligence). The main stat varies from class to class, but can be easily checked by hovering to see which of your stats gives “Skill Damage”. One point in the main stat increases the total Skill Damage by 0.1%. Formula so far: Base damage *Total Additive Bonus *Vulnerable Damage Bonus *Total Critical Strike Damage *Main Stat Damage Bonus 5. Global Multipliers Global multipliers include Skill effects, Aspects, and Paragon Glyphs that have the (x) symbol in their tooltips. Usually, these multipliers have conditions to fulfill - for example “Deal x% of damage when Barrier is active”. Formula so far: Base damage *Total Additive Bonus *Vulnerable Damage Bonus *Total Critical Strike Damage *Main Stat Damage Bonus *Global Multiplier(s) 6. Overpower It is a mechanic that adds one final boost to your damage and is an additive increase rather than a multiplier. Overpowered hits scale with your character's sum of HP, Fortified HP, and Overpower Damage Bonus stat found in certain class stats and weapons sub-stat rolls. The base chance to proc Overpower hits is 3% with no other means to improve it. Certain Aspects and skills, however, can guarantee overpowered hits. If Overpowered and Critical Strike occurs in one instance of damage, it is called Critical Overpowered. Blue and Orange numbers Indicate Overpowered and Critical Overpowered hits respectively. Formula so far: Base damage *Total Additive Bonus *Vulnerable Damage Bonus *Total Critical Strike Damage *Main Stat Damage Bonus *Global Multiplier(s) + Overpower Damage Final Damage Formula Now that we listed all possible multipliers, the damage formula should be as follows: Base damage *Total Additive Bonus *Vulnerable Damage Bonus *Total Critical Strike Damage *Main Stat Damage Bonus *Global Multiplier(s) + Overpower Damage = Final Damage Damage Calculation Example: Let's say we have 100 base damage after scaling our weapon damage with skill damage value, and these multipliers are affecting us: • Additives with conditions fulfilled which are +50% Damage to Close Enemies and +75% Physical Damage Bonus; • We are fighting a currently Vulnerable enemy and we have a 100% Vulnerable damage bonus; • The damage dealt is a Critical Strike with our Total Critical Strike Damage being 120%; • Our main stat is 200 which would result in 20% more Skill damage; • Global multiplier from an Aspect that results in an overall 40% increase of damage, and another one from a Paragon Glyph that adds a 20% multiplier. • No Overpower is happening. 100 base damage * (1 + (0.5 + 0.75 Additives)) * (1 + 1 Vulnerable Damage Bonus) * (1 + 1.2 Critical Strike Damage) * (1 + 0.2 Main Stat Bonus) * (1 + 0.4 Global Aspect Multiplier) * (1 + 0.2 Global Glyph Multiplier) = 1108.8 Final Damage. Weapon Speed Weapon Speed does not directly add to your damage but does increase your DPS by increasing the casting speed of skills - except for skills that have locked animation length (Ex. Whirlwind). All attacks are skills (including basic attacks), therefore having an increased Weapon speed does benefit energy generation which results in more comfortable skill rotations. One can further improve Weapon Speed through Attack Speed sub-stats on gear, skill passives, aspects, and Paragon nodes. To properly banish demons in Diablo 4, one has to dive into different damage buckets and have a lot of different damage multipliers that are distributed equally. Easy-to-achieve modifiers (Damage vs Close, Damage with Core, Damage with Physical) may be more desirable to have rather than harder-to-fulfill conditions (Damage vs CC, Damage vs Injured, Damage vs This ensures consistent damage throughout all types of content. Experimentation is key!
{"url":"https://diablo4.gg/diablo-4-damage-buckets-and-formula-explained/","timestamp":"2024-11-13T10:42:50Z","content_type":"text/html","content_length":"228738","record_id":"<urn:uuid:0bf7d4e2-ce40-4eea-989d-b955b998ba2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00315.warc.gz"}
Trams of Berlin Baq is living in Berlin, a city really well connected thanks to its great public transport service. In particular, it has quite a vast tram network, with a peculiar characteristic: between any two tram stops, there is exactly one route that connects them. Baq has made a lot of friends in the city, and they are going to meet him soon, each on a different day. For each meeting i, let x[i] and y[i] be the tram stops where Baq and his i-thfriend plan to be before the meeting, respectively. They will meet halfway, that is, at the stop that falls closer to the middle point following the route that connects x[i] and y[i]. In case of a tie, they will choose the tram stop closer to Baq. Can you efficiently compute the total distance travelled by each of Baq’s friends? Input consists of several cases. Every case begins with the number of tram stops n. Follow n−1 triples x y ℓ describing a street of length ℓ connecting x and y. Follow n queries x[i] y[i]. Assume 2 ≤ n ≤ 10^5, that tram stops are numbered starting at 0, 1 ≤ ℓ ≤ 10^9, that the given streets form a tree, and that the queries are all different. For each case, print the total distance of the travel of every Baq’s friend. Print a line with 10dashes at the end of each case.
{"url":"https://jutge.org/problems/P51603_en","timestamp":"2024-11-10T12:25:16Z","content_type":"text/html","content_length":"24413","record_id":"<urn:uuid:80ea035d-cb47-4ec4-8697-dba4c1301188>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00302.warc.gz"}
Ratios and Fractions Practice Exercises | Math Ratios Ratios and Fractions Practice Exercises Welcome to our practice exercise module for ratios and fractions. This is your chance to apply what you've learned about these essential mathematical concepts. Inspired by the works of renowned mathematicians, these exercises will challenge your thinking and improve your understanding. Exercise 1: Ratios in Recipes Consider a recipe that uses a ratio of 2:3 of sugar to flour. If you have 8 cups of sugar, how much flour do you need? Exercise 2: Fractions in Shopping If a pair of shoes is discounted by 1/4 off the original price of $80, how much will the shoes cost after the discount? Exercise 3: Ratios in Finance If your monthly income is $3000, and you spend $1800 on expenses, what is your income to expense ratio? Exercise 4: Fractions in Time Management If you spend 1/4 of your day sleeping, how many hours do you spend sleeping? Exercise 5: Euclid's Ratio Euclid's algorithm for finding the greatest common divisor (GCD) is an essential technique in mathematics. Find the GCD of 56 and 98 using Euclid's algorithm. Exercise 6: Golden Ratio in Art Consider a rectangle where the ratio of the longer side to the shorter side is the Golden Ratio (approximately 1.618). If the shorter side is 5 cm, what is the length of the longer side? Exercise 7: Fractions in Music If a song is divided into 8 equal parts, how long is each part if the whole song is 4 minutes long? Exercise 8: Ratios in Photography Consider a photo with an aspect ratio of 4:3. If the height of the photo is 600 pixels, what is the width? Exercise 9: Fractions in Sports A basketball player makes 7 out of 10 free throw attempts. What fraction represents the player's success rate? Practicing these exercises will strengthen your understanding and application of ratios and fractions. It's fascinating to see how these concepts apply across various facets of life and academia. Keep practicing and exploring, and you'll continue to uncover the beauty of mathematics in everyday life. Ratios and Fractions Tutorials If you found this ratio information useful then you will likely enjoy the other ratio lessons and tutorials in this section: Next: Ratios and Proportions
{"url":"https://www.mathratios.com/tutorial/ratios-to-fractions-exercise.html","timestamp":"2024-11-08T18:50:14Z","content_type":"text/html","content_length":"8693","record_id":"<urn:uuid:e15da49c-5fce-4060-96db-f8dbd824db67>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00788.warc.gz"}
Course Descriptions MAT 095 Mathematics Review 0 credits This course provides opportunities for students to strengthen their mathematical skills and understanding of rational numbers using contextual, real-life problems. This course is graded on a pass/ fail basis. To pass the course, a student must earn a 77% or better. MAT 110 Math Essentials 3 credits This course provides a basic introduction to linear functions. Topics include: identify, simplify, and evaluate polynomials; solve linear equations and inequalities, including systems; graph linear equations and inequalities. Credit for this course applies toward graduation as an elective. Please note that the minimum passing grade is a ''C.'' Prerequisite(s): Pass math skills assessment or MAT 095. MAT 200 3 credits This course provides an integrated review of intermediate algebra, analytic geometry, and basic trigonometry in order to prepare the student for calculus. After a brief review of linear and quadratic functions, the course covers graphs and applications for polynomial, rational, exponential, logarithmic, and trigonometric functions. The course also incorporates matrices and vectors. Please note that a minimum grade of C is required in order for students to take Calculus I (MAT 310). Prerequisite(s): MAT 121 or MAT 205 with a minimum grade of ''C'' or college algebra equivalent. MAT 201 Mathematics for Teachers I 3 credits This class will prepare teacher candidates to become effective mathematics teachers in their own classrooms. Through mathematical investigations, candidates will learn the underlying concepts, structures, functions and patterns that promote mathematical reasoning and understanding. Candidates will investigate how moving progressively through essential topics deepens their understanding of mathematics. Students will use the National Council of Teachers of Mathematics Standards and STEM strategies. Various methods such as modeling, collaboration, manipulatives, thinking made visible, and writing across the curriculum will be presented for bridging classroom activities and real-world problem solving. Teacher candidates will learn how to analyze their students’ math-solving processes by developing thorough explanations of their own mathematical understanding and critiquing the explanation of others’ mathematical understandings. Candidates will communicate their mathematical ideas, processes, analyses and understandings through both writing and speaking. This course concentrates on numbers and operations and their application to student learning and classroom teaching. Prerequisite(s): Successfully passing math skills assessment or MAT 110 with a minimum grade of C. MAT 202 Mathematics for Teachers II 3 credits This class will prepare teacher candidates to become effective mathematics teachers in their own classrooms. Through mathematical investigations candidates will learn the underlying concepts, structures, functions and patterns that promote mathematical reasoning and understanding. Candidates will investigate how moving progressively through essential topics deepens their understanding of mathematics. Students will use the National Council of Teachers of Mathematics Standards and STEM strategies. Various methods such as modeling, collaboration, manipulatives, thinking made visible, and writing across the curriculum will be presented for bridging classroom activities and real-world problem solving. Teacher candidates will learn how to analyze their students’ math-solving processes by developing thorough explanations of their own mathematical understanding and critiquing the explanation of others’ mathematical understandings. Candidates will communicate their mathematical ideas, processes, analyses and understandings through both writing and speaking. This course concentrates on geometry, measurement, probability and statistics and their application to student learning and classroom teaching. Prerequisite(s): MAT 201 with a minimum grade of C. MAT 205 Introductory Survey of Mathematics 3 credits This course introduces a broad range of topics in mathematics, including algebra, probability, and statistics. After reviewing linear functions, algebraic topics include solving and graphing quadratic and exponential functions. Topics in probability include counting principles, combinations, permutations, compound events, mutually exclusive events, and independent events. Topics in statistics include measures of central tendency, measures of dispersion, and the normal curve. Please note that the minimum passing grade for this course is a ''C.'' Prerequisite(s): Pass math skills assessment or MAT 110 with a minimum grade of ''C''. MAT 304 Mathematics for Teachers III 3 credits This class will prepare teacher candidates to become effective mathematics teachers in their own classrooms. Through mathematical investigations candidates will learn the underlying concepts, structures, functions and patterns that promote mathematical reasoning and understanding. Candidates will investigate how moving progressively through essential topics deepens their understanding of mathematics. Students will use Common Core Mathematics Standards and STEM strategies. Various methods such as modeling, collaboration, manipulatives, thinking made visible, and writing across the curriculum will be presented for bridging classroom activities and real-world problem solving. Teacher candidates will learn how to analyze their students’ math-solving processes by developing thorough explanations of their own mathematical understanding and critiquing the explanation of others’ mathematical understandings. Candidates will communicate their mathematical ideas, processes, analyses and understandings through both writing and speaking. This course concentrates on algebra and functions and their application to student learning and classroom teaching. Prerequisite(s): MAT 202 with a minimum grade of C. MAT 308 Inferential Statistics 3 credits This course introduces the student to the scientific method of collecting, organizing, and interpreting data in real-world applications, such as behavioral science, communication, education, healthcare, manufacturing, and natural science. Students will use graphing calculators, along with Excel, to assist in displaying and analyzing data. Prerequisite(s): MAT 122 or MAT 202 or MAT 205 with minimum grade of ''C'' or BSN candidate. MAT 310 Calculus I 3 credits After a brief review of classes of functions and their properties, this course focuses on students' understanding and application of limits, continuity, techniques for finding the derivative, use of the derivative in graphing functions, applications of the derivative, implicit differentiation, anti-derivatives, areas under the curve, the Fundamental Theorem of Calculus, integration by substitution and differential equations. Students are required to explain their reasoning graphically, numerically, analytically, and verbally. Prerequisite(s): MAT 200 with a minimum grade of ''C''. MAT 311 Calculus II 3 credits After a review of limits and derivatives, this course focuses on students' understanding and application of antiderivatives, the definite integral, the Fundamental Theorem of Calculus, integration techniques, applications of the definite integral and improper integrals. An overview of multivariable calculus includes partial derivatives, minima and maxima, and double integrals. The course concludes with a discussion of Taylor series and L'Hospital's rule. Students are required to explain their reasoning graphically, numerically, analytically, and verbally. Prerequisite(s): MAT 310 MAT 312 Business Statistics 3 credits This course introduces the student to the scientific method of collecting, organizing, and interpreting data in a variety of business applications. Students will use Excel to assist in displaying and analyzing data. Prerequisite(s): MAT 205 or MAT 122 with a minimum grade of ''C'' or College of Business completion degree candidate. MAT 313 Experimental Design 3 credits A well-designed experiment is an efficient way of learning about the world. Experiments are performed in all branches of science, engineering and industry. Problems of increasing size and complexity have led to the development of many new methods for designing and analyzing experiments. This course develops concepts and practices for designing and conducting experiments, along with analyzing experimental results. Topics include randomization, replication, blocking, factorial design, ANOVA, surveys, etc. Students will develop a research question, design an experiment, and collect and analyze the data in a course project to help students use their understanding of experimental design in a practical manner. Prerequisite(s): MAT 308 or MAT 312 with a minimum grade of C. MAT 315 Calculus III 3 credits This course provides a study of vector functions, functions of several variables, limits, and continuity for functions of more than one variable, partial differentiation and applications, optimizations, the chain rule, directional derivatives, multiple integrals, line integrals, curl, Green's Theorem, Stoke's Theorem, and the Divergence Theorem. Prerequisite(s): MAT 311 MAT 320 Finite Mathematics 3 credits This course provides a survey of selected topics in mathematics, with emphasis on problem solving and applications. Algebra and functions will be reviewed. Core topics include exponential and logarithmic functions, interest, annuities, systems of linear equations, matrix operations, linear programming, the simplex method, set theory, probability, and counting theory. Prerequisite(s): MAT 304, MAT 205, MAT 121 or college algebra equivalent. MAT 322 Linear Algebra with Applications 3 credits This course is a study of linear systems, matrices, determinants, subspaces, eigenvalues, orthogonality, machine learning, AI, computer graphics, and economic models. Prerequisite(s): MAT 315 MAT 330 Discrete Math 3 credits This course provides an introduction to discrete mathematics. Topics include sets, functions and relations, mathematical induction and logic, sequences and recursion, and an introduction to Boolean Prerequisite(s): MAT 200 and MAT 320 MAT 331 3 credits This course presents the core concepts and principles of Euclidean geometry in two and three dimensions. Topics include geometric constructions, congruence, similarity, transformations, measurement, and coordinate geometry. Axiomatic systems and proofs are covered. An overview of non-Euclidean geometries is provided. Prerequisite(s): MAT 200 MAT 332 History of Mathematics 3 credits This course provides an overview of the historical evolution of major concepts in mathematics including counting and number systems, geometry, algebra, calculus, and statistics. The contributions of various civilizations ranging from Babylonia and Egypt through Greece and the Middle East to the modern world are reviewed. Biographical sketches of some of the individuals who made major contributions to the development of mathematics are presented. The interrelationship between the evolution of mathematics, science, and technology is explored. Prerequisite(s): MAT 311, MAT 308, and MAT 331 MAT 490 Experiential Learning in Applied Mathematics 3 credits This course provides students with an experiential learning opportunity to engage in project-based learning within the student’s current employment context or through a simulated work experience utilizing application-based assessments that align with the Applied Mathematics Program competencies. The course provides students with an opportunity to define, analyze and apply theories and models to resolve a complex organizational problem(s) and real-world experiences to strategize mathematical solutions. Prerequisite(s): MAT 200, MAT 310, MAT 311, MAT 315, MAT 322, MAT 320, MAT 330, MAT 312, MAT 313, CSC 402, CSC 414, and BBA 460 MAT 491 Internship in Applied Mathematics 3 credits This capstone course provides the student experience in an applied mathematics setting. Students may work within an organization on either a full-time or part-time co-op or internship basis for a 14-week semester. Alternatively, students may complete a comprehensive project based on prior coursework using an organization at which the student is currently employed or with which the student is familiar. Through this experience the student has an opportunity to explore career interests. At the same time the student gains a practical understanding of work in the industry, experience on the job, enhancement of skills learned in the classroom, and contacts with professionals in the business world. Prerequisite(s): MAT 200, MAT 310, MAT 311, MAT 315, MAT 322, MAT 320, MAT 330, MAT 312, MAT 313, CSC 402, CSC 414, and BBA 460
{"url":"https://www.wilmu.edu/courses/courseDescriptions.aspx?subCode=MAT&amp;courseNum=110","timestamp":"2024-11-11T00:53:03Z","content_type":"text/html","content_length":"37949","record_id":"<urn:uuid:d262adae-0faa-455e-972f-9f8e1b019d9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00825.warc.gz"}
Geometry of Retaining Wall • This tool is for retaining wall analysis, as shown in the figure above. • This tool is suitable for cantilever and gravity retaining walls using Rankine earth pressure theory. • Two homogenous soil layers are considered in the analysis. • The backfill layer is granular soil, and the cohesion c[1] is assumed to be zero. • The normal requirements for the stability check are listed as: FS[O](Safety factor of overturning) > 2; FS[S](Safety factor of base sliding) > 1.5; FS[B](Safety factor of bearing capacity) > 3. • All the input parameters should be positive. • For "nan", "0" or "inf" displayed in Results, please check your input parameters. Input Parameters Output Results Welcome! Please input the parameters! • Rankine active pressure P[a] (kN/m) = • Rankine passive pressure P[p] (kN/m) = • Total vertical force acting on the base ΣV (kN/m) = • Maximum pressure under the base q[max] (kN/m^2) = • Minimum pressure under the base q[min] (kN/m^2) = • Ultimate bearing capacity q[u] (kN/m^2) = • Safety factor of overturning FS[O] = • Safety factor of sliding FS[S] = • Safety factor of bearing capacity FS[BC] =
{"url":"http://j.geoinvention.com/retaining_wall.php","timestamp":"2024-11-11T14:18:03Z","content_type":"text/html","content_length":"6551","record_id":"<urn:uuid:28d5faef-fb8d-47e7-a3a0-143a9d8a38a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00556.warc.gz"}
Ronny Kohavi shares how to accelerate innovation by getting results you can trust | Kameleoon - DLIT Ronny Kohavi shares how to accelerate innovation by getting results you can trust | Kameleoon A name synonymous with trustworthy testing, Ronny Kohavi is the former Vice-President and Technical Fellow at Microsoft and Airbnb. Over his 20+ year experimentation career, he’s run thousands of online experiments and has assembled his observations into a best-selling book, Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing. He shares his observations through dozens of published pieces, and now offers a course on accelerating innovation with A/B testing. In this article, based on this interview, Ronny warns of the many pitfalls that experimentation programs fall into when running online tests. He also provides several strategies to overcome these data inaccuracies and obtain trustworthy test results. Three data accuracy traps and how to avoid them When running online experiments, getting numbers is easy. Getting numbers you can trust is hard. Although A/B tests are considered the gold standard of online testing, there are also many factors that can jeopardize test trustworthiness. Experimenters and organizations that don’t focus on trustworthy test results risk losing revenue and credibility, resulting in a damaged reputation that’s hard to repair. If this outcome occurs, management, or your clients, can lose confidence in your work. And, organizational buy-in becomes even more challenging, threatening the overall feasibility of your experimentation program and the perceived value of testing. To prevent these negative outcomes, it’s important to get trustworthy, reliable results that propel the organization forward. To do so, here are the three main actions Ronny suggests you take. 1. Follow the key statistics principles Design experiments with adequate power When running any test, there’s always a chance you’ll get faulty results. A test can yield what’s known as a false positive or a false negative. In A/B testing, a false positive, also called a type I error, happens when you claim there’s a conversion difference that isn’t really there. In contrast, a false negative, also known as a type II error, occurs when you incorrectly declare there’s no conversion difference when, in fact, there is one. Either error skews the accuracy of test results, so should be avoided. Power measures the percentage of time a real effect, or conversion difference between variant(s), will be detected — assuming a difference truly exists. Essentially, it provides a metric to help you assess how well you’ve managed to avoid a type II error so you don’t incorrectly declare there’s no conversion difference — when in fact, there is one. The standard level to set power is 0.80 (80%). This amount means you’ll accurately detect a true effect at least 80% of the time. As such, there’s only a 20% chance of missing the effect and ending up with a false negative. A risk worth taking. The higher the power, the stronger the likelihood of accurately detecting a real effect. Although increasing power may sound like a great idea, there’s a trade-off. The greater the power, the larger the sample size needs to be. And, often, the longer the test needs to run. So, getting adequate power can be tricky, especially for lower traffic sites. A test is underpowered when the sample is so small the effect, or conversion difference detected, isn’t accurate – leading to a type I error (false positive). The lower the power, the more exaggerated the effect may be. While low-powered tests can achieve statistical significance, the results aren’t to be trusted. To ensure results are accurate, experimenters need to run adequately powered studies with a large enough sample size. How large is large enough? To properly answer this question, a sample size or power calculator should be used AHEAD of running the study. If sample size or power calculations show you need more traffic than your site receives in a typical 2-6 week testing time frame, you should evaluate whether the test is worth running as the results are likely to be inaccurate. Don’t peek Once you’ve confirmed you have a large enough sample size to run an adequately powered study, don’t stop the test before reaching the pre-calculated sample size requirements. This is known as peeking because you’re “peeking” at the results and may be tempted to prematurely declare the test a winner (or loser) when, in reality, it’s far too early to accurately tell. For many websites, getting enough traffic takes time. And waiting is hard. But peeking is a bad practice that can lead you to make money-losing mistakes. Set a lower alpha Provided your study is adequately powered, and you don’t prematurely stop the test, you should be in good shape. However, your test still runs the risk of yielding a type I error (false positive). Remember, this error occurs when you think you’ve detected a conversion difference that doesn’t really exist. To lower the risk of a false positive, you can set the cut-off point at which you’re willing to accept the possibility of this error. This cut-off point is known as significance level alpha. But most people usually just call it significance or refer to it as alpha (denoted α). Experimenters can choose to set α wherever they want. The closer it is to 0, the lower the probability of a type I error. But, it’s a trade-off. Because, the lower α, in turn, the higher the probability of a type II error. So, it’s a best practice to set α at a happy middle ground. A commonly accepted level used in online testing is 0.05 (α = 0.05). This level means you accept a 5% (0.05) chance of a type I error, or of incorrectly declaring a conversion difference when there really isn’t one. A calculated risk worth taking. Interpret the p-value results correctly However, even if you’ve prudently set α ≤0.05, you’re not out of the woods yet. Because your findings still may just be the outcome of just random chance or statistical noise. To be sure they’re not, you next need to measure the probability you’ve detected an effect – assuming there’s actually no real difference between the control and variant(s). To measure this probability, you need to assign a value known as a probability value, commonly called a p-value. P-value is an outrageously mis-understood concept. But for a test to be trustworthy, you need to be able to interpret the p-value results correctly. Doing so is actually quite simple. When the p-value is less than α (p≤α), it means the chance of detecting the effect is really low. Because the chance is so low, when an effect is detected, it’s considered an unusual, surprising outcome – one that’s significant and noteworthy. Therefore, a p-value of ≤0.05 means the result is statistically significant. The closer the p-value is to 0, the stronger the evidence the conversion difference is real – not just the outcome of random chance or error. Which, in turn, means you have stronger evidence you’ve really found a winner (or loser). Use Bayes’ rule to reverse p-values to false-positive risk However, even if you’ve obtained a statistically significant result, based on a properly-powered study, you can’t fully trust the result. Yet. That’s because you’ve compromised in accepting a small margin of error – which still gives the very real probability of a false positive or false negative. However, through a calculation known as Bayes’ Rule, also called Bayes’ Theorem, you can reverse the probability of a false positive – which translates into the real chance of a true positive. Bayes’ Rule determines the probability of independent events occurring and is calculated through this formula: When you use Bayes Rule to reverse the false positive risk, you know, with greater confidence, you’ve truly found a real effect. 2. Build in guardrails to check assumptions Watch for SRM Yet, while you may have truly accurately detected a real effect, your test results can still be flawed if your data is inaccurate. Many A/B test results are invalid because of faulty data collection. One of the worst offenders is known as Sample Ratio Mismatch (SRM). SRM occurs when test traffic is distributed differently than the experimental design. For example, in a uniform split design, there is an SRM if one variation receives much more traffic than the other(s). SRM occurs in about 6-10% of all A/B tests, and invalidates the test results. There are more than 40 reasons why SRM might occur. Most relate to improper set-up of the test, bugs with randomization, tracking and reporting issues, or the outcome of bots falsely inflating traffic numbers. Not all of these problems are avoidable, but some are. To guard against SRM, there are certain measures you should take, including: Use A/A tests as a diagnostic tool One of the best ways to validate accuracy is to test the same variant against itself — through something known as an A/A test. If you’re not completely clear what an A/A test is, don’t fret. A recent survey found 32% of experimenters weren’t sure either. But the concept is actually quite simple. An A/A test is exactly as it sounds: a split-test that pits two identical versions against each other. Half of visitors see version 1, the other half version 2. The trick here is both versions are exactly the same. It’s slightly counter-intuitive. But, with an A/A test, you’re actually looking to ensure there is no difference in results. When running an A/A test, if one version emerges with a radically different traffic split, or as a clear winner, you know there’s an issue. A/A tests can be thought of as a diagnostic tool that can uncover many bugs in your test set-up, including SRM. They help you validate that you’ve set-up and run the test properly — and that the data coming back is accurate and reliable. Although A/A tests have been criticized for being traffic and resource-intensive because they can distract from running “real” tests that bring in conversions, they shouldn’t be overlooked. To make most efficient and effective use of your testing resources, here are some suggested A/A testing approaches: Start any test with an A/A test before running an A/B test. Run A/A tests, continually, in the background or offline. You can get 90% of the value of an A/A test by running the test offline. Keep the A/A test going the same time period as your A/B tests. A/A tests should be used as a guardrail to ensure data is accurate, identify bugs in the platform, spot outliers that impact results, and raise the trustworthiness of test results. Avoiding test trustworthiness traps We all have a natural bias to celebrate positive results and cast-off negative results, but it’s important to look at both with healthy skepticism. If any test result looks too good to be true, it probably is. Because, according to something known as Twyman’s law, any figure that looks interesting or different is usually wrong. So, in your own testing, look for big surprises and question them. What separates good experimentation programs from great ones is taking measures to overcome these three test trustworthiness traps. To learn more how you can accelerate innovation with trustworthy online experiments, watch the replay of our webinar with Ronny, ‘How to avoid common data accuracy pitfalls,’ or check out the key takeaways from the event. If you’re interested to learn more about how to design, measure and implement trustworthy A/B tests, check out Ronny’s course, ‘Accelerating innovation with A/B testing.’
{"url":"https://dlit.co/ronny-kohavi-shares-how-to-accelerate-innovation-by-getting-results-you-can-trust-kameleoon/","timestamp":"2024-11-02T07:46:23Z","content_type":"text/html","content_length":"61711","record_id":"<urn:uuid:15048c04-6f71-4ba7-9f9e-59df3815a961>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00014.warc.gz"}
CAPM Assignment Help | CAPM Homework Help What is CAPM Assignment Help Services Online? CAPM Assignment Help Services Online is a professional academic assistance service that provides support to students studying Capital Asset Pricing Model (CAPM) as part of their finance or investment management coursework. CAPM is a widely used financial model that helps investors evaluate the expected return of an investment relative to its risk. It is an essential topic in finance and requires a deep understanding of various concepts, formulas, and calculations. CAPM Assignment Help Services Online offer plagiarism-free write-ups that are tailored to meet the specific requirements of students’ assignments. The service is provided by a team of expert writers who possess in-depth knowledge of CAPM and have years of experience in the field. These writers ensure that the assignments are well-researched, well-structured, and written in a clear and concise manner. The write-ups are also thoroughly checked for plagiarism using advanced plagiarism detection tools to ensure originality and authenticity. CAPM Assignment Help Services Online cover a wide range of topics related to CAPM, including risk and return, beta calculation, portfolio management, efficient market hypothesis, and more. The assignments are delivered within the specified deadline, allowing students to submit their work on time and earn good grades. Additionally, the service provides 24/7 customer support to address any queries or concerns of students promptly. In summary, CAPM Assignment Help Services Online provide professional academic assistance to students studying CAPM, ensuring high-quality, plagiarism-free write-ups that meet the specific requirements of their assignments. These services help students excel in their coursework and achieve academic success. Various Topics or Fundamentals Covered in CAPM Assignment The Capital Asset Pricing Model (CAPM) is a widely used financial tool that helps investors understand and evaluate the risk and return characteristics of an investment portfolio. CAPM assignments typically cover several fundamental topics related to this model, including: Risk and Return: CAPM is based on the fundamental principle that investors require compensation for bearing risk. CAPM assignments often discuss the concepts of risk and return in financial markets, including the distinction between systematic risk (also known as market risk) and unsystematic risk (also known as idiosyncratic risk). CAPM explains that investors should be compensated for systematic risk, as it cannot be diversified away, while unsystematic risk can be diversified. Beta: Beta is a key concept in CAPM that measures the sensitivity of an investment’s returns to changes in the overall market. CAPM assignments often discuss how to calculate beta, interpret its value, and use it to assess the risk and expected return of a stock or portfolio. CAPM teaches that stocks with higher betas are expected to have higher returns but also higher risk. Capital Market Line (CML): The CML is an important concept in CAPM that depicts the relationship between risk and expected return for efficient portfolios. CAPM assignments may cover how to construct the CML, interpret its slope and intercept, and use it to identify optimal portfolios for different risk preferences. Security Market Line (SML): The SML is a critical component of CAPM that shows the expected return of a security or portfolio based on its beta. CAPM assignments may discuss how to plot the SML, interpret its relationship with the risk-free rate, and use it to determine whether a security is overvalued or undervalued. Cost of Capital: CAPM assignments may cover how to use CAPM to estimate the cost of equity capital for a company, which is the expected return required by investors for holding the company’s stock. This can be used in financial decision-making, such as determining the feasibility of investment projects or evaluating the performance of a company’s management. Limitations of CAPM: CAPM is not without its limitations, and CAPM assignments may discuss these limitations, such as the assumptions of perfect markets, rational investors, and homogenous expectations. CAPM assignments may also cover alternative models, such as the Fama-French Three-Factor Model, that attempt to address some of the limitations of CAPM. Real-world Applications: CAPM is widely used in practice for portfolio management, investment decision-making, and risk assessment. CAPM assignments may discuss real-world applications of CAPM, such as how it is used by investment professionals to make investment decisions, manage risk, and evaluate the performance of investment portfolios. In conclusion, CAPM assignments cover several fundamental topics related to the Capital Asset Pricing Model, including risk and return, beta, the Capital Market Line, the Security Market Line, cost of capital, limitations of CAPM, and real-world applications. It is important to provide a plagiarism-free write-up by properly citing all sources used and ensuring that the content is original and not copied from any other source. Explanation of CAPM Assignment with the help of General Motors by showing all formulas The Capital Asset Pricing Model (CAPM) is a widely used financial model that helps investors assess the expected return on an investment based on its systematic risk, also known as beta. CAPM is often used to determine the required rate of return for an investment or to evaluate the risk-adjusted performance of a portfolio. One key component of CAPM is the risk-free rate of return, which represents the expected return on a risk-free investment, such as a government bond. This is denoted by the symbol “rf” in the formula. In the case of General Motors, we can assume a risk-free rate of return of 3%, which represents the expected return on a government bond. The formula for CAPM is as follows: CAPM = rf + β * (rm – rf) CAPM represents the expected return on an investment rf is the risk-free rate of return β (beta) represents the systematic risk of the investment rm represents the expected return on the overall market Now let’s break down each component using General Motors as an example: Risk-Free Rate (rf): As mentioned earlier, we can assume a risk-free rate of return of 3% for General Motors based on current market conditions and the expected return on government bonds. Systematic Risk (β): Beta is a measure of an investment’s sensitivity to changes in the overall market. A beta of 1 indicates that the investment is expected to move in line with the market, while a beta greater than 1 indicates higher sensitivity to market fluctuations, and a beta less than 1 indicates lower sensitivity. General Motors’ beta can be calculated by comparing its historical stock returns to the overall market’s returns. For example, if General Motors has a beta of 1.2, it means that for every 1% change in the overall market, General Motors’ stock is expected to change by 1.2%. Expected Market Return (rm): The expected return on the overall market, denoted by “rm”, is an estimate of the average return that investors expect to earn from the market. It is typically based on historical market performance and future market projections. For example, if the expected return on the overall market is 8%, we would use 8% as the value for “rm”. Using these values, we can now calculate the expected return on General Motors’ stock using the CAPM formula: CAPM = 3% + 1.2 * (8% – 3%) CAPM = 3% + 1.2 * 5% CAPM = 3% + 6% CAPM = 9% Based on the CAPM calculation, the expected return on General Motors’ stock is 9%. This means that an investor would require a 9% return on their investment in General Motors to compensate for the systematic risk associated with the stock, as measured by its beta. In conclusion, the Capital Asset Pricing Model (CAPM) is a useful tool for investors to estimate the expected return on an investment based on its systematic risk. Using General Motors as an example, we can see how the CAPM formula incorporates the risk-free rate of return, beta, and expected market return to calculate the expected return on the stock. It’s important to note that CAPM is a simplified model and has its limitations, but it is widely used in the finance industry to assess the risk and return of investments. Need help in CAPM Assignment Help Services Online, submit your requirements here. Hire us to get best finance assignment help.
{"url":"https://financeassignmenthelpdesk.com/capm-assignment-help/","timestamp":"2024-11-11T18:06:14Z","content_type":"text/html","content_length":"66641","record_id":"<urn:uuid:f76ebf63-9e00-47a8-9a5c-8310fdeb9e0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00409.warc.gz"}
Computational Complexity A Georgia Tech student asked the title question in an introductory theory course. The instructor asked his TA, the TA asked me and I asked the oracle of all things log space, Eric Allender. Eric didn't disappoint and pointed me to Burkhard Monien’s 1975 theorem L is closed under Kleene star if and only if L = NL. L here is the set of problems solved in deterministic O(log n) space and NL is the nondeterministic counterpart. For a set of strings A, Kleene star , denoted A is the set of all finite concatenations of strings of A. For example if A = {00,1} then A = {ε, 1, 00, 11, 001, 100, 111, 0000, 0011, 1100, …} where ε is the zero-length string. Kleene star comes up in regular expressions but also makes for many a good homework problem. 1. Show that if A is in NL then A^* is also in NL. 2. Show that if A is in P then A^* is also in P. 3. Show that if A is c.e. (recognizable) then A^* is also c.e. Problem 1 above is equivalent to saying NL is closed under Kleene star and implies the “if” part of Monien’s result. Here is a simple proof of the other direction that L closed under Kleene star implies L = NL. Consider the following NL-complete problem: The set of triples (G,s,t) such that G is a directed graph with a restriction that all edges (i,j) have i<j and there is a path from node s to node t. Define the following language B = {G#i+1#G#i+2#...#G#j# | there is an edge (i,j) in G} B is computable in log space and the string G#s+1#G#s+2#…#G#t# is in B if and only if there is a path from node s to node t. QED Allender, Arvind and Mahajan give some generalizations to log-space counting classes and also notes that there are languages computable in AC (constant-depth circuits) whose Kleene star is NL-complete. B above is one such set. Last semester I ran a THEORY DAY AT UMCP. Below I have ADVICE for people running theory days. Some I did, some I didn't do but wish I did, and some are just questions you need to ask yourself. 1) Picking the day- I had two external speakers (Avrim Blum and Sanjeev Arrora) so I was able to ask THEM what day was good for THEM. Another way is to pick the DAY first and then asking for 2) Number of external speakers- We had two, and the rest were internal. The external speakers had hour-long talks, the internal had 20-minute talks. This worked well; however, one can have more or even all external speakers. 3) Whoever is paying for it should be told of it towards the beginning of the process. 4) Lunch- catered or out? I recommend catered if you can afford it since good time for people to all talk. See next point. 5) If its catered you need a head count so you need people to register. The number you get may be off- you may want to ask when they register if they want lunch. Then add ten percent. 6) Tell the guest speakers what time is good for them to arrive before they make plans so you can coordinate their dinner the previous night. 7) If have the money and energy do name tags ahead of time. If not then just have some sticky tags and a magic marker. 8) Guest speakers- getting them FROM Amtrak or Airport to dinner/hotel --- give them a personal lift (they may be new to the city and a bit lost). Getting them from the event back TO the airport or amtrak, can call airline limo and taxi. (though if can give a ride, that's of course good.) 9) Pick a day early and stick with it. NO day is perfect, so if someone can't make it, or there is a faculty meeting that day, then don't worry about it. 10) Have website, speakers, all set at least a month ahead of time. Advertise on theory-local email lists, blogs you have access to, and analogs of theory-local for other places (I did NY, NJ, PA). Also email people to spread the word. 11) Advertise to ugrads. Students are the future! 12) If you are the organizer you might not want to give a talk since you'll be too busy doing other things. 13) Well established regular theory days (e.g., NY theory day) can ignore some of the above as they already have things running pretty well. Gordon Moore formulated his famous law in a paper dated fifty years and five days ago. We all have seen how Moore's law has changed real-world computing, but how does it relate to computational In complexity we typically focus on running times but we really care about how large a problem we can solve in current technology. In one of my early posts I showed how this view can change how we judge running time improvements from faster algorithms. Improved technology also allows us to solve bigger problems. This is one justification for asymptotic analysis. For polynomial-time algorithms a doubling of processor speed gives a constant multiplicative factor increase in the size of the problem we can solve. We only get an additive factor for an exponential-time algorithm. Although Moore's law continues, computers have stopped getting faster ten years ago. Instead we've seen the rise of new technologies: GPUs and other specialized processors, multicore, cloud computing and more on the horizon. The complexity and algorithmic communities are slow to catch up. With some exceptions, we still focus on single-core single-thread algorithms. Rather we need to find good models for these new technologies and develop algorithms and complexity bounds that map nicely into our current computing reality. You have likely heard of the new result by Ben Roco, and Li-Yang on random oracles (see here for preprint) from either Lance or Scott or some other source: Lance's headline was PH infinite under random oracle Scott's headline was Two papers but when he stated the result he also stated it as a random oracle result. The paper itself has the title An average case depth hierarchy theorem for Boolean circuits and the abstract is: We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d≥2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d−1) circuit that agrees with f on (1/2+on(1)) fraction of all inputs must have size exp(nΩ(1/d)). This answers an open question posed by Hastad in his Ph.D. thesis. Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Hastad, Cai, and Babai. We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami. A key ingredient in our proof is a notion of \emph{random projections} which generalize random restrictions. Note that they emphasize the circuit aspect. In Yao's paper where he showed PARITY in constant dept requires exp size the title was Separating the polynomial hiearchy by oracles Hastad's paper and book had titles about circuits, not oracles. When Scott showed that for a random oracle P^NP is properly in Sigma_2^p the title was Counterexample to the general Linial-Nissan Conjecture However the abstarct begins with a statement of the oracle result. SO, here is the real question: What is more interesting, the circuit lower bounds or the oracle results that follow? The authors titles and abstracts might tell you what they are thinking, then again they might not. For example, I can't really claim to know that Yao cared about oracles more than circuits. Roughly speaking the Circuit results are interesting since they are actual lower bounds, often on reasonable models for natural problems (both of these statements can be counter-argued), oracle results are interesting since they give us a sense that certain proof teachniques are not going to work. Random oracle results are interesting since for classes like these (that is not well defined) things true for random oracles tend to be things we think are true. But I want to hear from you, the reader: For example which of PARITY NOT IN AC_0 and THERE IS AN ORACLE SEP PH FROM PSPACE do you find more interesting? Is easier to motivate to other theorists? To non-theorists (for non-theorists I think PARITY). Benjamin Rossman, Rocco Servedio and Li-Yang Tan show new circuit lower bounds that imply, among other things, that the polynomial-time hierarchy is infinite relative to a random oracle. What does that mean, and why is it important? The polynomial-time hierarchy can be defined inductively as follows: Σ^P[0 ]= P, the set of problems solvable in polynomial-time. Σ^P[i+1 ]= NP^Σ^P[i], the set of problems computable in nondeterministic polynomial-time that can ask arbitrary questions to the previous level. We say the polynomial-time hierarchy is infinite if Σ^P[i+1 ]≠ Σ^P[i] for all i and it collapses otherwise. Whether the polynomial-time hierarchy is infinite is one of the major assumptions in computational complexity and would imply a large number of statements we believe to be true including that NP-complete problems do not have small circuits and that Graph Isomorphism is not co-NP-complete. We don't have the techniques to settle whether or not the polynomial-time hierarchy is infinite so we can look at relativized worlds, where all machines have access to the same oracle. The Baker-Gill-Solovay oracle that makes P = NP also collapses the hierarchy. Finding an oracle that makes the hierarchy infinite was a larger challenge and required new results in circuit complexity. In 1985, Yao in his paper Separating the polynomial-time hierarchy by oracles showed that there were functions that had small depth d+1 circuits but large depth d circuits which was needed for the oracle. Håstad gave a simplified proof. Cai proved that PSPACE ≠ Σ^P[i] for all i even if we choose the oracle at random (with probability one). Babai later and independently gave a simpler proof. Whether a randomly chosen oracle would make the hierarchy infinite required showing the depth separation of circuits in the average case which remained open for three decades. Rossman, Servedio and Tan solved that circuit problem and get the random oracle result as a consequence. They build on Håstad's proof technique of randomly restricting variables to true and false. Rossman et al. generalize to a random projection method that projects onto a new set of variables. Read their paper to see all the details. In 1994, Ron Book showed that if the polynomial-time hierarchy was infinite that it remained infinite relativized to a random oracle. Rossman et al. thus gives even more evidence to believe that the hierarchy is indeed infinite, in the sense that if they had proven the opposite result than the hierarchy would have collapsed. I used Book's paper to show that a number of complexity hypothesis held simultaneously with the hierarchy being infinite, now a trivial consequence of the Rossman et al. result. I can live with that. As baseball starts its second week, lets reflect a bit on how data analytics has changed the game. Not just the Moneyball phenomenon of ranking players but also the extensive use of defensive shifts (repositioning the infielders and outfielders for each batter) and other maneuvers. We're not quite to the point that technology can replace managers and umpires but give it another decade or two. We've seen a huge increase in data analysis in sports. ESPN ranked teams based on their use of analytics and it correlates well with how those teams are faring. Eventually everyone will use the same learning algorithms and games will just be a random coin toss with coins weighted by how much each team can spend. Steve Kettmann wrote an NYT op-ed piece Don't Let StatisticsRuin Baseball. At first I thought this was just another luddite who will be left behind but he makes a salient point. We don’t go to baseball to watch the stats. We go to see people play. We enjoy the suspense of every pitch, the one-on-one battle between pitcher and batter and the great defensive moves. Maybe statistics can tell which players a team should acquire and where the fielders should stand but it still is people that play the game. Kettmann worries about the obsession of baseball writers with statistics. Those who write based on stats can be replaced by machines. Baseball is a great game to listen on the radio for the best broadcasters don't talk about the numbers, they talk about the people. Otherwise you might as well listen to competitive tic-tac-toe. Every four years the Association for Computing Machinery organizes a Federated Computing Research Conference consisting of several co-located conferences and some joint events. This year's event will be held June 13-20 in Portland, Oregon and includes Michael Stonebraker's Turing award lecture. There is a single registration site for all conferences (early deadline May 18th) and I recommend booking hotels early and definitely before the May 16th cutoff. Theoretical computer science is well represented. • 47th ACM Symposium on the Theory of Computing. Apply for student travel support by May 9th. • 30th Computational Complexity Conference, now an independent conference. Student travel support deadline of May 9th. CCC is looking for a new logo, if you have ideas send them to Dieter van • 16th ACM Conference on Economics and Computation and its associated workshops and tutorials • A plenary lecture by Andy Yao The CRA-W is organizing mentoring workshops for early career and mid-career faculty and faculty supervising undergraduate research. A number of other major conferences will also be part of FCRC including HPDC, ISCA, PLDI and SIGMETRICS. There are many algorithmic challenges in all these areas and FCRC really gives you an opportunity to sit in talks outside your comfort zone. You might be surprised in what you see. See you in Portland! 1. All numbers except 23 can be written as the sum of 8 cubes 2. All but a finite number of numbers can be written as the sum of 7 cubes 3. There are an infinite number of numbers that cannot be written as the sum of 3 cubes(this you can prove yourself, the other two are hard, deep theorems). Open: Find x such that: 1. All but a finite number of numbers can be written as the sum of x cubes. 2. There exists an infinite number of numbers that cannot be written as the sum of x-1 cubes. It is known that 4 ≤ x ≤ 7 Lets say you didn't know any of this and were looking at empirical data. 1. If you find that every number ≤ 10 can be written as the sum of 7 cubes this is NOT interesting because 10 is too small. 2. If you find that every number ≤ 1,000,000 except 23 can be written as the sum of 8 cubes this IS interesting since 1,000,000 is big enough that one thinks this is telling us something (though we could be wrong). What if you find all but 10 numbers (I do not know if that is true) ≤ 1,000,000 are the sum of seven cubes? Open but too informal to be a real question: Find x such that 1. Information about sums-of-cubes for all numbers ≤ x-1 is NOT interesting 2. Information about sums-of-cubes for all numbers ≤ x IS interesting. By the intermediate value theorem such an x exists. But of course this is silly. The fallacy probably relies on the informal notion `interesting'. But a serious question: How big does x have to be before data about this would be considered interesting? (NO- I won't come back with `what about x-1'). More advanced form: Find a function f(x,y) and constants c1 and c2 such that 1. If f(x,y) ≥ c1 then the statement all but y numbers ≤ x are the sum of 7 cubes is interesting. 2. If f(x,y) ≤ c2 then the statement all but y numbers ≤ x are the sum of 7 cubes is not interesting. To end with a more concrete question: Show that there are an infinite number of numbers that cannot be written as the sum of 14 4th powers. I would rather challenge you than fool you on April fools day. Below I have some news items. All but one are true. I challenge you to determine which one is false. 1. Amazon open a brick and mortar store: Full story here. If true this is really really odd since I thought they saved time and money by not having stores. 2. You may have heard of some music groups releasing Vinyl albums in recent times. They come with an MP3 chip so I doubt the buyers ever use Vinyl,but the size allows for more interesting art. What did people record on before Vinyl. Wax Cylinders! Some music groups have released songs on wax cylinders! See here for a release a while back by Tiny Tim (the singer, not the fictional character) and here for a release by a group whose name is The Men Will Not Be Blamed For Anything. 3. An error in Google Maps lead to Nicaragua accidentally invading Costa Rica. Even more amazing--- This excuse was correct and Google admitted the error. See here for details. 4. There was a conference called Galileo was wrong, The Church Was Right for people who think the Earth really is the centre of the universe (my spell checker says that `center' is wrong and `centre' is right. Maybe its from England). I assume they mean that the sun and other stuff goes around the earth in concentric circles, and not that one can take any ref point and call it the center. The conference is run by Robert Sungenesis who also wrote a book on the topic (its on Amazon here and the comments section actually has a debate on the merits of his point of view.) There is also a website on the topic here. The Catholic Church does not support him or his point of view, and in fact asked him to take ``Catholic'' out of the name of his organization, which he has done. (ADDED LATER- A commenter named Shane Chubbs, who has read over the relevent material on this case more carefully than I have, commented that Robert Sungenesis DOES claim that we can take the center of the universe to be anywhere, so it mine as well be here. If thats Roberts S's only point, its hard to believe he got a whole book out of it.) OH- this is one of the TRUE points.
{"url":"https://blog.computationalcomplexity.org/2015/04/?m=0","timestamp":"2024-11-08T02:25:07Z","content_type":"application/xhtml+xml","content_length":"233471","record_id":"<urn:uuid:f7c58c2c-d153-47b5-be98-9243d5195efb>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00408.warc.gz"}
How do you integrate y=3 (5x+9)^9? | HIX Tutor How do you integrate #y=3 (5x+9)^9#? Answer 1 Do a "u" substitution. Please see the explanation. Let #u = 5x + 9#, then #du = 5dx, dx = 1/5du# #int3(5x + 9)^9dx = 3/5intu^9du# #int3(5x + 9)^9dx = 3/50u^10 + C# Reverse the substitution: #int3(5x + 9)^9dx = 3/50(5x + 9)^10 + C# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To integrate the function y = 3(5x + 9)^9, you can use the substitution method. Let u = 5x + 9. Then, du/dx = 5. Rearrange to solve for dx: dx = du/5. Now substitute u = 5x + 9 and dx = du/5 into the integral: ∫3(5x + 9)^9 dx = ∫3u^9 (1/5) du. Now integrate with respect to u: ∫3u^9 (1/5) du = (3/5) ∫u^9 du. Apply the power rule for integration: (3/5) * (1/10) * u^10 + C = (3/50) * u^10 + C. Now, substitute back for u: (3/50) * (5x + 9)^10 + C. Where C is the constant of integration. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-integrate-y-3-5x-9-9-8f9afa0f31","timestamp":"2024-11-02T17:40:10Z","content_type":"text/html","content_length":"567591","record_id":"<urn:uuid:801ce9b9-9a05-4470-9756-8055b8912dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00379.warc.gz"}
Problem C: Barcelona FC Manager [Middle l] Karl is a fan of Barcelona FC(Football Club), but recently he is dissatisfied with the manager. So, he make a daydream to become a Barcelona FC manager and buy some football players. The Barcelona FC has infinite money, that means Karl can buy anyone he want. There are n players. For each player, it has a power \(a_i\). For Each hour, he can only buy one player. However, for the i^th player, he can only be bought before \(b_i\)^th hour(including the \(b_i\)^th hour). Karl is short with algorithm, please help him to buy some players, which can make the sum of power as max.
{"url":"https://acm.sustech.edu.cn/onlinejudge/problem.php?cid=1086&pid=2","timestamp":"2024-11-09T01:25:10Z","content_type":"text/html","content_length":"10099","record_id":"<urn:uuid:b500780e-6059-4ec6-b0ed-64aabad026fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00165.warc.gz"}
Fast Small Dense Matrix Solver 03-06-2013 06:24 PM I have a general square dense matrix A (not symmetric) which is formed by A=P^TBP where B was in a compressed storage scheme and P is a rectangular matrix. The size of A ranges from 10x10 to 500x500, where B can be 150,000x150,000 and is sparse. What would be the best way to solve for x given b (system of linear equations) that result from Ax=b => x=A^-1b Right now I am just using LAPACK DGESV that is linked to MKL (so assume I am using their solver). Is there any benifit to going to a interative solver or any recomendations as to how to best solve this system of equations as fast as possible. Thanks for any comments 03-07-2013 06:15 AM 03-07-2013 08:01 AM 03-07-2013 08:51 PM 03-07-2013 08:56 PM
{"url":"https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Fast-Small-Dense-Matrix-Solver/td-p/964454","timestamp":"2024-11-03T03:40:45Z","content_type":"text/html","content_length":"237492","record_id":"<urn:uuid:ab54ac40-11bf-4f4a-90db-daf55b58189d>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00581.warc.gz"}
Statistics, Optimization & Information ComputingThe Weighted Xgamma Model: Estimation, Risk Analysis and ApplicationsReliability estimation of a multicomponent stress-strength model based on copula function under progressive first failure censoringImproved estimation of the sensitive proportion using a new randomization technique and the Horvitz–Thompson type estimatorA Condition-based Maintenance Policy in Chance SpaceStatistical modelling of cryptocurrenciesOverlap Analysis in Progressive Hybrid Censoring: A Focus on Adaptive Type-II and Lomax DistributionOptimization of Weibull Distribution Parameters with Application to Short-Term Risk Assessment and Strategic Investment Decision-MakingA New Family of Continuous Distributions with ApplicationsA dynamic bi-objective optimization model for a closed loop supply chain design under environmental policiesComplexity analysis of primal-dual interior-point methods for semidefinite optimization based on a new type of kernel functionsSecond Order Duality Involving Second Order Cone Arcwise Connected Functions and Their Generalizations in Vector Optimization Problem over ConesImage Denoising Using the Geodesics' Gramian of the Manifold Underlying Patch-SpaceDeafTech Vision: A Visual Computer's Approach to Accessible Communication through Deep Learning-Driven ASL AnalysisMulti-sensors search for lost moving targets using unrestricted effortDominant Mixed Metric Dimension of GraphFractal as Julia sets of complex functions via a new generalized viscosity approximation type iterative methodSpotting, Tracking algorithm and the remotenessInterference-aware scheme to improve distributed caching in cellular networks via D2D underlay communicationsRailway Track Faults Detection Using Ensemble Deep Transfer Learning ModelsSome Properties of Dominant Local Metric DimensionSkin Cancer Diagnosis With Multi-Level ClassificationA Novel Hybrid ANFIS-NARX and NARX- ANN models to Predict the profitability of Egyptian Insurance Companies Utilizing the Discrete Heisenberg Group and Laser Systems in RGB Image EncryptionIndonesian News Extractive Summarization using Lexrank and YAKE AlgorithmOptimization techniques of Assignment Problem using Trapezoidal Intuitionistic Fuzzy Numbers and Interval ArithmeticOn the Geometric Pattern Transformation (GPT) Properties of Unidimensional Signals http://www.iapress.org/index.php/soic <p><em><strong>Statistics, Optimization and Information Computing</strong></em>&nbsp;(SOIC) is an international refereed journal dedicated to the latest advancement of statistics, optimization and applications in information sciences.&nbsp; Topics of interest are (but not limited to):&nbsp;</p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap,&nbsp;Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series&nbsp;analysis, &nbsp;High-dimensional&nbsp; multivariate integrals,&nbsp;statistical analysis in market, business, finance,&nbsp;insurance, economic and social science, etc</li> </ul> <p>&nbsp;Optimization methods and applications</p> <ul> <li class= "show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov-chain etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming&nbsp;</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and&nbsp;machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence,&nbsp;Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data&nbsp;analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problem and imaging sciences</li> <li class="show">Genetic algorithm, Natural language processing, Expert systems, Robotics,&nbsp;Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul> International Academic Press en-US Statistics, Optimization & Information Computing 2311-004X <span>Authors who publish with this journal agree to the following terms:</span><br /><br /><ol type="a"><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ol></ol> http://www.iapress.org/index.php/soic/article/view/1677 <p>The weighted xgamma distribution, a new weighted two-parameter lifespan distribution, is introduced in this study. Theoretical characteristics of this model are deduced and thoroughly examined, including quantile function, extreme value, moments, moment generating function, cumulative entropy, and residual cumulative. Some classical estimation methods such as the the maximum likelihood, weighted least square, Anderson Darling and Cramer-von-Mises are considered. A simulation experiments are performed to compare the estimation methods. Four real-life data sets is finally examined to demonstrate the viability of this model. Four key risk indicators are defined and analyzed under the maximum likelihood method. A risk analysis for the exceedances of flood peaks is presented.</p> Majid Hashempour Morad Alizadeh Haitham Yousof Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-31 2024-08-31 12 6 1573 1600 10.19139/soic-2310-5070-1677 http://www.iapress.org/index.php/soic/article/view/1894 <p>In reliability analysis of a multicomponent stress-strength model, most studies assumed independence between stress and strength variable. However, this assumption may not be realistic. To account for dependency, copula approach can be used. Although it is important, only few studies considered this case and usually under complete study. Observing the failures for all units may be difficult due to cost and time limitation. Recently, progressive first failure censoring scheme has attracted attention in the literature due to its ability to save time and money. To the best of our knowledge, dependent multicomponent stress-strength model under progressive first failure censoring was not considered yet. In this article, we derived the likelihood function for progressive first failure censored sample under copula and multicomponent stress strength model. A simulation study is performed and a real dataset is analyzed to test the applicability of the model. Maximum likelihood estimates, asymptotic confidence interval and bootstrap confidence intervals are obtained. The results illustrated that the proposed censoring scheme under copula provides a good estimate for the reliability.</p> Ola Abuelamayem Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-29 2024-06-29 12 6 1601 1611 10.19139/soic-2310-5070-1894 http://www.iapress.org/index.php/soic/article/view/1807 <p> Randomized response techniques efficiently collect data on sensitive subjects to protect individual privacy. This paper aims to introduce a new randomizing technique in the additive scrambled model so that privacy is well preserved and the estimator's efficiency for the sensitive population proportion is improved. Also, a Horvitz–Thompson type estimator is presented as an unbiased estimator of the sensitive proportion of the population, then convergence to the normal distribution for the Horvitz–Thompson type estimator is considered by the entropy of the inclusion indicators in the Poisson sampling. Eventually, using the new additive scrambled model, the ratio of taking addictive drugs is estimated among students of the University.</p> Hadi Farokhinia Rahim Chinipardaz Gholamali Parham Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-25 2024-07-25 12 6 1612 1621 10.19139/soic-2310-5070-1807 http://www.iapress.org/index.php/soic/article/view/2018 <p style= "-qt-block-indent: 0; text-indent: 0px; -qt-user-state: 0; margin: 0px;">A condition-based maintenance policy is considered for a deteriorating system including both of preventive and corrective maintenance actions. The gamma process is used to model stochastic degradation in the probability space. Although, the cost of preventive maintenance is considered as an uncertain variable due to incomplete information, and its distribution is estimated based on the opinions of some experts using the Delphi method. <br>The optimal policy is determined by minimizing the expected cost rate function. Since in this function, there are both random variables discussing in a probability space, and an uncertain variable, which is considered in an uncertain space, we have to study the optimal policy in a chance space which is a combination of probability and uncertain spaces. The proposed methodology is explained in an illustrative example. Finally, the results are applied to a real data set.</p> Somayyeh Shahraki Dehsoukhteh Mostafa Razmkhah Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-26 2024-07-26 12 6 1622 1639 10.19139/soic-2310-5070-2018 http:// www.iapress.org/index.php/soic/article/view/1570 <p>There has been tremendous interest invested by researchers and academics in Bitcoin since it's introduction to the financial market. However, in recent years there has been an advancement of the cryptocurrency market where other cryptocurrencies such as Ethereum, Litecoin and Ripple have grown relatively quickly and could potentially challenge the dominant placement of Bitcoin. These cryprocurrencies have been utilized globally as a virtual currency for multiple transactions. The returns of cryptocurrencies are known to be volatile and have been observed to fluctuate quite a bit in recent times. This study assesses and differentiates the performance of generalized autoregressive score (GAS) models integrated with a few heavy-tailed distributions in Value-at-Risk (VaR) estimation of the four most popular cryptocurrencies' returns, i.e. Bitcoin returns, Ethereum returns, Litecoin returns and Ripple returns. This paper proposed VaR models for Bitcoin, Ethereum, Litecoin and Ripple returns, i.e. GAS models combined with the generalized hyperbolic distribution (GHD), the variance gamma (VG) distribution, the normal inverse Gaussian (NIG) distribution and the generalized lambda distribution (GLD). The Kupiec likelihood ratio test was adopted to evaluate the proposed models' adequacy and Backtesting VaR was used to select the superior set of models.</p> <p><br><br></p> Stephanie Danielle Subramoney Knowledge Chinhamu Retius Chifurira Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-03 2024-08-03 12 6 1640 1662 10.19139/soic-2310-5070-1570 http://www.iapress.org/index.php/soic/article/view/1908 <p>This article explores the adaptive type-II progressive hybrid censoring scheme, introduced by Ng et al. (2009), which is used to make inferences about three measures of overlap: Matusita's measure ($\rho $), Morisita's measure ($\lambda $), and Weitzman's measure ($\Delta $) for two Lomax distributions with different parameters. The article derives the bias and variance of these overlap measures' estimators. If sample sizes are limited, the precision or bias of these estimators is difficult to determine because there are no closed-form expressions for their variances and exact sampling distributions, so Monte Carlo simulations are used. Also, confidence intervals for these measures are constructed using both the bootstrap method and Taylor approximation.</p> <p>To demonstrate the practical significance of the proposed estimators, an illustrative application is provided by analyzing real data.</p> Amal Helu Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-22 2024-08-22 12 6 1663 1683 10.19139/ soic-2310-5070-1908 http://www.iapress.org/index.php/soic/article/view/2099 <p>Accurate parameter estimation is fundamental in financial modeling, especially in investment analysis, where the Modified Internal Rate of Return (MIRR) plays a key role in evaluating investment performance. This study aims to enhance risk and return predictions in Sharia-compliant property investments by exploring the efficacy of various optimization techniques for estimating Weibull distribution parameters within the MIRR framework. To achieve this, we employed a comparative analysis of optimization methods, including Simulated Annealing (SA), Differential Evolution (DE), Genetic Algorithm (GA), and traditional Numerical Methods (NM). Performance was assessed through metrics such as Root Mean Squared Error (RMSE), Akaike Information Criterion (AIC), R-squared (R<sup>2</sup>) values, and Kolmogorov-Smirnov (KS) statistics. The results reveal that metaheuristic algorithms (SA, DE, GA) significantly outperform traditional numerical methods in terms of parameter estimation accuracy. Specifically, SA achieved the lowest RMSE of 0.042, with a Weibull shape parameter estimate of 1.254 and variance of 0.004, followed closely by DE with an RMSE of 0.048, and GA with 0.046. In contrast, NM exhibited a higher RMSE of 0.067, with a shape parameter estimate of 1.310 and a variance of 0.006. The AIC values for metaheuristic methods ranged from 14.25 to 14.68, compared to 15.12 for NM, and R<sup>2</sup> values for metaheuristic methods ranged from 0.932 to 0.945, compared to 0.910 for NM. KS statistics further underscored the superior model fit of metaheuristics, with SA showing the lowest KS value of 0.045. The study underscores the critical role of metaheuristic optimization in improving the accuracy of parameter estimation based on MIRR models. This enhancement provides more reliable risk assessments and returns predictions, offering valuable insights for informed investment decision-making and contributing to optimized financial outcomes in the property sector.</p> Hamza Abubakar Masnita Misiran Amani A. Idris Sayed Abubakar Balarabe Karaye Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-23 2024-08-23 12 6 1684 1709 10.19139/soic-2310-5070-2099 http://www.iapress.org/index.php/soic/article/view/2144 <p>This article introduces a novel set of optimizing probability distributions known as the Survival Power-G (SP-G) family, which employs a specific approach to introduce an additional parameter with the survival function of the original distributions. The utilization of this family enhances the modelling capabilities of diverse existing continuous distributions. By applying this approach to the single-parameter exponential distribution, a new two-parameter Survival Power-Exponential (SP-E) distribution is generated. The statistical characteristics of this fresh distribution and the maximum likelihood estimator are established, and Monte Carlo simulation is utilized to explore the efficiency of the maximum likelihood estimator of the two parameters under varying sample sizes. Subsequently, the new distribution is employed in the analysis of three distinct sets of real data. Through comparison with alternative distributions on these datasets, it is demonstrated that the new distribution outperforms the other distributions.</p> Hazim Ghdhaib Kalt Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-23 2024-08-23 12 6 1710 1724 10.19139/soic-2310-5070-2144 http:// www.iapress.org/index.php/soic/article/view/2112 <p>Given the increasing focus on sustainability and environmental policy constraints, companies are required to redesign their supply chains. This paper explores the optimization of a closed loop supply chain (CLSC) network under both economic and environmental considerations. To achieve this, a bi-objective mixed integer linear model was developed. The proposed model identifies the optimal selection of CLSC facilities and manages both forward and reverse flows between them. The economic objective is reached by minimizing the total CLSC costs, while the environmental objective is satisfied by reducing CO2 emissions throughout the network. Products can be returned throughout their entire life cycle, which is why our model incorporates a dynamic aspect by considering product life cycle phases as time periods for the decision horizon. The model was tested through numerical experiments using a meta-heuristic approach based on the non-dominated sorting genetic algorithm NSGA-II. This algorithm produces a set of Pareto-optimal solutions that balance both objectives effectively. The results showed good performance in terms of computational time and optimization. Pareto solutions offered various options for managers and decision makers aiming for a sustainable closed loop supply chain design.</p> Oulfa Labbi Abdeslam Ahmadi Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-07 2024-08-07 12 6 1725 1744 10.19139/soic-2310-5070-2112 http://www.iapress.org/index.php/soic/article/ view/1927 <p>Kernel functions are essential for designing and analyzing interior-point methods (IPMs). They are used to determine search directions and reduce the computational complexity of the interior point method. Currently, IPM based on kernel functions is one of the most effective methods for solving LO [1,20], second-order cone optimization (SOCO) [2], and symmetric optimization (SO) and is a very active research area in mathematical<br>programming. This paper presents a large-update primal-dual IPM for SDO based on a new bi-parameterized hyperbolic kernel function. Then we proved that the proposed large-update IPM has the same complexity bound as the best-known IPMs for solving these problems. Taking advantage of the favorable characteristics of the kernel function, we can deduce that the iteration bound for the large update method is $\mathcal{O}\left( \sqrt{n}\log n\log\dfrac{n}{\varepsilon }\right) $ when a takes a special value utilizing the favorable properties of the kernel function. These theoretical results play an essential role in the design and analysis of IPMs for CQSCO [7] and the Cartesian$\ P_{\ast }\left( \kappa \right) $-SCLCP [8]. The proximity function has never been used. In order to validate the efficacy of our algorithm and verify the effectiveness of our algorithm, examples are<br>given to illustrate the applicability of our main results, and we compare our numerical results with some alternatives presented in the literature.</p> Bachir Bounibane Randa Chalekh Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-09-26 2024-09-26 12 6 1745 1761 10.19139/soic-2310-5070-1927 http://www.iapress.org/index.php/soic/article/view/2016 <p>In this paper, we introduce second order cone arcwise connected function and its generalizations. Further we study the interrelations among these functions also. Mond Weir type second order dual is formulated and duality results are proved using these functions. </p> Mamta Chaudhary Vani Sharma Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-09-27 2024-09-27 12 6 1762 1774 10.19139/soic-2310-5070-2016 http://www.iapress.org/index.php /soic/article/view/2124 <p>With the proliferation of sophisticated cameras in modern society, the demand for accurate and visually pleasing images is increasing. However, the quality of an image captured by a camera may be degraded by noise. Thus, some processing of images is required to filter out the noise without losing vital image features. Even though the current literature offers a variety of denoising methods, the fidelity and efficacy of their denoising are sometimes uncertain. Thus, here we propose a novel and computationally efficient image denoising method that is capable of producing accurate images. To preserve image smoothness, this method inputs patches partitioned from the image rather than pixels. Then, it performs denoising on the manifold underlying the patch-space rather than that in the image domain to better preserve the features across the whole image. We validate the performance of this method against benchmark image processing methods.</p> Kelum Gajamannage Randy Paffenroth Anura Jayasumana Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-19 2024-08-19 12 6 1775 1794 10.19139/soic-2310-5070-2124 http:// www.iapress.org/index.php/soic/article/view/2020 <p>Sign language is commonly used by people with hearing and speech impairments, making it difficult for those without such disabilities to understand. However, sign language is not limited to communication within the deaf community alone. It has been officially recognized in numerous countries and is increasingly being offered as a second language option in educational institutions. In addition, sign language has shown its usefulness in various professional sectors, including interpreting, education, and healthcare, by facilitating communication between people with and without hearing impairments. Advanced technologies, such as computer vision and machine learning algorithms, are used to interpret and translate sign language into spoken or written forms. These technologies aim to promote inclusivity and provide equal opportunities for people with hearing impairments in different domains, such as education, employment, and social interactions. In this paper, we implement a DeafTech Vision (DTV-CNN) architecture based on the convolutional neural network to recognize American Sign Language (ASL) gestures using deep learning techniques. Our main objective is to develop a robust ASL sign classification model to enhance human-computer interaction and assist individuals with hearing impairments. Through extensive evaluation, our model consistently outperformed baseline methods in terms of precision. It achieved an outstanding accuracy rate of 99.87% on the ASL alphabet test dataset and 99.94% on the ASL digit dataset, significantly exceeding previous research, which reported an accuracy of 90.00%. We also illustrated the model's learning trends and convergence points using loss and error graphs. These results highlight the DTV-CNN's effectiveness and capability in distinguishing complex ASL gestures.</p> Shafayat Bin Shabbir Mugdha Hridoy Das Mahtab Uddin Md. Easin Arafat Md. Mahfujul Islam Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-13 2024-06-13 12 6 1795 1811 10.19139/soic-2310-5070-2020 http://www.iapress.org/index.php/soic/article/view/1975 <p>This paper addresses the problem of searching for multiple targets using multiple sensors, where targets move randomly between a limited number of states at each time interval. Due to the potential value or danger of the targets, multiple sensors are employed to detect them as quickly as possible within a fixed number of search intervals. Each search interval has an available search effort and an exponential detection function is assumed. The goal is to develop an optimal search strategy that distributes the search effort across cells in each time interval and calculates the probability of not detecting the targets throughout the entire search period. The optimal search strategy that minimizes this probability is determined, the stability of the search is analyzed, and some special cases are considered. Additionally, we introduce the $M$-cells algorithm.</p> Abd-Elmoneim A. M. Teamah Mohamed A. Kassem Elham Yusuf Elebiary Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-17 2024-06-17 12 6 1812 1825 10.19139/soic-2310-5070-1975 http://www.iapress.org/index.php/soic/article/view/1925 <p>For $k-$ordered set $W=\{s_1, s_2,\dots, s_k \}$ of vertex set $G$, the representation of a vertex or edge $a$ of $G$ with respect to $W$ is $r(a|W)=(d(a,s_1), d(a,s_2),\dots, d(a,s_k))$ where $a$ is vertex so that $d(a,s_i)$ is a distance between of the vertex $v$ and the vertices in $W$ and $a=uv$ is edge so that $d(a,s_i)=min\{d(u,s_i),d(v,s_i)\}$. The set $W$ is a mixed resolving set of $G$ if $r(a|W)\neq r(b|W)$ for every pair $a,b$ of distinct vertices or edge of $G$. The minimum mixed resolving set $W$ is a mixed basis of $G$. If $G$ has a mixed basis, then its cardinality is called mixed metric dimension, denoted by $dim_m(G)$. A set $W$ of vertices in $G$ is a dominating set for $G$ if every vertex of $G$ that is not in $W$ is adjacent to some vertex of $W$. The minimum cardinality of dominating set is domination number , denoted by $\gamma(G)$. A vertex set of some vertices in $G$ that is both mixed resolving and dominating set is a mixed resolving dominating set. The minimum cardinality of mixed resolving dominating set is called dominant mixed metric dimension, denoted by $\gamma_{mr}(G)$. In our paper, we will investigated the establish sharp bounds of the dominant mixed metric dimension of $G$ and determine the exact value of some family graphs.</p> Ridho Alfarisi Sharifah Kartini Said Husain Liliek Susilowati Arika Indah Kristiana Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-21 2024-06-21 12 6 1826 1833 10.19139/soic-2310-5070-1925 http://www.iapress.org/index.php/soic/article/view/2089 <p>In this article, we study and explore novel variants of Julia set patterns that are linked to the complex exponential function $W(z)=pe^{z^n}+qz+r$, and complex cosine function $T(z)=\cos({z^n})+dz+c$, where $n\geq 2$ and $c,d,p,q,r\in \mathbb{C}$ by employing a generalized viscosity approximation type iterative method introduced by Nandal et al. (Iteration process for fixed point problems and zero of maximal monotone operators, Symmetry, 2019) to visualize these sets. We utilize a generalized viscosity approximation type iterative method to derive an escape criterion for visualizing Julia sets. This is achieved by generalizing the existing algorithms, which led to visualization of beautiful fractals as Julia sets. Additionally, we present graphical illustrations of Julia sets to demonstrate their dependence on the iteration parameters. Our study concludes with an analysis of variations in the images and the influence of parameters on the color and appearance of the fractal patterns. Finally, we observe intriguing behaviors of Julia sets with fixed input parameters and varying values of $n$ via proposed algorithms.</p> Iqbal Ahmad Haider Rizvi Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-12 2024-07-12 12 6 1834 1853 10.19139/soic-2310-5070-2089 http://www.iapress.org/index.php/soic/article/view/1893 <p>On this paper we present a solution to detect and know if a point M is inside a polygon ( A(k), k∈{1,...,n} ) or outside. We are going to give a very simple, practical and explicit method of the triangulation of a convex polygon (convex polyhedron) after a definition and the concretization of the order relation of the points of a polygon in a plane following a well-chosen orientation in before and an arbitrary point of the vertices of the polygon. In the case where the point M is outside the polygon, a simple optimization method will be applied to determine the distance between the point M and the polygon A(1), ..., A(n) and the point P of the border of the polygon closest to M ”The neighboring Point”.</p> Aziz Arbai Abounaima Mohammed Chaouki Amina Bellekbir Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-22 2024-07-22 12 6 1854 1872 10.19139/ soic-2310-5070-1893 http://www.iapress.org/index.php/soic/article/view/2094 <p>Underlay Device-to-Device (D2D) communications is a promising networking technology intended to boost the<br>spectral efficiency of future cellular networks, including 5G and beyond. When used for distributed caching, where cellular<br>devices store popular files for direct exchange later with other devices away from the cellular infrastructure, the technology<br>bears more fruits such as enhancing throughput, reducing latency and offloading the infrastructure. However, due to their<br>non-orthogonality, underlay D2D communications can result in excessive interference to the cellular user. To avoid this<br>problem, the present article proposes a scheme with two interference-reduction elements: a guard zone intended to allow<br>D2D communications only for devices far enough from the base station (BS), and a pairing strategy intended to allow D2D<br>pairing for only devices that are close enough to each other. We assess the performance of the scheme using a stochastic<br>geometry (SG) model, through which we characterize the coverage probability of the cellular user. This probability is a<br>principal indicator of maintaining the quality of service (QoS) of the cellular user and of enabling successful caching for the<br>D2D user. We introduce in the process a novel empirical technique which, given a desired level of interference, identifies<br>an upper bound for the distance between two devices to be paired without exceeding that level. We finally validate the<br> analytical findings obtained from the model by intensive simulation to ensure the correctness of both the model and the<br>scheme performance. A salient feature of the scheme is that it requires for its implementation no software or hardware<br>modification in the device</p> Amira Eleff Mohamed Mousa Hamed Nassar Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-24 2024-07-24 12 6 1873 1885 10.19139/soic-2310-5070-2094 http://www.iapress.org/index.php/soic/article/view/1994 <p>Railway track fault detection is an essential task for ensuring the safety and reliability&nbsp;&nbsp; of railway systems, particularly in the summer and rainy seasons when train wheels may slide due to fractures in the track or corrosion may cause track fractures. In this study, we propose a novel approach for the automated detection of railway track faults using deep transfer learning models. The proposed method combines image processing techniques and the training of three pretrained models: InceptionV3, ResNet50V2, and VGG16, on a dataset of railway track images. We evaluated the performance of our proposed method by measuring its accuracy on a test set of railway track images. The individual training accuracies for InceptionV3, ResNet50V2, and VGG16 were 94.30%, 96.79%, and 94.64%, respectively. We then combined these models using an ensemble approach, which achieved an impressive accuracy of 98.57% on the test set. Our results demonstrate the effectiveness of using deep ensemble transfer learning for railway track fault detection. Moreover, our proposed method can be used as a valuable tool for railway track maintenance and monitoring, which can ultimately lead to the improvement of the safety and reliability of railway systems.</p> <p>our proposed approach for railway track fault detection using ensemble deep transfer learning models shows promising results, indicating that it has great potential for detecting track faults accurately and efficiently. The proposed method can be used in various railway systems worldwide, ultimately leading to improved safety and reliability for passengers and cargo transportation.</p> Ali almadani Vivek Mahale Ashok T. Gaikwad Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-07 2024-06-07 12 6 1886 1911 10.19139/soic-2310-5070-1994 http://www.iapress.org/index.php/soic/article/view/2062 <p>Let $G$ be a connected graph with vertex set $V$. Let $W_l$ be an ordered subset defined by $W_l=\{w_1,w_2,\dots,w_n\}\subseteq V(G)$. Then $W_l$ is said to be a dominant local resolving set of $G$ if $W_l$ is a local resolving set as well as a dominating set of $G$. A dominant local resolving set of $G$ with minimum cardinality is called the dominant local basis of $G$. The cardinality of the dominant local basis of $G$ is called the dominant local metric dimension of $G$ and is denoted by $Ddim_l(G)$. We characterize the dominant local metric dimension for any graph $G$ and for some commonly known graphs in terms of their domination number to get some properties of dominant local metric dimension.</p> Reni Umilasari Liliek Susilowati Slamin AFadekemi Janet Osaye Ilham Saifudin Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-07-31 2024-07-31 12 6 1912 1920 10.19139/soic-2310-5070-2062 http://www.iapress.org/index.php/soic/article/view/2090 <p>Skin cancer arises from the uncontrolled proliferation of abnormal skin cells, primarily triggered by exposure to the harmful ultraviolet (UV) rays of the sun and the utilization of UV tanning beds. This condition poses a heightened risk due to its potential to progress into blood cancer and lead to rapid fatality. Extensive research efforts have been dedicated to advancing the treatment of this perilous ailment. This paper presents a system designed for the examination and diagnosis of pigmented skin lesions and melanoma.</p> <p>The system incorporates a supervised classification algorithm that combines Convolutional Neural Network (CNN) and Deep Neural Network (DNN) architectures with feature extraction techniques. It operates in two distinct stages: the initial stage classifies images into two categories, namely benign or malignant, while the subsequent stage further categorizes the images into one of three classes: basal cell carcinomas, squamous cell carcinomas, or melanoma. Consequently, the comprehensive system addresses four classes, namely benign, basal cell carcinomas, squamous cell carcinomas, and melanoma.</p> <p>This work contributes to the system's design in three significant ways. Firstly, it implements multiple iterations to select the most optimal images, resulting in the highest classification accuracy. Secondly, it employs various statistical methods to identify the most pertinent features, thereby enhancing the classifier's accuracy by focusing on the most informative features for the classification task. Lastly, a two-stage classification approach is implemented, employing two distinct classifiers at different levels within the overall system. Despite the inherent complexity of the real-world problem, the overall system attains a commendable level of classification accuracy.</p> <p>Following rigorous experimentation, the study identifies the top three models. Each approach culminates in a classifier for each stage. The first approach, utilizing a deep learning classifier, achieves an accuracy of 81.82% in the initial cancer discrimination stage and 58.33% in the subsequent stage. The second approach, employing a machine learning classifier, attains an accuracy of 74.63% in the first stage and 64.41% in the second stage. The third approach, utilizing a linear regression classifier, achieves an accuracy of 98% in the first stage and 90% in the second stage. These results underscore the significance of feature selection in influencing model accuracy and suggest the potential for further optimization.</p> Rania Elbadawy BenBella S. Tawfik Mohamed Amal Zeidan Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-02 2024-08-02 12 6 1921 1933 10.19139/soic-2310-5070-2090 http://www.iapress.org/index.php/soic/article/view/2104 <p>The use of fuzzy logic models with machine learning (ML) models have become common in many areas, especially insurance field. This study aims to compare between non-hybrid models such as artificial neural network (ANN) model, adaptive neural fuzzy inference system (ANFIS) model, nonlinear auto-regressive external input (NARX) model, and the following hybrid models (ANFIS-NARX) and (NARX-ANN) to predict the profits of the insurance activity which represent the important indicator of the good performance of Egypt's 39 insurance companies in the period from 1st January 2009 to 31 December 2022 , monthly .This prediction based on the following factors (net premiums, reinsurance commissions, net income from earmarked investments, other direct income, net compensation, production cost commissions and general and administrative expenses) that help decision makers to make appropriate decisions . The results found that the (ANN) model is given good results compared with the following models (ANFIS), (NARX), hybrid (ANFIS-NARX) and (NARX-ANN) models according to the following prediction accuracy measures (RMSE, MAPE, MAE and Theil inequality). The explanatory ability (R<sup>2</sup>) was appeared (0.79, 0.61) respectively for training and testing phases in persons insurance companies. The explanatory ability also was appeared (0.83, 0.68) respectively in property insurance companies.</p> Hanaa Hussein Ali Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-08 2024-08-08 12 6 1934 1955 10.19139/soic-2310-5070-2104 http://www.iapress.org/index.php/soic/article/view/1744 <p>This study signifies the endpoint of thorough cryptographic experimentation, leading to the creation of an innovative color image encryption scheme. It embodies a fusion of mathematical concepts rooted in both group theory and chaos theory.<br>The novel encryption procedure entails the creation of cube faces, to depict the relative positions of pixels within a given stream, thereby generating six distinct channels. Within our algorithm, each monochromatic layer of an image is independently encrypted using digraph encryption. This involves a technique of rotating the four faces, followed by another rotation to encrypt the second digraph.<br>Subsequently, matrices derived from Heisenberg theory are integrated with the monochromatic layer from the preceding step to fine-tune the image's parameters and introduce blur. Impressively, our approach has yielded promising outcomes across various images and evaluation criteria, demonstrating resilience against differential attacks and statistical analyses. Furthermore, comparative evaluations have highlighted the superiority of our method over existing algorithms.</p> Fouzia Elazzaby Khalid Sabour Nabil EL AKKAD Bouchta Zouhairi Samir Kabbaj Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-10-15 2024-10-15 12 6 1956 1972 10.19139/soic-2310-5070-1744 http://www.iapress.org/index.php/soic/article/view/1976 <p>The surge in global technological advancements has led to an unprecedented volume of information sharing<br>across diverse platforms. This information, easily accessible through browsers, has created an overload, making it challenging for individuals to efficiently extract essential content. In response, this paper proposes a hybrid Automatic Text Summarization (ATS) method, combining LexRank and YAKE algorithms. LexRank determines sentence scores, while YAKE calculates individual word scores, collectively enhancing summarization accuracy. Leveraging an unsupervised learning approach, the hybrid model demonstrates a 2% improvement over its base model. To validate the effectiveness of the proposed method, the paper utilizes 5000 Indonesian news articles from the Indosum dataset. Ground-truth summaries are employed, with the objective of condensing each article to 30% of its content. The algorithmic approach and experimental results are presented, offering a promising solution to information overload. Notably, the results reveal a two percent improvement in the Rouge-1 and Rouge-2 scores, along with a one percent enhancement in the Rouge-L score. These findings underscore the potential of incorporating a keyword score to enhance the overall accuracy of the summaries generated by LexRank. Despite the absence of a machine learning model in this experiment, the unsupervised learning and heuristic approach suggest broader applications on a global scale. A comparative analysis with other state-of-the-art text summarization methods or hybrid approaches will be essential to gauge its overall effectiveness.</p> Julyanto Wijaya Abba Suganda Girsang Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-06-07 2024-06-07 12 6 1973 1983 10.19139/soic-2310-5070-1976 http://www.iapress.org/index.php/soic/article/view/1842 <p>This paper discusses the Assignment problem to optimize the assigning of jobs to workers based on their talents and efficiency. In general, scheduling jobs plays a significant role in manufacturing and is advantageous in real world applications as we face more uncertainty and ambiguity in assigning jobs. The Intuitionistic Fuzzy Assignment problem (IFAP) is employed in circumstances when decision-makers have to deal with uncertainty. The domains are Trapezoidal Intuitionistic Fuzzy Numbers (TrIFNs) and the techniques used are Hungarian Method (HM), Brute Force Method (BFM), and Greedy Method (GM). The suggested model's performance is compared with the existing approach with the help of interval arithmetic operations. Allocating work to the individual is illustrated numerically, the optimal solution of minimizing cost is obtained using R programming and the results of comparative analysis are shown diagrammatically that help viewers to easily understand and generate results from comparisons.</p> R. Sanjana G. Ramesh Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-27 2024-08-27 12 6 1984 1999 10.19139/soic-2310-5070-1842 http://www.iapress.org/index.php/soic/article/view/1924 <p><span style="vertical-align: inherit;">The Geometric Pattern Transformation (GPT) has several advantages of use concerning contemporary algorithms that have been duly studied in previous research. Regarding some of its properties, four different but complementary aspects of the GPT are presented in this work. After a brief review of the GPT concept, how tied data are manifested in data sets is shown, to obtain a symmetric representation of the GPT, a linear transformation is performed that regularizes the geometric representation of the GPT and the theoretical relationship between the GPT and the phase-state representation of 1D signals is analyzed and formalized, then the study of the forbidden pattern is easily revealed, obtaining a strong relationship with the stable and unstable fixed points of the logistic equation. Finally, the characterization of colored noises and the application in real world signals taken through experimental procedures is analyzed. With these results, in this work is proposed an advance in the potential applications of the GPT in an integral way in the processing and analysis of data series.</span></p> Cristian Bonini Marcos Maillot Dino Otero Andrea Rey Ariel Amadio Walter Legnani Copyright (c) 2024 Statistics, Optimization & Information Computing 2024-08-25 2024-08-25 12 6 2000 2021 10.19139/soic-2310-5070-1924
{"url":"http://www.iapress.org/index.php/soic/gateway/plugin/WebFeedGatewayPlugin/rss","timestamp":"2024-11-02T04:46:21Z","content_type":"application/rdf+xml","content_length":"65837","record_id":"<urn:uuid:04bd895a-64dd-4753-ad7c-91925d209c10>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00685.warc.gz"}
sklearn.metrics.hamming_loss(y_true, y_pred, *, sample_weight=None)[source]¶ Compute the average Hamming loss. The Hamming loss is the fraction of labels that are incorrectly predicted. Read more in the User Guide. y_true1d array-like, or label indicator array / sparse matrix Ground truth (correct) labels. y_pred1d array-like, or label indicator array / sparse matrix Predicted labels, as returned by a classifier. sample_weightarray-like of shape (n_samples,), default=None Sample weights. lossfloat or int Return the average Hamming loss between element of y_true and y_pred. See also Compute the accuracy score. By default, the function will return the fraction of correct predictions divided by the total number of predictions. Compute the Jaccard similarity coefficient score. Compute the Zero-one classification loss. By default, the function will return the percentage of imperfectly predicted subsets. In multiclass classification, the Hamming loss corresponds to the Hamming distance between y_true and y_pred which is equivalent to the subset zero_one_loss function, when normalize parameter is set to True. In multilabel classification, the Hamming loss is different from the subset zero-one loss. The zero-one loss considers the entire set of labels for a given sample incorrect if it does not entirely match the true set of labels. Hamming loss is more forgiving in that it penalizes only the individual labels. The Hamming loss is upperbounded by the subset zero-one loss, when normalize parameter is set to True. It is always between 0 and 1, lower being better. Grigorios Tsoumakas, Ioannis Katakis. Multi-Label Classification: An Overview. International Journal of Data Warehousing & Mining, 3(3), 1-13, July-September 2007. >>> from sklearn.metrics import hamming_loss >>> y_pred = [1, 2, 3, 4] >>> y_true = [2, 2, 3, 4] >>> hamming_loss(y_true, y_pred) In the multilabel case with binary label indicators: >>> import numpy as np >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2))) Examples using sklearn.metrics.hamming_loss¶ Model Complexity Influence
{"url":"https://scikit-learn.org/1.3/modules/generated/sklearn.metrics.hamming_loss.html","timestamp":"2024-11-11T03:30:47Z","content_type":"text/html","content_length":"22822","record_id":"<urn:uuid:4686cb83-d877-450c-b03c-f0cbe1fdc02d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00449.warc.gz"}
Approximating the influence of monotone Boolean functions in O(√n) query complexity The Total Influence (Average Sensitivity) of a discrete function is one of its fundamental measures. We study the problem of approximating the total influence of a monotone Boolean function, which we denote by I[f]. We present a randomized algorithm that approximates the influence of such functions to within a multiplicative factor of (1 ± ∈) by performing O (equation) queries. We also prove a lower bound of Ω (equation) on the query complexity of any constant factor approximation algorithm for this problem (which holds for I[f] = Ω(1)), hence showing that our algorithm is almost optimal in terms of its dependence on n. For general functions, we give a lower bound of Ω ([n/I[f]]), which matches the complexity of a simple sampling algorithm. • Influence of a Boolean function • Sublinear query approximation algorithms • Symmetric chains Dive into the research topics of 'Approximating the influence of monotone Boolean functions in O(√n) query complexity'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/approximating-the-influence-of-monotone-boolean-functions-in-on-q-10","timestamp":"2024-11-12T09:10:58Z","content_type":"text/html","content_length":"48548","record_id":"<urn:uuid:0f3633dd-1a76-4af5-80ee-5bf5d5a30cca>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00113.warc.gz"}
Booster's Lab - It's Pretty Good Now Mar 2, 2013 at 4:10 AM Join Date: Jul 15, 2007 Location: Australia Posts: 6224 Age: 39 Pronouns: he/him Would it be possible for BL to allow the exporting of Vanilla CS (PC) format mapdata? It's difficult for me to explain to someone else how to do this via a hex editor when mapdata can be all over the place depending on what editor was used, so a way to streamline this process would be very much appreciated. Mar 2, 2013 at 10:42 AM Lvl 1 Forum Moderator "Life begins and ends with Nu." Join Date: May 28, 2008 Location: PMMM MMO Posts: 3713 Age: 32 Hey noxid, you should enable zooming with the scroll wheel (even if not as default, as an optional setting). Also you should make said zooming zoom in/out centered about the mouse cursor. Mar 2, 2013 at 11:05 PM Join Date: Jul 15, 2007 Location: Australia Posts: 6224 Age: 39 Pronouns: he/him GIRakaCHEEZER said: Hey noxid, you should enable zooming with the scroll wheel (even if not as default, as an optional setting). Also you should make said zooming zoom in/out centered about the mouse cursor. CTRL+MOUSEWHEEL is better. I would naturally expect CTRL+MOUSEWHEEL to zoom and for MOUSEWHEEL by itself to scroll, as the natural behaviour of the mousewheel is just that, to scroll. I know for a fact that all major desktop browsers support CTRL+MOUSEWHEEL for zooming. On that subject is it possible to make it so that holding down on the mouse wheel does not count as a left click? Mar 7, 2013 at 9:38 PM "In Soviet Russia, graves keep YOU!" Join Date: Oct 20, 2010 Location: Within the hearts of all Posts: 691 Age: 104 Pronouns: he/him Is this editor stable enough to use seriously at this point? I wanted to have this confirmed before I try it out. Mar 7, 2013 at 9:57 PM Professional Whatever "Life begins and ends with Nu." Join Date: Jan 13, 2011 Location: Lasagna Posts: 4481 Pronouns: she/her There's some glitches but it won't probably kill your game probably. I use it with my mods. Mar 8, 2013 at 1:16 AM "In Soviet Russia, graves keep YOU!" Join Date: Oct 20, 2010 Location: Within the hearts of all Posts: 691 Age: 104 Pronouns: he/him Cool, I'll check it out then. Mar 10, 2013 at 10:35 PM Senior Member "I, Ikachan. The Life and Documentary of the OrigiNAL SQuiD." Join Date: Oct 29, 2012 Location: England Posts: 178 Age: 26 Pronouns: he/him Cool, I'm definitely going to use Booster's Lab when porting my mod to CS+. I figured out how to get the mods to appear in the CS+ start menu under the challenges tab, the only problem was actually putting my mod there. Thanks, Noixd, once BL leaves Beta I will definitely make a permanent switch from CE to BL. Mar 11, 2013 at 5:17 AM Join Date: Aug 28, 2009 Location: The Purple Zone Posts: 5998 Pronouns: he/him ok control-scroll zoom will be implemented in the next release. At the moment it doesn't do a super job of focusing on the mouse but if you're lucky I'll be able to fix that. I swear there are gonna be more ways to zoom in and out than anything else, next you'll have me make it controllable by wiggling your ass in front of a webcam or something. Export mapdata will produce a 2nd file "csmap.bin" which contains vanilla-format mapdata, alongside the currently produced "stage.tbl" which contains CS+ format (hopefully). I probably won't update it again until a 0.3.0.0 or greater release which will contain an at least somewhat functional hack dialog. Unsure how long this will take, depends on my workload etc. etc. note to self making new maps doesn't detect name collisions (it should) Mar 11, 2013 at 6:20 AM Join Date: Jul 15, 2007 Location: Australia Posts: 6224 Age: 39 Pronouns: he/him Noxid said: I swear there are gonna be more ways to zoom in and out than anything else, next you'll have me make it controllable by wiggling your ass in front of a webcam or something. Nonsense. Ass wiggling should be used for scrolling the map pane. Mar 14, 2013 at 5:16 AM Join Date: Aug 28, 2009 Location: The Purple Zone Posts: 5998 Pronouns: he/him GIR said I should post this code snippet so you all can see what an amazing programmer I am and how I stop at nothing to ensure that you have the most efficient, stable, well-written software money can't buy. public void mouseReleased(MouseEvent eve) { int mapX = dataHolder.getMapX(); int mapY = dataHolder.getMapY(); Point mousePoint = eve.getPoint(); int viewScale = (int) (EditorApp.DEFAULT_TILE_SIZE * EditorApp.mapScale); int cursorX, cursorY; //do nothing if we are editing the line layer if (parent.getActiveLayer() == 4) if (eve.isPopupTrigger()) { //this needs to be copied to isRelease cursorX = mousePoint.x / viewScale; cursorY = mousePoint.y / viewScale; new TilescriptAction(mapX, mapY, tilePen.getI(0, 0), popup_tilescript.setText("generate CMP for (" + cursorX + "," + cursorY + ")"); popup_tra.setAction(new TraScriptAction(cursorX, cursorY, dataHolder.getMapNumber())); popup_tra.setText("make <TRA to this spot"); popup.show(eve.getComponent(), eve.getX(), eve.getY()); if ((eve.getModifiersEx() & MouseEvent.BUTTON3_DOWN_MASK) != 0) return; //do nothing if right mouse button if (dragging) { byte[][] oldDat; byte[][] newDat; Rectangle newCursorRect; int w, h; //Graphics2D g2d = (Graphics2D)((MapPane)eve.getSource()).getGraphics(); switch (parent.getDrawMode()) { case EditorApp.DRAWMODE_RECT: if (selW < 0) cursorX = baseX + selW + 1; cursorX = baseX; //absolutize to make calculations easy //(it no longer needs to be updated until next click) selW = Math.abs(selW); if (selH < 0) cursorY = baseY + selH + 1; cursorY = baseY; //see above selH = Math.abs(selH); int tmpx = tilePen.dx; int tmpY = tilePen.dy; tilePen.dx = 0; tilePen.dy = 0; //capture the previous state oldDat = new byte[selW][selH]; for (int dx = 0; dx < selW; dx++) { for (int dy = 0; dy < selH; dy++) { oldDat[dx][dy] = dataHolder.getTileB(cursorX + dx, cursorY + dy, parent.getActiveLayer()); //draw over it for (int dx = 0; dx < selW; dx += tilePen.getW()) { for (int dy = 0; dy < selH; dy += tilePen.getH()) { drawPen(cursorX + dx, cursorY + dy, cursorX + selW, cursorY + selH); //capture the new state newDat = new byte[selW][selH]; for (int dx = 0; dx < selW; dx++) { for (int dy = 0; dy < selH; dy++) { newDat[dx][dy] = dataHolder.getTileB(cursorX + dx, cursorY + dy, parent.getActiveLayer()); redrawTiles(cursorX, cursorY, selW, selH); tilePen.dx = tmpx; tilePen.dy = tmpY; selW = 1; selH = 1; dataHolder.addEdit(dataHolder.new MapEdit(cursorX, cursorY, oldDat, newDat, parent.getActiveLayer())); case EditorApp.DRAWMODE_COPY: if (selW < 0) cursorX = baseX + selW + 1; cursorX = baseX; //absolutize to make calculations easy //(it no longer needs to be updated until next click) selW = Math.abs(selW); if (selH < 0) cursorY = baseY + selH + 1; cursorY = baseY; //see above selH = Math.abs(selH); //create a pen tilePen = new TileBuffer(); tilePen.dx = baseX - cursorX; tilePen.dy = baseY - cursorY; tilePen.data = new byte[selW][selH]; for (int x = 0; x < selW; x++) { for (int y = 0; y < selH; y++) { tilePen.data[x][y] = dataHolder.getTileB(cursorX + x, cursorY + y, parent.getActiveLayer()); redrawTiles(cursorX, cursorY, selW, selH); selW = 1; selH = 1; case EditorApp.DRAWMODE_FILL: int currentX = mousePoint.x / viewScale; int currentY = mousePoint.y / viewScale; if (currentX >= mapX) currentX = mapX-1; if (currentX < 0) currentX = 0; if (currentY >= mapY) currentY = mapY-1; if (currentY < 0) currentY = 0; Rectangle tracker = new Rectangle(currentX, currentY, 0, 0); dataHolder.getTileB(currentX, currentY, parent.getActiveLayer()), tracker.width += 1; tracker.height += 1; redrawTiles(tracker.x, tracker.y, tracker.x + tracker.width, tracker.height + tracker.y); //oldCursorRect = new Rectangle(lastX - tilePen.dx, lastY - tilePen.dy, tilePen.getW(), tilePen.getH()); newCursorRect = new Rectangle(currentX - tilePen.dx, currentY - tilePen.dy, tilePen.getW(), tilePen.getH()); //put the cursor back oldDat = new byte[tracker.width][tracker.height]; for (int dx = 0; dx < tracker.width; dx++) { for (int dy = 0; dy < tracker.height; dy++) { oldDat[dx][dy] = prevLayerState[tracker.x + dx][tracker.y + dy]; //capture the new state newDat = new byte[tracker.width][tracker.height]; for (int dx = 0; dx < tracker.width; dx++) { for (int dy = 0; dy < tracker.height; dy++) { newDat[dx][dy] = dataHolder.getTileB(tracker.x + dx, tracker.y + dy, parent.getActiveLayer()); //create the edit dataHolder.addEdit(dataHolder.new MapEdit(tracker.x, tracker.y, oldDat, newDat, parent.getActiveLayer())); case EditorApp.DRAWMODE_REPLACE: currentX = eve.getX() / viewScale; currentY = eve.getY() / viewScale; if (currentX >= mapX) currentX = mapX-1; if (currentX < 0) currentX = 0; if (currentY >= mapY) currentY = mapY-1; if (currentY < 0) currentY = 0; Rectangle r = replacePen(currentX, currentY); r.width += 1; r.height += 1; redrawTiles(r.x, r.y, r.width, r.height); //put the cursor back //oldCursorRect = new Rectangle(lastX - tilePen.dx, lastY - tilePen.dy, tilePen.getW(), tilePen.getH()); newCursorRect = new Rectangle(currentX - tilePen.dx, currentY - tilePen.dy, tilePen.getW(), tilePen.getH()); w = r.width - r.x; h = r.height - r.y; oldDat = new byte[w][h]; for (int dx = 0; dx < w; dx++) { for (int dy = 0; dy < h; dy++) { oldDat[dx][dy] = prevLayerState[r.x + dx][r.y + dy]; //capture the new state newDat = new byte[w][h]; for (int dx = 0; dx < w; dx++) { for (int dy = 0; dy < h; dy++) { newDat[dx][dy] = dataHolder.getTileB(r.x + dx, r.y + dy, parent.getActiveLayer()); //create the edit dataHolder.addEdit(dataHolder.new MapEdit(r.x, r.y, oldDat, newDat, parent.getActiveLayer())); case EditorApp.DRAWMODE_DRAW: //capture the previous state if (selW > mapX) selW = mapX; if (selH > mapY) selH = mapY; w = selW - baseX; h = selH - baseY; oldDat = new byte[w][h]; for (int dx = 0; dx < w; dx++) { for (int dy = 0; dy < h; dy++) { oldDat[dx][dy] = prevLayerState [baseX + dx - tilePen.dx][baseY + dy - tilePen.dy]; //capture the new state newDat = new byte[w][h]; for (int dx = 0; dx < w; dx++) { for (int dy = 0; dy < h; dy++) { newDat[dx][dy] = dataHolder.getTileB( baseX + dx - tilePen.dx, baseY + dy - tilePen.dy, //create the edit dataHolder.addEdit(dataHolder.new MapEdit(baseX - tilePen.dx, baseY - tilePen.dy, oldDat, newDat, parent.getActiveLayer())); fun fact: this represents about 1.75% of the current BL codebase Mar 14, 2013 at 6:28 AM Bonds that separate us Forum Administrator "Life begins and ends with Nu." Join Date: Aug 20, 2006 Posts: 2855 Age: 34 Pronouns: he/him Mar 16, 2013 at 12:15 AM Join Date: Jul 15, 2007 Location: Australia Posts: 6224 Age: 39 Pronouns: he/him Just an idea for the future, not now obviously, but a way to load a maplist and edit maps entities and tsc by loading the data directory directly (bypassing the executable entirely) might be beneficial down the line when you start work on making BL cross-platform. This way BL can have limited support for all ports regardless of if it supports them directly or not. Mar 16, 2013 at 2:37 AM Join Date: Aug 28, 2009 Location: The Purple Zone Posts: 5998 Pronouns: he/him it sort of does that now and sort of doesn't. Currently it does support a few different project setups, that being CS+-like (external mapdata in the data directory) and CS-like (mapdata in the exe) as well as pxm-only. I've been reticent to enable editing w/o having the mapdata there because there is kind of a fair bit of important information in the mapdata itself, so it would be difficult to set up right and you wouldn't even be able to change the tilesets of a map and stuff. So for that, single-map loading/editing is maybe the best I can give. I think I could find the mapdata in the mac version but as for that I have no idea what you're supposed to load / would have access to on a mac. A .app folder? the .dmg? it's all so foreign to me. Mar 16, 2013 at 12:50 PM Join Date: Jul 15, 2007 Location: Australia Posts: 6224 Age: 39 Pronouns: he/him Dear god I'm an idiot. Please ignore that idea. Though I do have information on the mac format. The .dmg is more or less a Mac-specific file packaging format, like .zip and the like. Well that's not what the format was originally intended for but that's what nakiwo used it for. Doukutsu.app is a folder in reality. Though opening the .app folder rather than the main executable binary would be the safest option. Save file names can be edited here: It's an XML-ish plain text format. Look for key "CFBundleIdentifier" and modify the paired string to your needs. All save files are saved in the same folder, so a way to set this from the editor would prevent conflict. Data Folder: The mapdata is at $9FD40 and $1467A0 in the executable, one for PowerPC Macs and another for Intel Macs. The format is different to CSPC and the two sets of mapdata have different byte orders to each This only applies to the universal binary mind you. I know little about the PowerPC-only binary but I doubt anyone really uses it these days. However I don't know how to break the map limitation. Here's a copy of the files to mess around with (.zip instead of .dmg): Here's cultr1's old anti-(C)Pixel .bmp hack, so you can run file comparisons if you want: I had information on mac weapon data, but I can't remember where I put it. Mar 26, 2013 at 6:35 PM Indie game enthusiast "What is a man!? A miserable pile of secrets! But enough talk, have at you!" Join Date: Apr 18, 2006 Location: Forever wandering the tower...! Posts: 1790 Pronouns: he/him Is there currently a way while map editing : to use Copy to grab something on one tab, switch tabs, and paste it elsewhere? I tend to build unconnected areas on a construction map, and then start putting them together on other maps later... Mar 28, 2013 at 1:34 AM Join Date: Aug 28, 2009 Location: The Purple Zone Posts: 5998 Pronouns: he/him Currently: No, but may try to come up with a solution for that, time permitting. Mar 28, 2013 at 8:14 PM Veteran Member Join Date: Aug 21, 2012 Location: At a computer Posts: 337 Mar 28, 2013 at 8:27 PM Join Date: Aug 28, 2009 Location: The Purple Zone Posts: 5998 Pronouns: he/him No doubt I've missed a thing or two in the entity list, thanks for pointing it out; I'll have it fixed in the next release. May 3, 2013 at 3:07 PM The TideWalker Modding Community Discord Founder Join Date: Apr 5, 2013 Location: In my mind and of my body. Posts: 1640 Age: 27 Pronouns: he/him Could Something like this be incoreperated in the near future? It should be spell proof with or without a <FAC command similar to this. The only tricky part it getting it to render the correct face from the template. May 3, 2013 at 7:22 PM Been here way too long... "What're YOU lookin' at?" Join Date: Jan 21, 2007 Posts: 1111 Actually rather than do that, why not have one of the script-view types show simply a text-box graphic under the script window with whatever selected <MSG area shown inside it? It seems to already determine the number of characters afterward anyway, so it's not like you "can't" do it. Then plop the most recent <FAC (if any) into the message window. Or have some checkbox toggle between the face offset and not.
{"url":"https://forum.cavestory.org/threads/boosters-lab-its-pretty-good-now.3865/page-13","timestamp":"2024-11-05T20:14:48Z","content_type":"text/html","content_length":"180724","record_id":"<urn:uuid:db49b71c-4004-4711-98ce-c236bf7371f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00881.warc.gz"}
Comments on Hyperborea Exists: Every story has a beginning IX2nd column numbers 5 to 32 price forty chips ever... tag:blogger.com,1999:blog-3690494619282758469.post940189282454216579..comments2023-02-27T10:47:03.139+01:00J. Hågensenhttp://www.blogger.com/profile/ 00210986130056143846noreply@blogger.comBlogger1125tag:blogger.com,1999:blog-3690494619282758469.post-21959252528767070372022-12-02T19:12:08.445+01:002022-12-02T19:12:08.445+01:002nd column numbers 5 to 32 price forty chips every to complete. Based on the placement of the numbers on the structure, the number of chips required to <a href="https://thekingofdealer.com/coin-casino/" rel="nofollow">코 인카지노</a> &quot;full&quot; a number can be decided. Final four, for example, is a 4-chip guess and consists of 1 chip placed on every of the numbers ending in four, that&#39;s four, 14, 24, and 34. The tiers guess additionally be|can be} referred to as the &quot;small sequence&quot; and in some casinos &quot;sequence 5-8&quot;. The French fashion desk with a wheel in the centre and a structure on both aspect is rarely discovered exterior of Monte Carlo. The sum of all the numbers on the roulette wheel is 666, which is the &quot;Number of the Beast&
{"url":"http://www.hyperboreaexists.com/feeds/940189282454216579/comments/default","timestamp":"2024-11-12T16:03:54Z","content_type":"application/atom+xml","content_length":"4574","record_id":"<urn:uuid:0b599f72-0676-4daf-9a90-405b4cfe5e72>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00800.warc.gz"}
SSC CHSL Previous Year Question Papers Answers - current-affairs.org SSC CHSL Previous Year Question Papers Answers SSC CHSL Previous Year Question Papers Answers SSC CHSL Previous Year Question and Sample Model Paper Candidates will be glad to know that notifications of SSC CHSL are declared and the tentative exam dates are also provided. The candidates must go for this exam as it is most coveted exam among candidates who want to go for government jobs. This exam is conducted each year to recruit the skilled and eligible candidates for the post of Lower division clerk (LDC) and Data entry Operator(DEO). It is always observed that large amount of applicants fill the application form before the last date of submission. The exam is considered highly important for all those who are aspiring to be in government sector. The details associated with this exam are provided here which also includes the previous papers and sample papers along with syllabus and pattern that will encourage you to follow the listed detailed information of the SSC CHSL. The Staff selection Commission of India conducts recruitment for various positions of data- Entry operator, Clerk, Postal and sorting Assistant through SSC Combined Higher Secondary Level Examination. Lakhs of students appear for this exam, and therefore is very crucial among other exams. SSC CHSL Exam If we go by popularity, SSC CHSL is ranked 1 among all other government exams. With each passing year, the number of candidates applying for exam is increasing steeply. A record of 37 Lakhs applications was received for SSC exam. SSC recruitment exam details are already advertised on the official website of SSC. Tentative Dates CHSL Exams 1. Advertisement release 8 Oct, 2016 2. Last registration date 25 Nov, 2016 3. Admit Card issue Dec, 3rd week 4. Exam Date (Tier 1) 7 Jan 2017 – 5 Feb, 2017 5. Result (Tier 1) April 2017 6. Admit Card (Tier 2) End of May, 2017 7. Exam date (Tier 2) 4 June, 2017 8. Result (Tier 2) July, 2017 Note: The dates for Tier 3 exam and declaration of final result will be notified later. And these are tentative dates; any change in date will be notified here. Get: Current Affairs for SSC Exams For all those who are preparing and have decided to give this exam, must take the benefits we offer in this site and begin the preparation from the material that we offer through links, as accessing these links is very easy and doesn’t require any additional charges. The vacancies are also observed to be in great amount and one has to prepare from finest material to score well in SSC exam. Thus, it manifest to hold greater importance, and it consist of the specific syllabus and pattern which we provide below, at the end of this post. The Commission declared the notification for SSC CHSL exam, and also released the syllabus at the official page, so we encourage you to visit the official website and read the notification before filing the application form. Syllabus SSC CHSL Exam The selection procedure of CHSL consists of a written test and skill test. For the written test the syllabus is divided into four sections. The sections are general intelligence, English Language, Quantitative Aptitude and General Awareness. The complete paper is of 200 marks where each section has 50 questions each.The pattern is provided with you as it includes the subjects from which the paper will be prepared and this will bring ease in understanding the exam with clarity. Along with this, we also provide the sample papers and previous papers that also help you in developing the concept for the CHSL exam. There are objective type Questions followed with the skill Test. The Written Examination consists of the following sections and the objective type Question paper is conducted for 2 hours. So, take benefit and prepare from the exam pattern and syllabus. Note : Each wrong answers deducts 0.50 marks from your score. Subject Questions Marks General intelligence 50 50 English language 50 50 Quantitative aptitude 50 50 General awareness 50 50 The complete Sets of previous papers is mentioned in these links which you can download easily and get the sample papers also as it contains questions from the previous sets as well as some questions from the desired pattern. Get Delhi Police SI Previous Year Exam Question Papers Answers PDF Sample Questions for SSC CHSL Recruitment Exams: 1. 3/5th of a number is more than 40% of the same number by 35. What is 80% of that number ? a) 175 b) 105 c) 150 d) 140 (Ans) 2. The sum of two digits of a number is less than the number by 54. What is the difference between two digits of the number ? a) 2 b) 4 c) 6 d) Data inadequate (Ans) 3. In the equation given below which number (approx.) will replace the question mark ? 6.59 x 149.36 + 159% of 1642 = 10000 – ? a) 6800 b) 7500 c) 6500 (Ans) d) 5500 4. The inequality of b2 + 8b > 9b + 14 can be removed if —– a) b> 5, b< -5 b) b> 5, b< – 4 (Ans) c) b > 6, b< -6 d) b > 4, b < -4 Explanation : b2 + 8b – 9b – 14 > 0 b2 -b – 14 > 0 b2 – b + 1/4 > 14 +1/4 (b-1/2) > + √57/2 b> 1 + 7.55 > 4.28 or – 3.28 :. b > 5 or b < – 4 5. If the radius of the base and the height of a right circular cone are increased by 20%, then what is the approximate percentage increase in volume ? a) 60 b) 68 c) 73 (Ans) d) 75 Explanation : 2a + b + a2 +2ab + a2b (Here a = 20=b) = 2 x 20 + 20+ (20)2 + 2 x 20 x 20 + (20)2 x 20 = 40 + 20 + 1200 + 8000 = 72.8 = 73 (Approx.) PQRS is a diameter of a circle of radius 6 cm as shown in the figure above. The lengths PQ, QR and RS are equal. Semi-circles are drawn on PQ and QS as diameters. What is the perimeter of the shaded region ? a) 12 Π(Ans) b) 14 Πc) 16 Πd) 18 Π7. In the equation given below, what (approx.) number will replace the question mark ? 162 √7+18068 – 2 and 1/7 of 5162 = ? a) 7200 b) 8700 c) 9200 d) 7600 (Ans) 8. In the equation given below what will come in place of question mark ? 5672 + 3805 = ? + 39846 a) 21615 b) 20031 c) 219751 d) 20751 (Ans) 9. When 40% of a certain number is added in an other number, then the second number is increased by its 60%. What is the ratio between the two numbers ? a) 3:2 (Ans) b) 2:3 c) 3:4 d) Data inadequate 10. The average marks of 55 students of a class is 60. The average marks of passed students is 70 and the average marks of failed students is 45. What is the number of failed students ? a) 33 b) 22 (Ans) c) 28 d) Data inadequate Download SSC CHSL Previous Year Question Papers SSC CHSL Question Paper Download link PDF SSC CHSL Question Paper 1 Download here SSC CHSL Question Paper 2 Download here SSC CHSL Question Paper 3 Download here SSC CHSL Question Paper 4 Download here SSC CHSL Question Paper 5 Download here The candidates interest and aspirations is followed here, so if you are finding any inconvenience related with any notifications or want to sort your query, let us know about that, we will look forward to it and find relevant solution for the same. To get more updates, visit the official page.
{"url":"https://www.current-affairs.org/ssc-chsl-previous-question-paper/","timestamp":"2024-11-05T20:19:26Z","content_type":"text/html","content_length":"203799","record_id":"<urn:uuid:029e7097-3386-471b-a251-6ce8adf94fa4>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00071.warc.gz"}
Hyperamicus is a functional programming language defined by David Madore in his 2015-11-16 blog article “Qu'est-ce qu'une machine hyperarithmétique ?” which is non-computable and strictly more powerful than Turing machines. The motivation why this language came to being is that David wanted to give a precise definition for a larger computability class called “hyperarithmetic function” by defining this programming language. To ensure that the definition of Hyperamicus is correct and understandible, he first defined a Turing-equivalent language Amicus, and then an extension that turns that language to The basic data type of Hyperamicus is natural numbers of unbounded size, but these numbers are usually treated as representing recursive lists in the Lisp sense. The empty list <> is represented by the number 0, the list <a: d> with head a and tail d is represented as the number 2**a * (2 * d + 1), and in general a finite list <v1, v2, …, vk> is represented as <v1: <v2: <… <vk, <>>…>>>, or 2**v1 + 2**(v1+v2+1) + … + 2**(v1+v2+…+vk+(k-1)). This construction makes every natural number a list. Programs are represented as a number (list), operate on a single input argument which is also a number (list), and give a single number (list) as the result. There is no IO or other side effects. Evaluation is defined by the following recursive definition, where E(p, v) = y means that the program p on the input v evaluates to the value y. Rule 0. E(<0>, v) = v Rule 1. E(<1, c>, v) = c Rule 2. E(<2>, <n: r>) = 1 + n Rule 3. E(<3, n>, <v1: <v2: <…, <vn:d>…>>>) = vn, provided 0 < n Rule 4. E(<4>, <m, n, u, v>>) = (if m == n then u else v) Rule 5. E(<5, f, g1, …, gn>, v) = E(f, <E(g1, v), …, E(gn, v)>) Rule 6. E(<6>, <h: r>) = E(h, r) Rule 7. E(<7>, <f>) = 0 if E(<f>, <i>) = 0 for all natural numbers i, and E(<7>, <f>) = 1 if E(<f>, <i>) is defined (terminates) for all natural numbers i and E(<f>, <j>) ≠ 0 for at least one natural number j. Rule 5 is valid for any natural number n. As a special case, E(<5, f>) = E(f, <>) = E(f, 0). • The addition of rule 7 is the only way this language differs from Amicus. This is the rule that makes the language uncomputable. Any Amycus program is also a Hyperamicus program, and on inputs where the result of running Amycus is defined for this program, Hyperamicus gives the same result. • The rules that Amicus article defines for abstraction elimination to transform lambda calculus expressions to Amicus still work if lambda calculus is expanded with new primitives so that some of the functions are non-computable, so the technique described there can also be used to program Hyperamicus. This is not immediate, because Amicus is capable of reading and interpreting its own programs, so you could use techniques that do not extend to Hyperamicus. • David doesn't give a name for this language, so User:B_jonas gave the name “Hyperamicus”.
{"url":"http://esolangs.org/wiki/Hyperamicus","timestamp":"2024-11-11T17:16:09Z","content_type":"text/html","content_length":"20631","record_id":"<urn:uuid:b34d1633-ed0c-4bbe-8f48-dd169a875c75>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00821.warc.gz"}
The propeller shaft of a large ship has outside diameter 350mm ... | Filo Question asked by Filo student The propeller shaft of a large ship has outside diameter and inside diameter as shown in the figure. The shaft is rated for a maximum shear stress of (a) If the shaft is turning at 500 rpm, what is the maximum horsepower that can be transmitted without exceeding the allowable stress?(b) If the rotational speed of the shaft is doubled but the power requirements remain unchanged, what happens to the shear stress in the shaft? Not the question you're searching for? + Ask your question Step by Step Solution: Step 1. Calculate the polar moment of inertia of the propeller shaft using the formula , where is the outside diameter, and is the inside diameter. Step 2. Calculate the torque on the propeller shaft using the formula , where is the maximum horsepower, and is the rotational speed of the shaft. Step 3. Calculate the maximum shear stress using the formula Step 4. (a) Substitute and in step 1 to get . Substitute , and in step 3 to get the maximum torque the shaft can handle. Substitute and in step 2, and the maximum torque in the previous step to get the maximum horsepower that can be transmitted. Step 5. (b) Double the RPM value and substitute it in step 3 to obtain the new shear stress. The power requirement is unchanged and therefore the torque must be constant. Use the equation and the original RPM value to find the original torque. Since torque is constant, the old and new shear stress values can be equated and solved for . Final Answer: (a) The maximum horsepower that can be transmitted without exceeding the allowable stress is 2775.15 hp (b) The shear stress in the shaft will be double in case the shaft rotational speed is doubled while keeping the power requirements unchanged. Found 7 tutors discussing this question Discuss this question LIVE for FREE 14 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions on All topics View more Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Physics tutors online and get step by step solution of this question. 231 students are taking LIVE classes The propeller shaft of a large ship has outside diameter and inside diameter as shown in the figure. The shaft is rated for a maximum shear stress of (a) If the shaft is turning at 500 rpm, Question what is the maximum horsepower that can be transmitted without exceeding the allowable stress?(b) If the rotational speed of the shaft is doubled but the power requirements remain unchanged, Text what happens to the shear stress in the shaft? Updated Mar 15, 2024 Topic All topics Subject Physics Class Class 11 Answer Text solution:1
{"url":"https://askfilo.com/user-question-answers-physics/the-propeller-shaft-of-a-large-ship-has-outside-diameter-and-37383736313930","timestamp":"2024-11-14T17:42:59Z","content_type":"text/html","content_length":"206201","record_id":"<urn:uuid:3dff66f2-323e-4a95-9cea-f0303dd56169>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00628.warc.gz"}
MATLAB Archives | Electricalvoice Here in this tutorial, we are going to learn how to find the inverse of a matrix in MATLAB. First of all, see what is the syntax of matrix inverse in MATLAB. Syntax A = inv(B) where B is the square matrix and A is the inverse of matrix B. Let us take a few … Read more MATLAB for Loop Examples A MATLAB for loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times. The MATLAB for loop syntax is given as for index = values <program statements> … end MATLAB for loop Examples 1. Write a program to display numbers from 1 … Read more MATLAB Program for Kalman Filter Q. To estimate the positions and velocity of an object using Kalman Filter in MATLAB when a set of measurements and control inputs are available. Please read about Kalman Filter and Extended Kalman Filter. Program %==============================KALMAN FILTER ============================== %To estimate position ‘X’ and velocity ‘Xdot’ of a moving object when the %measurement of position ‘Y’ and … Read more MATLAB Program to Determine State Transition Matrix Write a MATLAB Program to determine the State Transition Matrix for Program %Program to determine the state transition matrix %provided by electricalvoice.com clc clear all %calculation of state transition matrix using inverse technique syst a=[1 4;-2 -5] phi=exmp(a*t) You can get MATLAB assignment help at AssignmentCore from a team of homework experts. MATLAB Program to Obtain Transfer Function from Data Q. Obtain the transfer function from the following data Program %Program to Obtain Transfer Function from Data %provided by electricalvoice.com clc clear all %obtain transfer matrix A=[-5 1;-6 0] B= [1;2] C=[2 1] D=[0] [num, den]=ss2tf(A,B,C,D) printsys(num,den) Q. Obtain the transfer function from the following data Program %Program to Obtain Transfer Function from Data %provided by … Read more MATLAB Program for finding Error Coefficients Q. The open-loop transfer function of a unity feedback control system is given by Find error coefficients (a). Kp (b). Kv (c). Ka Program % program for calculation of error coefficients % provided by electricalvoice.com clc clear all numg=10 deng=[1 6 10] %step input %calculation of error coefficient kp G=tf(numg, deng) kp=dcgain(G) Ess=1/(1+kp) %ramp input … Read more MATLAB Program for Determining Time response of Transfer function Q. Determine the time response for the following transfer function using MATLAB. Answer. First of all simplifying numerator(num) and denominator(den) of the transfer function respectively as num = s+2 den = s2+2s+2 Program % program for Determining Time response of Transfer function % provided by electricalvoice.com clc clear all num=[1 2]; den=[1 2 2]; sys5=tf(num,den) load … Read more MATLAB Program for finding Poles & Zeroes of Transfer Function Q. The transfer function of a system is given below Determines the poles and zeroes and show the pole-zero configuration in s-plane using MATLAB. First of all simplifying numerator(p1) and denominator(q1) of the transfer function respectively as p1=8s2+56s+96 q1=s4+4s3+9s2+10s Program % program for finding poles and zeroes of a transfer function % provided by electricalvoice.com clc … Read more MATLAB Program for Plotting Two sine waves Connected Together MATLAB Program for plotting Two sine waves connected together, one of which has twice the frequency of other. Program %Two sine waves connected together,one of which has twice the frequency of other %provided by electricalvoice.com clc clear all fr= input(‘Enter frequency in Hz:’); stptime = input(‘Enter Stop time in Second:’); frad = 2*pi*fr; t = … Read more MATLAB Program for Drawing two circles one having radius twice the other Using Matlab Draw two sine waves connected together, one of which has twice the frequency of other. Program %Program Code for Drawing two circles one having radius twice the other %provided by electricalvoice.com clc clear all x=input(‘Enter x-cordinate of center of circle:’); y=input(‘Enter y-cordinate of center of circle:’); r=input(‘Enter radius of circle:’); ang=0:0.01:2*pi; xp=r*cos (ang); yp=r*sin(ang); … Read more
{"url":"https://electricalvoice.com/category/matlab/","timestamp":"2024-11-02T20:03:45Z","content_type":"text/html","content_length":"98142","record_id":"<urn:uuid:1143470b-1e3b-4fa5-8d6c-26ee20586d72>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00267.warc.gz"}
Laws of indices mathematics wikipedia indices. plural of index The subscript numbers after each element are the indices of that element. A common convention in computing is to have indices beginning at zero, whereas in mathematics indices usually begin at one. 1972, American Society for Metals, Materials Science and Engineering, volumes 9–10, page 67 (Elsevier Sequoia) Mathematics is the study of numbers, shapes and patterns.The word comes from the Greek word "μάθημα" (máthema), meaning "science, knowledge, or learning", and is sometimes shortened to maths (in England, Australia, Ireland, and New Zealand) or math (in the United States and Canada). The short words are often used for arithmetic, geometry or simple algebra by students and their schools. Laws of Indices synonyms, Laws of Indices pronunciation, Laws of Indices translation, English dictionary definition of Laws of Indices. n. Mathematics The act of raising a quantity to a power. n the use of an exponent to raise the value of the base number to a power n. the raising of a Snell's law (also known as Snell-Descartes law and the law of refraction) is a formula used to describe the relationship between the angles of incidence and refraction, when referring to light or other waves passing through a boundary between two different isotropic media, such as water, glass, or air.. In optics, the law is used in ray tracing to compute the angles of incidence or refraction The mathematics of general relativity are complex. In Newton's theories of motion, an object's length and the rate at which time passes remain constant while the object accelerates, meaning that many problems in Newtonian mechanics may be solved by algebra alone. In relativity, however, an object's length and the rate at which time passes both change appreciably as the object's speed In mathematics, the logarithm is the inverse function to exponentiation.That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x.In the simplest case, the logarithm counts the number of occurrences of the same factor in repeated multiplication; e.g., since 1000 = 10 × 10 × 10 = 10 3, the "logarithm base An index (plural: indices) is the power, or exponent, of a number. For example, a 3 a^3 a 3 has an index of 3. A surd is an irrational number that can be expressed with roots, such as 2 \sqrt{2} 2 or 19 5 \ sqrt[5]{19} 5 1 9 . Technique. The manipulation of indices and surds can be a powerful tool for evaluating and simplifying expressions. The laws of indices Introduction A power, or an index, is used to write a product of numbers very compactly. The plural of index is indices. In this leaflet we remind you of how this is done, and state a number of rules, or laws, which can be used to simplify expressions involving indices. 1. Powers, or indices We write the expression 3×3× 3 In mathematics and computer programming, the order of operations (or operator precedence) is a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.. For example, in mathematics and most computer languages, multiplication is granted a higher precedence than addition, and it has been this way since the Six rules of the Law of Indices: To manipulate math expressions, we can consider using the Law of Indices. These laws only apply to expressions with the same base, for example, 3 4 and 3 2 can be manipulated using the Law of Indices, but we cannot use the Law of Indices to manipulate the expressions 4 5 and 9 7 as their base differs (their bases are 4 and 9, respectively). In mathematics and computer programming, the order of operations is a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression. For example, in mathematics and most computer languages, multiplication is granted a higher precedence than addition, and it has been this way since the introduction of modern algebraic notation. Thus, the expression 2 + 3 × 4 is interpreted to have the value 2 + = 14, not × 4 = 20 Law of Indices. To manipulate expressions, we can consider using the Law of Indices. These laws only apply to expressions with the same base, for example, 3 4 and 3 2 can be manipulated using the Law of Indices, but we cannot use the Law of Indices to manipulate the expressions 3 5 and 5 7 as their base differs (their bases are 3 and 5, respectively). More Lessons for GCSE Maths Math Worksheets Examples, solutions and videos to help GCSE Maths students learn about the multiplication and division rules of indices. Maths : Indices : Multiplication Rule In this tutorial you are shown the multiplication rule for indices. You are given a short test at the end. x m × x n = x m+n Laws of Indices synonyms, Laws of Indices pronunciation, Laws of Indices translation, English dictionary definition of Laws of Indices. n. Mathematics The act of raising a quantity to a power. n the use of an exponent to raise the value of the base number to a power n. the raising of a In mathematics and computer programming, the order of operations (or operator precedence) is a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.. For example, in mathematics and most computer languages, multiplication is granted a higher precedence than addition, and it has been this way since the Revise about how to multiply and divide indices, as well as apply negative and fractional rules of indices with this BBC Bitesize GCSE Maths Edexcel guide. Laws of indices. Algebra · Applied Mathematics · Calculus and Analysis · Discrete Mathematics · Foundations of Mathematics · Geometry · History and Terminology Congruence. There is a mathematical way of saying that all of the integers are the same as one of the modulo 5 residues. For instance, we 3 Jan 2020 Mathematics, Physics, Chemistry, Biology HyperPython: a practical introduction to the solution of hyperbolic conservation laws, a course by Math texts, online classes, and more for students in Retrieved from "https:// artofproblemsolving.com/wiki/index.php?title=Category:Theorems&oldid=17525". More Lessons for GCSE Maths Math Worksheets Examples, solutions and videos to help GCSE Maths students learn about the multiplication and division rules of indices. Maths : Indices : Multiplication Rule In this tutorial you are shown the multiplication rule for indices. You are given a short test at the end. x m × x n = x m+n Laws of Indices synonyms, Laws of Indices pronunciation, Laws of Indices translation, English dictionary definition of Laws of Indices. n. Mathematics The act of raising a quantity to a power. n the use of an exponent to raise the value of the base number to a power n. the raising of a In mathematics and computer programming, the order of operations (or operator precedence) is a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.. For example, in mathematics and most computer languages, multiplication is granted a higher precedence than addition, and it has been this way since the indices. plural of index The subscript numbers after each element are the indices of that element. A common convention in computing is to have indices beginning at zero, whereas in mathematics indices usually begin at one. 1972, American Society for Metals, Materials Science and Engineering, volumes 9–10, page 67 (Elsevier Sequoia) Six rules of the Law of Indices. Rule 1: Any number, except 0, whose index is 0 is always equal to 1, regardless of the value of the base More Lessons for GCSE Maths Math Worksheets Examples, solutions and videos to help GCSE Maths students learn about the multiplication and division rules of indices. Maths : Indices : Multiplication Rule In this tutorial you are shown the multiplication rule for indices. You are given a short test at the end. x m × x n = x m+n Laws of Indices synonyms, Laws of Indices pronunciation, Laws of Indices translation, English dictionary definition of Laws of Indices. n. Mathematics The act of raising a quantity to a power. n the use of an exponent to raise the value of the base number to a power n. the raising of a In mathematics and computer programming, the order of operations (or operator precedence) is a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.. For example, in mathematics and most computer languages, multiplication is granted a higher precedence than addition, and it has been this way since the indices. plural of index The subscript numbers after each element are the indices of that element. A common convention in computing is to have indices beginning at zero, whereas in mathematics indices usually begin at one. 1972, American Society for Metals, Materials Science and Engineering, volumes 9–10, page 67 (Elsevier Sequoia) Algebra · Applied Mathematics · Calculus and Analysis · Discrete Mathematics · Foundations of Mathematics · Geometry · History and Terminology Congruence. There is a mathematical way of saying that all of the integers are the same as one of the modulo 5 residues. For instance, we 3 Jan 2020 Mathematics, Physics, Chemistry, Biology HyperPython: a practical introduction to the solution of hyperbolic conservation laws, a course by Math texts, online classes, and more for students in Retrieved from "https:// artofproblemsolving.com/wiki/index.php?title=Category:Theorems&oldid=17525".
{"url":"https://bestbitaxqxedf.netlify.app/legrant8287dine/laws-of-indices-mathematics-wikipedia-bagy.html","timestamp":"2024-11-08T18:54:45Z","content_type":"text/html","content_length":"34910","record_id":"<urn:uuid:a9f9c0a1-9acd-4bea-9c8b-ebc11336ff36>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00431.warc.gz"}
Find the coordinates of the point of intersection of the graph of the Find the coordinates of the point of intersection of the graph of the equation X = 2 and y = -3. 1 thought on “Find the coordinates of the point of intersection of the graph of the <br /><br />equation X = 2 and y = -3.” 1. Answer: hey hope its helpful Step-by-step explanation: the coordinates of a x axis is 2 the coordinates of y axis is 3 Leave a Comment
{"url":"https://wiki-helper.com/find-the-coordinates-of-the-point-of-intersection-of-the-graph-of-the-equation-2-and-y-3-k-38306564-55/","timestamp":"2024-11-01T20:30:59Z","content_type":"text/html","content_length":"126264","record_id":"<urn:uuid:b08ecedc-d7ba-4af1-a108-d560667d92e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00634.warc.gz"}
6 ANUPQ Options [Up] [Previous] [Next] [Index] 6 ANUPQ Options In this chapter we describe in detail all the options used by functions of the ANUPQ package. Note that by ``options'' we mean GAP options that are passed to functions after the arguments and separated from the arguments by a colon as described in Chapter Function Calls in the Reference Manual. The user is strongly advised to read Section Hints and Warnings regarding the use of Options. AllANUPQoptions() F lists all the GAP options defined for functions of the ANUPQ package: gap> AllANUPQoptions(); [ "AllDescendants", "BasicAlgorithm", "Bounds", "CapableDescendants", "ClassBound", "CustomiseOutput", "Exponent", "Filename", "GroupName", "Identities", "Metabelian", "NumberOfSolubleAutomorphisms", "OrderBound", "OutputLevel", "PcgsAutomorphisms", "PqWorkspace", "Prime", "PrintAutomorphisms", "PrintPermutations", "QueueFactor", "RankInitialSegmentSubgroups", "RedoPcp", "RelativeOrders", "Relators", "SetupFile", "SpaceEfficient", "StandardPresentationFile", "StepSize", "SubList", "TreeDepth", "pQuotient" ] The following global variable gives a partial breakdown of where the above options are used. ANUPQoptions V is a record of lists of names of admissible ANUPQ options, such that each field is either the name of a ``key'' ANUPQ function or other (for a miscellaneous list of functions) and the corresponding value is the list of option names that are admissible for the function (or miscellaneous list of functions). Also, from within a GAP session, you may use GAP's help browser (see Chapter The Help System in the GAP Reference Manual); to find out about any particular ANUPQ option, simply type: ``?option option'', where option is one of the options listed above without any quotes, e.g. gap> ?option Prime will display the sections in this manual that describe the Prime option. In fact the first 4 are for the functions that have Prime as an option and the last actually describes the option. So follow up by choosing gap> ?5 This is also the pattern for other options (the last section of the list always describes the option; the other sections are the functions with which the option may be used). In the section following we describe in detail all ANUPQ options. To continue onto the next section on-line using GAP's help browser, type: gap> ?> Prime := p Specifies that the p-quotient for the prime p should be computed. ClassBound := n Specifies that the p-quotient to be computed has lower exponent-p class at most n. If this option is omitted a default of 63 (which is the maximum possible for the pq program) is taken, except for PqDescendants (see PqDescendants) and in a special case of PqPCover (see PqPCover). Let F be the argument (or start group of the process in the interactive case) for the function; then for PqDescendants the default is PClassPGroup(F) + 1, and for the special case of PqPCover the default is PClassPGroup(F). pQuotient := Q This option is only available for the standard presentation functions. It specifies that a p-quotient of the group argument of the function or group of the process is the pc p-group Q, where Q is of class less than the provided (or default) value of ClassBound. If pQuotient is provided, then the option Prime if also provided, is ignored; the prime p is discovered by computing PrimePGroup(Q). Exponent := n Specifies that the p-quotient to be computed has exponent n. For an interactive process, Exponent defaults to a previously supplied value for the process. Otherwise (and non-interactively), the default is 0, which means that no exponent law is enforced. Relators := rels Specifies that the relators sent to the pq program should be rels instead of the relators of the argument group F (or start group in the interactive case) of the calling function; rels should be a list of strings in the string representations of the generators of F, and F must be an fp group (even if the calling function accepts a pc group). This option provides a way of giving relators to the pq program, without having them pre-expanded by GAP, which can sometimes effect a performance loss of the order of 100 (see Section The Relators Option). 1. The pq program does not use / to indicate multiplication by an inverse and uses square brackets to represent (left normed) commutators. Also, even though the pq program accepts relations, all elements of rels must be in relator form, i.e. a relation of form w1 = w2 must be written as w1*(w2)^-1 and then put in a pair of double-quotes to make it a string. See the example below. 2. To ensure there are no syntax errors in rels, each relator is parsed for validity via PqParseWord (see PqParseWord). If they are ok, a message to say so is Info-ed at InfoANUPQ level 2. Metabelian Specifies that the largest metabelian p-quotient subject to any other conditions specified by other options be constructed. By default this restriction is not enforced. GroupName := name Specifies that the pq program should refer to the group by the name name (a string). If GroupName is not set and the group has been assigned a name via SetName (see SetName) it is set as the name the pq program should use. Otherwise, the ``generic'' name "[grp]" is set as a default. Identities := funcs Specifies that the pc presentation should satisfy the laws defined by each function in the list funcs. This option may be called by Pq, PqEpimorphism, or PqPCover (see Pq). Each function in the list funcs must return a word in its arguments (there may be any number of arguments). Let identity be one such function in funcs. Then as each lower exponent p-class quotient is formed, instances identity (w1 , ..., wn ) are added as relators to the pc presentation, where w1 , ..., wn are words in the pc generators of the quotient. At each class the class and number of pc generators is Info-ed at InfoANUPQ level 1, the number of instances is Info-ed at InfoANUPQ level 2, and the instances that are evaluated are Info-ed at InfoANUPQ level 3. As usual timing information is Info-ed at InfoANUPQ level 2; and details of the processing of each instance from the pq program (which is often quite voluminous) is Info-ed at InfoANUPQ level 3. Try the examples "B2-4-Id" and "11gp-3-Engel-Id" which demonstrate the usage of the Identities option; these are run using PqExample (see PqExample). Take note of Note 1. below in relation to the example "B2-4-Id"; the companion example "B2-4" generates the same group using the Exponent option. These examples are discussed at length in Section The Identities Option and PqEvaluateIdentities 1. Setting the InfoANUPQ level to 3 or more when setting the Identities option may slow down the computation considerably, by overloading GAP with io operations. 2. The Identities option is implemented at the GAP level. An identity that is just an exponent law should be specified using the Exponent option (see option Exponent), which is implemented at the C level and is highly optimised and so is much more efficient. 3. The number of instances of each identity tends to grow combinatorially with the class. So care should be exercised in using the Identities option, by including other restrictions, e.g. by using the ClassBound option (see option ClassBound). OutputLevel := n Specifies the level of ``verbosity'' of the information output by the ANU pq program when computing a pc presentation; n must be an integer in the range 0 to 3. OutputLevel := 0 displays at most one line of output and is the default; OutputLevel := 1 displays (usually) slightly more output and OutputLevels of 2 and 3 are two levels of verbose output. To see these messages from the pq program, the InfoANUPQ level must be set to at least 1 (see InfoANUPQ). See Section Hints and Warnings regarding the use of Options for an example of how OutputLevel can be used as a troubleshooting tool. RedoPcp Specifies that the current pc presentation (for an interactive process) stored by the pq program be scrapped and clears the current values stored for the options Prime, ClassBound, Exponent and Metabelian and also clears the pQuotient, pQepi and pCover fields of the data record of the process. SetupFile := filename Non-interactively, this option directs that pq should not be called and that an input file with name filename (a string), containing the commands necessary for the ANU pq standalone, be constructed. The commands written to filename are also Info-ed behind a ``ToPQ> '' prompt at InfoANUPQ level 4 (see InfoANUPQ). Except in the case following, the calling function returns true. If the calling function is the non-interactive version of one of Pq, PqPCover or PqEpimorphism and the group provided as argument is trivial given with an empty set of generators, then no setup file is written and fail is returned (the pq program cannot do anything useful with such a group). Interactively, SetupFile is ignored. Note: Since commands emitted to the pq program may depend on knowing what the ``current state'' is, to form a setup file some ``close enough guesses'' may sometimes be necessary; when this occurs a warning is Info-ed at InfoANUPQ or InfoWarning level 1. To determine whether the ``close enough guesses'' give an accurate setup file, it is necessary to run the command without the SetupFile option, after either setting the InfoANUPQ level to at least 4 (the setup file script can then be compared with the ``ToPQ> '' commands that are Info-ed) or setting a pq command log file by using ToPQLog (see ToPQLog). PqWorkspace := workspace Non-interactively, this option sets the memory used by the pq program. It sets the maximum number of integer-sized elements to allocate in its main storage array. By default, the pq program sets this figure to 10000000. Interactively, PqWorkspace is ignored; the memory used in this case may be set by giving PqStart a second argument (see PqStart). PcgsAutomorphisms := false Let G be the group associated with the calling function (or associated interactive process). Passing the option PcgsAutomorphisms without a value (or equivalently setting it to true), specifies that a polycyclic generating sequence for the automorphism group (which must be soluble) of G, be computed and passed to the pq program. This increases the efficiency of the computation; it also prevents the pq from calling GAP for orbit-stabilizer calculations. By default, PcgsAutomorphisms is set to the value returned by IsSolvable( AutomorphismGroup( G ) ), and uses the package AutPGrp to compute AutomorphismGroup( G ) if it is installed. This flag is set to true or false in the background according to the above criterion by the function PqDescendants (see PqDescendants and PqDescendants!interactive). Note: If PcgsAutomorphisms is used when the automorphism group of G is insoluble, an error message occurs. OrderBound := n Specifies that only descendants of size at most p^n , where n is a non-negative integer, be generated. Note that you cannot set both OrderBound and StepSize. StepSize := n StepSize := list For a positive integer n, StepSize specifies that only those immediate descendants which are a factor p^n bigger than their parent group be generated. For a list list of positive integers such that the sum of the length of list and the exponent-p class of G is equal to the class bound defined by the option ClassBound, StepSize specifies that the integers of list are the step sizes for each additional class. RankInitialSegmentSubgroups := n Sets the rank of the initial segment subgroup chosen to be n. By default, this has value 0. SpaceEfficient Specifies that the pq program performs certain calculations of p-group generation more slowly but with greater space efficiency. This flag is frequently necessary for groups of large Frattini quotient rank. The space saving occurs because only one permutation is stored at any one time. This option is only available if the PcgsAutomorphisms flag is set to true (see option PcgsAutomorphisms). For an interactive process, SpaceEfficient defaults to a previously supplied value for the process. Otherwise (and non-interactively), SpaceEfficient is by default CapableDescendants By default, all (i.e. capable and terminal) descendants are computed. If this flag is set, only capable descendants are computed. Setting this option is equivalent to setting AllDescendants := false (see option AllDescendants), except if both CapableDescendants and AllDescendants are passed, AllDescendants is essentially ignored. AllDescendants := false By default, all descendants are constructed. If this flag is set to false, only capable descendants are computed. Passing AllDescendants without a value (which is equivalent to setting it to true) is superfluous. This option is provided only for backward compatibility with the GAP 3 version of the ANUPQ package, where by default AllDescendants was set to false (rather than true). It is preferable to use CapableDescendants (see option CapableDescendants). TreeDepth := class Specifies that the descendants tree developed by PqDescendantsTreeCoclassOne (see PqDescendantsTreeCoclassOne) should be extended to class class, where class is a positive SubList := sub Suppose that L is the list of descendants generated, then for a list sub of integers this option causes PqDescendants to return Sublist( L, sub ). If an integer n is supplied, PqDescendants returns L[n]. NumberOfSolubleAutomorphisms := n Specifies that the number of soluble automorphisms of the automorphism group supplied by PqPGSupplyAutomorphisms (see PqPGSupplyAutomorphisms) in a p-group generation calculation is n. By default, n is taken to be 0; n must be a non-negative integer. If n ³ 0 then a value for the option RelativeOrders (see option RelativeOrders) must also be RelativeOrders := list Specifies the relative orders of each soluble automorphism of the automorphism group supplied by PqPGSupplyAutomorphisms (see PqPGSupplyAutomorphisms) in a p-group generation calculation. The list list must consist of n positive integers, where n is the value of the option NumberOfSolubleAutomorphisms (see option NumberOfSolubleAutomorphisms). By default list is empty. BasicAlgorithm Specifies that an algorithm that the pq program calls its ``default'' algorithm be used for p-group generation. By default this algorithm is not used. If this option is supplied the settings of options RankInitialSegmentSubgroups, AllDescendants, Exponent and Metabelian are ignored. CustomiseOutput := rec Specifies that fine tuning of the output is desired. The record rec should have any subset (or all) of the the following fields: perm := list where list is a list of booleans which determine whether the permutation group output for the automorphism group should contain: the degree, the extended automorphisms, the automorphism matrices, and the permutations, respectively. orbit := list where list is a list of booleans which determine whether the orbit output of the action of the automorphism group should contain: a summary, and a complete listing of orbits, respectively. (It's possible to have both a summary and a complete listing.) group := list where list is a list of booleans which determine whether the group output should contain: the standard matrix of each allowable subgroup, the presentation of reduced p-covering groups, the presentation of immediate descendants, the nuclear rank of descendants, and the p-multiplicator rank of descendants, respectively. autgroup := list where list is a list of booleans which determine whether the automorphism group output should contain: the commutator matrix, the automorphism group description of descendants, and the automorphism group order of descendants, respectively. trace := val where val is a boolean which if true specifies algorithm trace data is desired. By default, one does not get algorithm trace data. Not providing a field (or mis-spelling it!), specifies that the default output is desired. As a convenience, 1 is also accepted as true, and any value that is neither 1 nor true is taken as false. Also for each list above, an unbound list entry is taken as false. Thus, for example CustomiseOutput := rec(group := [,,1], autgroup := [,1]) specifies for the group output that only the presentation of immediate descendants is desired, for the automorphism group output only the automorphism group description of descendants should be printed, that there should be no algorithm trace data, and that the default output should be provided for the permutation group and orbit output. StandardPresentationFile := filename Specifies that the file to which the standard presentation is written has name filename. If the first character of the string filename is not /, filename is assumed to be the path of a writable file relative to the directory in which GAP was started. If this option is omitted it is written to the file with the name generated by the command Filename( ANUPQData.tmpdir, "SPres" );, i.e. the file with name "SPres" in the temporary directory in which the pq program executes. QueueFactor := n Specifies a queue factor of n, where n should be a positive integer. This option may be used with PqNextClass (see PqNextClass). The queue factor is used when the pq program uses automorphisms to close a set of elements of the p-multiplicator under their action. The algorithm used is a spinning algorithm: it starts with a set of vectors in echelonized form (elements of the p-multiplicator) and closes the span of these vectors under the action of the automorphisms. For this each automorphism is applied to each vector and it is checked if the result is contained in the span computed so far. If not, the span becomes bigger and the vector is put into a queue and the automorphisms are applied to this vector at a later stage. The process terminates when the automorphisms have been applied to all vectors and no new vectors have been For each new vector it is decided, if its processing should be delayed. If the vector contains too many non-zero entries, it is put into a second queue. The elements in this queue are processed only when there are no elements in the first queue left. The queue factor is a percentage figure. A vector is put into the second queue if the percentage of its non-zero entries exceeds the queue factor. Bounds := list Specifies a lower and upper bound on the indices of a list, where list is a pair of positive non-decreasing integers. See PqDisplayStructure and PqDisplayAutomorphisms where this option may be used. [Up] [Previous] [Next] [Index] ANUPQ manual Januar 2006
{"url":"http://www.math.rwth-aachen.de/homes/Greg.Gamble/ANUPQ/htm/CHAP006.htm","timestamp":"2024-11-02T08:05:33Z","content_type":"text/html","content_length":"29653","record_id":"<urn:uuid:1bd93aa8-8e39-4b44-92e6-8dae087b7903>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00012.warc.gz"}
Texas Go Math Grade 5 Lesson 5.6 Answer Key Add and Subtract Mixed Numbers Refer to our Texas Go Math Grade 5 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 5 Lesson 5.6 Answer Key Add and Subtract Mixed Texas Go Math Grade 5 Lesson 5.6 Answer Key Add and Subtract Mixed Numbers Unlock the Problem Denise mixed 1\(\frac{4}{5}\) ounces of blue paint with 2\(\frac{1}{10}\) ounces of yellow paint. How many ounces of paint did Denise mix? • What operation should you use to solve the problem? • Do the fractions have the same denominator? Add. 1\(\frac{4}{5}\) + 2\(\frac{1}{10}\) To find the sum of mixed numbers with unequal denominators, you can use a common denominator. STEP 1: Estimate the sum. STEP 2: Find a common denominator. Use the common denominator to write equivalent fractions with equal denominators. STEP 3: Add the fractions. Then add the whole numbers. Write the answer in simplest form. So, Denise mixed __________ ounces of paint. So, Denise mixed \(\frac{39}{10}\) ounces of paint. Math Talk Mathematical Processes Did you use the least common denominator? Explain. Answer: yes The sum of mixed numbers with unequal denominators can use a common denominator. Question 1. Explain how you know whether your answer is reasonable. Answer: Yes my answer is reasonable Because, the sum of two mixed numbers is solved and it is proved Question 2. What other common denominator could you have used? Answer: 50 multiply 5 x 10 = 50 Subtract. 4\(\frac{5}{6}\) – 2\(\frac{3}{4}\) You can also use a common denominator to find the difference of mixed numbers with unequal denominators. STEP 1: Estimate the difference. STEP 2: Find a common denominator. Use the common denominator to write equivalent fractions with equal denominators. STEP 3: Subtract the fractions. Subtract the whole numbers. Write the answer in simplest form. Question 3. Explain how you know whether your answer is reasonable. Answer: Used the common denominator to write equivalent fractions with equal denominators. so, my answer is reasonable. Share and Show Question 1. Use a common denominator to write equivalent fractions with equal denominators and then find the sum. Write your answer in simplest form. Used a common denominator and written equivalent fractions with equal denominators and then find the sum. written the answer in simplest form. Find the sum. Write your answer in simplest form. Question 2. 2\(\frac{3}{4}\) + 3\(\frac{3}{10}\) 2\(\frac{3}{4}\) + 3\(\frac{3}{10}\) = \(\frac{11}{4}\) + \(\frac{33}{10}\) = \(\frac{55}{20}\) + \(\frac{66}{20}\) = \(\frac{121}{20}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 3. 5\(\frac{3}{4}\) + 1\(\frac{1}{3}\) 5\(\frac{3}{4}\) + 1\(\frac{1}{3}\) = \(\frac{23}{4}\) + \(\frac{4}{3}\) = \(\frac{69}{12}\)+ \(\frac{16}{12}\) = \(\frac{85}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 4. 3\(\frac{4}{5}\) + 2\(\frac{3}{10}\) 3\(\frac{4}{5}\) + 2\(\frac{3}{10}\) = \(\frac{19}{5}\) + \(\frac{23}{10}\) = \(\frac{38}{10}\) + \(\frac{23}{10}\) = \(\frac{61}{10}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Problem Solving Practice: Copy and Solve Find the sum or difference. Write your answer in simplest form. Question 5. 1\(\frac{5}{12}\) + 4\(\frac{1}{6}\) 1\(\frac{5}{12}\) + 4\(\frac{1}{6}\) = \(\frac{17}{12}\) + \(\frac{21}{6}\) = \(\frac{17}{12}\) + \(\frac{42}{12}\) = \(\frac{59}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 6. 8\(\frac{1}{2}\) + 6\(\frac{3}{5}\) 8\(\frac{1}{2}\) + 6\(\frac{3}{5}\) = \(\frac{17}{2}\) + \(\frac{33}{5}\) = \(\frac{85}{10}\) + \(\frac{66}{10}\) = \(\frac{151}{10}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 7. 2\(\frac{1}{6}\) + 4\(\frac{5}{9}\) 2\(\frac{1}{6}\) + 4\(\frac{5}{9}\) = \(\frac{13}{6}\) + \(\frac{36}{9}\) = \(\frac{39}{18}\) + \(\frac{72}{18}\) = \(\frac{41}{18}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 8. 3\(\frac{5}{8}\) + \(\frac{5}{12}\) 3\(\frac{5}{8}\) + \(\frac{5}{12}\) = \(\frac{29}{8}\) + \(\frac{5}{12}\) = \(\frac{87}{24}\) + \(\frac{10}{24}\) = \(\frac{97}{24}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 9. 3\(\frac{2}{3}\) – 1\(\frac{1}{6}\) 3\(\frac{2}{3}\) – 1\(\frac{1}{6}\) = \(\frac{11}{3}\) – \(\frac{7}{6}\) = \(\frac{22}{6}\) – \(\frac{7}{6}\) = \(\frac{15}{6}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 10. 5\(\frac{6}{7}\) – 1\(\frac{2}{3}\) 5\(\frac{6}{7}\) – 1\(\frac{2}{3}\) = \(\frac{41}{7}\) – \(\frac{5}{3}\) = \(\frac{123}{21}\) – \(\frac{35}{21}\) = \(\frac{88}{21}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 11. 2\(\frac{7}{8}\) – \(\frac{1}{2}\) 2\(\frac{7}{8}\) – \(\frac{1}{2}\) = \(\frac{23}{8}\) – \(\frac{1}{2}\) = \(\frac{23}{8}\) – \(\frac{4}{8}\) = \(\frac{19}{8}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 12. 4\(\frac{7}{12}\) – 1\(\frac{2}{9}\) 4\(\frac{7}{12}\) – 1\(\frac{2}{9}\) = \(\frac{55}{12}\) – \(\frac{7}{5}\) = \(\frac{275}{60}\) – \(\frac{72}{60}\) = \(\frac{203}{60}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 13. Communicate Why do you need to write equivalent fractions with common denominators to add 4\(\frac{5}{6}\) and \(\frac{11}{8}\)? Explain. Answer: 4\(\frac{5}{6}\) + \(\frac{11}{8}\) = \(\frac{25}{24}\) + \(\frac{11}{24}\) = \(\frac{36}{24}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Problem Solving Use the table to solve 14-15. Question 14. H.O.T. Multi-Step Gavin needs to make 2 batches of Mango paint. Explain how you could find the total amount of paint Gavin mixed. Answer: \(\frac{70}{6}\) Gavin needs to make 2 batches of Mango paint 5\(\frac{5}{6}\) red + 5\(\frac{5}{6}\) yellow = \(\frac{70}{6}\) mango Question 15. H.O.T. Gavin mixes the amount of red from one shade of paint with the amount of yellow from a different shade of paint. He mixes the batch so he will have the greatest possible amount of paint. What amounts of red and yellow from which shades are used in the mixture? Explain your answer. Answer: The amounts of red and yellow from each shades are used in the mixture is same Gavin needs to make 2 batches of Mango paint 5\(\frac{5}{6}\) red + 5\(\frac{5}{6}\) yellow = \(\frac{70}{6}\) mango Daily Assessment Task Fill in the bubble completely to show your answer. Question 16. Dr. Whether-or-Not collects two hailstones during a storm in California. One hailstone weighs 2\(\frac{3}{8}\) pounds, and the other hailstone weighs 1\(\frac{3}{10}\) pounds. How much heavier is the larger hailstone than the smaller hailstone? (A) \(\frac{3}{40}\) pounds (B) 1\(\frac{27}{40}\) pounds (C) 1\(\frac{3}{40}\) pounds (D) 3\(\frac{27}{40}\) pounds Answer: D Dr. Whether-or-Not collects two hailstones during a storm in California. One hailstone weighs 2\(\frac{3}{8}\) pounds, and the other hailstone weighs 1\(\frac{3}{10}\) pounds. 2\(\frac{3}{8}\) + 1\(\frac{3}{10}\) pounds \(\frac{19}{8}\) + \(\frac{13}{10}\) 3\(\frac{27}{40}\) pounds Question 17. Apply Jason is making a fruit salad. He mixes in 3\(\frac{1}{4}\) cups of orange melon and 2\(\frac{2}{3}\) cups of green melon. How many cups of melon does Jason put in the fruit salad? (A) 5\(\frac{1}{4}\) cups (B) 5\(\frac{1}{3}\) cups (C) 5\(\frac{7}{12}\) cups (D) 5\(\frac{11}{12}\) cups Answer: A Apply Jason is making a fruit salad. He mixes in 3\(\frac{1}{4}\) cups of orange melon and 2\(\frac{2}{3}\) cups of green melon. 3\(\frac{1}{4}\) + 2\(\frac{2}{3}\) \(\frac{13}{4}\) + \(\frac{8}{3}\) \(\frac{39}{12}\) + \(\frac{24}{12}\) Question 18. Multi-Step Dakota makes a salad dressing by combining 6\(\frac{1}{3}\) fluid ounces of oil and 2\(\frac{3}{8}\) fluid ounces of vinegar in a jar. She then pours 2\(\frac{1}{4}\) fluid ounces of the dressing onto her salad. How much dressing remains in the jar? (A) 6\(\frac{1}{8}\) fluid ounces (B) 6\(\frac{3}{8}\) fluid ounces (C) 6\(\frac{11}{24}\) fluid ounces (D) 6\(\frac{17}{24}\) fluid ounces Answer: C Dakota makes a salad dressing by combining 6\(\frac{1}{3}\) fluid ounces of oil and 2\(\frac{3}{8}\) fluid ounces of vinegar in a jar. She then pours 2\(\frac{1}{4}\) fluid ounces of the dressing onto her salad. 6\(\frac{1}{3}\) + 2\(\frac{3}{8}\) – 2\(\frac{1}{4}\) 6\(\frac{11}{24}\) fluid ounces Texas Test Prep Question 19. Yolanda walked 3\(\frac{6}{10}\) miles. Then she walked 4\(\frac{1}{2}\) more miles. How many miles did Yolanda walk? (A) 7\(\frac{1}{10}\) miles (B) 8\(\frac{7}{10}\) miles (C) 8\(\frac{1}{10}\) miles (D) 7\(\frac{7}{10}\) miles Answer: C Yolanda walked 3\(\frac{6}{10}\) miles. Then she walked 4\(\frac{1}{2}\) more miles 3\(\frac{6}{10}\) + 4\(\frac{1}{2}\) \(\frac{72}{20}\) + \(\frac{90}{20}\) Texas Go Math Grade 5 Lesson 5.6 Homework and Practice Answer Key Find the sum or difference. Write your answer in simplest form. Question 1. 1\(\frac{1}{4}\) + 2\(\frac{2}{3}\) _____________ 1\(\frac{1}{4}\) + 2\(\frac{2}{3}\) = \(\frac{5}{4}\) + \(\frac{8}{3}\) = \(\frac{15}{12}\) + \(\frac{32}{12}\) = \(\frac{47}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 2. 3\(\frac{3}{4}\) + 4\(\frac{5}{12}\) _____________ 3\(\frac{3}{4}\) + 4\(\frac{5}{12}\) = \(\frac{36}{12}\) + \(\frac{53}{12}\) = \(\frac{89}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 3. 1\(\frac{1}{3}\) + 2\(\frac{1}{6}\) _____________ 1\(\frac{1}{3}\) + 2\(\frac{1}{6}\) = \(\frac{8}{6}\) + \(\frac{13}{6}\) = \(\frac{21}{6}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 4. 4\(\frac{1}{2}\) + 3\(\frac{4}{5}\) _____________ 4\(\frac{1}{2}\) + 3\(\frac{4}{5}\) = \(\frac{45}{10}\) + \(\frac{38}{10}\) = \(\frac{83}{10}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 5. 5\(\frac{5}{6}\) + 4\(\frac{2}{9}\) ____________ 5\(\frac{5}{6}\) + 4\(\frac{2}{9}\) = \(\frac{35}{6}\) + \(\frac{38}{9}\) = \(\frac{105}{18}\) + \(\frac{76}{18}\) = \(\frac{181}{18}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 6. 7\(\frac{1}{4}\) + 3\(\frac{2}{5}\) ___________ 7\(\frac{1}{4}\) + 3\(\frac{2}{5}\) = \(\frac{29}{4}\) + \(\frac{17}{5}\) = \(\frac{145}{20}\) + \(\frac{68}{20}\) = \(\frac{213}{20}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 7. 3\(\frac{2}{7}\) + 8\(\frac{1}{3}\) _____________ 3\(\frac{2}{7}\) + 8\(\frac{1}{3}\) = \(\frac{23}{7}\) + \(\frac{25}{3}\) = \(\frac{69}{21}\) + \(\frac{175}{21}\) = \(\frac{244}{21}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 8. 4\(\frac{3}{7}\) + 3\(\frac{1}{2}\) ____________ 4\(\frac{3}{7}\) + 3\(\frac{1}{2}\) = \(\frac{31}{7}\) + \(\frac{7}{2}\) = \(\frac{62}{14}\) + \(\frac{49}{14}\) = \(\frac{114}{14}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 9. 2\(\frac{4}{5}\) – 1\(\frac{1}{2}\) ____________ 2\(\frac{4}{5}\) – 1\(\frac{1}{2}\) = \(\frac{14}{5}\) – \(\frac{3}{2}\) = \(\frac{28}{10}\) – \(\frac{15}{10}\) = \(\frac{13}{10}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 10. 5\(\frac{3}{8}\) – 1\(\frac{1}{4}\) ____________ 5\(\frac{3}{8}\) – 1\(\frac{1}{4}\) = \(\frac{43}{8}\) – \(\frac{5}{4}\) = \(\frac{43}{8}\) – \(\frac{10}{8}\) = \(\frac{33}{8}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 11. 4\(\frac{1}{3}\) – 3\(\frac{1}{6}\) _____________ 4\(\frac{1}{3}\) – 3\(\frac{1}{6}\) = \(\frac{13}{3}\) – \(\frac{19}{6}\) = \(\frac{26}{6}\) – \(\frac{19}{6}\) = \(\frac{7}{6}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 12. 6\(\frac{5}{6}\) – 5\(\frac{7}{9}\) _____________ 6\(\frac{5}{6}\) – 5\(\frac{7}{9}\) = \(\frac{41}{6}\) – \(\frac{53}{9}\) = \(\frac{123}{18}\) – \(\frac{106}{18}\) = \(\frac{17}{18}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 13. 4\(\frac{1}{3}\) – 2\(\frac{1}{4}\) ____________ 4\(\frac{1}{3}\) – 2\(\frac{1}{4}\) = \(\frac{13}{3}\) – \(\frac{9}{4}\) = \(\frac{52}{12}\) – \(\frac{27}{12}\) = \(\frac{25}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 14. 3\(\frac{1}{4}\) – 1\(\frac{1}{6}\) _____________ 3\(\frac{1}{4}\) – 1\(\frac{1}{6}\) = \(\frac{13}{4}\) – \(\frac{7}{6}\) = \(\frac{39}{12}\) – \(\frac{14}{12}\) = \(\frac{25}{12}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 15. 6\(\frac{3}{4}\) – 2\(\frac{5}{16}\) _____________ 6\(\frac{3}{4}\) – 2\(\frac{5}{16}\) = \(\frac{23}{4}\) – \(\frac{37}{16}\)= \(\frac{92}{16}\) – \(\frac{37}{16}\) = \(\frac{55}{16}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 16. 7\(\frac{3}{5}\) – 2\(\frac{1}{4}\) _____________ 7\(\frac{3}{5}\) – 2\(\frac{1}{4}\) = \(\frac{38}{5}\) – \(\frac{9}{4}\) = \(\frac{152}{20}\) – \(\frac{45}{20}\) = \(\frac{107}{20}\) Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Question 17. Use two mixed numbers to write an equation with a sum of 4\(\frac{1}{4}\). Step I: We add the whole numbers, separately. We change the mixed fractions into improper fractions. Step II: To add fractions, we take least common denominators and change the fractions into like fractions. Step III: We find the sum of the whole numbers and the fractions in the simplest form. Problem Solving Question 18. Lucas says his twin baby brothers have a total weight of 15\(\frac{1}{8}\) pounds. Jackson weighs pounds, and Jeremy weighs 8\(\frac{7}{8}\) pounds. Explain how you can use estimation to tell if the total weight is reasonable. Question 19. The gas tank in Rebecca’s old car held 14\(\frac{1}{5}\) gallons. The gas tank in Rebecca’s new car holds 18\(\frac{1}{2}\) gallons. How many more gallons will the tank in Rebecca’s new car hold than her old car? Answer: 4\(\frac{3}{10}\) 18\(\frac{1}{2}\) – 14\(\frac{1}{5}\) = 4\(\frac{3}{10}\) Lesson Check Fill in the bubble completely to show your answer. Use the table for 20-21. Four students made paper chains to decorate the community center. The table at the right shows the lengths of the paper chains. Question 20. If Ioana attaches her chain to the end of Gabrielle’s chain, what will be the length of the combined chain? (A) 13\(\frac{3}{4}\) feet (B) 13\(\frac{1}{4}\) feet (C) 12\(\frac{1}{4}\) feet (D) 12\(\frac{1}{2}\) feet Answer: B the length of the combined chain is 13\(\frac{1}{4}\) feet 7\(\frac{1}{2}\) feet + 5\(\frac{3}{4}\) feet= \(\frac{30+23}{4}\) feet Question 21. How much longer is Oksana’s chain than Gabrielle’s chain? (A) 15\(\frac{7}{12}\) feet (B) 14\(\frac{1}{12}\) feet (C) 4\(\frac{1}{4}\) feet (D) 4\(\frac{1}{12}\) feet Answer: D 4\(\frac{1}{12}\) feet is longer than Oksana’s chain than Gabrielle’s chain 9\(\frac{5}{6}\) feet + 5\(\frac{3}{4}\) feet= \(\frac{118-69}{12}\) feet Question 22. Mia hiked 2\(\frac{1}{2}\) miles farther than Jacob. Which could be the two distances each person hiked? (A) Mia: 2\(\frac{1}{2}\) miles; Jacob: 1\(\frac{1}{4}\) miles (B) Mia: 2\(\frac{1}{2}\) miles; Jacob: 7\(\frac{1}{2}\) miles (C) Mia: 3\(\frac{2}{5}\) miles; Jacob: 5\(\frac{9}{10}\) miles (D) Mia: 5\(\frac{9}{10}\) miles; Jacob: 3\(\frac{2}{5}\) miles Answer: A 2\(\frac{1}{2}\) than jacob if Mia: 2\(\frac{1}{2}\) miles; Jacob: 1\(\frac{1}{4}\) miles Question 23. Multi-Step Mr. Carter owned a ranch with 7\(\frac{1}{4}\) acres. Last year, he bought 3\(\frac{1}{5}\) acres of land from his neighbor. Then he sold 2\(\frac{1}{4}\) acres. How many acres does Mr. Carter own now? (A) 10\(\frac{9}{20}\) acres (B) 8\(\frac{1}{5}\) acres (C) 12\(\frac{7}{10}\) acres (D) 6\(\frac{3}{10}\) acres Answer: B Mr. Carter owned a ranch with 7\(\frac{1}{4}\) acres. Last year, he bought 3\(\frac{1}{5}\) acres of land from his neighbor Then he sold 2\(\frac{1}{4}\) acres. 7\(\frac{1}{4}\) + 3\(\frac{1}{5}\) – 2\(\frac{1}{4}\) = 8\(\frac{1}{5}\) acres Question 24. Multi-Step This week, Maddie worked 2\(\frac{1}{2}\) hours on Monday, 2\(\frac{2}{3}\) hours on Tuesday, and 3\(\frac{1}{4}\) hours on Wednesday. How many more hours will Maddie need to work this week to make her goal of 10\(\frac{1}{2}\) hours a week? (A) 2\(\frac{1}{12}\) hours (B) 8\(\frac{5}{12}\) hours (C) 18\(\frac{11}{12}\) hours (D) 5\(\frac{1}{3}\) hours Answer: A This week, Maddie worked 2\(\frac{1}{2}\) hours on Monday, 2\(\frac{2}{3}\) hours on Tuesday, and 3\(\frac{1}{4}\) hours on Wednesday. 2\(\frac{1}{2}\)+2\(\frac{2}{3}\) + 3\(\frac{1}{4}\) -10\(\frac{1}{2}\) =2\(\frac{1}{12}\) hours
{"url":"https://gomathanswerkeys.com/texas-go-math-grade-5-lesson-5-6-answer-key/","timestamp":"2024-11-08T10:32:11Z","content_type":"text/html","content_length":"166998","record_id":"<urn:uuid:1511fd64-95e1-4a0f-ba12-d02070bf49f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00235.warc.gz"}
[Solved] Acceleration is the rate of change of vel | SolutionInn Acceleration is the rate of change of velocity with time. Is the acceleration vector always aligned with Acceleration is the rate of change of velocity with time. Is the acceleration vector always aligned with the velocity vector? Explain. Fantastic news! We've Found the answer you've been seeking! Step by Step Answer: Answer rating: 71% (14 reviews) No the acceleration vector is not always aligned with the ...View the full answer Answered By Milbert Deomampo I have been tutoring for over 3 years and have experience with a variety of students, from those who are struggling to keep up with their class to those who are looking to get ahead. I specialize in English and writing, but I am also proficient in math and science. I am patient and adaptable, and I work with each student to find the best methods for them to learn and retain the material. 0.00 0 Reviews 10+ Question Solved Students also viewed these Engineering questions Study smarter with the SolutionInn App
{"url":"https://www.solutioninn.com/study-help/engineering-fluid-mechanics/acceleration-is-the-rate-of-change-of-velocity-with-time-is-the-873380","timestamp":"2024-11-07T03:07:52Z","content_type":"text/html","content_length":"79574","record_id":"<urn:uuid:836ce60e-79fe-4fcd-88db-40ee6316eb2a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00637.warc.gz"}
Understanding Kinetic Energy: Definition, Formula, and Examples - The Tecky Energy As we go about our daily lives, we interact with energy in many forms. One of the most important types of energy is kinetic energy, which is the energy of motion. From the cars we drive to the sports we play, kinetic energy plays a vital role in many aspects of our lives. But what exactly is kinetic energy, and how is it measured? How can we calculate kinetic energy, and what are some real-life examples of it in action? In this blog post, we’ll explore the concept of kinetic energy in detail, covering everything from its definition and formula to its different types and applications. By the end of this post, you’ll have a solid understanding of kinetic energy and its importance, as well as some practical examples of how it’s used in engineering, physics, and everyday life. Whether you’re a student, a scientist, or simply someone who’s curious about the world around you, this post will provide a comprehensive introduction to the fascinating world of kinetic energy. READ ALSO: THE NET ZERO GOAL AND WHY IT MATTERS What is Kinetic Energy? Kinetic energy is the energy of motion. When an object is in motion, it has the potential to do work, and this potential energy is known as kinetic energy. The amount of kinetic energy an object has is determined by its mass and velocity. To understand kinetic energy better, it’s helpful to know how it’s measured. Kinetic energy is measured in joules (J), which is the standard unit of energy in the metric system. The formula for calculating kinetic energy is: KE = 1/2 mv^2 Where KE is the kinetic energy in joules, m is the mass of the object in kilograms, and v is the velocity of the object in meters per second. For example, if a car with a mass of 1000 kg is traveling at a velocity of 20 m/s, its kinetic energy would be: KE = 1/2 (1000 kg) (20 m/s)^2 KE = 200,000 J This means that the car has 200,000 joules of kinetic energy due to its motion. Real-life examples of kinetic energy are all around us. For instance, a ball that is thrown has kinetic energy, as does a moving car, a speeding bullet, or a roller coaster ride. Even the movement of molecules in a gas or liquid has kinetic energy. In fact, almost everything that moves has kinetic energy to some degree. In conclusion, kinetic energy is the energy of motion that an object possesses due to its mass and velocity. It’s measured in joules and can be calculated using the formula KE = 1/2 mv^2. Understanding kinetic energy is essential in understanding energy conservation and how it applies to real-life scenarios. Types of Kinetic Energy There are different types of kinetic energy, each of which corresponds to a different type of motion. The three main types of kinetic energy are translational, rotational, and vibrational. Translational Kinetic Energy: Translational kinetic energy is the energy an object has due to its linear motion. For example, a car moving in a straight line has translational kinetic energy. The formula for translational kinetic energy is the same as the formula for kinetic energy: KE = 1/2 mv^2 Where KE is the kinetic energy in joules, m is the mass of the object in kilograms, and v is the velocity of the object in meters per second. Rotational Kinetic Energy: Rotational kinetic energy is the energy an object has due to its rotational motion. For example, a spinning top has rotational kinetic energy. The formula for rotational kinetic energy is: KE = 1/2 Iω^2 Where KE is the kinetic energy in joules, I is the moment of inertia in kilograms per meter squared, and ω (omega) is the angular velocity in radians per second. Moment of inertia is a property of an object that depends on its mass, shape, and how it’s distributed. The moment of inertia determines how easy or difficult it is to start or stop an object’s rotation. The higher the moment of inertia, the harder it is to start or stop the rotation, and the more rotational kinetic energy the object has. Vibrational Kinetic Energy: Vibrational kinetic energy is the energy an object has due to its vibration or oscillation. For example, a guitar string vibrating has vibrational kinetic energy. The formula for vibrational kinetic energy is: KE = 1/2 kA^2 Where KE is the kinetic energy in joules, k is the spring constant in newtons per meter, and A is the amplitude of the oscillation in meters. Spring constant is a property of an object that determines how much force is required to stretch or compress it. The higher the spring constant, the stiffer the object, and the more vibrational kinetic energy it has. Applications of Kinetic Energy Kinetic energy plays a vital role in many aspects of our lives. From transportation to sports, kinetic energy has practical applications in various fields. Here are some examples of how kinetic energy is used in real-life scenarios: 1. Transportation: Kinetic energy is a crucial component in transportation. Vehicles such as cars, aeroplanes, and trains rely on kinetic energy to move. In a car, the kinetic energy is stored in the gasoline, which is converted into energy to power the engine. The kinetic energy of the car is then used to move it forward. Similarly, airplanes use kinetic energy to lift off the ground and stay in the air. 2. Sports: Sports such as basketball, soccer, and tennis rely on kinetic energy for movement. When basketball player shoots a ball, they are using their kinetic energy to propel the ball towards the basket. Similarly, a soccer player uses kinetic energy to kick the ball, and a tennis player uses kinetic energy to hit the ball with their racket. 3. Energy Generation: Kinetic energy can be converted into electrical energy through generators. Hydroelectric power plants use kinetic energy from the movement of water to generate electricity. Wind turbines also use kinetic energy from the wind to generate electricity. 4. Engineering: Kinetic energy plays a vital role in various engineering applications. For example, in manufacturing, kinetic energy is used to cut and shape metal and other materials. Kinetic energy is also used in robotics, where robots use their kinetic energy to move and perform various tasks. 5. Astronomy: Kinetic energy is an essential factor in the study of astronomy. The motion of planets, stars, and galaxies is determined by their kinetic energy. By measuring the kinetic energy of celestial bodies, astronomers can learn about their movement and behavior. In conclusion, kinetic energy has many practical applications in various fields. From transportation to sports, energy generation to engineering, and astronomy, kinetic energy plays a crucial role in our daily lives. Understanding kinetic energy and its applications is essential for solving real-world problems and advancing technology. In conclusion, kinetic energy can take on different forms depending on the type of motion an object has. Translational kinetic energy is the energy an object has due to its linear motion, rotational kinetic energy is the energy an object has due to its rotational motion, and vibrational kinetic energy is the energy an object has due to its vibration or oscillation. Understanding these different forms of kinetic energy is important in various fields, such as engineering and physics, where the motion of objects plays a crucial role.
{"url":"https://teckyenergy.com/understanding-kinetic-energy-definition-formula-and-examples/","timestamp":"2024-11-12T19:43:07Z","content_type":"text/html","content_length":"139695","record_id":"<urn:uuid:aadf6c10-c35d-4ed1-8d70-36e3d3c0e192>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00747.warc.gz"}
Jump to navigation Jump to search Interval information Ratio 2/1 Factorization 2 Monzo [1⟩ Size in cents 1200¢ Names ditave, Color name w8, wa 8ve FJS name [math]\text{P8}[/math] Special properties superparticular, Tenney height (log[2] nd) 1 Weil height (log[2] max(n, d)) 2 Wilson height (sopfr (nd)) 2 Harmonic entropy ~2.24202 bits (Shannon, [math]\sqrt{nd}[/math]) [sound info] open this interval in xen-calc The octave (abbreviation: 8ve, symbol: oct, frequency ratio: 2/1) is one of the most basic intervals found in musical systems throughout the entire world. It has a frequency ratio of 2/1 and a size of 1200 cents. It is used as the standard of logarithmic measurement for all intervals, regardless if they are justly tuned or not. Octave equivalence The octave is usually called the interval of equivalence, because tones separated by an octave are perceived to have the same or similar pitch class to the average human listener. The reason for this phenomenon is probably due to the strong region of attraction of low harmonic entropy, or the strong amplitude of the second harmonic in most harmonic instruments. The Pelog and Slendro scales of the Javanese contain near-octaves even though Gamelan instruments exhibit inharmonic spectra. It is most likely reminiscent of an older musical system, or derived using the human voice instead of inharmonic instruments. The Wikipedia article includes a short discussion on its ongoing nature–nurture debate and its psychoacoustic bases. For example, it is shown that many animals including monkeys and rats experience octave equivalence to a certain extent^[1]. Meanwhile, an article in Current Biology including an 8-minute video shows that octave equivalence might be a cultural phenomenon^[2]. A generalisation where we let a different interval define equivalence is equave, such as the tritave. Alternate names Ditave is an alternative name for the interval 2/1, which was proposed to neutralize the terminology against the predominance of 7-tone scales. The name is derived from the numeral prefix δι- (di-, Greek for "two") in analogy to "tritave" (3/1). A brief but complementary description about it is here. Diapason is another term also sometimes applied to 2/1. It is also of Greek origin, but not related to the number two; instead it is formed from διά (dia) + πασων (pason), meaning something like "through all the notes". See also
{"url":"https://en.xen.wiki/w/2/1","timestamp":"2024-11-12T16:09:57Z","content_type":"text/html","content_length":"33338","record_id":"<urn:uuid:efb7f222-fe2a-41e0-b5d8-d23e6fe0d86e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00699.warc.gz"}
Find the multiplication in less than 5 seconds Multiplication in less than 5 seconds: The Base Method or Nikhilam Method Of Multiplication Like the Criss Cross System of Multiplication, the Base Method, also known as the Nikhilam method, is a wonderful contribution to Vedic Mathematics. When standard multiplication is used, it can often take a long time to calculate the result, making it incredibly helpful in some situations. You can find the multiplication in less than 5 seconds using the method outlined in the Base Method. In order to define this system, Swamiji provided the following Sanskrit sutra: “Nikhilam Navatascaram Dasatah.” Its translation is “all from 9 and the last from 10.” To comprehend the other Vedic Mathematics equations, it is essential to study the Base Method. Because a specific number is used as the base in this approach, it is known as the Base Method. Although any integer can serve as the base, powers of 10 are typically used. Numbers like 10, 100, 1000, and 10000 are among the powers of ten. Depending on the numbers in the query, we use a specific base. Suppose we are requested to multiply 95 by 98. Since both of these numbers are closer to 100, the correct base in this scenario would be 100. The correct basis for multiplying 1004 by 1021 would be 1000 because both values are closer to that number. We will find the answer in two parts — the left hand side and the right hand side. The left hand side will be denoted by the acronym LHS and the right hand side will be denoted by the acronym RHS. Let us have a look at the procedure involved in this technique of multiplication. (a) Find the Base and the Difference (b) Number of digits on the RHS = Number of zeros in the base (c) Multiply the differences on the RHS (d) Put the Cross Answer on the LHS These are the four primary steps that we will use in any given problem. To understand how the system works we will solve different questions. Find 97 X 99. STEP A: Find The Base and the difference. The first part of the step is to find the base. Have a look at example A. In this example the numbers are 97 and 99. We know that we can take only powers of 10 as bases. The powers of 10 are numbers like 10, 100, 1000, 10000, etc. In this case since both the numbers are closer to 100 we will take 100 as the base. We are still on step A. Next, we have to find the differences. In this example, the difference between 100 and 97 is 3 and the difference between 100 and 99 is 1, I.e. Difference between 100 & 97 = 100 – 97 = 3 Difference between 100 & 99 = 100 – 99 = 1 97 – 3 99 – 1 Step B: Number of digits in RHS = No. of zeros in the base. Step B is now at hand. The left hand side and the right hand side, sometimes referred to as the LHS and RHS, are where we will get the answer to the multiplication question in this stage. According to Step B, the right-hand side of the solution should have as many digits as there are zeros in the base. In this example the base 100 has two zeros. Hence, the RHS will be filled in by a two-digit number. Let us make provisions for the same: 97 – 3 99 – 1 – – We have separated the LHS and the RHS with a straight line. The RHS will have as many digits as the number of zeros in the base and so we have put empty blanks in the RHS of equal number. Step C: Multiply the differences in RHS. The third step (step C) says to multiply the differences and write the answer in the right-hand side. In example A we multiply the differences, viz. -3 by -1 and get the answer as 3. However, the RHS has to be filled by a two-digit number. Hence, we convert the answer 3 into 03 and put it on the RHS. 97 – 3 99 – 1 STEP D: Put the cross answer in the LHS. Now we come to the last step. At this stage we already have the right-hand part of the answer. If you are giving any competitive exam and the right-hand part of the answer uniquely matches with one of the given alternatives, you can straight-away tick that alternative as the correct answer. However, the multiplication in our case is still not complete. We still have to get the left hand side of the answer. Step D says to put the cross answer in the left hand side. In this example, the cross answer can be obtained by doing (97 – 1) or (99 – 3). In either case the answer will be 96. This 96 we will put on the LHS. But we already had 03 on the RHS. Hence, the complete answer is 9603. It’s not lengthy as it appears. I emphasized and explained every single step because I was describing the process for the first time. It appears long as a result. The instances we solve next will show that this is not the case in reality. Let’s solve few more examples: 1. 14 X 15 Base: 10 14 + 4 X 15 + 5 (21 ) (0) The number of digits on the RHS is more than the number of zeros in the base. In this case, we have carried over as we do in normal multiplication. Case 2: When the base is not a power of 10. 51 x 32 = ? (Important) All the while we were taking only powers of 10, namely 10, 100, 1000, etc. as bases, but in this section we will take numbers like 40, 50, 600, etc. as bases. In the problems that will follow, we will have two bases — an actual base and a working base. The actual base will be the normal power of ten. The working base will be obtained by dividing or multiplying the actual base by a suitable number. Hence, our actual bases will be numbers like 10,100, 1000, etc. and our working bases will be numbers like 50 (obtained by dividing 100 by 2) or 30 (obtained by multiplying 10 by 3), 250 (obtained by dividing 1000 by 4). Actual Bases: 10, 100, 1000, etc. Working bases: 40, 60, 500, 250, etc. Base: 100 Working base: 100 / 2 = 50 ( instead of division, we can use multiplication, 10 x 5 = 50, in order to get the working base, see Case 3 explained below.) 51 + 1 X 32 – 18 (33) (-18) = 16^1/2 (-18) = 16 (50-18) = 1632 is the answer. In this case the actual base is 100 (therefore RHS will be a two-digit answer). Now, since both the numbers are close to 50 we have taken 50 as the working base and all other calculations are done with respect to 50. Since 50 (the working base) is obtained by dividing 100 (the actual base) by 2, we divide the LHS 33 by 2 and get 16^1/2 as the answer on LHS. The RHS we get by subtracting RHS from 50. The complete answer is 1632. Case 3: Another case when the base is not a power of 10. 58 x 42 = ? (Important) Base: 10 Working base: 10 x 6 = 60 58 – 2 42 – 18 40 (36) = 240 (36) = 2436 is the answer. In this case the actual base is 10 (therefore RHS will be a one-digit answer). Now, since both the numbers are close to 60 we have taken 60 as the working base. Since, 60 is obtained by multiplying 10 by 6 so, we multiply the LHS 40 by 6 and get the answer 240. The RHS should be in one digit so, add 3 (carried over) to LHS to get the complete answer.. The complete answer is 2436. 3. Multiplying a number above the base with a number below the base. Base: 100 Working base: 100/2 = 50 58 + 8 X 42 – 8 50 (-64) = (50/2) * 100 (-64) At this point, we have the LHS and the RHS. Now, we multiply LHS with the base and subtract the RHS to get the final answer. = 2500 – 64 = 2436 Case 4: Multiplying numbers with different bases. Suppose we want to multiply 877 by 90. In this case the first number is closer to the base 1000 and the second number is closer to the base 100. Then, how do we solve the problem. Multiply 85 by 995 Here, the number 85 is close to the base 100 and the number 995 is close to the base 1000. We will multiply 85 with 10 and make both the bases equal thus facilitating the calculation. Since, we have multiplied 85 by 10 we will divide the final answer by 10 to get the accurate answer. Base: 1000 850 – 150 995 – 5 845 (750) = 845750 divided by 10 gives 84575. • We multiply 85 by 10 and make it 850. Now, both the numbers are close to 1000 which we will take as our base. • The difference is -150 and -5 which gives a product of 750. • The cross answer is 845 which we will put on the LHS. • Thus, the complete answer is 845750. But, since we have multiplied 85 by 10 and made it 850 we have to divide the final answer by 10 to get the effect of 85 again. When 845750 is divided by 10 we get 84575. Thus, the product of 85 into 9995 is 84575.
{"url":"https://webtutorspoint.com/find-the-multiplication-in-less-than-5-seconds/","timestamp":"2024-11-14T21:31:52Z","content_type":"text/html","content_length":"125044","record_id":"<urn:uuid:47172faf-43c3-4613-8184-7447263aea67>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00612.warc.gz"}
How to Type a Formula in Excel? How to Type a Formula in Excel? Are you looking for a quick and easy way to input formulas into an Excel spreadsheet? Are you tired of trying to understand the complicated instructions found online? Look no further – this guide will show you the step-by-step process of how to type a formula in Excel. We’ll cover the basics of formulas, how to enter them, and how to make sure they’re working correctly. With this guide, you’ll be typing formulas like a pro in no time! To type a formula in Excel, first click the cell in which you want to enter the formula. Then, type an equal sign (=) followed by the elements of your formula. Finally, press Enter to complete the To learn how to create complex formulas in Excel, follow these steps: • Select the cell in which you want to enter a formula. • Type an equal sign (=) followed by the elements of your formula. • If you need to include a cell reference, use the arrow keys on your keyboard, or click on the cell with your mouse. • If you need more than one operation in your formula, use the appropriate mathematical operator (+ for addition, – for subtraction, * for multiplication, / for division). • Click Enter. Your formula should now appear in the cell. Understanding How to Type Formulas in Excel Typing formulas into Excel can be a daunting task if you do not understand the basics of how the software works. Excel formulas are used to perform mathematical calculations on data in a spreadsheet. While formulas may seem intimidating, they are actually quite simple to type and use in Excel. With a few easy steps, you’ll be able to type a formula quickly and accurately. The first step in understanding how to type a formula in Excel is to understand the syntax of the formula. Every formula must start with an equal sign, followed by the name of the function you are using, followed by the arguments of the function. The arguments are the information that you are providing to the function to calculate a result. After the function and arguments, you will need to close the formula with a closing parenthesis. Once you understand the syntax of the formula, you can begin typing it in Excel. To do this, you need to select the cell that you want the result of the formula to appear in. Then, you will type the equal sign in the cell to open the formula. Next, you will type the name of the function, such as SUM or AVERAGE, followed by the arguments of the function. When you’ve finished typing the function and arguments, you will need to close the formula with a closing parenthesis. Once you’ve done this, the result of the formula should appear in the cell. Using the Autosum Feature in Excel If you are looking to save time when typing a formula, you can use the Autosum feature in Excel. Autosum is a feature that automatically adds up a column or row of numbers for you. To use it, select the cell where you want the result of the formula to appear. Then, click the Autosum button in the Home tab. The Autosum feature will then select the cells that it believes you want to add together and will display the SUM formula in the cell. If the Autosum feature does not select the correct cells, you can manually select the cells that you want to add together and the Autosum feature will insert the proper SUM formula for you. Using the Insert Function Dialog Box If you are not familiar with Excel formulas and do not know which formula to use, you can use the Insert Function dialog box. To do this, select the cell that you want the result of the formula to appear in. Then, click the Insert Function button in the Formulas tab. This will open the Insert Function dialog box. Here, you can select the function that you want to use. After selecting the function, the function’s arguments will appear in the dialog box. You can type the arguments into the dialog box and the proper formula will be inserted into the cell. Using Formulas in Excel to Perform Calculations Once you understand how to type a formula in Excel, you can use it to perform calculations. Excel formulas can be used to add, subtract, multiply, and divide numbers, as well as calculate percentages, averages, and other mathematical calculations. With a few simple steps, you can quickly and easily type a formula in Excel and use it to perform calculations. Checking the Accuracy of Formulas in Excel Once you have typed a formula, it is important to check that the formula is accurate. To do this, you can use the Formula Auditing feature in Excel. This feature will allow you to trace the formula to ensure that it is calculating the correct result. To use the Formula Auditing feature, select the cell that contains the formula and then click the Formula Auditing button in the Formulas tab. This will open the Formula Auditing window, which will display the cell references and functions used in the formula. Using Named Ranges in Formulas in Excel Named ranges are a useful feature in Excel that can make typing formulas easier. Named ranges allow you to assign a name to a range of cells. This makes it easier to type the formula because you can simply type the name of the range instead of typing out the cell references. To use named ranges, you need to select the range of cells that you want to name and then click the Name Box in the upper-left corner of the spreadsheet. Type the name of the range and then press Enter. The range will now have a name and you can use this name when typing formulas in Excel. Using Array Formulas in Excel Array formulas are a powerful feature in Excel that allow you to perform calculations on multiple ranges of cells at once. To use an array formula, you need to type the formula into a range of cells. To do this, select the range of cells that you want to use for the formula and then type the formula. To make the formula an array formula, you need to press Ctrl + Shift + Enter. Once you’ve done this, the formula will be applied to all of the cells in the range. Using the IF Function in Excel The IF function is a powerful function in Excel that allows you to perform calculations based on a condition. To use the IF function, you need to type the formula into the cell. The formula should include the IF statement, the condition, and the value that you want to return if the condition is met. Once you’ve typed the formula, press Enter to apply the formula. Using the VLOOKUP Function in Excel The VLOOKUP function is a useful function in Excel that allows you to look up data in a table. To use the VLOOKUP function, you need to type the formula into the cell. The formula should include the VLOOKUP statement, the lookup value, the table array, the column index, and the range lookup. Once you’ve typed the formula, press Enter to apply the formula. Related FAQ Q1. How do I type a formula in Excel? A1. To type a formula in Excel, click the cell in which you want the formula to be entered, type an equals sign (=), and then type the formula. Excel formulas use a combination of different mathematical operators and functions to perform calculations and return values. For example, to calculate the sum of two cells, you would enter “=A1+A2” in the cell where you want the result to appear. For more complex calculations, you can use built-in Excel functions such as SUM, AVERAGE, and COUNT. Once you’ve typed the formula, press the Enter key to see the result. Q2. How do I know if my formula is correct? A2. You can tell if your formula is correct by looking at the formula bar in Excel. The formula bar will show the formula you entered, as well as the result. If the result is incorrect, you can go back and check the formula for any errors. If the formula is correct, the result will be displayed in the cell where you entered it. Q3. How do I use cell references in a formula? A3. Cell references are used to refer to the values of other cells in a formula. To use cell references in a formula, enter the cell reference in the formula preceded by a dollar sign ($). For example, if you want to calculate the sum of two cells, you would enter “=A1+A2” in the cell where you want the result to appear. This will add the values of cell A1 and A2. Cell references can also be used to refer to a range of cells, such as “=SUM(A1:A10)” to sum the values of cells A1 through A10. Q4. How do I use functions in a formula? A4. Functions are frequently used in Excel formulas to perform calculations and return values. To use a function in a formula, enter the function name followed by a set of parentheses. Within the parentheses, you can enter the arguments used by the function. For example, to calculate the sum of two cells, you would enter “=SUM(A1,A2)” in the cell where you want the result to appear. This will add the values of cells A1 and A2. Some functions require more than two arguments, so be sure to check the documentation for the function you are using to make sure you enter the correct number and type of arguments. Q5. How do I enter a text string in a formula? A5. To enter a text string in a formula, enclose the text in quotation marks. For example, if you wanted to add a text string to the result of a calculation, you would enter “=A1+”Hello”” in the cell where you want the result to appear. This will add the value of cell A1 to the text string “Hello”. Note that if you are using a function in the formula, the text string must be enclosed in quotation marks after the function name. Q6. How do I make a formula absolute? A6. To make a formula absolute, you can use the dollar sign ($). When you use the dollar sign in a formula, it tells Excel to keep the cell reference or range reference the same no matter where you move or copy the formula. To make a cell reference absolute, enter the cell address preceded by a dollar sign, such as “=$A$1”. To make a range reference absolute, enter the range address preceded by a dollar sign, such as “=$A$1:$A$10”. Once the formula is absolute, you can copy or move it without changing the cell or range references. Overall, learning how to type a formula in Excel is a great skill to have. It’s a powerful tool that can help you save time and automate data analysis. With a little practice and some basic knowledge of Excel’s syntax, you can be up and running with formulas in no time. Just remember to use proper syntax, double-check your references, and use the function button for help. With these tips and the help of Excel, you can be confident that your formulas will be accurate and effective.
{"url":"https://keys.direct/blogs/blog/how-to-type-a-formula-in-excel","timestamp":"2024-11-03T21:50:02Z","content_type":"text/html","content_length":"371179","record_id":"<urn:uuid:f4c3eeba-aff2-4648-a35b-0481b80298a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00851.warc.gz"}
Control system classes Control system classes The classes listed below are used to represent models of input/output systems (both linear time-invariant and nonlinear). They are usually created from factory functions such as tf() and ss(), so the user should normally not need to instantiate these directly. InputOutputSystem([name, inputs, outputs, ...]) A class for representing input/output systems. LTI([inputs, outputs, states, name]) LTI is a parent class to linear time-invariant (LTI) system objects. StateSpace(A, B, C, D[, dt]) A class for representing state-space models. TransferFunction(num, den[, dt]) A class for representing transfer functions. FrequencyResponseData(d, w[, smooth]) A class for models defined by frequency response data (FRD). NonlinearIOSystem(updfcn[, outfcn, params]) Nonlinear I/O system. InterconnectedSystem(syslist[, connections, ...]) Interconnection of a set of input/output systems. LinearICSystem(io_sys[, ss_sys, connection_type]) Interconnection of a set of linear input/output systems. The following figure illustrates the relationship between the classes and some of the functions that can be used to convert objects from one class to another: Additional classes DescribingFunctionNonlinearity Base class for nonlinear systems with a describing function. DescribingFunctionResponse Results of describing function analysis. flatsys.BasisFamily Base class for implementing basis functions for flat systems. flatsys.FlatSystem Base class for representing a differentially flat system. flatsys.LinearFlatSystem Base class for a linear, differentially flat system. flatsys.PolyFamily Polynomial basis functions. flatsys.SystemTrajectory Class representing a trajectory for a flat system. optimal.OptimalControlProblem Description of a finite horizon, optimal control problem. optimal.OptimalControlResult Result from solving an optimal control problem. optimal.OptimalEstimationProblem Description of a finite horizon, optimal estimation problem. optimal.OptimalEstimationResult Result from solving an optimal estimationproblem. The use of these classes is described in more detail in the Differentially flat systems module and the Optimization-based control module
{"url":"https://python-control.readthedocs.io/en/latest/classes.html","timestamp":"2024-11-06T18:53:32Z","content_type":"text/html","content_length":"17897","record_id":"<urn:uuid:6521ab28-b3a2-4b04-8c02-dd5c92a34484>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00856.warc.gz"}
ICAMPA is the premier forum for the presentation of new advances and research results in the fields of Mathematics, Physics and Applied Science. The conference will bring together leading academic scientists, researchers and scholars in the domain of interest from around the world. Topics of interest for submission include, but are not limited to: © 2024 | Institute of Research and Journals | +91-8280047516
{"url":"http://iraj.in/Conference/10098/ICAMPA/call","timestamp":"2024-11-08T08:03:17Z","content_type":"text/html","content_length":"110490","record_id":"<urn:uuid:4412726c-5414-44eb-8f23-ca1ae4b90058>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00739.warc.gz"}
3 - Non-linear models: unscented Kalman filter Go to the end to download the full example code. or to run this example in your browser via Binder 3 - Non-linear models: unscented Kalman filter The previous tutorial showed how the extended Kalman filter propagates estimates using a first-order linearisation of the transition and/or sensor models. Clearly there are limits to such an approximation, and in situations where models deviate significantly from linearity, performance can suffer. In such situations it can be beneficial to seek alternative approximations. One such comes via the so-called unscented transform (UT). In this we characterise a Gaussian distribution using a series of weighted samples, sigma points, and propagate these through the non-linear function. A transformed Gaussian is then reconstructed from the new sigma points. This forms the basis for the unscented Kalman filter (UKF). This tutorial will first run a simulation in an entirely equivalent fashion to the previous (EKF) tutorial. We’ll then look into more precise details concerning the UT and try and develop some intuition into the reasons for its effectiveness. Limited detail on how Stone Soup does the UKF is provided below. See Julier et al. (2000) [1] for fuller, better details of the UKF. For dimension \(D\), a set of \(2 D + 1\) sigma points are calculated at: \[\begin{split}\mathbf{s}_j &= \mathbf{x}, \ \ j = 0 \\ \mathbf{s}_j &= \mathbf{x} + \alpha \sqrt{\kappa} A_j, \ \ j = 1, ..., D \\ \mathbf{s}_j &= \mathbf{x} - \alpha \sqrt{\kappa} A_j, \ \ j = D + 1, ..., 2 D\end{split}\] where \(A_j\) is the \(j\) th column of \(A\), a square root matrix of the covariance, \(P = AA^T\), of the state to be approximated, and \(\mathbf{x}\) is its mean. Two sets of weights, mean and covariance, are calculated: \[\begin{split}W^m_0 &= \frac{\lambda}{c} \\ W^c_0 &= \frac{\lambda}{c} + (1 - \alpha^2 + \beta) \\ W^m_j &= W^c_j = \frac{1}{2 c}\end{split}\] where \(c = \alpha^2 (D + \kappa)\), \(\lambda = c - D\). The parameters \(\alpha, \ \beta, \ \kappa\) are user-selectable parameters with default values of \(0.5, \ 2, \ 3 - D\). After the sigma points are transformed \(\mathbf{s^{\prime}} = f( \mathbf{s} )\), the distribution is reconstructed as: \[\begin{split}\mathbf{x}^\prime &= \sum\limits^{2 D}_{0} W^{m}_j \mathbf{s}^{\prime}_j \\ P^\prime &= (\mathbf{s}^{\prime} - \mathbf{x}^\prime) \, diag(W^c) \, (\mathbf{s}^{\prime} - \mathbf{x}^\ prime)^T + Q\end{split}\] The posterior mean and covariance are accurate to the 2nd order Taylor expansion for any non-linear model. [2] Nearly-constant velocity example This example is equivalent to that in the previous (EKF) tutorial. As with that one, you are invited to play with the parameters and watch what happens. Create ground truth from stonesoup.types.groundtruth import GroundTruthPath, GroundTruthState from stonesoup.models.transition.linear import CombinedLinearGaussianTransitionModel, \ transition_model = CombinedLinearGaussianTransitionModel([ConstantVelocity(0.05), timesteps = [start_time] truth = GroundTruthPath([GroundTruthState([0, 1, 0, 1], timestamp=timesteps[0])]) for k in range(1, 21): transition_model.function(truth[k-1], noise=True, time_interval=timedelta(seconds=1)), Set-up plot to render ground truth, as before. Simulate the measurement Create unscented Kalman filter components Note that the transition of the target state is linear, so we have no real need for a UnscentedKalmanPredictor. But we’ll use one anyway, if nothing else to demonstrate that a linear model won’t break anything. Run the Unscented Kalman Filter Create a prior Populate the track And plot The UT in slightly more depth Now try and get a sense of what actually happens to the uncertainty when a non-linear combination of functions happens. Instead of deriving this analytically (and potentially getting bogged-down in the maths), let’s just use a sampling method. We can start with a prediction, which is Gauss-distributed in state space, that we will use to make our measurement predictions from. We’ll recapitulate the fact that the sensor position is where it previously was. But this time we’ll make the measurement much noisier. The next tutorial will go into much more detail on sampling methods. For the moment we’ll just assert that we’re generating 2000 points from the state prediction above. We need these imports and parameters: Don’t worry what all this means for the moment. It’s a convenient way of showing the ‘true’ distribution of the predicted measurement - which is rendered as a blue cloud. Note that no noise is added by the predict_measurement() method, so we add some noise below. This is additive Gaussian in the sensor coordinates. from matplotlib import pyplot as plt fig = plt.figure(figsize=(10, 6), tight_layout=True) ax = fig.add_subplot(1, 1, 1, polar=True) ax.set_ylim(0, 30) ax.set_xlim(0, np.radians(180)) data = np.array([particle.state_vector for particle in predict_meas_samples.particles]) noise = multivariate_normal.rvs(np.array([0, 0]), measurement_model.covar(), size=len(data)) ax.plot(data[:, 0].ravel()+noise[:, 0], data[:, 1].ravel()+noise[:, 1], We can now see what happens when we create EKF and UKF updaters and compare their effect. Create updaters: Plot UKF (red) and EKF (green) predicted measurement distributions. # Plot UKF's predicted measurement distribution from matplotlib.patches import Ellipse from stonesoup.plotter import Plotter w, v = np.linalg.eig(ukf_pred_meas.covar) max_ind = np.argmax(w) min_ind = np.argmin(w) orient = np.arctan2(v[1, max_ind], v[0, max_ind]) ukf_ellipse = Ellipse(xy=(ukf_pred_meas.state_vector[0], ukf_pred_meas.state_vector[1]), width=2*np.sqrt(w[max_ind]), height=2*np.sqrt(w[min_ind]), # Plot EKF's predicted measurement distribution w, v = np.linalg.eig(ekf_pred_meas.covar) max_ind = np.argmax(w) min_ind = np.argmin(w) orient = np.arctan2(v[1, max_ind], v[0, max_ind]) ekf_ellipse = Ellipse(xy=(ekf_pred_meas.state_vector[0], ekf_pred_meas.state_vector[1]), width=2*np.sqrt(w[max_ind]), height=2*np.sqrt(w[min_ind]), # Add ellipses to legend label_list = ["UKF Prediction", "EKF Prediction"] color_list = ['r', 'g'] Plotter.ellipse_legend(ax, label_list, color_list) You may have to spend some time fiddling with the parameters to see major differences between the EKF and UKF. Indeed, the point to make is not that there is any great magic about the UKF. Its power is that it harnesses some extra free parameters to give a more flexible description of the transformed distribution. Key points 1. The unscented Kalman filter offers a powerful alternative to the EKF when undertaking tracking in non-linear regimes. Total running time of the script: (0 minutes 2.060 seconds)
{"url":"https://stonesoup.readthedocs.io/en/latest/auto_tutorials/03_UnscentedKalmanFilterTutorial.html","timestamp":"2024-11-11T20:42:11Z","content_type":"text/html","content_length":"253605","record_id":"<urn:uuid:1e9f22bc-6370-470c-8281-954563ae0d16>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00741.warc.gz"}
Research | Algorithmic bioinformatics | University of Helsinki Many real-world problems are modeled as computational problems, but unfortunately with incomplete data or knowledge. As such, they may admit a large number of solutions, and we have no way of finding the correct one. This issue is sometimes addressed by outputting all solutions, which is infeasible for many practical problems. We aim to construct a general methodology for finding the set of all sub-solutions common to all solutions. We can ultimately trust these to be part of the correct solution. We call this set "safe". Ultimately, we aim at creating automated and efficient ways of reporting all safe sub-solutions of a problem. The main motivation of this project comes from Bioinformatics, in particular from the analysis of high-throughput sequencing (HTS) of DNA. One of the main applications of HTS data is to assemble it back into the original DNA sequence. This genome assembly problem admits many solutions, and current research has indeed considered outputting only partial solutions that are likely to be present in the correct original DNA sequence. However, this problem has been approached only from an experimental point of view, with no definite answer on what are all the safe sub-solutions to report. In fact, the issue of safe sub-solutions has been mostly overlooked in Bioinformatics and Computer Science in general. This project will derive the first safe algorithms for a number of fundamental problems about walks in graphs, network flows, dynamic programming. We will apply these inside practical tools for genome assembly, RNA assembly and pan-genome analysis. This is very relevant at the moment, because HTS goes from research labs to hospitals, and we need answers that are first of all accurate. Our approach changes the perspective from which we address all real-world problems, and could spur a new line of research in Computer Science/Bioinformatics. The grand aim is a mathematical leap into understanding what can be safely reported from the data.
{"url":"https://www.helsinki.fi/en/researchgroups/algorithmic-bioinformatics/research","timestamp":"2024-11-02T20:36:18Z","content_type":"text/html","content_length":"69696","record_id":"<urn:uuid:5da00765-c256-4e6e-93f3-087dd009e432>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00699.warc.gz"}
Point processes – The Dan MacKinlay stable of variably-well-consider’d enterprises Point processes August 1, 2016 — February 18, 2019 point processes Another intermittent obsession, tentatively placemarked. Discrete-state random fields/processes with a continuous index. In general, I also assume they are non-lattice and simple, which terms I will define if I need them. The most interesting class for me is the branching processes. I’ve just spent 6 months thinking about nothing else, so I won’t write much here. There are comprehensive introductions. (Daley and Vere-Jones 2003, 2008; Møller and Waagepetersen 2003) A curious thing is that much point process estimation theory concerns estimating statistics from a single realisation of the point process. But in fact, you may have many point process realisations. This is not news per se, just a new emphasis. 1 Temporal point processes Sometimes including spatiotemporal point processes, depending on mood. In these, one has an arrow of time which simplifies things because you know that you “only need to consider the past of a process to understand its future,” which potentially simplifies many calculations about the conditional intensity processes; We consider only interactions from the past to the future, rather than some kind of mutual interaction. In particular, for nice processes, you can do fairly cheap likelihood calculations to estimate process parameters etc. Using the regular point process representation of the probability density of the occurrences, we have the following joint log likelihood for all the occurrences \[\begin{aligned} L_\theta(t_{1:N}) &:= -\int_0^T\lambda^*_\theta(t)dt + \int_0^T\log \lambda^*_\theta(t) dN_t\\ &= -\int_0^T\lambda^*_\theta(t)dt + \sum_{j} \log \lambda^*_\theta(t_j) \end{aligned} I do a lot of this, for example, over at the branching processes notebook, and I have no use at the moment for other types of process, so I won’t say much about other cases for the moment. See also change of time. 2 Spatial point processes Processes without an arrow of time arise naturally, say as processes where we observe only snapshots of the dynamics, or where whatever dynamics that gave rise to the process being too slow to be considered as anything but static (e.g. location of trees in forests).
{"url":"https://danmackinlay.name/notebook/point_processes.html","timestamp":"2024-11-08T22:15:45Z","content_type":"application/xhtml+xml","content_length":"90229","record_id":"<urn:uuid:8c774e36-f3e2-4c6c-a3b0-98295a12ee20>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00447.warc.gz"}
Artwork: All original copy: text, images and illustrations intended for printing. Files may be pixel or vector based. Bevelled edges: The edges of the die's perimeter are machined to form a neat flat edge, usually at an angle of 10° or 20° from the vertical. They enable the die to be clamped securely onto the press platten using clips. Bitmap: Artwork is often presented in bitmap format, produced from an image that has been converted using 50% threshold. In doing so, the file only contains black or white pixels and no shades of grey. See also DPI below. Blind embossing or debossing: A raised or sunken impression on the workpiece surface created without the use of ink or foil. Brass die: Ideal for foiling and embossing, long production runs and very versatile. Less expensive than copper. Allows multilevel and sculpted embossing dies to be manufactured. Easy material to Calliper: The thickness of the workpiece material (microns, μm), as well as its weight (gsm) should be provided in order that dies can be correctly tailored for the application. Chemical etching: Sometimes known as chemical machining. A process of using corrosive chemicals to remove unwanted metal from a metal plate in order to produce artwork, plaques, dies and tooling. Chemical machining: See chemical etching above. Chiselled: A shape put into embossed or debossed images resembling a V-shape. CNC-engraved die: A die produced using computer controlled machining equipment, programmed using computer aided manufacturing (CAM) software. Coated surface: The stock material’s surface must be considered. Smooth coated surfaces are well matched to fine detail and shiny foils. Textured surfaces require extra press pressure and produce an indented, sometimes duller foiled look. Combination die: A die used to emboss and foil in one operation. Sometimes known as a recess or foil/emboss die. See also fluted die. Copper die: A copper die produced by chemical etching. Perfect for flat foiling and some single level embossing. Good for long production runs. Can be used with higher press pressures. Retains heat well and good for crisp edges. Counter: See counterforce below. Counterdie: A duplicate die made solely with the intention of producing repeat orders of counterforces as and when required. Counterforce: A male counterpart usually moulded from the original female die, used to press the stock paper or card into the die to form embossed and/or debossed impressions. Also used with combination dies. Counterforces can also be CNC-engraved. Sometimes known as force plates. Counterforce pins: See registration pins below. Counterplate: See counterforce above. Debossing die: A die used to create an impression on which is sunk into a surface of the stock material (opposite of embossing). Deep etching: In order to prevent unwanted foil marks, thicker and softer stock materials often require deeper dies than our standard depth of 1.3mm. Depth: The vertical measurement from the main surface of the die to the bottom (or top if debossed) of the engraved image. The depth is critical and has to be specified to suit the process, image and the stock material (paper, card, leather etc). Die: A tool crafted from artwork and typically used to create embossings, cut shapes or apply foil. Die trimming: The perimeter of the die should be finished to suit the application. The specification for this may include bevelled edge angles, flange widths and other special requirements. See bevelled edges above and flange below. Domed: A shape engraved into an embossed or debossed image which resembles a semi-circle or half-moon. Double etched die: A die etched in the same way as a flat foil die, but with an additional etching process that adds a very finely etched pattern or detail to the foiling surface. Often used for artistic effect or security reasons. Also see micro etched die. DPI: The resolution of the artwork image file is normally expressed in units of "dots per inch". 1200dpi artwork may enable a better image to be produced than 300dpi artwork. Bitmap based artwork should be provided at suitably high enough file resolution. Low DPI may degrade the image which will subsequently contain unwanted digital artifacts. Duplicate die: Dies were often duplicated by using the original die to manufacture plastic castings or mouldings. Today, we prefer to machine originals and exact copies in brass, thus ensuring that all the superior benefits of using a metal die are realised. Embossing die: A die used to create an image that is raised up from the surface of the stock material (opposite of debossing). Emboss textured: Finely embossed patterns can be used to enhance backgrounds, as well as combined with foiling and other designs to create visual and tactile effects. Engraving: The process of chemically or mechanically removing metal to reproduce an image on, below or above the reference stock material surface. Etching: See chemical etching above. Expansion rate: Every material expands with heat and contracts when cooled. It’s length change is proportional to the original length and the change in temperature. This has implications when using a metal foiling die at elevated temperatures and must be allowed for. Copper and brass are superior choices for good registration as their expansion rate is about 25% that of magnesium. Flange: The amount of metal between the image and the edge of the die. Expressed as a width in mm and dependent on the application, may be specified differently for each edge. Flat foil die: A metal die that when heated, transfers foil onto the stock material and leaves a flat surface. Fluted counterforce: Male counterpart used to press the workpiece into the contours of a fluted die whilst simultaneously applying foil. See counterforce above. Fluted foil die: A brass die used to emboss and foil in one operation. Sometimes known as a combination, recess or foil/emboss die. Foil: Metallic or pigmented coating usually supplied on a polyester substrate. Used in foil stamping and foil embossing. Foil blocking: The process of applying the foil to the stock material using a metal die. Foil embossing die: A die used to simultaneously raise an image or text and apply foil in order to emphasise a design. See combination die. Force plate: See counterforces above. Gauge: A historical standard range of metal thicknesses. Imperial sizes persist e.g.16 gauge is 1.63mm and is commonly used for intaglio dies. Graphic Foil: Metallic hot stamping foil available in many colours and shades. Hand engraving: The skilled art of manually cutting, carving and creating detailed shapes and text in metal dies by hand. Holographic die: A die that has a holographic image micro-etched onto the surface. It can produce "hidden" foiled effects which appear to be three-dimensional to the viewer. Can be used for embellishment and security applications. They have the advantage of not requiring special registration during set-up. Holographic foil: Sometimes known as diffraction patterned foils. Many designs available ranging from simple dots to attractive snowflakes and in many coloured metallic foils. Hot foil stamp: Also known as hot foil block, hot foil die or foiling die. Intaglio die: A tool used to transfer ink onto the surface of the stock material. Produces a high quality image and/or text that is raised and has a tactile crafted appeal. Jacket: Removable cover of a book or brochure. KV counterforce: A counterforce made from PVC materials. A cost effective and mechanically flexible design. Laser exposure: A digitally based process of accurately transferring an image, pixel-by-pixel onto a photographically pre-sensitized metal plate prior to chemical etching. Resolutions of up to 2400dpi are possible. Letterpress die: A die used for printing with ink. When used with pressure it can also create an ‘indented’ effect. Location System: A mechanical system of pins and holes which allow counterforces and dies to be accurately positioned with respect to each other. See counterforce pins. Magnesium die: A magnesium metal die produced by chemical etching. Least expensive die material, good for short production runs. Fast to produce, used for foiling, embossing and debossing. Make ready: Materials used during the press set-up to ensure that uniform results are achieved when using dies. Make ready also helps to compensate for any irregularities between the die to press Multi-level die: A die where the image is engraved across more than one engraved depth. The image can be embossed and/or debossed. Multi-textured: An engraved image that has more than one style of texture. Micro embossing: Often used for security applications, enables small image details and/or text to be lightly embossed into the surface of foil. Micro-etched die: A die that has very finely etched image details on its foiling surface. Also see double etched die above. Negative image: An image where the light and dark areas are the reverse of the original e.g. shadows are white, highlights are black. Paper level: The reference point for all embossing or debossing. Dimensionally it is the "zero" position, usually the top surface of the stock material. Photo etched die: A photographic image is graphically interpreted via greyscale halftone, which is in turn micro-etched into the metal surface to create a foiling die. Pigment foils: Foils which produce strong uniform patches of colour, available in seemingly limitless shades and tones. Pillowed edges: Sometimes, in order to prevent non-image areas of a die marking the stock material, the top corners of the die are radiused in order to remove the angular corners. Positive image: An image where the light and dark areas are the same as the original e.g. shadows are black, highlights are white. Raster file: A digital file composed of pixels. Not scalable like a vector file. Recess die: See combination die. Registration pins: A system of pins and holes that enable the dies and counterforces to be perfectly aligned whilst setting up the printing press. Often referred to as counterforce pins. Resin counterforce: A counterforce made from a grade of polyurethane on an epoxy glass resin backing board. They are tough but not brittle, can be used at foiling temperatures and replicate fine details very well. Resolution: See also bitmap and DPI above. Reverses: Open parts of an image where the workpiece background and/or paper show through the printed or foiled areas. Also applies to text. Right reading: An image and/or text which is the right way around and has not been horizontally flipped. Routing: A machining process which further removes material from “non image” areas. Routed areas further reduce the risk of the die from marking the stock material, and/or unintentionally transferring foil. Sculptured die: A die that raises and/or lowers an image, utilising any appropriate shape, angle or edge to create a sculpted realistic effect. Security die: A die often made by both CNC machining and fine chemical etching. Sometimes used with special foils where it’s very finely detailed etched textures produce patterns and images that are extremely difficult to replicate. Sometimes known as a ‘Quality Seal’ die. Available with matching counterforces. Single level emboss die: A die that lowers an image from paper level to one depth. In the main, this depth will be achieved in all bold image areas, but may be shallower in narrower sections. Step and repeat: The same image repeated more than once, horizontally and/or vertically at set intervals. These intervals are referred to as 'centres' when measured from a point on the image, to the corresponding point on the adjacent image. Stock: The material to be printed, foiled or embossed/debossed. Textured foil die: A die which can apply foil as normal, but where the foil is slightly impressed to create a matte, diffractive or patterned texture on its surface. Many designs are available. Textured emboss die: A die used to produce a lightly embossed repeating pattern on the surface of the stock. Many designs are available. Type high: A term often used with letterpress. It is the distance between the foot (or base) of metal-type to the top i.e. the overall height. Different heights have been used in different countries. See wood mount below. Uncoated surface: See coated surface above. Up: Printing ‘two up’ or ‘three up’ means printing multiple copies of the same image on the same workpiece. Vector File: A digital file which can be infinitely adjusted in size without losing resolution. They are more suitable for some types of images than raster files. Wood mount: Often used for letterpress dies which are often securely fixed to a wood base in order to raise the printing surface to the correct height. See “type high” above. Wrong reading: An image and/or text which is the wrong way around because it has been horizontally flipped.
{"url":"https://tomlinsonlimited.co.uk/services/glossary?token=TJkYhynvynQMIBV8wxRPaAjFPXQsWvCu","timestamp":"2024-11-05T22:28:01Z","content_type":"text/html","content_length":"37597","record_id":"<urn:uuid:a54af8db-2044-4436-8169-8190305619b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00177.warc.gz"}
R Solution for Excel Puzzles | R-bloggersR Solution for Excel PuzzlesR Solution for Excel Puzzles R Solution for Excel Puzzles [This article was first published on Numbers around us - Medium , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. Puzzles no. 454–458 Author: ExcelBI All files (xlsx with puzzle and R with solution) for each and every puzzle are available on my Github. Enjoy. Puzzle #454 Sombody was counting something, and as usually we have to check it and find the most insigthful info. We have to find the length of each sequence, but in some of the, there are ranges with text. That mean that we have to use some R magic. Let’s play. Loading libraries and data input = read_excel("Excel/454 Extraction of number of nodes.xlsx", range = "A1:A9") test = read_excel("Excel/454 Extraction of number of nodes.xlsx", range = "B1:B9") replace_notation_with_range <- function(text_vector) { str_replace_all(text_vector, "\\d+ to \\d+", function(match) { numbers <- str_split(match, " to ") %>% unlist() %>% range <- seq(from = numbers[1], to = numbers[2]) paste(range, collapse = ", ") count_numbers <- function(text_vector) { str_count(text_vector, "\\d+") %>% result = input %>% mutate(Pronlem = str_to_lower(Pronlem)) %>% mutate(Pronlem = map_chr(Pronlem, replace_notation_with_range)) %>% mutate(Count = count_numbers(Pronlem)) %>% identical(result$Count, test$`Answer Expected`) # [1] TRUE Puzzle #455 Have you heard of antiperfect numbers? They are in some weird way perfect to me… perfect to play with. We need to find out which of given numbsers are antiperfect. But what does it mean? That if you take all divisors except number itself, change order of letters and add them up, they will be equal to original number. Let’s do it. Loading libraries and data input = read_excel("Excel/455 Anti perfect numbers.xlsx", range = "A1:A10") test = read_excel("Excel/455 Anti perfect numbers.xlsx", range = "B1:B5") is_antiperfect = function(number) { divisors = divisors(number) divisors = divisors[-length(divisors)] reversed_divisors = map(divisors, ~str_c(rev(str_split(.x, "")[[1]]), collapse = "")) %>% sum_rev_div = sum(reversed_divisors) return(sum_rev_div == number) result = input %>% mutate(is_antiperfect = map_lgl(Numbers, is_antiperfect)) %>% filter(is_antiperfect) %>% select(`Expected Answer` = Numbers) identical(result, test) # [1] TRUE Puzzle #456 Today’s challenge is pretty easy. And that is why I will give you two ways to do it. Loading libraries and data input = read_excel("Excel/456 Extract special Characters.xlsx", range = "A1:A10") test = read_excel("Excel/456 Extract special Characters.xlsx", range = "B1:B10") Transformation—approach 1 # approach 1 - remove alphanumerics result = input %>% mutate(String = str_replace_all(String, "[[:alnum:]]", "")) %>% mutate(String = ifelse(String == "", NA, String)) Transformation—approach 2 # approach 2 - extract special characters result2 = input %>% mutate(String = str_extract_all(String, "[^[:alnum:]]") %>% map_chr(~paste(.x, collapse = ""))) %>% mutate(String = ifelse(String == "", NA, String)) identical(result$String, test$`Expected Answer`) #> [1] TRUE identical(result2$String, test$`Expected Answer`) #> [1] TRUE Puzzle #457 Today we have task similar, but we have more complicated case. I used Regex capacities to find all numbers that are “hugged”with any kind of parenthesis. I can say that except Regex itself pretty easy case. Regex needs to use lookbehind and lookahead in cases. Loading libraries and data input = read_excel("Excel/457 Extract Numbers in Parenthesises.xlsx", range = "A1:A10") test = read_excel("Excel/457 Extract Numbers in Parenthesises.xlsx", range = "B1:B10") input = read_excel("Excel/457 Extract Numbers in Parenthesises.xlsx", range = "A1:A10") test = read_excel("Excel/457 Extract Numbers in Parenthesises.xlsx", range = "B1:B10") identical(result, test) # [1] TRUE Puzzle #458 Capital letters stands out in text as too tall soldier in a row. And our host gave us chance to make special meeting for the tallest soldiers. We need to find longest sequence of capital letters in this words. If there are more than one possible, we need to concatenate them. Loading libraries and data input = read_excel("Excel/458 Maximum Consecutive Uppercase Alphabets.xlsx", range = "A1:A11") test = read_excel("Excel/458 Maximum Consecutive Uppercase Alphabets.xlsx", range = "B1:B11") get_longest_capital = function(string) { caps = str_extract_all(string, "[A-Z]+") %>% unlist() caps_len = ifelse(length(caps) == 0, NA, max(nchar(caps))) caps = caps[nchar(caps) == caps_len] %>% paste0(collapse = ", ") result = input %>% mutate(ans = map_chr(Words, get_longest_capital)) %>% mutate(ans = ifelse(ans == "", NA_character_, ans)) all.equal(result$ans, test$`Expected Answer`) # [1] TRUE Feel free to comment, share and contact me with advices, questions and your ideas how to improve anything. Contact me on Linkedin if you wish as well. PS. Couple weeks ago, I started uploading on Github not only R, but also in Python. Come and check it. R Solution for Excel Puzzles was originally published in Numbers around us on Medium, where people are continuing the conversation by highlighting and responding to this story.
{"url":"https://www.r-bloggers.com/2024/05/r-solution-for-excel-puzzles-24/","timestamp":"2024-11-03T19:01:25Z","content_type":"text/html","content_length":"87853","record_id":"<urn:uuid:38244a8b-328e-4298-9ce2-82592c8b2a6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00414.warc.gz"}
Aug 2006 Thursday August 31 2006 Time Replies Subject 11:49PM 5 Tables with Graphical Representations 9:24PM 1 differnce between lme and proc mixed 9:05PM 1 grep question 8:22PM 2 cumulative growth rates indexed to a common starting point over n series of observations 8:03PM 0 alpha-channel transparency in filled.contour 6:49PM 0 Ooops, small mistake fixed (pretty printing multiple models) 6:43PM 0 Pretty-printing multiple regression models 6:23PM 1 alpha in Holt-Winters method 6:14PM 0 java R interface [Broadcast] 5:24PM 1 java R interface 5:19PM 0 periodic spline in glm 5:00PM 0 ANN: R/BioC Intro Course Oct 9th-11th in Seattle 4:51PM 0 Weighed 2D kernel density estimator 4:36PM 1 S4 Method Dispatch for Sealed Classes 4:30PM 4 problems with plot.data.frame 4:04PM 2 Consulta sobre bases en R 3:27PM 1 problem with postscript output of R-devel on Windows 3:14PM 0 sample size for recurrent events 2:41PM 6 newbie question about index 2:12PM 3 what's wrong with my simulation programs on logistic regression 1:32PM 0 New package ffmanova for 50-50 MANOVA released 12:52PM 1 Problems with OS X R 12:33PM 3 predict.lm within a function 12:32PM 0 Summary and thanks: .Rprofile under Windoze. 11:58AM 2 need help with an interaction term 11:47AM 1 Error in memory allocation 10:49AM 0 plot(augPred) and abline help 8:08AM 0 Moving Window regressions with corrections for Heteroscedasticity and Autocorrelations(HAC) 7:26AM 0 Weigthed mixed variable distance function needed for clustering 7:15AM 2 DCOM 1.3.5 Exception from HRESULT: 0x80040013 on iR.Init("R") 6:30AM 0 Question about the test of hartley and box 6:12AM 0 Data Download Probelm from Yahoo 4:36AM 2 Combine 'overlapping' dataframes, respecting row names 3:14AM 0 New package 'random' for non-deterministic random number 2:18AM 1 NaN when using dffits, stemming from lm.influence call 1:28AM 1 Frequency tables without underlying data Wednesday August 30 2006 Time Replies Subject 11:47PM 1 Update new version 10:28PM 2 how to read just a column 9:49PM 10 .Rprofile under Windoze. 8:45PM 0 Debugging with gdb 5:58PM 2 Ranking and selection statistical procedure 5:16PM 0 bootstrap for group and subgroup 5:02PM 1 lmer applied to a wellknown (?) example 2:27PM 5 working with summarized data 2:26PM 0 function which gives the hessian matrix of the log-likelihood of a nonlinear mixed model? 2:04PM 8 converting decimal - hexadecimal 1:56PM 1 Optimization 1:49PM 4 Create a vector from another vector 12:47PM 3 Antwort: Buying more computer for GLM 12:30PM 1 Datetime 11:37AM 0 Version 1.2-0 of the Rcmdr package 10:27AM 2 density() with from, to or cut and comparrison of density() 9:43AM 4 Barplot 8:55AM 1 How to put title Vertically 8:42AM 2 MCMClogit 7:35AM 1 Installation of SrcStatConnectorSrv on Windows 6:37AM 0 fitting an interaction term 5:47AM 1 Cross-correlation between two time series data 5:47AM 1 Need help to estimate the Coef matrices in mAr 12:27AM 1 Handling realisations in geoRglm 12:18AM 1 Help on apply() function Tuesday August 29 2006 Time Replies Subject 10:25PM 3 Substring and strsplit 9:35PM 0 my email: jonathan gheyssens (université de Montreal) 9:03PM 3 Producing R demos 8:18PM 2 lattice and several groups 8:05PM 1 subset by two variables 7:59PM 2 lattice/xyplot: plotting 4 variables in two panels - can this be done? 7:42PM 0 Key() and par(mfrow) 7:40PM 1 Dendrogram troubles 6:55PM 1 forestplot fucntion in rmeta package 4:59PM 0 Write signed short into a binary file (follow up and conclusion)(for real) 4:09PM 0 how to contrast with factorial experiment 4:08PM 1 passing namees 3:36PM 0 The rpanel package 3:08PM 2 Bioconductor installation errors 2:51PM 1 spectral clustering 2:35PM 0 symbols in coplots 2:17PM 1 First elements of a list. 12:24PM 0 MODWT exceeds sample size 11:11AM 1 AffyChip Background Analysis 10:53AM 2 EOF and CCA analysis 3:32AM 1 Deviance function in regression trees 1:34AM 0 En: Bootstraping for groups (right data tables) 1:13AM 1 Bootstraping for groups and subgroups and joing with other table 1:12AM 2 singular matrix 12:43AM 1 Legend box line thickness 12:15AM 1 standardized partial regression coefficients Monday August 28 2006 Time Replies Subject 10:45PM 1 Bug/problem reporting: Possible to modify posting guide FAQ? 10:28PM 7 Time plots 8:36PM 1 Extracting column name in apply/lapply 8:08PM 2 Cannot get simple data.frame binding. 7:16PM 3 matrix "Adjoint" function 5:54PM 2 regex scares me 5:20PM 6 Remove empty list from list 4:50PM 2 Help with Functions 3:39PM 1 Help on function adf.test 2:52PM 0 Write signed short into a binary file (follow up and conclusion) 1:34PM 0 Write signed short into a binary file 1:09PM 1 How to change the color of Plot area. 1:06PM 0 I'm on vacation 12:50PM 3 screen resolution effects on graphics 12:29PM 0 debugger 12:27PM 1 Modified Bessel function of third kind (fractional or real order) 11:44AM 1 Merge list to list - as matrix 8:56AM 1 Rgraphviz - neato layout - edge weights do not have an effect 8:45AM 0 Splancs Query 6:23AM 3 Firefox extension fo "R Site Search" 4:56AM 1 Download problems Sunday August 27 2006 Time Replies Subject 4:02PM 1 refer to objects with sequential names 7:30AM 0 (no subject) 2:53AM 1 Simulations in R during power failure 1:50AM 1 how to create many objects with sequencial names? Saturday August 26 2006 Time Replies Subject 11:00PM 2 Importing data from clipboard on Mac OSX 10:44PM 2 Permanently changing gui preferences 10:42PM 1 Capture of iterative integration output 10:07PM 5 Type II and III sum of square in Anova (R, car package) 8:28PM 0 boxplot( ..args.., axes=FALSE, frame.plot=TRUE) doesn't frame the plot 7:01PM 4 Can R compute the expected value of a random variable? 5:06PM 3 for() loop question 12:56PM 1 Adding a footnote in plot-window in R 12:47PM 1 problems with loop 11:48AM 1 Implementing EM Algorithm in R! 7:11AM 1 Problem on Histogtam 4:35AM 1 Memory usage decreases drastically after save workspace, quit, restart, load workspace Friday August 25 2006 Time Replies Subject 10:41PM 0 Problem with geeglm 7:35PM 2 horizontal direct product 7:16PM 5 Quickie : unload library 6:54PM 1 tick.number for date in xyplot 6:52PM 4 How to iteratively extract elements out of a list 6:36PM 0 tcltk command to figure out which widget is on focus (or clicked) 5:23PM 1 Modifying the embed-results 4:49PM 1 R.squared in Weighted Least Square using the Lm Function 4:18PM 1 Calculating critical values 3:52PM 4 fitting a gaussian to some x,y data 3:51PM 2 xyplot with different symbols and colors? 3:46PM 1 Plot y ~ x under condition of variable a and b [Broadcast ] 3:37PM 1 How to get back POSIXct format after calculating with hist() results 2:48PM 2 increasing the # of socket connections 2:47PM 0 biglm 0.4 2:14PM 0 correlation between 3 vectors 1:08PM 2 plot question 8:08AM 1 exact Wilcoxon signed rank test with ties and the "no longer under development" exactRanksumTests package 7:04AM 2 R in Nature 6:49AM 0 tktoplevel & tkgetSaveFile options 2:25AM 0 tcltk command to figure out which widget in active or in focus 1:28AM 0 sandwich: new version 2.0-0 1:19AM 0 zoo: new version 1.2-0 Thursday August 24 2006 Time Replies Subject 11:29PM 1 Need help with difficulty loading page www.bioconductor.org 10:17PM 0 R News, volume 6, issue 3 is now available 9:30PM 1 Using a 'for' loop : there should be a better way in R 9:21PM 2 Problem in library.dynam problems on Linux 9:01PM 3 generating an expression for a formula automatically 8:05PM 4 extremely slow recursion in R? 7:41PM 1 how to constrast with factorial experiment 6:51PM 0 ca.po Pz test question 6:21PM 0 forum.LancashireClubbers.co.uk - LAUNCH 4:59PM 5 Check values in colums matrix 4:30PM 1 Lattice symbol size and legend margins 3:53PM 5 xyplot tick marks and line thickness 3:23PM 1 problem in install on ubuntu 3:15PM 0 [Rd] reshape scaling with large numbers of times/rows 3:06PM 2 Why are lagged correlations typically negative? 2:53PM 2 installing the x86_64 R Binary on Fedora Core 5 1:58PM 1 lmer(): specifying i.i.d random slopes for multiple covariates 1:05PM 0 R and time series 12:54PM 1 Optim question 12:43PM 1 help: trouble using lines() 12:38PM 1 metaplot and meta.summaries 12:09PM 6 Intro to Programming R Book 11:41AM 2 my error with augPred 10:33AM 0 syntax for pdDiag (nlme) 10:03AM 3 How to compare rows of two matrices 9:34AM 0 Lost command area in R-SciViews 9:31AM 1 Omegahat-site down? 9:21AM 0 Waring message in mvBEKK.est 6:27AM 2 Search for best ARIMA model 3:01AM 2 help with pasting + expressions? 2:58AM 0 how to replace one row with other row? 2:55AM 2 "fixed effects" transformation 2:38AM 0 ess-remote question 2:00AM 0 Classification tree with a random variable 12:50AM 0 spatstat 1.9-5 Wednesday August 23 2006 Time Replies Subject 9:15PM 3 rgl: exporting to pdf or png does not work 8:13PM 1 rgl package: color of the axes 7:07PM 0 Wavelet Output 5:28PM 1 covariance matrix of predictions 4:58PM 2 editing ".Internal" functions 4:48PM 0 Calculating the combined standared errors from two regression equations 4:34PM 0 How would you run repeated-measures multifactorial MANCOVA? 2:55PM 2 nonlinear least squares trust region fitting ? 1:39PM 0 problems installing odrpack 12:08PM 5 negatively skewed data; reflecting 11:40AM 0 Random structure of nested design in lme 11:11AM 5 two density curves in one plot? 11:06AM 1 how to get a histogram of an POSIXct vector ? 9:41AM 0 creating a list from distance matrix 9:37AM 1 3d timeseries dataframe 3:30AM 1 glm inside one self-defined function 3:23AM 1 how to complete this task on data management 12:21AM 0 Rtangle in R-2.3.1 Tuesday August 22 2006 Time Replies Subject 10:46PM 0 rpart output: rule extraction beyond path.rpart() 9:51PM 2 R is wonderful 8:59PM 2 how to run ANCOVA? 8:32PM 1 Question on R Training 8:01PM 1 a generic Adaptive Gauss Quadrature function in R? 6:51PM 0 Mac os 6:27PM 1 Marginal Predicitions from nlme and lme4 6:15PM 1 Selection on dataframe based on order of rows 4:36PM 5 Authoring a book 4:30PM 0 Job Opportunity - Citigroup Quantitative Equity Research 3:20PM 0 NonLinearLeastSquares Trust-Region 3:19PM 0 3 September Courses: (1) Regression Modeling Strategies in R/Splus, (2) R/Splus Advanced Programming, (3) R/Splus Fundamentals 2:42PM 0 Comparing pre-post effect sizes and areas under the curve, resp. 2:35PM 1 HH and Rcmdr.HH packages available 1:50PM 2 new version of "The R Guide" available on CRAN 1:45PM 1 summary(lm ... conrasts=...) 1:41PM 0 :Circular-Linear Correlation 12:28PM 3 How to interrupt running computation? 12:27PM 1 boxplot order of the levels 11:29AM 2 listing a sequence of vectors in a matrix 10:47AM 1 big numbers 10:42AM 2 Rgraphviz installation Problem 9:31AM 4 Successive subsets from a vector? 6:41AM 2 Adding Grid lines 4:19AM 0 gl1ce warning message? 2:21AM 1 ANN: 'weaver' package, caching for Sweave 2:08AM 1 Total (un)standardized effects in SEM? Monday August 21 2006 Time Replies Subject 11:06PM 2 polychor error 10:15PM 1 return tree from .Call 9:39PM 4 question about 'coef' method and fitted_value calculation 9:00PM 0 Question about the varmx.pca.fd 8:52PM 1 Retrieving p-values and z values from lmer output 8:43PM 1 Escaping " ' " character 7:31PM 0 removing for-loop question 6:00PM 4 aggregate example : where is the state.region variable? 5:56PM 1 Dataframe modification 5:51PM 0 Fw: Permutations with replacement 5:49PM 5 lean and mean lm/glm? 4:57PM 1 "vcov" error in svyby and svytable functions 3:46PM 1 Fwd: Re: Finney's fiducial confidence intervals of LD50 3:29PM 0 R-packages posting guide (was: Re: [R-pkgs] New version of glmmML) 3:25PM 1 Creating a pixel image 2:57PM 1 reshape a data frame 2:57PM 0 Assistant Professor Position - Univ. of Central Florida (Orlando, FL) 2:29PM 1 R2WinBugs 2:05PM 1 interpreting coxph results 12:17PM 0 RE : test the tcltk package 12:06PM 0 test the tcltk package 12:03PM 2 Finney's fiducial confidence intervals of LD50 9:12AM 1 New version of glmmML 6:19AM 1 Clique technique-Package Sunday August 20 2006 Time Replies Subject 8:45PM 3 plot problem 6:39PM 1 C compile problem on Ubuntu linux 2:13PM 1 fit the series data 12:37PM 2 Grid Points 10:49AM 3 unquoting 6:46AM 2 how to the p-values or t-values from the lm's results 12:19AM 1 issues with Sweave and inclusion of graphics in a document Saturday August 19 2006 Time Replies Subject 11:58PM 2 question about cbind() 7:38PM 2 A matrix problem 7:10PM 0 [S] lapply? 6:41PM 0 centroid of manually given groups in a cca-plot 2:01PM 1 lapply? 12:54PM 1 need to find (and distinguish types of) carriage returns in a file that is scanned using scan 12:30PM 1 problem with Rcmd check and fortran95, makefile 11:58AM 4 string-to-number Friday August 18 2006 Time Replies Subject 8:44PM 1 list of lists to a data.frame 8:43PM 0 Affy: problems using neweS 8:26PM 1 multivariate analysis by using lme 8:25PM 1 Permutations with replacement 6:55PM 2 Floating point imprecision in sum() under R-2.3.1? 4:13PM 2 dataframe of unequal rows 4:13PM 0 [Fwd: Trend test and test for homogeneity of odd-ratios] 3:49PM 1 Boxplot Help 3:17PM 2 apply least angle regression to generalized linear models 2:50PM 2 R-update - what about packages and ESS? 2:50PM 3 Query: how to modify the plot of acf 2:41PM 5 as.data.frame(cbind()) transforming numeric to factor? 2:27PM 0 lmList and missing values 12:40PM 1 Insert rows - how can I accomplish this in R 10:17AM 2 4^2 factorial help 9:16AM 0 rotating axis labels in plot with multiple plots 8:28AM 1 Maximum length of R GUI input line? 8:13AM 1 using R to perform a word count - syntax refinement and incorrect number of dimensions error 8:03AM 1 Odd behaviour of R 3:23AM 3 Lattice package par.settings/trellis.par.settings questions 2:00AM 0 Fitting Truncated Lognormal to a truncated data set (was: fitting truncated normal distribution) Thursday August 17 2006 Time Replies Subject 11:46PM 0 Font-path error when starting X11 device in Gentoo 11:25PM 0 DSC 2007 10:04PM 2 getting sapply to skip columns with non-numeric data? 8:53PM 0 Rgraphviz fails to load 8:49PM 1 Simulate p-value in lme4 7:18PM 0 problem with cut(as.Date("2006-08-14"), "week") 6:46PM 2 Boxplot Help: Re-ordering the x-axis 6:23PM 1 R Site Search directly from Firefox's address bar 6:11PM 0 rbind-ing vectors inside lists 5:56PM 1 unlink disables help? 9:01AM 1 NLME: Limitations of using identify to interact with scatterplots? 9:01AM 1 tkinser 1:08AM 1 Setting contrasts for polr() to get same result of SAS 12:33AM 1 putting the mark for censored time on 1-KM curve or competing risk curve Wednesday August 16 2006 Time Replies Subject 11:01PM 1 bwplot in loop doesn't produce any output 8:19PM 0 confusing about contrasts concept [long] 7:23PM 1 Plots Without Displaying 6:43PM 6 read.csv issue 5:38PM 1 help about agnes 5:36PM 3 separate row averages for different parts of an array 5:09PM 5 Autocompletion 5:02PM 1 matching pairs in a Dataframe? 4:09PM 1 Problem with the special argument '...' within a function 3:39PM 0 Trend test and test for homogeneity of odd-ratios 3:34PM 0 Strange behavior with "hist" function filled with breaks and freq attribute 3:23PM 1 confusing about contrasts concept 3:22PM 1 list to balanced array 2:27PM 2 adding multiple fitted curves to xyplot graph 1:48PM 3 fitting truncated normal distribution 12:46PM 1 advice on exporting a distance matrix in the correct format needed please 11:35AM 1 [SPAM] - RE: REML with random slopes and random intercepts giving strange results - Bayesian Filter detected spam 9:12AM 0 Regular expressions: retrieving matches depending on intervening strings [Follow-up] 7:42AM 5 How to remove similar successive objects from a vector? 7:17AM 0 Regular expressions: retrieving matches depending on intervening strings 4:50AM 1 Specifying Path Model in SEM for CFA Tuesday August 15 2006 Time Replies Subject 11:14PM 1 coefficients' order in polr()? 9:39PM 1 A model for possibly periodic data with varying amplitude [repost, much edited] 7:29PM 1 "model = F" causing error in polr() 6:55PM 1 Hierarchical clustering 6:29PM 1 Grasper model error 6:14PM 1 rexp question 6:12PM 0 zlim not working in persp3d 5:10PM 1 How to show classes of all columns of a data frame? 4:59PM 2 nls convergence problem 3:34PM 1 REML with random slopes and random intercepts giving strange results 3:33PM 2 Looking for info on the "Regression Modeling Strategies in R" course in DC area 3:32PM 0 question re: "summarry.lm" and NA values 2:57PM 1 fMultivar OLS - how to do dynamic regression? 2:01PM 3 question re: "summarry.lm" and NA values 1:18PM 0 ARCH Jump diffusion models 12:54PM 4 nls 12:29PM 2 Aliases for arguments in a function 11:27AM 0 A plot with a bisector 8:35AM 2 Protection stack overflow 7:21AM 0 how to call forth a class definition buried in a package 6:57AM 0 Help with workaround for: Function '`[`' is not in thederivatives table 3:36AM 1 help: cannot allocate vector of length 828310236 2:43AM 3 merge 2 data frame based on more than 2 variables Monday August 14 2006 Time Replies Subject 11:09PM 0 Call for Beta Testers: R+ (read R plus) for Solaris and L inux: 11:00PM 0 Alexandre MENICACCI/Daix/RED/GroupeFournier est absent(e). 10:44PM 1 Help with workaround for: Function '`[`' is not in the derivatives table 8:00PM 1 Fast way to load multiple files 7:18PM 1 CircStats help 7:11PM 1 ARMA(1,1) for panel data 5:12PM 3 Making R script to run in a console 5:07PM 0 missing data treatment in MCMC pack 4:53PM 1 Attempt to access unmapped memory 3:56PM 3 Question on .Options$max.print 3:30PM 1 solving non-linear system of equations 3:19PM 0 help with glmmPQL 2:27PM 2 lme() F-values disagree with aov() 2:22PM 1 left-justified fixed-width format 2:11PM 1 Lattice barchart with different fill pattern 2:09PM 1 lasso for variable selection 1:57PM 1 mtext uses the typographical descender to align text 1:08PM 1 Presentation of multiple models in one table using xtable 11:58AM 2 Calculating trace of products 11:47AM 3 column to row 10:25AM 1 naive help with setting node attributes 10:05AM 0 posting 6:28AM 0 GAM Package: preplot.gam taking a **long** time 4:16AM 0 Random Survival Forest 1.0.0 is now available. Sunday August 13 2006 Time Replies Subject 11:35PM 1 Gower Similarity Coefficient 8:29PM 2 Puzzling warning using 2.3.1... 6:48PM 5 split a y-axis to show data on different scales 2:48PM 2 How to order or sort a data.frame 1:21PM 3 How to reply to a thread if receiving R-help mails in digest form 12:23PM 2 Vector Join Saturday August 12 2006 Time Replies Subject 2:09PM 1 adding columns to a table after a loop 6:59AM 0 anova.mlm for single model (one-way repeated measured anova) 12:50AM 2 problem in reading large files Friday August 11 2006 Time Replies Subject 11:41PM 1 x tick labels - sparse? 11:25PM 1 more on date conversion differences in 2.2.1 vs 2.3.1 8:44PM 2 invisible() - does not return immediately as return() does 8:05PM 3 Auto-save possible in R? 7:24PM 2 rpvm/snow packages on a cluster with dual-processor machi nes 6:48PM 3 An apply and rep question 6:15PM 2 Creating SAS transport files 4:43PM 0 Getting summary.lm to include data for coefficients that are NAs? 4:12PM 1 rpvm/snow packages on a cluster with dual-processor machines 3:56PM 2 Colour-coding intervals on a line 3:46PM 1 [BioC] problem loading affycoretools (more details) 3:14PM 2 tkinsert 2:56PM 1 Changing R network connections 1:09PM 1 Is the PC1 in PCA always a "size effect"? 11:21AM 1 Suggestions for help & weighted.mean 11:21AM 0 Box's M test [Broadcast] 11:20AM 1 help:coerce lmer.coef to matrix 11:05AM 0 Box's M test 10:42AM 1 garch results is different other soft 9:33AM 1 help: convert lmer.coef to matrix 8:32AM 2 about MCMC pack again... 7:31AM 1 RE : tcltk library on linux 6:35AM 1 - factanal scores correlated? 3:27AM 0 Read in vtk data Thursday August 10 2006 Time Replies Subject 10:54PM 2 day, month, year functions 10:40PM 0 basic question re lm() [Broadcast] 10:18PM 1 logistic discrimination: which chance performance?? 10:17PM 2 basic question re lm() 7:35PM 0 Negatie Binomial Regression: "Warning while fitting theta: alternation limit reached" 7:29PM 5 Variance Components in R 7:10PM 0 QUestion on prediction of class from rpart 5:10PM 1 mcmc pack 4:13PM 1 How to speed up nested for loop computations 4:07PM 3 Multiple density curves 3:40PM 1 glmmPQL question! 3:10PM 1 summary statistics on an entire data frame 2:32PM 3 bug in interaction order when using drop? 2:01PM 0 Convergence in geese/gee 2:00PM 1 help with structuring random factors using lmer() 1:33PM 0 jpeg() and JGR 1:09PM 1 tcltk library on linux 10:12AM 3 Geometrical Interpretation of Eigen value and Eigen vector 10:00AM 0 sn package - skew t - code for analytical expressions for first 4 moments 9:00AM 0 Colours in silhouette plots (cluster package) 8:59AM 2 hist() and bar spacing 8:54AM 1 How to fit bivaraite longitudinal mixed model ? 8:40AM 0 New upload of connectedness 7:14AM 1 installing rimage 4:40AM 2 OT UNIX grep question 3:21AM 0 R2HTML incomplete echo 2:39AM 2 index.cond in xyplot 2:17AM 0 Your email requires verification verify#QTaQ31oVh73RQhN8fgCXtoSQaCqGXMTf 1:06AM 0 how to link matrix with the variables 1:04AM 2 graphic output file format 12:30AM 3 Is there a better way than x[1:length(x)-1] ? Wednesday August 9 2006 Time Replies Subject 11:46PM 1 decimal accuracy in pnorm( ) 8:20PM 2 optim error 7:43PM 2 Linear Trend in Residiuals From lme 7:31PM 1 Plot with Julian dates. 7:23PM 0 Weighted Mean Confidence Interval 6:37PM 3 Unique rows 4:48PM 1 numerical differentiation 4:42PM 0 (3) Courses & R/Splus Advanced Programming in New York City ***September 11-12 ***by the R Development Core Tean Guru 4:25PM 0 Can one use correlations (or R^2) as data for an ANOVA? 4:21PM 2 R CMD check error 4:07PM 3 categorical data 3:38PM 0 cannot plot too much (using Mac interface) 3:33PM 1 minimization a quadratic form with some coef fixed and some constrained 3:21PM 3 objects and environments 3:14PM 1 Combinations question 2:46PM 1 GARCH(1,1) optimization with R 2:44PM 1 tk combobox question 2:36PM 2 Speeding indexing and sub-sectioning of 3d array 2:36PM 0 [Rcom-l] GARCH(1,1) optimization with R 2:34PM 0 Rd question: itemize in arguments 2:00PM 0 R: data.frame to shape 1:32PM 1 data.frame to shape 1:19PM 1 R2HTML: request for an extended example 1:02PM 0 2 places on R course 12:52PM 1 scaling constant in optim("L-BFGS-B") 12:24PM 2 evolutionary computing in R 11:14AM 1 legend on trellis plot 10:46AM 1 missing documentation entries 10:39AM 0 solving nonlinear equations in R 10:39AM 1 Joint confidence intervals for GLS models? 9:35AM 1 NLS and IV 9:03AM 1 nested ANOVA using lme 8:14AM 0 CRAN package: update of 'vars' submitted 8:01AM 0 exponential proportional hazard model 7:58AM 1 nlme iteration process, few questions 5:19AM 1 debug print() commands not showing during file write loop 5:16AM 2 How to draw the decision boundaries for LDA and Rpart object 12:38AM 1 unable to restore saved data Tuesday August 8 2006 Time Replies Subject 10:11PM 1 Split-plot model 10:00PM 2 Getting data out of a loop 9:02PM 0 trellis in black and white 6:22PM 1 Fitting data with optim or nls--different time scales 5:38PM 1 multinom details 4:54PM 1 locating intervals (corrected version) 4:47PM 1 locating intervals 4:25PM 3 prefixing list names in print 3:43PM 3 More Plots 3:21PM 1 Call for Beta Testers: R+ (read R plus) for Solaris and Linux: 2:32PM 0 (Fwd) Re: paired t-test. Need to rearrange data? 2:00PM 2 Frequency Distribution 10:59AM 1 How to convert list elements to data.frames or vectors? 10:33AM 0 gamm question 10:11AM 1 fixed effects following lmer and mcmcsamp - which to present? 9:58AM 1 fixed effects constant in mcmcsamp 8:25AM 0 OT: RBanking 3:23AM 1 oodraw command line usage 2:03AM 3 Pairwise n for large correlation tables? 1:46AM 1 parameter yaxs / function hist (graphics) Monday August 7 2006 Time Replies Subject 10:49PM 2 finding x values to meet a y 10:13PM 1 mathematica -> r (gamma function + integration) 8:05PM 2 Retain only those records from a dataframe that exist in another dataframe 7:00PM 1 Capturing stderr from system() 6:25PM 0 Your Message To scoug-general 6:23PM 2 Plots 5:09PM 1 failed to load gplots 4:46PM 0 Trying to do aseries of subsets with function or for loop 3:17PM 1 Running out of memory when using lapply 3:05PM 1 unwanted conversion of date formats in 2.3.1 to character 2:38PM 5 kmeans and incom,plete distance matrix concern 12:31PM 4 CPU Usage with R 2.1.0 in Windows (and with R 2.3.1) 12:09PM 1 How to export data to Excel Spreadsheet? 10:43AM 2 Plotting logarithmic and semiloarithmic charts. 10:35AM 3 Finding points with equal probability between normal distributions 9:25AM 2 Backquote in R syntax 8:38AM 1 Combination (a large numbers) 8:37AM 2 Constrain coefs. in linear model to sum to 0 7:36AM 2 Is there a function in R can help me to plot such a figure? 2:54AM 0 how to generate this simulation dataset in R 2:46AM 1 Source installation error: "gfortran and gcc disagree on int and double ... 2:24AM 1 classification tables Sunday August 6 2006 Time Replies Subject 10:21PM 2 removing intercept from lm() results in oddly high Rsquared 7:45PM 1 Take random sample from class variable 6:40PM 1 Beamer and Sweave 6:10PM 1 extractAIC using surf.ls 2:53PM 0 Reshape package: new version 0.7 2:13PM 2 paired t-test. Need to rearrange data? 4:42AM 1 ordering by a datframe date Saturday August 5 2006 Time Replies Subject 11:36PM 1 R CMD check and RUnit 5:08PM 2 Kmeans - how to display results 4:29PM 1 AIC for lognormal model 3:09PM 1 R GUI for Mac OS X bug 12:15PM 1 Interpretation of call to aov() 10:27AM 1 formating for dist function 12:51AM 0 cor of two matrices whose columns got shuffled Friday August 4 2006 Time Replies Subject 11:44PM 2 (... not defined because of singularities) in lm() 11:37PM 1 prettyR arrives 10:48PM 1 polychoric correlation error 10:43PM 2 Variance-Covariance matrix from glm() 10:02PM 2 Postscript fonts 9:45PM 3 Help with short time series 8:09PM 2 Doubt about Student t distribution simulation 7:14PM 0 training svm's with probability flag (re-send in plain text) 5:26PM 2 why does lm() not allow for negative weights? 5:20PM 0 training svm's with probability flag 5:01PM 0 GAM 2D-plotting 4:47PM 1 Error when loading odesolve 4:46PM 2 Sampling from a Matrix 4:33PM 1 Simulate an Overdispersed(extra-variance poisson process)? 3:47PM 2 expression() - Superscript in y-axis, keeping line break in string 1:12PM 2 plotting picture data 12:40PM 2 Sweave special token \\ from R to latex 11:58AM 0 Problem with installing R under Windows 11:48AM 2 Data frame referencing? 11:36AM 2 User input from keyboard 8:23AM 1 Integration and Loop in R 7:45AM 0 need sample parallelized R scripts 7:14AM 0 geodesic distance (solution) 5:07AM 0 Question regarding extrapolation 1:47AM 1 gnlsControl 1:21AM 1 Questions about sweave... 1:17AM 3 Building a random walk vector Thursday August 3 2006 Time Replies Subject 11:22PM 0 Ambitious newbie with some ongoing Q's 9:39PM 0 geodesic distance (solved) 8:02PM 4 meta characters in file path 6:44PM 2 How to access a column by its label? 6:23PM 2 fitting a model with the nlme package 6:22PM 1 gsummary 5:51PM 1 Looking for transformation to overcome heterogeneity ofvariances 5:09PM 0 Help Building packages for windows 4:11PM 1 levels of an array (strings and numbers) 2:46PM 2 bullseye or polar display of "circular" data 2:10PM 7 Vectorizing a "for" loop 1:56PM 1 questions on plotting dedrograms 1:46PM 2 efficient way to make NAs of empty cells in a factor (or character) 1:45PM 2 get() in sapply() in with() 1:33PM 3 Looking for transformation to overcome heterogeneity of variances 1:13PM 2 bringToTop without focus? 12:53PM 1 how to use the EV AND condEV from BMA's results? 12:34PM 0 problem with factor in list (same name for diffrent category) 12:11PM 0 Invoking R on UNIX Command line from a PERL CGI 12:01PM 3 between-within anova: aov and lme 11:08AM 0 Default first argument in assignment function possible? 10:02AM 1 question about dll crashing R 9:58AM 2 NLME: Problem with plotting ranef vs a factor 9:14AM 1 geodesic distance 8:25AM 0 Math elements in panel headers of lattice plots? 7:37AM 0 spatstat 1.9-4 7:36AM 2 run self written functions 4:43AM 0 Error in step() Wednesday August 2 2006 Time Replies Subject 10:38PM 0 Baseline levels summary.Design 10:25PM 1 unbalanced mixed effects models for fully factorial designs 9:56PM 1 help with formatting legend in xyplot 9:29PM 0 question about stdize() in PLS package 9:21PM 2 From 2.2.1 to 2.3 9:01PM 5 Finding the position of a variable in a data.frame 8:22PM 0 Course***Dr Frank Harrell's Regression Modeling Strategies in R/Splus course *** September 2006 near you (San Francisco, Washington DC, Atlanta) 8:06PM 2 tcl/tk bind destroy event 7:43PM 4 ggplot facet label font size 7:25PM 2 lme4 and lmeSplines 5:12PM 0 question about correlation coefficeint and root mean square (with code used) 4:57PM 2 listing of permutations 4:51PM 1 Summary method needed? 4:43PM 0 [Off-Topic-but somewhat related] DIA/FDA Open Toolbox Initiative 3:13PM 0 expected survival from a frailty cox model using survfit 3:09PM 1 ordering columns (longitudinal data in wide format) 2:41PM 2 best way to calculate per-parameter differences in across-subject means 1:55PM 0 Plotting a ranef object in NLME 1:22PM 1 read.spss and umlaut 1:18PM 2 How to share variables 12:55PM 2 missing value 11:04AM 0 Trying to use segmented in a function 10:56AM 1 loop, pipe connection, quote/unquote 10:32AM 1 Syntax of Levene's test 8:20AM 1 Support vector in lcrabs example 7:57AM 1 questions on aggregate data 6:43AM 1 RE 2:46AM 2 Data transformation Tuesday August 1 2006 Time Replies Subject 9:47PM 2 R Reference Card and other help (especially useful for Newbies) 9:28PM 1 plot() with TukeyHSD 9:26PM 0 rsurv in ipred 8:40PM 1 Replacing NA in fSeries 7:57PM 2 Indexing issue 5:43PM 1 What's a labelled data.frame? And how do I work with it? 5:28PM 0 natural spline function 5:01PM 2 open DLL in R 4:46PM 1 Pseudo R for Quant Reg 4:04PM 3 boxplot 3:57PM 2 deleting a directory 3:34PM 1 Tcltk package 2:33PM 1 help on fitting negative binomial distribution with MLE 2:23PM 1 How to convert two-dimensional function to matrix? 2:03PM 2 Extracting a row number from a matrix 1:06PM 2 A problem with R CMD SHLIB 12:09PM 4 Overlay Boxplot with scatter plot 9:33AM 1 Global setting for na.rm=TRUE 7:43AM 1 R crashes using pdf() windows() or postscript() 4:30AM 0 Confirmation Request (3355406281) 4:08AM 4 Fitting models in a loop 12:33AM 2 rgb and col2rgb color conversion/modification/shading
{"url":"https://thr3ads.net/r-help/2006/08","timestamp":"2024-11-06T15:49:31Z","content_type":"text/html","content_length":"137030","record_id":"<urn:uuid:54e8f4dd-7e03-4108-81cc-16da2d5f01e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00345.warc.gz"}
The Asymmetric Ciphers and Mathematics Quiz Podcast Beta Play an AI-generated podcast conversation about this lesson Which of the following is a prime number? Which of the following is a composite number? What does it mean to factor a number? To write a number as a product of other numbers (correct) To find the square root of a number To multiply a number by another number To divide a number by another number What is the product of the prime numbers 2 and 3? Signup and view all the answers What is the value of Euler's totient function for a prime number? Signup and view all the answers Which of the following is a prime number? Signup and view all the answers What is the value of Euler's totient function for the prime number 7? Signup and view all the answers What is the result of the Chinese Remainder Theorem? Signup and view all the answers Which key exchange algorithm is used in the Diffie Hellman key exchange? Signup and view all the answers What is the product of the prime numbers 2 and 3? Signup and view all the answers Study Notes Prime Numbers • A prime number is a positive integer that is divisible only by itself and 1. Composite Numbers • A composite number is a positive integer that has more than two distinct positive divisors. • To factor a number means to express it as a product of prime numbers. • Factoring is the reverse of multiplication. Prime Number Products • The product of the prime numbers 2 and 3 is 6. Euler's Totient Function • Euler's totient function for a prime number is the number itself minus 1. • Euler's totient function for the prime number 7 is 6. Chinese Remainder Theorem • The Chinese Remainder Theorem is a theorem in number theory that provides a unique solution to a system of congruences. • The Diffie-Hellman key exchange algorithm uses the difficulty of computing discrete logarithms in a group to establish a shared secret key. • The key exchange algorithm used in the Diffie-Hellman key exchange is modular exponentiation. Studying That Suits You Use AI to generate personalized quizzes and flashcards to suit your learning preferences. Test your knowledge on asymmetric ciphers and the mathematics behind asymmetric key cryptography in this quiz. Explore topics such as primes, primality testing, factorization, Euler's totient function, Fermat's and Euler's theorem, Chinese Remainder Theorem, exponentiation, and logarithm.
{"url":"https://quizgecko.com/learn/the-asymmetric-ciphers-and-mathematics-quiz-r8mbkx","timestamp":"2024-11-08T14:03:47Z","content_type":"text/html","content_length":"317302","record_id":"<urn:uuid:92335747-8b03-4a5d-94f1-1b00ab4f6887>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00342.warc.gz"}
Entropy vs. Anti-Entropy (How DNA Defeats the Blackhole) Not open for further replies. Entropy vs. Anti-Entropy the two opposing forces that balance the existential equation. Entropy is the tendency (or force if you will) to decrease the organization or complexity of systems. Anti-entropy is the opposite tendency which is to increase the organization of systems. This is simplistically stated but sufficient for our purposes. These are the two sides of the war for existence. The goliath in the camp of entropy, and its ultimate expression is the black hole. Although there are many generals that operate on behalf of entropy, the black hole is the beast with no equal that we know of. Likewise, the ultimate expression of anti-entropy is, believe it or not, an equally potent goliath which may seem more like a David, is the DNA molecule. Yes, life is the ultimate expression of the tendency to increase order throughout existence. These two opposing forces wage the war for dominance throughout existence. Entropy and its handy work are very familiar to us. The tendency toward disorder of everything around us seems intuitive and obvious even if we aren’t familiar with the terms Entropy and anti-entropy. We realize that even the most robust of structures ultimately are eroded to increasingly minute components. Even the blackhole, after it is good and done carrying out the work of entropy by reducing any organization within its reach, finally circum to its master’s appetite. Then there is life. We are part of life. We are privileged enough to have life all around us. Make no mistake however; life may be just as rare as it seems to be in the universe. You see, where a blackhole requires a tremendous amount of hardware (matter) to exist, its nemesis, life requires an equal amount of software to exist. This software is present and contained in DNA’s genome and epigenome combination. That’s right the existential equation is balanced on one side by the likes of blackholes which do the bidding of entropy, and on the other side of the equal sign by the likes of DNA and its constituent structures of anti-entropy. Everything that happens in existence, in this universe as well as in all the other universes, in all dimensions, are factors in this equation. You, me, every atom and every planet, the galaxies and quarks, and superstrings everywhere in existence are all factors in the existential equation. And the equation must always balance. Our ancestors although largely ignorant to the concept of entropy had an intuitive feeling about these concepts. They represented them as all seeing gods and their agents. Little did they know that the actual incarnation of their religious ideas and faith had a fundamentally scientific basis in entropy and anti-entropy. Interestingly, in our science we have studied entropy far more than anti-entropy. This is curious because it is anti-entropy that created life for the purpose of, and with the power to, independently increase the organization of the universe, even if the only mechanism to do so is to procreate. In this regard humans are an ultimate weapon of sorts in that we have an extra potential for increasing order through science and technology, even while resisting an almost equal pension for creating disorder. But weather we are human, or beaver, or bird, our skills and talents are the least of it. Our DNA is anti-entropies real weapon. Life balances the existential equation. Each DNA molecule on this planet, or anywhere else, contains a magnitude of anti-entropic order able to balance the disorder of an untold number of blackholes working hard for many years. Life is how anti-entropy provides the mathematical impact that balances entropies’ unrelenting influence. The cosmic frequency of something coming apart or reducing in complexity is quite high and each occurrence has a tiny probabilistic impact on the grand formula. Whereas the cosmic frequency of DNA level order and the resulting living organisms it produces is comparatively extremely low but each occurrence has a significant probabilistic impact on the grand formula. We hardly ever consider how important the software of the universe is to the grand process. We are instinctually hardware rational. The touchy feely is clear to us. But nature operates on levels that we can barely fathom at this point. In nature the ‘small’ packs as much punch as the very large in the existential formula. The quantum particles that ultimately erode the blackhole need no help in doing their work. Part of the difficulty for us in understanding the war of the entropies is that much of the battle is invisible to us. After all, how can the order expressed by tiny life forms in a universe balance the epic disorder in that universe? How can the little ‘David’ DNA molecule sustain against the awesome Goliath of a blackhole? You see the war of the entropies is statistical in nature. It is interesting that we all have an innate feeling that nature is somehow governed by statistics or probabilities. Our science has recognized and very accurately quantified the probabilistic nature of existence, which is at the heart of Quantum Physics. The statistics is not the anomaly; it is the point. For existence to persist, the statistics must balance. From the quark to the Galaxy clusters the mathematics must work out. The existential equation must balance. That is why mathematics is the language of the universe and can expose phenomena that we could never see. Existence persists mathematically. In this regard, a single life can statistically balance the contribution of a black hole. As the magnitude of entropy reached a critical point in the developing universe anti-entropy operated at the relatively tiniest scales that are less vulnerable to the majority of the chaos reeked by entropic forces like explosions and fire and impacts etc. The smaller things get the harder they are to break apart. While entropy is a very obtuse operator, anti-entropy on the other hand is more subtle. It requires long spans of time to do its work and relatively stable environments, laboratories if you will. These laboratories are principally defined by a relatively low magnitude of entropy. Anti-entropies’ principle ally are the laws of statistical probability which are oddly fundamental to the laws of physics. These laws dictate and enforce that every action has an equal and opposite reaction, every push has a pull, every hot produces a cold, you get the point. These laws together with applicable cosmic speed limits and relativistic mass-energy build up constraints guarantee that however rampant entropy and its agents may get it can’t be everywhere at once in a universe as appropriately immense as this one. Nature doesn't require our definitions or our understanding. Nature does and has only ever done one thing and one thing only: "crunch the numbers" balance the math. It is doing that every time you stand on one leg, every time a baby gestates in the womb, every time the mail man puts mail in your mailbox, every time a car hits the brakes, every time a star goes nova, every time a big bang occurs etc. The only thing going on in nature is running the numbers. Everything else emerges from this metaphorical number crunching. We don’t possess the mathematical or computing capabilities to model very much of nature but this is mostly just circumstantial up to a point. We are making steady progress. In this article I submit for your consideration that the mathematics of life, if we're ever able to fully realize it we would see is very potent in the mathematics of nature. What exactly is being comparing between the DNA molecule and a Black hole? Our most powerful computing systems programmed with our best models running non-stop for months can barley model the folding of a basic protein. Step that concept up to the full expression of a complex protein not to mention the Ribosome which is the tiny factory that builds proteins in living organisms, step that up all the way to modeling a living bacteria. This mathematical contribution of DNA and its systems, regardless of how we define them, is potent to the mathematics of nature and each instance is a multiplier of this mathematical potency. Each instance is each DNA strand in each cell that has ever been created in the four plus billion years that DNA has existed on Earth. Put in these terms you can begin to appreciate how earth life has contributed to nature as a very potent mathematical factory contributing to balancing the existential formula. On the other hand, we are much more capable of modeling a star like our Sun or even a black hole which we all know are both physically much larger than a DNA molecule or a Ribosome or your cat. As I'm sure you can see size doesn't matter in this regard. Likewise complexity can be deceptive to the human eye but is well defined in mathematical terms. The reason we are more able to model a Star is because the processes that make a star are far simpler mathematically than those that define a protein to a bacteria. Modeling a star is only a few orders of magnitude more difficult than simulating the aerodynamics and thermodynamics of the Space shuttle. Simulating even a single bacteria is far, far more complex. I do not presume to suggest how these complex factors combine, or cancel, or interact with each other or even suggest that they are fundamentally separate and distinct entities. On the most fundamental levels I suspect they may be ultimately indistinguishable. Nature balances itself and we can only hope for a glimpse into its workings. There is some profoundly important perspective to be gleaned from the comparisons. Anti-entropy seems to be something you have invented. At least, I'm unfamiliar with it. Entropy is usually defined as dS =dQ rev/T (per Clausius), or S = k lnW (per Boltzmann). It is not clear to me from these how you would define anti-entropy. Can you point me to a definition? This is ridiculus. Life is not anti-entropy. Do you think when dew forms on the grass that is anti-entropy? There is no doubt that the entropy of the water decreases when dew forms. Life looks like a decreasing entropy because we consume a huge amount of energy to make the order in our bodies. Life increases the entropy of the universe. It looks like life decreases entropy to you because you are looking at the open system and not including the vast amounts of energy that in put into the system (food). We consume 1000s of kcal a day just to maintain ourselves. This link or any thorough science text may assist you. There is no more proof for entropy than there is for anti-entropy these are both symmetrical concepts of each other that mankind has always struggled to comprehend. The point of the article is not to simply regurgitate accepted understanding but to bring a new perspective to concepts we may have all seen before but seen differently. This is ridiculus. Life is not anti-entropy. Do you think when dew forms on the grass that is anti-entropy? There is no doubt that the entropy of the water decreases when dew forms. Life looks like a decreasing entropy because we consume a huge amount of energy to make the order in our bodies. Life increases the entropy of the universe. It looks like life decreases entropy to you because you are looking at the open system and not including the vast amounts of energy that in put into the system (food). We consume 1000s of kcal a day just to maintain ourselves. No, life is not anti-entropy nor is it entropy nor is it hot or cold or high pressure or low pressure, these are all concepts conceived by mankind to attempt to quantify and understand nature. The article submits for your consideration the role life plays in nature. This link or any thorough science text may assist you. There is no more proof for entropy than there is for anti-entropy these are both symmetrical concepts of each other that mankind has always struggled to comprehend. The point of the article is not to simply regurgitate accepted understanding but to bring a new perspective to concepts we may have all seen before but seen differently. I'm afraid this is not science at all, it's woo. Entropy is an extremely well characterised term in thermodynamics and is fundamental to the understanding of physical and chemical processes. "anti-entropy", as defined by that crank site you linked to, is by its own admission a vague and woolly term. It evidently has no thermodynamic meaning (and I don't believe it will be found in any science text, thorough or otherwise, unless you can provide a reference). So there is no symmetry about it at all. All the processes of life are driven by the overall increase in entropy that occurs during metabolism of the organism, as Origin points out above. There is no mystery to explain here. As a foetus grows in its mother's womb, the mother/foetus system will decrease in entropy, but that system is not closed . The mother takes in nutrients and oxygen and expels waste, water and carbon dioxide. The net entropy of a closed system including all these components increases. Just the same thing happens when water freezes. The entropy of the water decreases, but Latent Heat of Fusion is given off, with the result that the overall entropy of the water plus its immediate environment, i.e. of the whole system, goes up. Trying to invent some spurious balance between order and disorder, as if life is somehow required as an antidote to black holes, or whatever it is, is just ignorant mysticism. This link or any thorough science text may assist you. There is no more proof for entropy than there is for anti-entropy these are both symmetrical concepts of each other that mankind has always struggled to comprehend. The point of the article is not to simply regurgitate accepted understanding but to bring a new perspective to concepts we may have all seen before but seen differently. That is absurd. Entropy may seem like this strange magical thing to you but ask any chemical engineer and you will find it is a huge part of many different thermal processes and is not mysterious. No, life is not anti-entropy nor is it entropy nor is it hot or cold or high pressure or low pressure, these are all concepts conceived by mankind to attempt to quantify and understand nature. The article submits for your consideration the role life plays in nature. Sorry but life is like any other process in the universe and it increases the entropy of the universe. I've never heard anyone challenge the existence of the concepts of entropy and anti-entropy before. I will not spend too much time on this as its not the point of the article but I'll offer this paper and let you do your own research; I've never heard anyone challenge the existence of the concepts of entropy and anti-entropy before. Well now you have. There is one point in the paper where they author imply that there is an increasing level of complexity in life as time goes on. I think that is wrong. An apatosaurus is as complex as any animal today. The authors IMO are quite confused. I've never heard anyone challenge the existence of the concepts of entropy and anti-entropy before. I will not spend too much time on this as its not the point of the article but I'll offer this paper and let you do your own research; To be clear, nobody is challenging the concept of entropy. It is the concept of anti-entropy that we are calling into question. Thank you for the reference. These French authors appear to be attempting to define anti-entropy, for the first time, in this paper, which dates from 2009. By their own admission it has no meaning in physics. Having read their paper, I honestly wonder if it is a spoof. They seem unable to say what this concept really does: see for example the concluding paragraph, which starts off clearly enough but swiftly degenerates into incomprehensibility, laced with references to Schroedinger, who had no particular competence in analysing biological complexity. There is also a bizarre reference to the equivalence of mass and energy, mentioned in the context of biomass, as if the mass present in living things is somehow made of "energy" as opposed to mass derived from inanimate nutrients etc. Quite baffling and awfully confused. The paper seems to me a terrible piece of work. Has it been followed up by others or is it a joke? Life requires constant energy input and other resources to sustain its 'anti-entropy' engine that builds information, and a black hole requires no energy other than gravitation to collapse and render most of its internal informational content incapable of rejoining the universe beyond its event horizon until and unless it evaporates to a mass / energy less than it was shortly before the initial collapse. The black hole wins the entropy tug-of-war hands down. Lee Smolin, with whom I find myself mostly in agreement, has a slightly different definition of entropy. Using his definition, maybe it's a slightly closer race, but I still think the bh will ultimately win if put to the test. Even a red giant like our Sun will eventually become will do a pretty fair number on destroying any data stores we have not sent away in space ships bearing time capsules, in about 5 billion years or so. Something as large as a bh is not necessary to destroy what little information we manage to produce, including our DNA. Nature has no dependency upon human beings, nor on our spacecraft, or upon any of our technological constructs. The only life that exists and has ever existed is the living cell in all of its variations regardless of any evolved form cells may assume. For billions of years, here on earth and probably elsewhere, it was and is the cell that holds all of the keys to life and determines the role life assumes in nature. Nature is not the hardware of the universe that we perceive such as technology or species or the cell nor its constituent components nor even its atoms or even energy as we understand it. It is only the underlying quantum states (software if you will) of existence that nature operates upon. This software is only comprehensible to us human beings via mathematics. What we see as nature is the instant-by-instant interaction of this natural software; everywhere in existence. It is this software that this article compares, not in the usual; how big is it? hardware centric manner to which we inevitably migrate, but rather in terms of natural complexity best expressed mathematically. It is the mathematical potency or density of this software present in life, in the living cell and all of its evolved structures, that is a prime mover in nature. This impact may not be observable via the standard set of properties that we are accustomed to measuring with or usual fair of instrumentation. This natural software exists as an ocean of quantum states a fraction of which our science is already familiar with. It comprises all of the structures and phenomena we observe in nature. All of the subatomic particles and forces of the current and any future standard models emerge from this entangled ocean of quantum states, as are every atom and molecule. As well as the planets and stars in all their many forms. So too is life. However, life is clearly unlike any other natural phenomena in a number of obvious regards but none that influences nature as does life’s mathematical potency, its software density if you will. Living structures are nature’s most concentrated implementation of nature’s software. You have only to attempt to mathematically simulate living structures vs. non living ones to demonstrate this. In living beings the software potency expressed in its mathematical complexity spikes in a real quantitative manner. Until we’ve developed the mathematics to properly express life we will continue to be at a loss to understand and appreciate the true impact life has upon nature. I suspect that this impact is quite significant if not pivotal at this stage in this universes evolution particularly if earth-life is the only, or one of very few instances of life that exists. Nature has no dependency upon human beings, nor on our spacecraft, or upon any of our technological constructs. The only life that exists and has ever existed is the living cell in all of its variations regardless of any evolved form cells may assume. For billions of years, here on earth and probably elsewhere, it was and is the cell that holds all of the keys to life and determines the role life assumes in nature. Nature is not the hardware of the universe that we perceive such as technology or species or the cell nor its constituent components nor even its atoms or even energy as we understand it. It is only the underlying quantum states (software if you will) of existence that nature operates upon. This software is only comprehensible to us human beings via mathematics. What we see as nature is the instant-by-instant interaction of this natural software; everywhere in existence. It is this software that this article compares, not in the usual; how big is it? hardware centric manner to which we inevitably migrate, but rather in terms of natural complexity best expressed mathematically. It is the mathematical potency or density of this software present in life, in the living cell and all of its evolved structures, that is a prime mover in nature. This impact may not be observable via the standard set of properties that we are accustomed to measuring with or usual fair of instrumentation. This natural software exists as an ocean of quantum states a fraction of which our science is already familiar with. It comprises all of the structures and phenomena we observe in nature. All of the subatomic particles and forces of the current and any future standard models emerge from this entangled ocean of quantum states, as are every atom and molecule. As well as the planets and stars in all their many forms. So too is life. However, life is clearly unlike any other natural phenomena in a number of obvious regards but none that influences nature as does life’s mathematical potency, its software density if you will. Living structures are nature’s most concentrated implementation of nature’s software. You have only to attempt to mathematically simulate living structures vs. non living ones to demonstrate this. In living beings the software potency expressed in its mathematical complexity spikes in a real quantitative manner. Until we’ve developed the mathematics to properly express life we will continue to be at a loss to understand and appreciate the true impact life has upon nature. I suspect that this impact is quite significant if not pivotal at this stage in this universes evolution particularly if earth-life is the only, or one of very few instances of life that exists. OK that's a lot better, without the "anti-entropy" stuff. If, by this "software" idea, you mean that all nature follows a certain ordered behaviour arising ultimately from quantum mechanical interactions, then I suppose that must be right. Certainly, as one studies increasingly complex macroscopic phenomena it becomes harder and harder to trace the observed properties and behaviour all the way back to the quantum-scale interactions of the particles of which matter is composed. And again, yes I think it must be true that the chemical and physical processes involved in living organisms are the most complicated systems we know of in nature. But I think you are on the wrong track to start speculating about some special mathematics that "expresses" (do you mean "models"?) life and that by understanding this we will somehow see, for the first time the degree of influence of life upon (the rest of?) nature. This seems - on the face of it - to be just a rather mystical assertion. Why do you say this? Do you have any evidence for it? Sorry, wrong number. Valued Senior Member Entropy is the tendency (or force if you will) to decrease the organization or complexity of systems. This is a common misconception. Entropy is a quantity - i.e. it is about the movement (dynamics) of heat (thermo), or of energy in general. It is not about the "organization" of "things". Entropy is not the tendency of things to fall apart; it is the tendency of energy to dissipate. This is a common misconception. Entropy is a thermodynamic quantity - i.e. it is about the movement (dynamics) of heat (thermo), or of energy in general. It is not about the "organization" of "things". Entropy is not the tendency of things to fall apart; it is the tendency of energy to dissipate. Nicely expressed. The Second Law of Thermodynamics clearly allows for spatially and/or temporally local reversals of entropy. Our universe will eventually be devoid of organization, but until it reaches that final state, bits of organization will appear and disappear just as they do now. Life, specifically, can be defined very simply as a local reversal of entropy. Organisms kill other organisms, destroying the organization they had when living, and use their tissues for nutrition. This manifests as a significant decrease of entropy in the organism that ate the other one, but there is also an increase of entropy in the organism that was eaten. The increase of entropy in the now-dead organism is enormous: it will be dead forever, and in fact will continue to decay until the organization that drives its metabolism and keeps its tissues in place is completely gone. Whereas the decrease of entropy in the organism that ate it is of much lesser magnitude. It will have to find another organism to tear apart and eat before too long, and will continue to look for more to eat for the rest of its life. This is but one example of the net change in the entropy of the universe: its entropy continues to increase as it approaches 100%. The universe may collapse back in on itself, as all of the particles and antiparticles collide and poof out of existence, or it may continue to expand forever as the particles become so far apart that they almost never collide. In either case, the Second Law of Thermodynamics rules, and entropy trumps everything. Anti-entropy seems to be something you have invented. At least, I'm unfamiliar with it. Entropy is usually defined as dS =dQ rev/T (per Clausius), or S = k lnW (per Boltzmann). It is not clear to me from these how you would define anti-entropy. Can you point me to a definition? For a historical discussion of thinking on entropy and life Last edited: Thanks for these. However the "anti-entropy" proposed - obscurely - by these French authors not the same as the "negative" entropy mentioned in these references, which is simply a term for an entropy deficit relative to the surroundings and as such can fairly easily be understood. In fact the second reference is the more helpful as it clarifies that what Schroedinger was really talking about in his 1944 book was Free Energy. Gibbs Free Energy, G which is expressed in terms of Enthalpy H, takes into account work done by the reactants, or done on them, by the atmosphere (H = U + PV), is what chemists use day in, day out, to account for and predict the direction and extent of chemical reactions and equilibria. It is particularly suitable for chemistry, as most reactions in the lab take place at constant (atmospheric) pressure. So, of course, do the chemical reactions that drive the processes of life. The key thing to appreciate about free energy is it allows for the trade off that can occur between enthalpy change and entropy change: ΔG = ΔH - TΔS. In nature it frequently occurs that enthalpy pulls one way and entropy pulls the other. One example is the one I gave earlier about water freezing. The enthalpy change is -ve (heat is given off and the system moves to a lower enthalpy state, which is generally favoured), but the entropy decreases, which is generally disfavoured, as shown by the minus sign in front of the TΔS term. Which term wins is determined by the variable in the equation which is T, the temperature. So when T is large, the TΔS term overpowers the enthalpy term and when T is small the reverse happens. One can easily treat the processes of living things by this principle as well. OK that's a lot better, without the "anti-entropy" stuff. If, by this "software" idea, you mean that all nature follows a certain ordered behaviour arising ultimately from quantum mechanical interactions, then I suppose that must be right. Certainly, as one studies increasingly complex macroscopic phenomena it becomes harder and harder to trace the observed properties and behaviour all the way back to the quantum-scale interactions of the particles of which matter is composed. And again, yes I think it must be true that the chemical and physical processes involved in living organisms are the most complicated systems we know of in But I think you are on the wrong track to start speculating about some special mathematics that "expresses" (do you mean "models"?) life and that by understanding this we will somehow see, for the first time the degree of influence of life upon (the rest of?) nature. This seems - on the face of it - to be just a rather mystical assertion. Why do you say this? Do you have any evidence for it? This line of thinking is a rational and reasonable extrapolation given that as far as our relationship to the concept of mathematics as being the language of nature goes, we are as earthworms are to the concept of farming. We may have a foot in the mud but it is folly to believe we’ve done more than scratch the surface. Before Newton having practically single handedly develop the Calculus there was no means of expressing dynamic aspects of nature mathematically, after he did… we could. Further, if mathematics can describe some of nature then it does ultimately describe all of nature. Life is part of nature. To speak of an unrealized branch of mathematics that could effectively quantify and express (or model if you prefer) living structures is indeed reasonable. Also to take the further small leap to suggest that given the computational indications that in nature a bacterium, for example, may be to the software (natural complexity) of nature as an average star is to the hardware of nature may be strange at first, but not at all unreasonable. “If one cannot handle, ‘strange’ in today’s scientific climate then one should perhaps take up religion.” In other words if each cell(s) that has ever existed on Earth is indeed as fundamentally influential in nature as say, a star is, we begin to see how life, as opposed to non-life, may be a very potent factor to an aspect of nature science doesn't yet acknowledge. Not open for further replies.
{"url":"https://www.sciforums.com/threads/entropy-vs-anti-entropy-how-dna-defeats-the-blackhole.144448/","timestamp":"2024-11-03T01:13:41Z","content_type":"text/html","content_length":"180713","record_id":"<urn:uuid:e88060f1-4185-43ef-b937-1665b879e48f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00264.warc.gz"}
Normal approximation - (Mathematical Probability Theory) - Vocab, Definition, Explanations | Fiveable Normal approximation from class: Mathematical Probability Theory Normal approximation refers to the use of the normal distribution to estimate the probabilities of a discrete random variable, particularly when the sample size is large. This approach is based on the central limit theorem, which states that as the number of trials increases, the distribution of the sample mean will tend to follow a normal distribution, regardless of the shape of the original population distribution. It allows for easier calculations and interpretations in probability theory. congrats on reading the definition of normal approximation. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Normal approximation is particularly useful for binomial distributions when both np and n(1-p) are greater than 5, ensuring that the distribution is sufficiently close to normal. 2. The normal approximation simplifies calculations for probabilities, allowing for the use of z-scores instead of binomial probabilities. 3. To apply normal approximation, one can use continuity correction by adjusting discrete values to account for the continuous nature of the normal distribution. 4. Normal approximation can be visually represented using bell curves, which illustrate how probabilities distribute around the mean. 5. In practice, normal approximation is commonly used in quality control and various fields where large sample sizes are involved. Review Questions • How does the Central Limit Theorem relate to normal approximation and why is it important? □ The Central Limit Theorem is crucial because it establishes that as sample sizes grow, the distribution of sample means approaches a normal distribution, even if the original data does not. This theorem justifies using normal approximation for discrete distributions like the binomial when the conditions are met. Essentially, it allows statisticians to apply normal distribution techniques for large samples, making complex probability calculations more manageable. • Discuss how continuity correction enhances the accuracy of normal approximation when dealing with discrete distributions. □ Continuity correction improves normal approximation by adjusting for the differences between discrete and continuous distributions. When using normal approximation for a discrete variable like binomial or Poisson, we add or subtract 0.5 from our integer values. This adjustment helps better represent the probability mass function of discrete distributions within the continuous framework of normal distribution, leading to more accurate probability estimates. • Evaluate a situation where you would choose to use normal approximation over exact calculations and explain your reasoning. □ Consider a scenario where a factory produces thousands of items daily and aims to determine the probability of producing a certain number of defective items. Using exact calculations might be cumbersome and time-consuming due to the large sample size involved. Instead, applying normal approximation would allow for quick and efficient probability estimates using z-scores. This approach is justifiable because if np and n(1-p) exceed 5, we can confidently rely on the normal distribution as an adequate approximation without significant loss of accuracy. © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/mathematical-probability-theory/normal-approximation","timestamp":"2024-11-08T15:08:57Z","content_type":"text/html","content_length":"149104","record_id":"<urn:uuid:a86e467c-5aac-462d-8f07-71c159477419>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00544.warc.gz"}
Use MathML today, with CSS fallback! These days, I’m working on the slides for my next talk, “The humble border-radius”. It will be about how much work is put into CSS features that superficially look as simple as border-radius, as well as what advances are in store for it in CSS Backgrounds & Borders 4 (of which I’m an editor). It will be fantastic and you should come, but this post is not about my talk. As you may know, my slides are made with HTML, CSS & JavaScript. At some point, I wanted to insert an equation to show how border-top-left-radius (as an example) shrinks proportionally when the sum of radii on the top side exceeds the width of the element. I don’t like LaTeX because it produces bitmap images that don’t scale and is inaccessible. The obvious open standard to use was MathML, and it can even be directly embedded in HTML5 without all the XML cruft, just like SVG. I had never written MathML before, but after a bit of reading and poking around existing samples, I managed to write the following MathML code: <math display="block"> I was very proud of myself. My first MathML equation! It’s actually pretty simple when you get the hang of it: <mi> is for identifiers, <mo> for operators and those are used everywhere. For more complex stuff, there’s <mfrac> for fractions (along with <mrow> to denote the rows), <msqrt> for square roots and so on. It looked very nice on Firefox, especially after I applied Cambria Math to it instead of the default Times-ish font: However, I soon realized that as awesome as MathML might be, not not all browsers had seen the light. IE10 and Chrome are the most notable offenders. It looked like an unreadable mess in Chrome: There are libraries to make it work cross-browser, the most popular of which is MathJax. However, this was pretty big for my needs, I just wanted one simple equation in one goddamn slide. It would be like using a chainsaw to cut a slice of bread! The solution I decided to go with was to use Modernizr to detect MathML support, since apparently it’s not simple at all. Then, I used the .no-mathml class in conjunction with selectors that target the MathML elements, to mimic proper styling with simple CSS. It’s not a complete CSS library by any means, I just covered what I needed for that particular equation and tried to write it in a generic way, so that if I need it in future equations, I only have to add rules. Here’s a screenshot of the result in Chrome: It doesn’t look as good as Firefox, but it’s decent. You can see the CSS rules I used in the following Dabblet: Obviously it’s not a complete MathML-to-CSS library, if one is even possible, but it works well for my use case. If I have to use more MathML features, I’d write more CSS rules. The intention of this post is not to provide a CSS framework to use as a MathML fallback, but to show you a solution you could adapt to your needs. Hope it helps!
{"url":"https://verou.me/blog/2013/03/use-mathml-today-with-css-fallback/","timestamp":"2024-11-13T05:46:56Z","content_type":"text/html","content_length":"14391","record_id":"<urn:uuid:9a93cb43-f7d4-4ea0-ba43-83c6ddbcb5b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00652.warc.gz"}
Frontiers | Enhancing genomic mutation data storage optimization based on the compression of asymmetry of sparsity • ^1The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People’s Hospital, Qingyuan, China • ^2School of Biomedical Engineering, Guangzhou Medical University, Guangzhou, China Background: With the rapid development of high-throughput sequencing technology and the explosive growth of genomic data, storing, transmitting and processing massive amounts of data has become a new challenge. How to achieve fast lossless compression and decompression according to the characteristics of the data to speed up data transmission and processing requires research on relevant compression algorithms. Methods: In this paper, a compression algorithm for sparse asymmetric gene mutations (CA_SAGM) based on the characteristics of sparse genomic mutation data was proposed. The data was first sorted on a row-first basis so that neighboring non-zero elements were as close as possible to each other. The data were then renumbered using the reverse Cuthill-Mckee sorting technique. Finally the data were compressed into sparse row format (CSR) and stored. We had analyzed and compared the results of the CA_SAGM, coordinate format (COO) and compressed sparse column format (CSC) algorithms for sparse asymmetric genomic data. Nine types of single-nucleotide variation (SNV) data and six types of copy number variation (CNV) data from the TCGA database were used as the subjects of this study. Compression and decompression time, compression and decompression rate, compression memory and compression ratio were used as evaluation metrics. The correlation between each metric and the basic characteristics of the original data was further investigated. Results: The experimental results showed that the COO method had the shortest compression time, the fastest compression rate and the largest compression ratio, and had the best compression performance. CSC compression performance was the worst, and CA_SAGM compression performance was between the two. When decompressing the data, CA_SAGM performed the best, with the shortest decompression time and the fastest decompression rate. COO decompression performance was the worst. With increasing sparsity, the COO, CSC and CA_SAGM algorithms all exhibited longer compression and decompression times, lower compression and decompression rates, larger compression memory and lower compression ratios. When the sparsity was large, the compression memory and compression ratio of the three algorithms showed no difference characteristics, but the rest of the indexes were still different. Conclusion: CA_SAGM was an efficient compression algorithm that combines compression and decompression performance for sparse genomic mutation data. 1 Introduction Genes are one of the basic units of life and are of irreplaceable importance in the fields of understanding life phenomena, exploring the laws of biological evolution, and preventing and controlling human diseases (Tu et al., 2006; Oh et al., 2012). Gene sequences are the carriers of biological genetic information, and the biological properties of all organisms are related to genes (Mota and Franke, 2020). Due to the enormous usefulness of genetic data and the reduced cost of sequencing, many countries and organizations have initiated various genetic engineering projects, such as the Personal Genome Project (Ball et al., 2012) and the Bio Genome Project (Lewin et al., 2018). The rapid growth of genetic data can provide a significant boost to the life sciences. A rich gene pool can be very beneficial to the study of certain types of diseases, providing a new breakthrough to promote precision medicine and help solve medical problems (Janssen et al., 2011; Chen et al., 2020; Garand et al., 2020). However, the growth of genetic data has now greatly outpaced the growth of storage and transmission bandwidth, posing significant storage and transmission challenges (Xi et al., 2023a). The Human Genome Project (Cavalli-Sforza, 2005; Boeke et al., 2016) and the 1000 Genomes Project (Belsare et al., 2019; Fairley et al., 2020), for example, generate huge amounts of data, tens of terabytes or even more. Thus, issues related to genetic data compression have become a hot topic and focus of research in recent years. Genomic mutation data contain a large amount of genetic variation information that can be used to resolve the functional and phenotypic effects of genetic variants, which is of great value for human evolutionary genetic and medical research. Comparative databases (such as dbSNP and ClinVar) allow sequencing and differential analysis of genes in individuals or populations of species. Genetic information such as single-nucleotide variation (SNV), insertion deletion (InDel), structural variation (SV) and copy number variation (CNV) can be used to develop molecular markers and create databases of genetic polymorphisms. Cross-species genome alignment methods provide genomic context for the identification of annotated gene regions for variation across species (Samaha et al., 2021). In recent years, many researchers have developed a variety of rapid detection methods or tools for CNV (Huang et al., 2021; Lavrichenko et al., 2021; Kim et al., 2022) and SNV (van der Borght et al., 2015; Schnepp et al., 2019; Li et al., 2022). However, variant genomic mutation data are often sparse data formats that are difficult to apply with traditional compression methods. Traditional compression algorithms generally reduce the storage space of data by encoding it, such as Huffman coding (Moffat, 2019), Lempel-Ziv-Welch coding (Fira and Goras, 2008; Naqvi et al., 2011 ), etc. These algorithms are designed based on the assumption that there is a large amount of repetitive information in the data. But for sparse data, there is less redundancy in the information present in the data, making it difficult to compress effectively. The operations in turn waste a lot of time performing invalid operations with zero elements. As a result, traditional algorithms such as gzip, bzip2, lzo, snappy, etc. Are memory wasting and inefficient. As a result, compressed storage methods for sparse genes, a special form of data, have received increasing attention from researchers (Shekaramiz et al., 2019; Yao et al., 2019; Li et al., 2021; Wang et al., 2022). Although there are some sparse compression methods available, such as coordinate format (COO) and compressed sparse column format (CSC) compression (Park et al., 2020), they suffer from different drawbacks. Some are difficult to operate and cannot perform matrix operations, while others have problems such as slow inner product operations and slow row/column slicing operations, so none are particularly desirable either. In this paper, based on the sparse asymmetry of variant genomic data, we propose a method for lossless compression of genomic mutation data called CA_SAGM. Preprocessing steps such as prioritization and reverse Cuthill-Mckee (RCM) sorting are performed on the data to greatly reduce the bandwidth of the matrix, so that the scattered non-zero elements all converge towards the diagonal. The data is then compressed sparse row format (CSR) (Koza et al., 2014; Chen et al., 2018; Xing et al., 2022) and stored. This method can theoretically optimize the efficiency and quality of the rearranged data, saving processing time and memory requirements. This study shows that CA_SAGM exhibits higher compression performance and best decompression performance for sparse genomic data compared to COO and CSC. From a combination of several evaluation metrics such as compression and decompression time, compression and decompression rate, compression size and compression ratio, the CA_SAGM method performs the best and outperforms the rest of the methods. It is confirmed that the CA_SAGM method has fast and efficient compression and decompression performance for sparse genomic data, has good applicability and can be further extended to other similar data. 2 Materials and methods Both SNV and CNV are common formats for genomic mutation data storage. SNV is a single nucleotide mutation resulting in a deletion, insertion or substitution in a normal human gene. Large-scale tumor sequencing studies have shown that most cancers are caused by SNV (Macintyre et al., 2016). DNA copy number variation is a structural form of genomic variation (Medvedev et al., 2009; Stankiewicz and Lupski, 2010). Many studies have shown that CNVs are associated with complex diseases such as autism, schizophrenia, Alzheimer’s disease, and cancer. In recent years, there have been a large number of studies on SNVs and CNVs (Jugas et al., 2021; Prashant et al., 2021; Ladeira et al., 2022; Lee et al., 2022; Li et al., 2022; Zheng, 2022). 2.1 Materials In this paper, SNV data for nine different diseases and CNV data for six different diseases were selected from the TCGA database (The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium, 2020), all data are level3. SNVs or mutations are less common than other variants and mutations and cannot be observed in the diversity of the genome (Press et al., 2019). It is a single-nucleotide variation without any frequency restriction and may arise in somatic cells. The number and type of SNVs and other characteristics can reflect the genetic diversity, evolutionary history and other information of a species. SNVs also play an important role in the occurrence and development of human diseases (Xi et al., 2020a; Xi et al., 2023b). For example, some SNVs may cause gene mutations and affect protein structure and function, leading to the development of diseases; SNV-based research also helps to find susceptibility genes for diseases and develop corresponding drug targets, etc. SNV data are from brain tumor, acute myeloid leukemia, thyroid cancer, prostate cancer, ovarian cancer, breast cancer, bladder cancer, renal clear cell carcinoma and colorectal cancer. CNV, or copy number variation, is caused by rearrangements in the genome. It generally refers to an increase or decrease in the copy number of a large segment of the genome. It is mainly manifested as deletions and duplications at the sub-microscopic level. CNV is an important genetic basis for individual differences and is widely distributed in the human genome (Xi and Li, 2016; Xi et al., 2020b). The CNV data are more complex than the SNV data, with larger data sets, higher numbers of non-zeros and higher densities. CNV data were obtained from acute spinal leukemia, thyroid cancer, prostate cancer, bladder cancer, renal clear cell carcinoma and colorectal cancer. The basic characteristics of SNV and CNV data were analyzed in detail, including Data set size (n), non-zero number (n), sparsity (%), rows (n), rows/columns (%), file size (K), L1-norm, L2-norm and Rank. They are shown in Tables 1, 2 respectively. TABLE 1 TABLE 2 2.2 Methods 2.2.1 Compression algorithm COO and CSC are two common compression methods for sparse data. COO uses a triplet to store information about the non-zero elements of the matrix, storing the row subscripts, column subscripts and values of the non-zero elements respectively. The non-zero elements are found by traversing the rows and columns and storing the corresponding number of rows, columns and values in the corresponding arrays. Let A ∈ Rm×n be a sparse matrix where the number of non-zero elements. Using the COO storage method, A can be stored as three vectors (I, J, V). Where I and J store the coordinates of the rows and columns of the non-zero elements respectively, and V stores the values of the non-zero elements. Examples of mathematical formulas are as follows: Data can be converted to other storage formats by COO method quickly and easily, and data can be quickly converted with compressed sparse row format (CSR)/CSC formats and can be repeatedly indexed. However, the COO format is almost impossible to manipulate or matrix-operate except by converting it to other formats. The CSC is compressed and stored according to the principle of data column precedence. The matrix is determined by the row indexes of non-zero elements, index pointers, and non-zero data. Suppose an m × n sparse matrix, with A[ij] denoting the elements of row i and column j. CSC can store A as three vectors (in_dices, indptr and value). Where in_dices is the row index of the non-zero elements, indptr is an array of index pointers and value is the non-zero data in the matrix. The steps are as follows: 1. Get the row index of the non-zero element in column i according to indices [indptr[i]: indptr[i+1]]. 2. Get the number of non-zero elements in column i according to [indptr[i]: indptr[i+1]]. 3. The column index and row index are obtained and the corresponding data is stored in: value [indptr[i]: indptr[i+1]]. The following mathematical formula is an example: The CSC data format performs efficient column slicing, but the inner matrix product and row slicing operations are relatively slow. 2.2.2 CA_SAGM algorithm CA_SAGM is an optimization algorithm based on compressed sparse row format, which is implemented by optimizing the matrix ordering for the characteristics of variable genomic data. The process is as follows: first, the variant genomic data is sorted by row-major order so that adjacent non-zero elements are also physically stored as close as possible to each other. Then, the reverse Cuthill-McKee sorting algorithm is used to renumber the rows and columns of the data according to the sorting results. Finally, using the new row and column numbering, the sparse matrix is CSR compressed and stored in a file. Reverse Cuthill-Mckee sorting is an algorithm that can be used to optimize the storage of sparse matrices by rearranging the rows and columns of a sparse matrix so that the matrix has a smaller bandwidth. Bandwidth is understood to be the widest diagonal distance between the non-zero elements of a matrix and has a significant impact on the efficiency of computational operations such as matrix multiplication. The basic idea of the RCM sorting algorithm is to reduce the bandwidth of a matrix by arranging interconnected points as close to each other as possible. The sparse matrix is first transformed into an undirected graph, and then this graph is traversed and pruned as a way to determine the new order of nodes, which in turn leads to the rearranged matrix. Specifically, when a node is processed, the traversal of that branch is stopped if the number of remaining nodes is not sufficient to cause a smaller bandwidth to the already traversed nodes. In addition, the RCM sorting algorithm can also use other heuristic rules such as degree sorting and greedy strategy to further improve the efficiency and quality of matrix reordering. The main ideological steps of the RCM algorithm are as follows: 1. Select a starting point and mark it as a visited node. 2. Sort the nodes adjacent to this starting point in order of traversal distance from closest to farthest. 3. Recursively executes steps 1 and 2 for the sorted neighboring nodes. 4. When all adjacent nodes have been traversed, return to the previous level of nodes and continue until the last level has been traversed. 5. For all unvisited nodes, sort the nodes according to the depth-first traversal method, again prioritizing the nodes adjacent to the visited nodes until all nodes have been traversed. Sparse genomic matrix data has a large bandwidth due to the dispersed arrangement of non-zero elements. With the use of reverse Cuthill-Mckee matrix bandwidth compression, the bandwidth of the matrix is greatly reduced, and the scattered non-zero elements all converge towards the diagonal, which greatly improves computational efficiency and reduces memory requirements according to the relationship between computational complexity of lower-upper (LU) decomposition and memory requirements and bandwidth, which is followed by LU decomposition after RCM pre-processing. For most sparse matrix problems, due to the small number of elements being sorted, RCM has proven to be a more efficient algorithm in practice, as neither quick sort nor merge sort is as fast. It performs as fast as traditional execution, but with no reduction in speed for problems with a high number of nodes. The steps of the reverse Cuthill-Mckee algorithm are as follows: 1. Instantiate an empty queue Q for the alignment of the object R. 2. Find the object with the smallest degree whose index has not been added to R. Assume that the object corresponding to row p has been identified as the object with the smallest degree. Add p to R. (The degree of a node is defined as the sum of the non-diagonal elements in the corresponding row.) 3. Add the index to R, and add all neighbors of the corresponding object at the index, in increasing order to Q. Neighbors are nodes with non-zero values between them. 4. Extract the first node in Q, e.g., C. Insert C into R if it has not already been inserted, then add Q’s C neighbors to Q in increasing order. 5. If Q is not empty, repeat step4. 6. If Q is empty, but there are objects in the matrix that are not yet included in R, start again from Step2. 7. Until all objects are contained in R terminate the algorithm. 2.3 Performance evaluation metrics A number of metrics were used to evaluate the compression and decompression performance between COO, CSC and CA_SAGM. Compression time (CT, Milliseconds or Seconds), compression rate (CR, Megabytes/ Second), compression memory (CM, Kilobytes or Megabytes), compression ratio (CRO), decompression time (DCT, Milliseconds) and decompression rate (DCR, Megabytes/Second) are included. These parameters are calculated in Eqs 3–8. $CT=Compression end time−Compression start time(3)$ $DCT=Decompression end time−Decompression start time(4)$ $CR=Compression size / CT(5)$ $DCR=Decompression size / DCT(6)$ $CM=Memory size after compression(7)$ $CRO=Pre−compressed memory / post−compressed memory(8)$ The above metrics allow the compression algorithms to be evaluated in terms of the speed at which the data is compressed/decompressed for work, the amount of data, the memory space occupied and other different aspects. In general, shorter CT and DCT, faster CR and DCR, smaller CM and larger CRO represent better compression and decompression performance. And, we performed a statistical analysis of the experimental results. However, we can also evaluate algorithms based on different data, different usage scenarios and requirements. Different compression algorithms will perform differently in these performance metrics, users will need to choose the right algorithm for their specific scenario and needs. 3 Experiments and results In order to objectively compare the performance metrics of the different algorithms, all experiments were conducted in the same environmental configuration. The system configuration used in this study is Windows 10 (Microsoft Corporation, United States), CPU: Inter(R) Core(TM) I5-10500, 3.10 GHz; RAM: 8 G. The compression algorithm processing software is MATLAB R2022a (Mathworks. United States). And the statistical analysis software is IBM SPSS Statistics 26 (IBM Corp. United States). No other applications were run during any of the programs to ensure a consistent working 3.1 SNV data compression performance 3.1.1 Comparison of SNV data compression algorithms The general process of processing SNV data includes data read-in, pre-processing, compression and storage. The original SNV data is read in and tested for basic characteristics, including data set size (n), non-zero number (n), sparsity (%), rows (n), rows/columns (%), file size (K), L1-norm, L2-norm and rank. First, the SNV data runs the COO and CSC programs separately. The sparse data matrix was then preprocessed by row-first sorting and RCM sorting successively. Next, SNV data were run through CA_SAGM compression programs. Compression time, decompression time, compression rate, decompression rate, compression memory and compression ratio are respectively obtained by the three methods. The results are shown in Figure 1. Finally, the compressed data were stored in a suitable location. The experimental results were in mean ± SD (Standard deviation, SD) format, and were analyzed by comparing the evaluation indexes among different algorithms and using statistical methods. FIGURE 1 FIGURE 1. Compares the compression and decompression metrics of COO, CSC and CA_SAGM for SNV. Where (A) stands for compression time, (B) for decompression time, (C) for compression speed, (D) for decompression speed, (E) for compressed memory and (F) for compression ratio. As can be seen from the Figure 1, the COO algorithm has the shortest CT (5.91 ± 2.42 vs. 184.61 ± 142.89 vs. 12.25 ± 5.81), the largest CR (4989.43 ± 2753.14 vs. 238.85 ± 153.2 vs. 2377.17 ± 1093.17), the smallest CM (509.53 ± 472.56 vs. 604.55 ± 472.59 vs. 511.97 ± 472.65), CRO was the largest (170.31 ± 192.38 vs. 70.8 ± 46.65 vs. 164.48 ± 181.78), compression performance was the best. However, decompression took the longest to recover the original data (30.76 ± 23.89 vs. 21.33 ± 9.42 vs. 7.96 ± 3.32) and had a smaller decompression rate (1596.72 ± 1187.87 vs. 1389.44 ± 629.08 vs. 3467.85 ± 1246.34). The performance of CSC was the opposite of COO, CT was the longest (184.61 ± 142.89), CR was the lowest (238.85 ± 153.2), CM was the largest (604.55 ± 472.59) and CRO was the smallest (70.8 ± 46.65). The decompression performance of CSC is between COO and CA_SAGM, with DCT and DCR both performing in the middle. In addition, CA_SAGM has the best decompression performance, with the shortest DCT (7.96 ± 3.32) and the largest DCR (3467.85 ± 1246.34). If the overall total time of compression and decompression time, the average rate of compression rate and decompression rate are considered, it is clear that the CA_SAGM algorithm has the shortest total time and the largest average rate. A paired sample t-test was used to assess whether there were differences in the same metrics between any two algorithms. The results show that there is a significant difference (p < 0.05) between any two algorithms for almost all metrics: compression time (COO to CSC: 0.005; COO to CA_SAGM: 0.001; CA_SAGM to CSC: 0.006), decompression time (COO to CSC: 0.111; COO to CA_SAGM: 0.013; CA_SAGM to CSC: 0.000), compression rate (COO to CSC: 0.001; COO to CA_SAGM: 0.003; CA_SAGM to CSC: 0.000), decompression rate (COO to CSC: 0.493; COO to CA_SAGM: 0.001; CA_SAGM to CSC: 0.000), compression memory (COO to CSC: 0.000; COO to CA_SAGM: 0.000; CA_SAGM to CSC: 0.000), compression ratio (COO to CSC: 0.003; COO to CA_SAGM: 0.000; CA_SAGM to CSC: 0.003). There is little difference between COO and CSC in terms of compression time and decompression speed. 3.1.2 Correlation analysis of SNV data We used spearman correlation analysis to investigate whether the compression and decompression performance was correlated with the basic characteristics of the original SNV data. Table 3 shows that the compression time, decompression time, compression rate, decompression rate, compression memory and compression ratio are all correlated with the non-zero number of the original data, sparsity, file size, L1-norm and L2-norm. There was a strong correlation between sparsity and the non-zero number of raw data (p = 0.983), file size (p = 0.967), L1-norm (p = 0.983) and L2-norm (p = 0.983). TABLE 3 TABLE 3. Spearman correlation analysis between compression and decompression metrics of COO, CSC and CA_SAGM algorithms for SNV data and basic characteristics of the original data. As sparsity is easy to calculate and obtain, we further analyzed the effect of sparsity on the SNV data, as shown in Figure 2. As can be seen from the figure, CSC compression performance performs the worst, with the longest CT, the smallest CR and the smallest CRO. Both COO and CA_SAGM show better compression characteristics, with shorter CT and larger CR. In terms of decompression, COO performs the worst, with the longest DCT and smallest DCR. CA_SAGM performs the best, with the shortest DCT and largest DCR, CSC performs in the middle. The difference between the compression & decompression performance of COO, CSC and CA_SAGM is small when the sparsity is close to 0. As the data sparsity increases (but the sparsity is still small, <2%), the compression & decompression time tends to become larger, the compression and decompression rate tends to decrease, and the compression ratio also tends to decrease. The difference in compression and decompression times between algorithms increases with sparsity. FIGURE 2 FIGURE 2. Curves of compression and decompression metrics vs. sparsity variation for COO, CSC and CA_SAGM for SNV. Where (A) stands for compression time, (B) for decompression time, (C) for compression speed, (D) for decompression speed, (E) for compressed memory and (F) for compression ratio. 3.2 CNV data compression performance 3.2.1 Comparison of CNV data compression algorithms CNV data are more complex than SNV data, with larger datasets, a larger number of non-zeros and greater sparsity. Thus, we further investigated and analyzed the experimental results of the CNV data. Similarly, the process of processing CNV data includes steps such as data read-in, pre-processing, compression and storage. The raw CNV data is read in and tested for basic characteristics, including data set size (n), non-zero number (n), sparsity (%), rows (n), rows/columns (%), file size (K), L1-norm, L2-norm and rank. First, the CNV data runs the COO and CSC programs separately. The sparse data matrix was then preprocessed by row-first sorting and RCM sorting successively. Next, SNV data were run through CA_SAGM compression programs. Compression time, decompression time, compression rate, decompression rate, compression memory and compression ratio are respectively obtained by the three methods. The results are shown in Figure 3. Finally, the compressed data were stored in a suitable location. The experimental results were in mean ± SD, and were analyzed by comparing the evaluation indexes among different algorithms and using statistical methods. FIGURE 3 FIGURE 3. Compares the compression and decompression metrics of COO, CSC and CA_SAGM for CNV. Where (A) stands for compression time, (B) for decompression time, (C) for compression speed, (D) for decompression speed, (E) for compressed memory and (F) for compression ratio. From the Figure 3, we can see that in terms of compression performance, COO performs the best with the shortest CT (0.11 ± 0.06 vs. 4.51 ± 3.71 vs. 0.24 ± 0.21) and the largest CR (357.02 ± 337.97 vs. 12.72 ± 12.72 vs. 238.27 ± 240.35). CSC has the worst compression performance with the longest CT and the smallest CR. CA_SAGM had the middle compression performance. However, the CM (16.11 ± 12.45 vs. 16.11 ± 12.45 vs. 16.11 ± 12.45) and CRO (0.62 ± 0.41 vs. 0.62 ± 0.41 vs. 0.62 ± 0.41) were the same after compression by the three methods, which may be associated with a larger sparsity (19.58% ± 17.52%). In terms of decompression, COO had the worst performance, with the longest DCT (0.86 ± 0.59 vs. 0.09 ± 0.04 vs. 0.07 ± 0.04) and the smallest DCR (65.71 ± 67.19 vs. 375.74 ± 252.88 vs. 639.42 ± 553.6). CA_SAGM had the best decompression performance, with the shortest DCT and the smallest DCR. CSC decompression performance in the middle. Similarly, a paired sample t-test was used to assess whether there were differences between any two algorithms for the same metrics. The results showed that almost all metrics were significantly different between any two algorithms (p < 0.05), with the exception of compression memory and compression ratio (p > 0.05). The detailed analysis results are as follows: Compression time (COO to CSC: 0.032; COO to CA_SAGM: 0.087; CA_SAGM to CSC: 0.031), decompression time (COO to CSC: 0.018; COO to CA_SAGM: 0.016; CA_SAGM to CSC: 0.000), compression rate (COO to CSC: 0.05; COO to CA_SAGM: 0.357; CA_SAGM to CSC: 0.06), decompression rate (COO to CSC: 0.011; COO to CA_SAGM: 0.034; CA_SAGM to CSC: 0.115), compression memory (COO to CSC: 0.018; COO to CA_SAGM: 0.002; CA_SAGM to CSC: 0.000), compression ratio (COO to CSC: 0.006; COO to CA_SAGM: 0.000; CA_SAGM to CSC: 0.007). 3.2.2 Correlation analysis of CNV data Spearman correlation analysis was used to investigate whether the compression and decompression performance was correlated with the basic characteristics of the CNV raw data (see Table 4). The results show that CT, DCT, CR, DCR, CM and CRO all have large correlation coefficients with the non-zero number, sparsity and L2-norm of the original data. In addition, CT, DCT and CM are strongly correlated with data file size and L1-norm. Also, there was a strong correlation between sparsity, non-zero number (p = 0.771) and L2-norm (p = 0.714). There was also a strong correlation between file size and L1-norm (p = 0.943). TABLE 4 TABLE 4. Spearman correlation analysis between compression and decompression metrics of COO, CSC and CA_SAGM algorithms for CNV data and basic characteristics of the original data. Similarly, we have further analyzed the effect of the variation of CNV data sparsity on the experimental results, as shown in Figure 4. It can also be seen from the figure that in terms of compression performance, CSC has the worst compression characteristics, with the longest CT and the smallest CR. While both COO and CA_SAGM show better compression characteristics, with shorter CT and larger CR, with less difference between them. In terms of decompression, COO has the worst performance, with the longest DCT and the smallest DCR. CA_SAGM shows the best decompression characteristics, with the shortest DCT and the largest DCR. CSC decompression characteristics are between COO and CA_SAGM. When the sparsity is relatively small, the difference in compression and decompression performance between COO, CSC and CA_SAGM is small. The difference in compression and decompression time between CSC, COO and CA_SAGM increases as the sparsity increases. However, the difference between CR and DCR decreases with increasing sparsity. FIGURE 4 FIGURE 4. Curves of compression and decompression metrics vs. sparsity variation for COO, CSC and CA_SAGM for CNV. Where (A) stands for compression time, (B) for decompression time, (C) for compression speed, (D) for decompression speed, (E) for compressed memory and (F) for compression ratio. 4 Discussion and conclusion In this paper, we propose a sparse asymmetric gene mutation compression algorithm CA_SAGM. The compression and decompression performance of COO, CSC and CA_SAGM is compared and analyzed using SNV and CNV data as the study objects. The results show that CA_SAGM can meet the high performance requirements of compression and decompression, achieve fast and lossless compression and decompression. In addition, it was found that the compression and decompression performance has a strong correlation with sparse. As the sparsity increases, all algorithms show longer compression and decompression times, lower compression and decompression rates, increased compression memory and lower compression ratios. In our current study, CA_SAGM proved to have high compression and decompression performance for sparse genomic mutation data. CA_SAGM is a CSR compression algorithm for row-first sorting and reverse Cuthill-McKee sorting optimization. CA_SAGM has its own unique advantages over other compression algorithms. In combination with the reverse Cuthill-McKee sorting and optimization algorithm phase, the scattered non-zero elements of the data can be brought together on the diagonal and the bandwidth of the matrix is reduced considerably. Computational complexity versus memory and bandwidth based on the results of low-high (LU) decomposition. RCM pre-processing followed by LU decomposition can significantly reduce processing time, improve computational efficiency and reduce memory requirements. CA_SAGM has significant advantages in terms of compression and decompression time, as well as compression and decompression speed. CA_SAGM also has a very significant compression ratio advantage when the sparsity is low. It should be noted that the results of this paper also have some limitations. Firstly, the SNV and CNV data from the experiments are limited and the sources of test data need to be expanded. Secondly, the data were only obtained from TCGA and the rest of the databases (e.g., GEO) were not studied. Recently, dedicated and integrated tools, genetic data compression algorithms, software and methods for compression in combination with machine learning (Wang et al., 2019; Kryukov et al., 2020; Chen et al., 2022; Niu et al., 2022; Yao et al., 2022) have received increasing attention and application by researchers, making it possible to process huge amounts of genetic data. For example, Cui Huanyu et al. proposed a new method of matrix compression based on CSR and COO: PBC algorithm for the problem that SPMV (sparse matrix vector multiplication) computation leads to computational redundancy, storage redundancy, load imbalance and low GPU utilization (Cui et al., 2022). The method considers load balancing conditions during the SPMV calculation. The blocks are divided according to a row-major order strategy, ensuring that the standard deviation between each block is minimized to satisfy the maximum similarity in the number of non-zero elements between each block. The result exhibits both speed-up ratio and compression performance. For lossless compression, researchers such as Jiabing Fu recommended LCQS; a lossless compression tool specialized for quality scores (Fu et al., 2020). The further development of specialized and integrated tools, software and evaluation methods, combined with artificial intelligence algorithms for the analysis and processing of genetic data are also the main directions and elements of our next research work. In summary, CA_SAGM has been shown to reduce data transfer time and storage space, and improve the utilization of network and storage resources. Promoting the use of this method will make the researcher’s work more effective and convenient. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors. Author contributions Conception and design: GZ and JW; Data analysis and interpretation: YD, YL, JH, JM, XW, and XL. All authors contributed to the article and approved the submitted version. This work was supported by the Medical Scientific Research Foundation of Guangdong Province, China (No. B2022347). Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher’s note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Ball, M. P., Thakuria, J. V., Zaranek, A. W., Clegg, T., Rosenbaum, A. M., Wu, X., et al. (2012). A public resource facilitating clinical use of genomes. Proc. Natl. Acad. Sci. U. S. A. 109 (30), 11920–11927. doi:10.1073/pnas.1201904109 Belsare, S., Levy-Sakin, M., Mostovoy, Y., Durinck, S., Chaudhuri, S., Xiao, M., et al. (2019). Evaluating the quality of the 1000 genomes project data. Bmc Genomics 20 (1), 620. doi:10.1186/ Boeke, J. D., Church, G., Hessel, A., Kelley, N. J., Arkin, A., Cai, Y., et al. (2016). GENOME ENGINEERING. The genome project-write. Science 353 (6295), 126–127. doi:10.1126/science.aaf6850 Cavalli-Sforza, L. L. (2005). The human genome diversity project: Past, present and future. Nat. Rev. Genet. 6 (4), 333–340. doi:10.1038/nrg1596 Chen, D., Mao, Y., Ding, Q., Wang, W., Zhu, F., Chen, C., et al. (2020). Prognostic implications of programmed death ligand 1 expression in resected lung adenocarcinoma: A systematic review and meta-analysis. Eur. J. Cardio-Thoracic Surg. 58 (5), 888–898. doi:10.1093/ejcts/ezaa172 Chen, H., Chen, J., Lu, Z., and Wang, R. (2022). Cmic: An efficient quality score compressor with random access functionality. BMC Bioinforma. 23 (1), 294. doi:10.1186/s12859-022-04837-1 Chen, X., Xie, P., Chi, L., Liu, J., and Gong, C. (2018). An efficient SIMD compression format for sparse matrix-vector multiplication. Concurrency Computation-Practice Exp. 30 (23), e4800. Cui, H., Wang, N., Wang, Y., Han, Q., and Xu, Y. (2022). An effective SPMV based on block strategy and hybrid compression on GPU. J. Supercomput. 78 (5), 6318–6339. doi:10.1007/s11227-021-04123-6 Fairley, S., Lowy-Gallego, E., Perry, E., and Flicek, P. (2020). The International Genome Sample Resource (IGSR) collection of open human genomic variation resources. Nucleic Acids Res. 48 (D1), D941–D947. doi:10.1093/nar/gkz836 Fira, C. M., and Goras, L. (2008). An ECG signals compression method and its validation using NNs. Ieee Trans. Biomed. Eng. 55 (4), 1319–1326. doi:10.1109/TBME.2008.918465 Fu, J., Ke, B., and Dong, S. (2020). Lcqs: An efficient lossless compression tool of quality scores with random access functionality. BMC Bioinforma. 21 (1), 109. doi:10.1186/s12859-020-3428-7 Garand, M., Kumar, M., Huang, S. S. Y., and Al Khodor, S. (2020). A literature-based approach for curating gene signatures in multifaceted diseases. J. Transl. Med. 18 (1), 279. doi:10.1186/ Huang, T., Li, J., Jia, B., and Sang, H. (2021). CNV-MEANN: A neural network and mind evolutionary algorithm-based detection of copy number variations from next-generation sequencing data. Front. Genet. 12, 700874–708021. doi:10.3389/fgene.2021.700874 Janssen, S., Ramaswami, G., Davis, E. E., Hurd, T., Airik, R., Kasanuki, J. M., et al. (2011). Mutation analysis in Bardet-Biedl syndrome by DNA pooling and massively parallel resequencing in 105 individuals. Hum. Genet. 129 (1), 79–90. doi:10.1007/s00439-010-0902-8 Jugas, R., Sedlar, K., Vitek, M., Nykrynova, M., Barton, V., Bezdicek, M., et al. (2021). CNproScan: Hybrid CNV detection for bacterial genomes. Genomics 113 (5), 3103–3111. doi:10.1016/ Kim, M. J., Lee, S., Yun, H., Cho, S. I., Kim, B., Lee, J. S., et al. (2022). Consistent count region-copy number variation (CCR-CNV): An expandable and robust tool for clinical diagnosis of copy number variation at the exon level using next-generation sequencing data. Genet. Med. 24 (3), 663–672. doi:10.1016/j.gim.2021.10.025 Koza, Z., Matyka, M., Szkoda, S., and Mirosław, Ł. (2014). Compressed multirow storage format for sparse matrices on graphics processing units. Siam J. Sci. Comput. 36 (2), C219–C239. doi:10.1137/ Kryukov, K., Ueda, M. T., Nakagawa, S., and Imanishi, T. (2020). Sequence Compression Benchmark (SCB) database-A comprehensive evaluation of reference-free compressors for FASTA-formatted sequences. Gigascience 9 (7), giaa072. doi:10.1093/gigascience/giaa072 Ladeira, G. C., Pilonetto, F., Fernandes, A. C., Bóscollo, P. P., Dauria, B. D., Titto, C. G., et al. (2022). CNV detection and their association with growth, efficiency and carcass traits in Santa Ines sheep. J. Animal Breed. Genet. 139 (4), 476–487. doi:10.1111/jbg.12671 Lavrichenko, K., Johansson, S., and Jonassen, I. (2021). Comprehensive characterization of copy number variation (CNV) called from array, long- and short-read data. BMC Genomics 22 (1), 826. Lee, W.-P., Zhu, Q., Yang, X., Liu, S., Cerveria, E., Ryan, M., et al. (2022). A whole-genome sequencing-based algorithm for copy number detection at clinical grade level. Genomics, proteomics Bioinforma. 20. 1197. doi:10.1016/j.gpb.2021.06.003 Lewin, H. A., Robinson, G. E., Kress, W. J., Baker, W. J., Coddington, J., Crandall, K. A., et al. (2018). Earth BioGenome project: Sequencing life for the future of life. Proc. Natl. Acad. Sci. U. S. A. 115 (17), 4325–4333. doi:10.1073/pnas.1720115115 Li, B., Yu, L., and Gao, L. (2022). Cancer classification based on multiple dimensions: SNV patterns. Comput. Biol. Med. 151, 106270. doi:10.1016/j.compbiomed.2022.106270 Li, R., Chang, C., Tanigawa, Y., Narasimhan, B., Hastie, T., Tibshirani, R., et al. (2021). Fast numerical optimization for genome sequencing data in population biobanks. Bioinformatics 37 (22), 4148–4155. doi:10.1093/bioinformatics/btab452 Macintyre, G., Ylstra, B., and Brenton, J. D. (2016). Sequencing structural variants in cancer for precision therapeutics. Trends Genet. 32 (9), 530–542. doi:10.1016/j.tig.2016.07.002 Medvedev, P., Stanciu, M., and Brudno, M. (2009). Computational methods for discovering structural variation with next-generation sequencing. Nat. Methods 6 (11), S13–S20. doi:10.1038/nmeth.1374 Moffat, A. (2019). Huffman coding. Acm Comput. Surv. 52 (4), 1–35. doi:10.1145/3342555 Mota, N. R., and Franke, B. (2020). 30-year journey from the start of the human genome project to clinical application of genomics in psychiatry: Are we there yet? Lancet Psychiatry 7 (1), 7–9. Naqvi, S., Naqvi, R., Riaz, R. R., and Siddiqi, F. (2011). Optimized RTL design and implementation of LZW algorithm for high bandwidth applications. Przeglad Elektrotechniczny 87 (4), 279–285. Niu, Y., Ma, M., Li, F., Liu, X., and Shi, G. (2022). ACO:lossless quality score compression based on adaptive coding order. BMC Bioinforma. 23 (1), 219. doi:10.1186/s12859-022-04712-z Oh, S., Lee, J., Kwon, M. S., Weir, B., Ha, K., and Park, T. (2012). A novel method to identify high order gene-gene interactions in genome-wide association studies: Gene-based MDR. Bmc Bioinforma. 13, S5. doi:10.1186/1471-2105-13-S9-S5 Park, J., Yi, W., Ahn, D., Kung, J., and Kim, J. J. (2020). Balancing computation loads and optimizing input vector loading in LSTM accelerators. Ieee Trans. Computer-Aided Des. Integr. Circuits Syst. 39 (9), 1889–1901. doi:10.1109/tcad.2019.2926482 Prashant, N. M., Liu, H., Dillard, C., Ibeawuchi, H., Alsaeedy, T., Chan, H., et al. (2021). Improved SNV discovery in barcode-stratified scRNA-seq alignments. Genes 12 (10), 1558. doi:10.3390/ Press, M. O., Hall, A. N., Morton, E. A., and Queitsch, C. (2019). Substitutions are boring: Some arguments about parallel mutations and high mutation rates. Trends Genet. 35 (4), 253–264. Samaha, G., Wade, C. M., Mazrier, H., Grueber, C. E., and Haase, B. (2021). Exploiting genomic synteny in felidae: Cross-species genome alignments and SNV discovery can aid conservation management. Bmc Genomics 22 (1), 601. doi:10.1186/s12864-021-07899-2 Schnepp, P. M., Chen, M., Keller, E. T., and Zhou, X. (2019). SNV identification from single-cell RNA sequencing data. Hum. Mol. Genet. 28 (21), 3569–3583. doi:10.1093/hmg/ddz207 Shekaramiz, M., Moon, T. K., and Gunther, J. H. (2019). Bayesian compressive sensing of sparse signals with unknown clustering patterns. Entropy 21 (3), 247. doi:10.3390/e21030247 Stankiewicz, P., and Lupski, J. R. (2010). Structural variation in the human genome and its role in disease. Annu. Rev. Med. 61, 437–455. doi:10.1146/annurev-med-100708-204735 The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium (2020). Pan-cancer analysis of whole genomes. Nature 578 (7793), 82. Tu, Z. D., Wang, L., Xu, M., Zhou, X., Chen, T., and Sun, F. (2006). Further understanding human disease genes by comparing with housekeeping genes and other genes. Bmc Genomics 7, 31. doi:10.1186/ van der Borght, K., Thys, K., Wetzels, Y., Clement, L., Verbist, B., Reumers, J., et al. (2015). QQ-SNV: Single nucleotide variant detection at low frequency by comparing the quality quantiles. Bmc Bioinforma. 16, 379. doi:10.1186/s12859-015-0812-9 Wang, J., Ding, D., Li, Z., Feng, X., Cao, C., and Ma, Z. (2022). Sparse tensor-based multiscale representation for point cloud geometry compression. IEEE Trans. pattern analysis Mach. Intell. 2022, 1. doi:10.1109/TPAMI.2022.3225816 Wang, R., Zang, T., and Wang, Y. (2019). Human mitochondrial genome compression using machine learning techniques. Hum. Genomics 13 (1), 49. doi:10.1186/s40246-019-0225-3 Xi, J., Deng, Z., Liu, Y., Wang, Q., and Shi, W. (2023). Integrating multi-type aberrations from DNA and RNA through dynamic mapping gene space for subtype-specific breast cancer driver discovery. Peerj 11, e14843. doi:10.7717/peerj.14843 Xi, J., and Li, A. (2016). Discovering recurrent copy number aberrations in complex patterns via non-negative sparse singular value decomposition. Ieee-Acm Trans. Comput. Biol. Bioinforma. 13 (4), 656–668. doi:10.1109/TCBB.2015.2474404 Xi, J., Li, A., and Wang, M. (2020). HetRCNA: A novel method to identify recurrent copy number alternations from heterogeneous tumor samples based on matrix decomposition framework. Ieee-Acm Trans. Comput. Biol. Bioinforma. 17 (2), 422–434. doi:10.1109/TCBB.2018.2846599 Xi, J., Sun, D., Chang, C., Zhou, S., and Huang, Q. (2023). An omics-to-omics joint knowledge association subtensor model for radiogenomics cross-modal modules from genomics and ultrasonic images of breast cancers. Comput. Biol. Med. 155, 106672. doi:10.1016/j.compbiomed.2023.106672 Xi, J., Yuan, X., Wang, M., Li, X., and Huang, Q. (2020). Inferring subgroup-specific driver genes from heterogeneous cancer samples via subspace learning with subgroup indication. Bioinformatics 36 (6), 1855–1863. doi:10.1093/bioinformatics/btz793 Xing, L., Wang, Z., Ding, Z., Chu, G., Dong, L., and Xiao, N. (2022). An efficient sparse stiffness matrix vector multiplication using compressed sparse row storage format on AMD GPU. Concurrency Computation-Practice Exp. 34 (23). doi:10.1002/cpe.7186 Yao, H., Hu, G., Liu, S., Fang, H., and Ji, Y. (2022). SparkGC: Spark based genome compression for large collections of genomes. BMC Bioinforma. 23 (1), 297. doi:10.1186/s12859-022-04825-5 Yao, W., Huang, F., Zhang, X., and Tang, J. (2019). Ecogems: Efficient compression and retrieve of SNP data of 2058 rice accessions with integer sparse matrices. Bioinformatics 35 (20), 4181–4183. Zheng, T. (2022). DETexT: An SNV detection enhancement for low read depth by integrating mutational signatures into TextCNN. Front. Genet. 13, 943972–948021. (Print)). doi:10.3389/fgene.2022.943972 Keywords: genomic, sparse, compression, single-nucleotide variation, copy number variation Citation: Ding Y, Liao Y, He J, Ma J, Wei X, Liu X, Zhang G and Wang J (2023) Enhancing genomic mutation data storage optimization based on the compression of asymmetry of sparsity. Front. Genet. 14:1213907. doi: 10.3389/fgene.2023.1213907 Received: 28 April 2023; Accepted: 24 May 2023; Published: 01 June 2023. Reviewed by: Xianling Dong , Chengde Medical University, China Zhenhui Dai , Guangzhou University of Chinese Medicine, China Copyright © 2023 Ding, Liao, He, Ma, Wei, Liu, Zhang and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Guiying Zhang, guiyingzh@126.com; Jing Wang, eewangjing@163.com ^†These authors have contributed equally to this work
{"url":"https://www.frontiersin.org/journals/genetics/articles/10.3389/fgene.2023.1213907/full","timestamp":"2024-11-07T21:54:58Z","content_type":"text/html","content_length":"516045","record_id":"<urn:uuid:c7b89129-314a-479b-84b0-e206e6e0fbce>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00243.warc.gz"}
Multiplying and Dividing Real Numbers Learning Outcomes • Multiply and divide real numbers □ Multiply two or more real numbers. □ Divide real numbers □ Simplify expressions with both multiplication and division Multiplication and division are inverse operations, just as addition and subtraction are. You may recall that when you divide fractions, you multiply by the reciprocal. Inverse operations “undo” each Multiply Real Numbers Multiplying real numbers is not that different from multiplying whole numbers and positive fractions. However, you haven’t learned what effect a negative sign has on the product. With whole numbers, you can think of multiplication as repeated addition. Using the number line, you can make multiple jumps of a given size. For example, the following picture shows the product [latex]3\cdot4[/latex] as [latex]3[/latex] jumps of [latex]4[/latex] units each. So to multiply [latex]3(−4)[/latex], you can face left (toward the negative side) and make three “jumps” forward (in a negative direction). The product of a positive number and a negative number (or a negative and a positive) is negative. The Product of a Positive Number and a Negative Number To multiply a positive number and a negative number, multiply their absolute values. The product is negative. Find [latex]−3.8(0.6)[/latex]. Show Solution Try It The following video contains examples of how to multiply decimal numbers with different signs. The Product of Two Numbers with the Same Sign (both positive or both negative) To multiply two positive numbers, multiply their absolute values. The product is positive. To multiply two negative numbers, multiply their absolute values. The product is positive. Find [latex] ~\left( -\frac{3}{4} \right)\left( -\frac{2}{5} \right)[/latex] Show Solution The following video shows examples of multiplying two signed fractions, including simplification of the answer. To summarize: • positive [latex]\cdot[/latex] positive: The product is positive. • negative [latex]\cdot[/latex] negative: The product is positive. • negative [latex]\cdot[/latex] positive: The product is negative. • positive [latex]\cdot[/latex] negative: The product is negative. You can see that the product of two negative numbers is a positive number. So, if you are multiplying more than two numbers, you can count the number of negative factors. Multiplying More Than Two Negative Numbers If there are an even number ([latex]0, 2, 4[/latex], …) of negative factors to multiply, the product is positive. If there are an odd number ([latex]1, 3, 5[/latex], …) of negative factors, the product is negative. Find [latex]3(−6)(2)(−3)(−1)[/latex]. Show Solution The following video contains examples of multiplying more than two signed integers. Divide Real Numbers You may remember that when you divided fractions, you multiplied by the reciprocal. Reciprocal is another name for the multiplicative inverse (just as opposite is another name for additive inverse). An easy way to find the multiplicative inverse is to just “flip” the numerator and denominator as you did to find the reciprocal. Here are some examples: • The reciprocal of [latex]\frac{4}{9}[/latex] is [latex] \frac{9}{4}[/latex]because [latex]\frac{4}{9}\left(\frac{9}{4}\right)=\frac{36}{36}=1[/latex]. • The reciprocal of [latex]3[/latex] is [latex]\frac{1}{3}[/latex] because [latex]\frac{3}{1}\left(\frac{1}{3}\right)=\frac{3}{3}=1[/latex]. • The reciprocal of [latex]-\frac{5}{6}[/latex] is [latex]\frac{-6}{5}[/latex] because [latex]-\frac{5}{6}\left( -\frac{6}{5} \right)=\frac{30}{30}=1[/latex]. • The reciprocal of [latex]1[/latex] is [latex]1[/latex] as [latex]1(1)=1[/latex]. When you divided by positive fractions, you learned to multiply by the reciprocal. You also do this to divide real numbers. Think about dividing a bag of [latex]26[/latex] marbles into two smaller bags with the same number of marbles in each. You can also say each smaller bag has one half of the marbles. [latex] 26\div 2=26\left( \frac{1}{2} \right)=13[/latex] Notice that [latex]2[/latex] and [latex] \frac{1}{2}[/latex] are reciprocals. Try again, dividing a bag of [latex]36[/latex] marbles into smaller bags. Number of bags Dividing by number of bags Multiplying by reciprocal [latex]3[/latex] [latex]\frac{36}{3}=12[/latex] [latex] 36\left( \frac{1}{3} \right)=\frac{36}{3}=\frac{12(3)}{3}=12[/latex] [latex]4[/latex] [latex]\frac{36}{4}=9[/latex] [latex]36\left(\frac{1}{4}\right)=\frac{36}{4}=\frac{9\left(4\right)}{4}=9[/latex] [latex]6[/latex] [latex]\frac{36}{6}=6[/latex] [latex]36\left(\frac{1}{6}\right)=\frac{36}{6}=\frac{6\left(6\right)}{6}=6[/latex] Dividing by a number is the same as multiplying by its reciprocal. (That is, you use the reciprocal of the divisor, the second number in the division problem.) Find [latex] 28\div \frac{4}{3}[/latex] Show Solution Now let’s see what this means when one or more of the numbers is negative. A number and its reciprocal have the same sign. Since division is rewritten as multiplication using the reciprocal of the divisor, and taking the reciprocal doesn’t change any of the signs, division follows the same rules as multiplication. Rules of Division When dividing, rewrite the problem as multiplication using the reciprocal of the divisor as the second factor. When one number is positive and the other is negative, the quotient is negative. When both numbers are negative, the quotient is positive. When both numbers are positive, the quotient is positive. Find [latex]24\div\left(-\frac{5}{6}\right)[/latex]. Show Solution Find [latex] 4\,\left( -\frac{2}{3} \right)\,\div \left( -6 \right)[/latex] Show Solution Try It The following video explains how to divide signed fractions. Remember that a fraction bar also indicates division, so a negative sign in front of a fraction goes with the numerator, the denominator, or the whole fraction: [latex]-\frac{3}{4}=\frac{-3}{4}=\frac In each case, the overall fraction is negative because there’s only one negative in the division. The following video explains how to divide signed fractions.
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/5-3-multiplying-and-dividing-real-numbers/","timestamp":"2024-11-04T17:11:55Z","content_type":"text/html","content_length":"63382","record_id":"<urn:uuid:67d091f0-be67-4885-9932-fb722284450c>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00805.warc.gz"}
Metaballs can be thought of as force fields whose surface is an implicit function defined at any point where the density of the force field equals a certain threshold. This field can currently be specified as an elliptical or super-quadric shape around a point. When two metaballs overlap in space, their field effects are added together. See also Metaball SOP The field is specified by a weight and a kernel function. The kernel function results in a value of 0 at the outside edge of the metaball and a value of 1 at the center. The kernel function is scaled by the weight to shift the location of the surface closer or further away from the center. Because the density of the force field can be increased by the proximity of other metaball force fields, metaballs have the unique property that they change their shape to adapt and fuse with surrounding metaballs. This makes them very effective for modeling organic surfaces. For example, below we have a metaball. The surface of the metaball exists whenever the density of the metaball's field reaches a certain threshold: When two or more metaball force fields are combined, as in the illustration below, the resulting density of the force fields is added, and the surface extends to include that area where the force fields intersect and create density values with a value of one. Metaballs are defined by the parameters Center x/y/z, Radius x/y/z, Exponent x/y/z, and a 3×3 rotation matrix which determines the orientation. A metaball is known as a super-quadratic if either exponent is not equal to one. You can see a metaball's sphere of influence by turning on Display Hulls in a Geometry Viewer's options dialog. In the SOP editor, a metaball can be selected only by its hull. Pusher Metaballs[edit] It is possible for metaballs to have negative Weights (Pusher Metaballs). This allows holes to be created by effectively subtracting from the surface. What does an Exponent do?[edit] In the instance of metaballs, the XY and Z exponent determines the inflation towards "squarishness" or contraction towards "starishness" as described below: • Value > 1 - Results in metaballs that appear more like a "star". • Value < 1 - Results in metaballs that appear more "squarish". • Value = 1 - Results in metaballs that appear spherical. In Touch, metaballs are often used as force fields for particle systems. You can create metaballs with a Metaball SOP, or in the SOP editor. Metaball Model Types[edit] • Blinn Kernal - Always puts a sphere at the blob centre, even if the weight is less than 1.0. The Blinn model is the fastest and most stable of all the models. • Wyvill and Elendt Kernals - These models are very similar; only the weight distribution function is different. • Links Kernal - This is the slowest method, but provides a good compromise between the Blinn and Wyvill methods in terms of weight distribution.
{"url":"https://docs.derivative.ca/Metaball","timestamp":"2024-11-07T19:57:51Z","content_type":"text/html","content_length":"26039","record_id":"<urn:uuid:b4e6591e-9ac6-4a17-bc36-eab90980439f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00813.warc.gz"}
Projection Methods for Incompressible Fluid Dynamics Mathematics Group Projection Methods for Incompressible Fluid Dynamics Projection methods are the core mathematical and computational fluid methodology to accurately account for incompressibility in the Navier-Stokes equations. By continually projecting the evolving solution onto the subspace of divergence-free vectors fields through the Hodge decomposition, these methods provide a robust way to compute a wide host of pratical engineering problems. They are ubiquitous in computational fluid mechanics, and are at the core of current day aircraft simulations, combustion calculations, fluid/solid solvers, ocean and meteorological codes, and simulations of complex physiology such as the design of heart valves.
{"url":"https://crd.lbl.gov/divisions/amcr/mathematics-dept/math/research/past-research/projection-methods-for-incompressible-fluid-dynamics/","timestamp":"2024-11-09T19:35:10Z","content_type":"text/html","content_length":"28589","record_id":"<urn:uuid:2547a2ca-480b-45bf-9238-e7825ddc5e4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00845.warc.gz"}
What is the average speed of a train that traveled 543 kilometers in 6 hours? | HIX Tutor What is the average speed of a train that traveled 543 kilometers in 6 hours? Answer 1 Distance traveled divided by time taken equals average speed. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To find the average speed, divide the total distance traveled by the total time taken. So, the average speed of the train is ( \frac{543 \text{ km}}{6 \text{ hours}} = 90.5 \text{ km/h} ). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/what-is-the-average-speed-of-a-train-that-traveled-543-kilometers-in-6-hours-8f9af89ba0","timestamp":"2024-11-09T21:08:54Z","content_type":"text/html","content_length":"576615","record_id":"<urn:uuid:07dc8e22-6a3c-4ae4-b8fb-f1215b09a2c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00092.warc.gz"}
Vedic Math Tricks May 16, 2023 | Northville Encouraging your children to develop good study habits early is one of the most important things you can do as a parent. Help your child develop a lifelong love of learning by setting up a study space without distractions, getting organized, and helping them to feel positive about their schoolwork.
{"url":"https://www.mathnasium.com/math-centers/novi/news/vedic-math-tricks-918495549","timestamp":"2024-11-05T23:00:18Z","content_type":"text/html","content_length":"68979","record_id":"<urn:uuid:1d62e476-ad02-4adb-929c-adc3ffefa7b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00725.warc.gz"}
Parser combinator Jump to navigation Jump to search In computer programming, a parser combinator is a higher-order function that accepts several parsers as input and returns a new parser as its output. In this context, a parser is a function accepting strings as input and returning some structure as output, typically a parse tree or a set of indices representing locations in the string where parsing stopped successfully. Parser combinators enable a recursive descent parsing strategy that facilitates modular piecewise construction and testing. This parsing technique is called combinatory parsing. Parsers built using combinators are straightforward to construct, readable, modular, well-structured, and easily maintainable. They have been used extensively in the prototyping of compilers and processors for domain-specific languages such as natural-language interfaces to databases, where complex and varied semantic actions are closely integrated with syntactic processing. In 1989, Richard Frost and John Launchbury demonstrated^[1] use of parser combinators to construct natural-language interpreters. Graham Hutton also used higher-order functions for basic parsing in 1992.^[2] S.D. Swierstra also exhibited the practical aspects of parser combinators in 2001.^[3] In 2008, Frost, Hafiz and Callaghan^[4] described a set of parser combinators in Haskell that solve the long-standing problem of accommodating left recursion, and work as a complete top-down parsing tool in polynomial time and space. Basic idea[edit] In any programming language that has first-class functions, parser combinators can be used to combine basic parsers to construct parsers for more complex rules. For example, a production rule of a context-free grammar (CFG) may have one or more alternatives and each alternative may consist of a sequence of non-terminal(s) and/or terminal(s), or the alternative may consist of a single non-terminal or terminal or the empty string. If a simple parser is available for each of these alternatives, a parser combinator can be used to combine each of these parsers, returning a new parser which can recognise any or all of the alternatives. In languages that support operator overloading, a parser combinator can take the form of an infix operator, used to glue different parsers to form a complete rule. Parser combinators thereby enable parsers to be defined in an embedded style, in code which is similar in structure to the rules of the formal grammar. As such, implementations can be thought of as executable specifications with all the associated advantages. (Notably: readability) The combinators[edit] To keep the discussion relatively straightforward, we discuss parser combinators in terms of recognizers only. If the input string is of length #input and its members are accessed through an index j, a recognizer is a parser which returns, as output, a set of indices representing positions at which the parser successfully finished recognizing a sequence of tokens that began at position j. An empty result set indicates that the recognizer failed to recognize any sequence beginning at index j. A non-empty result set indicates the recognizer ends at different positions successfully. • The empty recognizer recognizes the empty string. This parser always succeeds, returning a singleton set containing the current position: ${\displaystyle empty(j)=\{j\}}$ • A recognizer term x recognizes the terminal x. If the token at position j in the input string is x, this parser returns a singleton set containing j + 1; otherwise, it returns the empty set. ${\displaystyle term(x,j)={\begin{cases}\left\{\right\},&j\geq \#input\\\left\{j+1\right\},&j^{th}{\mbox{ element of }}input=x\\\left\{\right\},&{\mbox{otherwise}}\end{cases}}}$ Note that there may be multiple distinct ways to parse a string while finishing at the same index: this indicates an ambiguous grammar. Simple recognizers do not acknowledge these ambiguities; each possible finishing index is listed only once in the result set. For a more complete set of results, a more complicated object such as a parse tree must be returned. Following the definitions of two basic recognizers p and q, we can define two major parser combinators for alternative and sequencing: • The ‘alternative’ parser combinator, ⊕, applies both of the recognizers on the same input position j and sums up the results returned by both of the recognizers, which is eventually returned as the final result. It is used as an infix operator between p and q as follows: ${\displaystyle (p\oplus q)(j)=p(j)\cup q(j)}$ • The sequencing of recognizers is done with the ⊛ parser combinator. Like ⊕, it is used as an infix operator between p and q. But it applies the first recognizer p to the input position j, and if there is any successful result of this application, then the second recognizer q is applied to every element of the result set returned by the first recognizer. ⊛ ultimately returns the union of these applications of q. ${\displaystyle (p\circledast q)(j)=\bigcup \{q(k):k\in p(j)\}}$ Consider a highly ambiguous context-free grammar, s ::= ‘x’ s s | ε. Using the combinators defined earlier, we can modularly define executable notations of this grammar in a modern functional language (e.g. Haskell) as s = term ‘x’ <*> s <*> s <+> empty. When the recognizer s is applied on an input sequence xxxxx at position 1, according to the above definitions it would return a result set {5,4,3,2}. Shortcomings and solutions[edit] Parser combinators, like all recursive descent parsers, are not limited to the context-free grammars and thus do no global search for ambiguities in the LL(k) parsing First[k] and Follow[k] sets. Thus, ambiguities are not known until run-time if and until the input triggers them. In such cases, the recursive descent parser may default (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, resulting in semantic confusion (aliasing) in the use of the language. This leads to bugs by users of ambiguous programming languages, which are not reported at compile-time, and which are introduced not by human error, but by ambiguous grammar. The only solution that eliminates these bugs is to remove the ambiguities and use a context-free grammar. The simple implementations of parser combinators have some shortcomings, which are common in top-down parsing. Naïve combinatory parsing requires exponential time and space when parsing an ambiguous context-free grammar. In 1996, Frost and Szydlowski demonstrated how memoization can be used with parser combinators to reduce the time complexity to polynomial.^[5] Later Frost used monads to construct the combinators for systematic and correct threading of memo-table throughout the computation.^[6] Like any top-down recursive descent parsing, the conventional parser combinators (like the combinators described above) will not terminate while processing a left-recursive grammar (e.g. s ::= s <*> term ‘x’|empty). A recognition algorithm that accommodates ambiguous grammars with direct left-recursive rules is described by Frost and Hafiz in 2006.^[7] The algorithm curtails the otherwise ever-growing left-recursive parse by imposing depth restrictions. That algorithm was extended to a complete parsing algorithm to accommodate indirect as well as direct left-recursion in polynomial time, and to generate compact polynomial-size representations of the potentially exponential number of parse trees for highly ambiguous grammars by Frost, Hafiz and Callaghan in 2007.^[8] This extended algorithm accommodates indirect left recursion by comparing its ‘computed context’ with ‘current context’. The same authors also described their implementation of a set of parser combinators written in the Haskell programming language based on the same algorithm.^[4]^[9] 1. ^ cf. X-SAIGA — executable specifications of grammars • Burge, William H. (1975). Recursive Programming Techniques. The Systems programming series. Addison-Wesley. ISBN 0201144506. • Frost, Richard; Launchbury, John (1989). "Constructing natural language interpreters in a lazy functional language" (PDF). The Computer Journal. Special edition on Lazy Functional Programming. 32 (2): 108–121. doi:10.1093/comjnl/32.2.108. Archived from the original on 2013-06-06.CS1 maint: BOT: original-url status unknown (link) • Frost, Richard A.; Szydlowski, Barbara (1996). "Memoizing Purely Functional Top-Down Backtracking Language Processors" (PDF). Sci. Comput. Program. 27 (3): 263–288. doi:10.1016/0167-6423(96) • Frost, Richard A. (2003). "Monadic Memoization towards Correctness-Preserving Reduction of Search" (PDF). Proceedings of the 16th Canadian Society for Computational Studies of Intelligence Conference on Advances in Artificial Intelligence (AI'03): 66–80. ISBN 3-540-40300-0. • Frost, Richard A.; Hafiz, Rahmatullah (2006). "A New Top-Down Parsing Algorithm to Accommodate Ambiguity and Left Recursion in Polynomial Time" (PDF). ACM SIGPLAN Notices. 41 (5): 46–54. doi: • Frost, Richard A.; Hafiz, Rahmatullah; Callaghan, Paul (2007). "Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars". Proceedings of the 10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE: 109–120. CiteSeerX 10.1.1.97.8915. • Frost, Richard A.; Hafiz, Rahmatullah; Callaghan, Paul (2008). "Parser Combinators for Ambiguous Left-Recursive Grammars". Proceedings of the 10th International Symposium on Practical Aspects of Declarative Languages (PADL). ACM-SIGPLAN. 4902: 167–181. CiteSeerX 10.1.1.89.2132. doi:10.1007/978-3-540-77442-6_12. ISBN 3-540-77441-6. • Hutton, Graham (1992). "Higher-order functions for parsing" (PDF). Journal of Functional Programming. 2 (3): 323–343. doi:10.1017/s0956796800000411. • Okasaki, Chris (1998). "Even higher-order functions for parsing or Why would anyone ever want to use a sixth-order function?". Journal of Functional Programming. 8 (2). doi:10.1017/ • Swierstra, S. Doaitse (2001). "Combinator parsers: From toys to tools" (PDF). Electronic Notes in Theoretical Computer Science. 41. doi:10.1016/S1571-0661(05)80545-6. • Wadler, Philip (1985). "How to replace failure by a list of successes — A method for exception handling, backtracking, and pattern matching in lazy functional languages". Proceedings of a Conference on Functional Programming Languages and Computer Architecture: 113–128. ISBN 0-387-15975-4. External links[edit]
{"url":"https://static.hlt.bme.hu/semantics/external/pages/elemz%C3%A9si_fa/en.wikipedia.org/wiki/Parser_combinator.html","timestamp":"2024-11-03T09:29:48Z","content_type":"text/html","content_length":"75540","record_id":"<urn:uuid:b5868835-0af9-4745-b188-1bb8d0e5bad1>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00510.warc.gz"}
Q 2 d) ellipse May 15, 2006 with this question how did u do it... i wsnt sure cos ive only ever seen circles on the argand diagram not ellipses |z-1-3i| + |z-9-3i| = 10 Feb 6, 2006 i didnt do it but i when to my teacher and he said u remember the property of the ellispies and i got NO and then he goes PS + PM = 2a so the major is 5 and i got dam i guessed 6 Feb 1, 2005 its just like any other locus at first i was like wtf but u can tell the ends of the ellispse is 1 and 9 with the same height 3 1+9 / 2 = 5 so centre for x value is 5 centre (5,3) 5 + 3i (argh i forgot to put i rofl) i tried the long way also by letting z = x + iy then squaring , and u get like this random equation and i found it was 5,3 too come to think about it,, i think i mite've screwed this question up ... nOooooooooo i dunno lol~ i dun wanna think about 4unit maths ever again... May 29, 2006 I got liek a circle with center (5,3) and radius 5, lol. something went wrong for part d i said that the range for arg(z) was where the vectors from the origin touched the endpoints of the ellipse and then gave a value for it. i got something<=arg(z)<=pi/2 May 5, 2006 omfg whats majar n minor again?? i got 10 n 6 centre (5,3) r u supposed to write 5 n 3???????????? idid write 10(2x5) Last edited: Nov 22, 2005 i spent agesss on this q saying z=x+iy .. but im really pissed cos i wasn't sure if the major axis was a or 2 a .. so i just put 10 . but was unsure for the tie i spent .. and for one part i didn't put it as a complex number :S .. i put 5,3 instead of 5+3i ... n the time there costs me lyk 6-7 marks later on major axis is 2a semi-major is a minor axis is 2b semi-minor is b i used the eccentricity equation to find b. Mar 31, 2006 the centre was 5 + 3i, not (5,3) it asked for the complex representation also remember u when u draw the ellipse, it hasta 'touch' the x and y axis May 5, 2006 zeek said: major axis is 2a semi-major is a minor axis is 2b semi-minor is b i used the eccentricity equation to find b. i just drew 2 lines to the mid (top) of ellipse.. then use a^2+b^2=c^2 to find the half minor, then times it by 2! and same for the arg(z) at that pt, angle's a maximum, then minimum is negative that number Feb 28, 2006 considering that d) iii) "find range of arg(z) for z points on ellipse" was only a 1 mark question it was a give away that the ellipse touches x and y otherwise it would be to hard that way i figured out the major and minor axes and then checked it with a point that would lie on one of the axes actually... since one of the focii was 1+3i and x=ae then e=1/5 if you subbed this into b^2=a^2(1-e^2) and then find b, you will see that the ellipse cuts the x-axis so you had to calculate the next complex number and then find the argument from there. Feb 11, 2005 Aghhh gahh, i totally stuffed it XD. Coudln't square properly and got a stupid circle. Did it second time and STILL got a circle. THen, because question called for ellipse, I managed to convince myself that a "circle" was a SPECIAL type of "ellipse". Oct 11, 2005 I did it the algebraic way as I wasn't sure how to work out the features of the ellipse using the starting equation. Let z=x+iy.... etc. Riviet said: I did it the algebraic way as I wasn't sure how to work out the features of the ellipse using the starting equation. Let z=x+iy.... etc. Same, took way too long Nov 25, 2005 Probably the easiest way to do this question would be to recognise that the points in the question are the foci, then find that the locus has to pass thru 3i and 10 + 3i which means the centre would be at 5 + 3i, so the max y has to have Im part of 3i Oct 11, 2005 jctaylor said: Probably the easiest way to do this question would be to recognise that the points in the question are the foci, then find that the locus has to pass thru 3i and 10 + 3i which means the centre would be at 5 + 3i, so the max y has to have Im part of 3i I knew there was that geometric method, but wasn't sure of it so just decided to go algebraic. Feb 27, 2006 I went algebraically, then grouped them: (x-1)^2 + (y-3)^2 + (x-9)^2 + (y-3)^2= 100 (thats how i was shown to get rid of absolute vaule thingy then i expanded: x^2-2x+1+y^2-6y+9+x^2-18x+81+y^2-6y+9=100 : 2x^2-20x +2y^2-12y = 0 : div by 2 and complete square : x^2-10x +25+y^2-6y+9=25+9 : (x-5)^2 + (y-3)^2= 34 so i thought ellipse was: [(x-5)^2]/34 + [(y-3)^2]/34=1 so i said my centre was 5/34, 3/34? what did i do wrong? Feb 13, 2003 Shady01, your first line is wrong. If you were to go down the z = x + iy path, using modulus definitions: sqrt(c) + sqrt(d) = 10 You can't just square both sides to get c + d = 100. In fact, the easiest way to do this is to rearrange as sqrt(c) = 10 - sqrt(d) and then square both sides. Regarding the geometric method, as people said, you're supposed to know that is the locus of an ellipse and relate it to the equation PS + PS' = 2a. Then you can find S(1,3), S'(9,3) and a = 5 really quickly. Then, if you were to draw a diagram, you could find the value of b = 3 through some nifty Pythagoras or some other method. Users Who Are Viewing This Thread (Users: 0, Guests: 1)
{"url":"https://boredofstudies.org/threads/q-2-d-ellipse.126085/#post-2616744","timestamp":"2024-11-03T01:20:11Z","content_type":"text/html","content_length":"126921","record_id":"<urn:uuid:69d962b0-6060-4b94-bfd4-3882bd2c7993>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00208.warc.gz"}
Old Skool Fun From Back in the Day: Origami Fortune Teller Introduction: Old Skool Fun From Back in the Day: Origami Fortune Teller So much changes from one time of our lives to the next. I think nothing makes me more nostalgic than doing one of those small things from High School in my adult years. An example is writing a note to my honey and giving it to him folded in a special way. Today the thought of the ever popular Origami Fortune Teller came to mind, and I thought I would make one and share. What do you need to get started? 1) 8.5 x 11 paper (use a colorful one to make it more fun or let it stay white so you can add color to it) 2) scissors That's simple enough. Now, let's go back in time 14 years....at least for me.... Step 1: Step One and Two: Get the Paper Ready for Cutting Position the paper to sit portrait. Then take the top right corner of the paper and fold it down so the top right corner is touching the left side of the paper. It should look like a triangle on top with a rectangle on the bottom. Flip over the paper. Take the rectangle from its bottom corners and fold them upwards. You will see two triangles once you flip the paper back over. One big one on the left and one small one on the right side. Step 2: Step Three: Get the Paper Ready for Folding Unfold the paper. You will see a square on top with a folded line going diagonally through it and a rectangle at the bottom. Next, cut off the rectangle. It is officially trash. All you will need is the left over perfect square. Step 3: Step Four: First Set of Inside Folds Take the perfect square and fold the corners together so that when you unfold it, there are two folded diagonal lines making an "X" through the square. These will be your guidelines for the next couple of steps. Next, take each corner tip and fold the tip into the center, staying within the "X" guidelines (see picture if need be). Do this for all four corners. It should look like another perfect square with an "X", but the X shapes are actually flaps. Step 4: Step Five: Second Set of Inside Folds Flip the smaller square over. The "X" and the flaps should now be on the bottom. Now, take the corner tips and fold them into the center just like the last step. It should look just as it did before with an "X" and flaps. The difference is there will be lines that cut the flaps into two triangles. Step 5: Step Six: Final Folders for Fingers Don't flip the square back over, instead, fold it in half so there are two rectangles. It doesn't matter which way as long as you don't fold it in half diagonally. Once you are finished, you will see that both sides have two square flaps with two sides attached and two sides loose. Take the corner tip and fold the paper inside of the square. So basically, you are tucking in the square and it becomes a triangle. This is a good step to check out the pictures. Do this for all four sides. For reference, I included a photo of what it will look like once you open the square and all four sides are folded in. Those parts you folded in is where your fingers go. Step 6: Step Seven: Make the "flower" Once you have finished folding, take your fingers and place them into the folds. Bring your fingers together as the corners come together like a flower. I added a photo of what the inside and outside both look like when closed. Step 7: Step Eight: Time to Play With Your Fortune To set up the game...write numbers on the outside of the corners. There are 8 spaces. Then, open up the "flower" and write a number on each triangle (again, 8). Finally, open the big flaps and write something about a person's possible fortune. To play the game...You ask someone to pick a number from the outside. Open the "flower" horizontally and then vertically for each time for the number. Meaning, if they chose 3, you will open horizontally, vertically and horizontally (3 times). Then, ask them to chose a number from inside. Open and close again for the number. Finally, they choose one more number from inside. Open that flap and voilà! you have given them their fortune. Step 8: Nostalgia Found I miss the simple times of simple games. I hope you enjoyed this little moment of "Traveling back in time". About Author: Miscelleana Rhinehart is a lover of games and a writer for NJ car dealers
{"url":"https://www.instructables.com/Old-Skool-Fun-from-Back-in-the-Day-Origami-Fortun/","timestamp":"2024-11-03T10:12:57Z","content_type":"text/html","content_length":"99918","record_id":"<urn:uuid:ded03389-c458-4d50-92d6-32609a40f147>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00533.warc.gz"}
What is known about the classification of N=4 SCFTs with central charge 6? I was talking about K3 surfaces with some physicists, and one of them told me that the N=4 superconformal field theories with central charge 6 are expected to be relatively scarce. In particular, one should expect a lot of a priori different theories (e.g., those coming from sigma models whose targets are different hyperkähler surfaces, or the Gepner model) to be isomorphic. I have not found similar statements in the mathematical literature, but it sounds like a statement that, if suitably tweaked, could conceivably make sense to mathematicians. Question: Where can I find such a claim (and perhaps additional justification)? Also, I am curious to know if there are underlying physical principles behind such a claim, or if it was conjectured due to a scarcity of characters (i.e., the space of suitable modular/Jacobi forms is small), or perhaps some combination. This post has been migrated from (A51.SE) For a survey on what was known in 1999 on the subject, there is the review A Hiker's Guide to K3 - Aspects of N=(4,4) Superconformal Field Theory with central charge c=6 by Werner Nahm and Katrin Wendland. I have not been following this subject, so I am not sure whether the current picture is substantially different. Some of the papers citing that review might also be relevant. This post has been migrated from (A51.SE) Thank you. The Nahm-Wendland paper suggests that the moduli space of such theories is somewhat more subtle than I had been led to believe. This post has been migrated from (A51.SE)
{"url":"https://www.physicsoverflow.org/5/what-known-about-classification-scfts-with-central-charge","timestamp":"2024-11-13T06:26:29Z","content_type":"text/html","content_length":"116770","record_id":"<urn:uuid:07fb3dd6-652d-460b-abe1-6927497764fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00354.warc.gz"}
Mathematics, the distance between one point (a) and another point (b), each with coordinates (x,y), can be computed by taking the differences of their x coordinates and their y coordinates and then squaring those differences. the squares are added and the square root of the resulting sum is taken and... voila! the distance. assume that point has already been defined as a structured type with two double fields, x and y. define a function dist that takes two point arguments and returns the distance between the points they re 1. Home 2. General 3. Mathematics, the distance between one point (a) and another point (b), each with coordinates (x,y),...
{"url":"https://math4finance.com/general/mathematics-the-distance-between-one-point-a-and-another-point-b-each-with-coordinates-x-y-can-be-computed-by-taking-the-differences-of-their-x-coordinates-and-their-y-coordinates-and-then-squaring-those-differences-the-squares-are-added-an","timestamp":"2024-11-10T15:00:47Z","content_type":"text/html","content_length":"30701","record_id":"<urn:uuid:d3ee7416-285f-418d-bf16-53ef17724a90>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00877.warc.gz"}
Prabha Mandayam • A class of distance-based incompatibility measures for quantum measurements Srinivas, M. D. We discuss a recently proposed class of incompatibility measures for quantum measurements, which is based on quantifying the effect of measurements of one observable on the statistics of the outcome of another. We summarize the properties of this class of measures, and present a tight upper bound for the incompatibility of any set of projective measurements in finite dimensions. We also discuss non-projective measurements, and give a non-trivial upper bound on the mutual incompatibility of a pair of Lüders instruments. Using the example of incompatible observables that commute on a subspace, we elucidate how this class of measures goes beyond uncertainty relations in quantifying the mutual incompatibility of quantum measurements. • Impact of local dynamics on entangling power Jonnadula, Bhargavi ; ; Zyczkowski, Karol It is demonstrated here that local dynamics have the ability to strongly modify the entangling power of unitary quantum gates acting on a composite system. The scenario is common to numerous physical systems, in which the time evolution involves local operators and nonlocal interactions. To distinguish between distinct classes of gates with zero entangling power we introduce a complementary quantity called gate typicality and study its properties. Analyzing multiple, say n, applications of any entangling operator, U, interlaced with random local gates we prove that both investigated quantities approach their asymptotic values in a simple exponential form. These values coincide with the averages for random nonlocal operators on the full composite space and could be significantly larger than that of Un. This rapid convergence to equilibrium, valid for subsystems of arbitrary size, is illustrated by studying multiple actions of diagonal unitary gates and controlled unitary gates. • Qubits through queues: The capacity of channels with waiting time dependent errors We consider a setting where qubits are processed sequentially, and derive fundamental limits on the rate at which classical information can be transmitted using quantum states that decohere in time. Specifically, we model the sequential processing of qubits using a single server queue, and derive explicit expressions for the capacity of such a 'queue-channel.' We also demonstrate a sweet-spot phenomenon with respect to the arrival rate to the queue, i.e., we show that there exists a value of the arrival rate of the qubits at which the rate of information transmission (in bits/sec) through the queue-channel is maximised. Next, we consider a setting where the average rate of processing qubits is fixed, and show that the capacity of the queue-channel is maximised when the processing time is deterministic. We also discuss design implications of these results on quantum information processing systems. • Security with 3-Pulse Differential Phase Shift Quantum Key Distribution Ranu, Shashank Kumar Shaw, Gautam Kumar ; ; 3-pulse DPS-QKD offers enhanced security compared to conventional DPS-QKD by decreasing the learning rate of an eavesdropper and unmasking her presence with an increased error rate upon application of intercept and resend attack. The probability of getting one bit of sifted key information using beamsplitter attack also reduces by 25% in our implentation compared to normal DPS. • The Classical Capacity of a Quantum Erasure Queue-Channel We consider a setting where a stream of qubits is processed sequentially. We derive fundamental limits on the rate at which classical information can be transmitted using qubits that decohere as they wait to be processed. Specifically, we model the sequential processing of qubits using a single server queue, and derive expressions for the classical capacity of such a quantum 'queue-channel.' Focusing on quantum erasures, we obtain an explicit single-letter capacity formula in terms of the stationary waiting time of qubits in the queue. Our capacity proof also implies that a 'classical' coding/decoding strategy is optimal, i.e., an encoder which uses only orthogonal product states, and a decoder which measures in a fixed product basis, are sufficient to achieve the classical capacity of the quantum erasure queue-channel. More broadly, our work begins to quantitatively address the impact of decoherence on the performance limits of quantum information processing systems. • 3 pulse differential phase shift quantum key distribution with spatial, or time, multiplexed Shaw, G. K. Shyam, S. Foram, S. Ranu, S. K. ; ; We demonstrated 3 pulse differential phase shift quantum key distribution with 30 km quantum channel with two different approaches, namely path superposition and time bin superposition. • Pretty good state transfer via adaptive quantum error correction Jayashankar, Akshaya We examine the role of quantum error correction (QEC) in achieving pretty good quantum state transfer over a class of one-dimensional spin Hamiltonians. Recasting the problem of state transfer as one of information transmission over an underlying quantum channel, we identify an adaptive QEC protocol that achieves pretty good state transfer. Using an adaptive recovery and approximate QEC code, we obtain explicit analytical and numerical results for the fidelity of transfer over ideal and disordered one-dimensional Heisenberg chains. In the case of a disordered chain, we study the distribution of the transition amplitude, which in turn quantifies the stochastic noise in the underlying quantum channel. Our analysis helps us to suitably modify the QEC protocol so as to ensure pretty good state transfer for small disorder strengths and indicates a threshold beyond which QEC does not help in improving the fidelity of state transfer. • 3 pulse differential phase shift quantum key distribution with spatial, or time, multiplexed Shaw, G. K. Shyam, S. Foram, S. Ranu, S. K. ; ; We demonstrated 3 pulse differential phase shift quantum key distribution with 30 km quantum channel with two different approaches, namely path superposition and time bin superposition. • Differential phase encoding scheme for measurement-device-independent quantum key distribution Ranu, Shashank Kumar ; ; This paper proposes a measurement-device-independent quantum key distribution (MDI-QKD) scheme based on differential phase encoding. The differential phase shift MDI-QKD (DPS-MDI-QKD) couples the advantages of DPS-QKD with that of MDI-QKD. The proposed scheme offers resistance against photon number splitting attack and phase fluctuations as well as immunity against detector side-channel vulnerabilities. The design proposed in this paper uses weak coherent pulses in a superposition of three orthogonal states, corresponding to one of three distinct paths in a delay-line interferometer. The classical bit information is encoded in the phase difference between pulses traversing successive paths. This 3-pulse superposition offers enhanced security compared to using a train of pulses by decreasing the learning rate of an eavesdropper and unmasking her presence with an increased error rate upon application of intercept and resend attack and beamsplitter attack. The proposed scheme employs phase locking of the sources of the two trusted parties so as to maintain the coherence between their optical signal, and uses a beamsplitter (BS) at the untrusted node (Charlie) to extract the key information from the phase encoded signals. • Disturbance trade-off principle for quantum measurements Srinivas, M. D. We demonstrate a fundamental principle of disturbance tradeoff for quantum measurements, along the lines of the celebrated uncertainty principle: The disturbances associated with measurements performed on distinct yet identically prepared ensembles of systems in a pure state cannot all be made arbitrarily small. Indeed, we show that the average of the disturbances associated with a set of projective measurements is strictly greater than zero whenever the associated observables do not have a common eigenvector. For such measurements, we show an equivalence between disturbance tradeoff measured in terms of fidelity and the entropic uncertainty tradeoff formulated in terms of the Tsallis entropy (T2). We also investigate the disturbances associated with the class of nonprojective measurements, where the difference between the disturbance tradeoff and the uncertainty tradeoff manifests quite clearly.
{"url":"https://irepose.iitm.ac.in/entities/person/68373/publications?f.dateIssued.min=2014&f.dateIssued.max=2019&spc.page=1","timestamp":"2024-11-06T01:46:40Z","content_type":"text/html","content_length":"1049239","record_id":"<urn:uuid:e56236a6-1a34-406b-9473-154fee1f7726>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00610.warc.gz"}
Mittag-Leffler modules What is a Mittag-Leffler module? Let R be a ring and let M be an R-module. Write M = colim_i M_i as a directed colimit of finitely presented R-modules. (This is always possible.) Pick any R-module N. Then consider the inverse system (Hom_R(M_i, N))_i. We say M is Mittag-Leffler if this inverse system is a Mittag-Leffler system for any N. It turns out that this condition is independent of the choices made, see Proposition Tag 059E. A prototypical example of a Mittag-Leffler module is an arbitrary direct sum of finitely presented modules. Some examples of non-Mittag-Leffler modules are: Q as Z-module, k[x, 1/x] as k[x]-module, k [x, y, t]/(xt – y) as k[x,y]-module, and ∏_n k[[x]]/(x^n) as k[[x]]-module. Why is this notion important? It turns out that an R-module P is projective if and only if P is (a) flat, (b) a direct sum of countably generated modules, and (c) Mittag-Leffler, see Theorem Tag 059Z . This characterization is a key step in the proof of descent of projectivity. For us this characterization is also important because it turns out that if R —> S is a finitely presented ring map, which is flat and “pure” (I hope to discuss this notion in a future post), then S is Mittag-Leffler as an R-module and hence projective as an R-module. This result is a key lemma in Raynaud-Gruson. Let me say a bit about the structure of countably generated Mittag-Leffler R-module M. First, you can write M as the colimit of a system M_1 —> M_2 —> M_3 —> M_4 —> … with each M_n finitely presented (see Lemma Tag 059W and the proof of Lemma Tag 0597). Another application of the Mittag-Leffler condition, using N = ∏ M_i and using that the system is countable, gives for each n an m ≥ n and a map φ : M —> M_m such that M_n —> M —> M_m is the transition map M_n —> M_m. In other words, there exists a self map ψ : M —> M which factors through a finitely presented R-module and which equals 1 on the image of M_n in M. Loosely speaking M has a lot of “compact” endomorphisms. Continuing, I think the existence of ψ means that etale locally on R we have a direct sum decomposition M = M_unit ⊕ M_rest with M_unit finitely presented and such that M_n maps into M_unit. Formulated a bit more canonically we get: (*) Given any map F —> M from a finitely presented module F into M there exists etale locally on R a direct sum decomposition M = A ⊕ B with A a finitely presented module such that F —> M factors through A. It seems likely that (*) also implies that M is Mittag-Leffler (but I haven’t checked this). In the last couple of weeks I have tried, without any success, to understand what it means for a finitely presented R-algebra S to be Mittag-Leffler as an R-module, without assuming S is flat over R. If you know a nice characterization, or if you think there is no nice characterization please email or leave a comment. [Edit Oct 7, 2010: Some of the above is now in the stacks project, see Lemma Tag 05D2 for the existence of the maps ψ and see Lemma Tag 05D6 for the result on splitting M as a direct sum of finitely presented modules.] One thought on “Mittag-Leffler modules”
{"url":"https://www.math.columbia.edu/~dejong/wordpress/?p=910","timestamp":"2024-11-05T23:25:59Z","content_type":"text/html","content_length":"28008","record_id":"<urn:uuid:85fa491c-63b4-42f5-a721-682e771319ac>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00366.warc.gz"}
How do you translate word phrases to algebraic expressions: five more than twice a number? | HIX Tutor How do you translate word phrases to algebraic expressions: five more than twice a number? Answer 1 See a solution process below: First, let's call "a number": #n# Next, "twice a number" means to multiply the number by #2# giving: #2 xx n# or #2 * n# or #2n# Then, "five more than" denotes adding five to the next sentence or: #2n + 5# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-translate-word-phrases-to-algebraic-expressions-five-more-than-twice--8f9af8ceb8","timestamp":"2024-11-11T08:21:26Z","content_type":"text/html","content_length":"575414","record_id":"<urn:uuid:c9cc83a8-dac4-4677-a60f-22d0fea419f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00607.warc.gz"}
In this episode, David Wong introduces Ethereum and explains its account-based model and the concept of smart contracts. He compares Ethereum to Bitcoin and highlights the advantages of Ethereum's more expressive programming capabilities. He also discusses the execution of transactions and the role of the Ethereum network in maintaining the state of smart contracts. comment on this story
{"url":"https://www.cryptologie.net/home/1/2","timestamp":"2024-11-12T22:45:28Z","content_type":"text/html","content_length":"48630","record_id":"<urn:uuid:dafe410a-55ce-4052-986a-8376a64483f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00760.warc.gz"}
1990 Block Data Standardized to 2010 Geography In time series tables that are standardized to 2010 census geography, NHGIS produces 1990 statistics by reaggregating census block data from 1990 Census Summary Tape File 1 (NHGIS dataset 1990_STF1). NHGIS first allocates census counts from 1990 blocks to 2010 blocks and then sums the reallocated counts for all 2010 blocks that lie within each target 2010 unit. In cases where a 1990 block may intersect multiple 2010 blocks, NHGIS applies interpolation to estimate how 1990 block characteristics are distributed among the intersecting 2010 blocks, primarily using the population and housing densities of 2000 and 2010 blocks to model the distribution of 1990 characteristics within 1990 blocks. Boundary data for 1990, 2000 and 2010 blocks vary in quality, making it impossible to determine exact relationships between 1990 and 2010 block boundaries. NHGIS uses a combination of direct and indirect overlay of block boundaries, constrained to avoid invalid intersections and balanced to employ the most suitable strategy depending on the degree of boundary mismatch. Modeling block relationships Problem: Inconsistent representations A major complication for 1990-to-2010 block interpolation is that there exists no data defining the exact spatial relationships between 1990 and 2010 blocks. The Census Bureau provides relationship files that give correspondences between 1990 and 2000 blocks and between 2000 and 2010 blocks but not between 1990 and 2010 blocks, and the 1990-to-2000 relationship file provides no information about the sizes of intersections between blocks. More helpfully, the Census's 2000 TIGER/Line files provide both 1990 and 2000 block boundary definitions, and the 2010 TIGER/Line files provide both 2000 and 2010 block boundaries, making it possible to delineate areas of intersection among blocks for consecutive censuses. (Our interpolation model uses NHGIS block boundary files, which were derived from these TIGER/Line sources.) No single TIGER/Line release includes both 1990 and 2010 block boundaries, and because the Census Bureau revises feature representations between TIGER/Line releases, it is impossible to determine the exact spatial relationships between many 1990 and 2010 blocks. An exacerbating factor is that between the release of 2000 and 2010 TIGER/Line files, the Census Bureau undertook a major accuracy improvement project that resulted in significant revisions to TIGER/Line feature representations throughout the country. As a result, there are millions of cases where no real change occurred in block boundaries between 1990 and 2010, but the representations of the boundaries still differ between the two TIGER/Line sources (Figure 1a). Figure 1. Inconsistent representations of block boundaries in Cincinnati, Ohio. Click for larger version Solution options We identify five options for modeling spatial relationships between 1990 and 2010 blocks. Our chosen solution is to combine Options 3 and 4 (indirect overlay and constrained direct overlay) through an approach we term "balanced overlay" (Option 5). 1. Direct overlay of polygons from the 2000 and 2010 TIGER/Line files: This approach is the simplest option but would yield millions of invalid relationships: cases where the overlay indicates an overlap between 1990 and 2010 blocks due only to changes in TIGER/Line boundary representations. In Figure 1a, many blue 1990 boundaries clearly correspond to brown 2010 boundaries, but the boundaries are variously offset, sometimes by a large amount. An interpolation model based on these relationships would frequently allocate a 1990 block's counts to 2010 blocks that do not in reality overlap the source block. 2. Conflation (i.e., re-alignment) of 1990 boundaries to correspond properly to 2010 TIGER/Line features: In most cases in Figure 1a, it is easy to see how the misaligned boundaries should agree. An obvious solution, therefore, is to revise the 1990 boundaries to achieve what the human eye does so easily: matching each 1990 boundary with its corresponding 2010 boundary (wherever matches exist). Unfortunately, although there are simple ways to correct systematic misalignments (e.g., uniform displacement having a consistent direction and distance), there remains, to our knowledge, no reliable automated means of eliminating massive numbers of idiosyncratic displacements, and most of the millions of differences between TIGER/Line versions are somehow idiosyncratic. We may reconsider using this approach if we someday find or develop a cost-effective means of reliably conflating all 1990 blocks for the entire U.S. 3. Indirect overlay (IO), determining relationships between 1990 and 2010 blocks via their relationships to 2000 blocks in separate TIGER/Line sources: This approach involves three operations: 1. Overlay 1990 and 2000 block polygons from 2000 TIGER/Line (Figure 1b) 2. Overlay 2000 and 2010 block polygons from 2010 TIGER/Line (Figure 1c) 3. To estimate the proportion of each 1990 block within each 2010 block, multiply 1990-2000 intersection proportions from step A with 2000-2010 proportions from step B, and sum the resulting products for each pair of 1990 and 2010 blocks IO performs well for the simple, common case where a single block in all three census years covers the same real extent. E.g., if a 1990 block matches a 2000 block in 2000 TIGER/Line data, and the corresponding 2000 block matches a 2010 block in 2010 TIGER/Line data, then IO will correctly determine that the 1990 block matches the 2010 block. IO is also effective wherever a 2000 block intersects exactly one 1990 block or exactly one 2010 block. For example, if 2000 TIGER/Line data show that a 2000 block corresponds to part of exactly one 1990 block, and in 2010 TIGER/Line data, the 2000 block is subdivided among 2010 blocks, we can be confident that the part of the 1990 block lying in the 2000 block is also subdivided among the same 2010 blocks. Conversely, if 2000 TIGER/Line data show that a 2000 block shares area with multiple 1990 blocks, but in 2010 TIGER/Line data, the 2000 block corresponds to only one 2010 block, then we can be confident that each of the 1990 blocks intersecting the 2000 block also intersect that 2010 block.^1 The problem with IO arises when a 2000 block intersects multiple 1990 blocks and multiple 2010 blocks. (See examples marked by arrows in Figure 1.) In such cases, IO essentially aggregates 1990 block data up to a larger 2000 block area and then disaggregates back down to smaller 2010 blocks, taking no account of correspondences among 1990 and 2010 blocks within the 2000 block. In the case marked at the top of Figure 1 (b and c), a pair of 1990 blocks and a pair of 2010 blocks both correspond to one 2000 block. IO would suggest that each of the two 1990 blocks intersect both of the 2010 blocks in roughly equal proportion. If one of the 1990 blocks contains a large population and the other contains none, then IO could yield significant errors by allocating population to both of the 2010 blocks, rather than allocating only to the one 2010 block that corresponds to the populated 1990 block. In each of the two other marked cases in Figure 1, a 2000 block corresponds to several 1990 and 2010 blocks. The basic problem remains: IO does not accurately capture any of the relationships among 1990 and 2010 blocks within each 2000 block. 4. Constrained direct overlay (CDO), applying direct overlay of 2000 and 2010 TIGER/Line data, restricted to areas that lie within the same 2000 block in both TIGER/Line sources: This approach involves two operations: 1. Overlay 1990 and 2000 block polygons from 2000 TIGER/Line with 2000 and 2010 block polygons from 2010 TIGER/Line 2. Identify "invalid intersections" where the 2000 block IDs from 2000 and 2010 TIGER/Line do not match, and omit these intersections from all area computations Figure 2. Constrained direct overlay For example, in Figure 2, the 2000 TIGER/Line data indicates that a 2000 block (1012) corresponds to exactly three 1990 blocks (118, 119, 120), and the 2010 TIGER/Line data indicates that the same 2000 block corresponds to exactly three 2010 blocks (2143, 2149, 2150). It follows that 1990 blocks 118, 119, and 120 should not intersect any 2010 blocks other than 2143, 2149, and 2150, and vice versa. Simple direct overlay (Option 1) would ignore this constraint, yielding several erroneous intersections (hashed regions in Figure 2a). IO (Option 3) would properly impose this constraint, but it would take no account of the arrangement of the 1990 and 2010 blocks within the 2000 block. By combining information from direct and indirect overlay, CDO eliminates invalid intersections and identifies correspondences between 1990 and 2010 blocks within 2000 blocks. CDO still yields some errors, as in Figure 2 where some of the direct intersections within the 2000 block appear to be false, but despite these errors, CDO does correctly identify the three main correspondences in Figure 2: 118 to 2150, 119 to 2143, 120 to 2149. CDO should generally perform well where the discrepancies between 2000 and 2010 TIGER/Line positions are small or moderate, as in Figures 1 and 2, but in some areas, the TIGER/Line discrepancies are so large that even constrained direct overlay is problematic. Figure 3. Extreme discrepancies in census block representations in California. Click for larger version Figure 3 illustrates two types of large discrepancies. First, there are idiosyncratic displacements (circled) in San Joaquin County, where a few 2010 TIGER/Line boundaries diverge sharply from 2000 TIGER/Line positions even though the two sources diverge only slightly throughout most of the county. Second, there are systematic displacements in Stanislaus County, where the 2000 TIGER/ Line boundaries are universally displaced by hundreds of meters to the west or northwest of their more accurate 2010 TIGER/Line positions. If the two TIGER/Line representations of a single 2000 block diverge so much that no part of the two representations overlap—a circumstance occurring in both of the counties in Figure 3 and also not uncommon elsewhere in the country—then CDO is not even applicable because there are no "valid intersections" between 1990 and 2010 blocks within the 2000 block. And even where a 2000 block's representations do intersect, if the intersection covers only a small portion of either of the representations, it indicates significant misalignment between the two sources, in which case, CDO could produce very poor 1990-2010 block correspondences within the problem block. 5. Balanced overlay, combining IO and CDO in varying degrees according to the degree of mismatch among blocks: This is the approach NHGIS uses for its interpolation from 1990 to 2010 blocks. The aim of "balanced overlay" is to take advantage of both IO and CDO by blending their results through a context-dependent weighted average, giving all of the weight to IO results where necessary (where CDO is not applicable) and otherwise giving greater weight to CDO according to its suitability, as determined by two factors: □ α (alpha): The proportion of a 2000 block covered by its intersection with a 1990 or 2010 block ☆ If α is small, then the 2000 block is much larger than a source or target block, so IO is likely to produce inaccurate correspondences for these blocks, and CDO is preferable ☆ If α is large, then the 2000 block is mainly covered by a single source or target block, and either CDO or IO may be suitable ☆ If α equals 1, then the 2000 block is fully covered by a single source or target block, so CDO offers no advantage, and IO is preferable □ β (beta): The proportion of a 2000 block's intersection with a 1990 or 2010 block that is "valid" (within both the 2000 and 2010 TIGER/Line representations of the 2000 block) ☆ If β is large, then the discrepancy between 2000 and 2010 TIGER/Line 2000 boundaries is likely small, and CDO is likely to perform well ☆ If β is small, then there is a large discrepancy between 2000 and 2010 TIGER/Line boundaries, and CDO is questionable ☆ If β equals 0, then there is no valid intersection between 2000 and 2010 TIGER/Line polygons, so CDO is not applicable, and IO must be used Both factors can be measured for 1990-2000 block relationships (α[1] and β[1]) and for 2000-2010 block relationships (α[2] and β[2]). We compute all four measures, and integrate them into a weighted average of IO and CDO results as specified below. Interpolation model General approach: Cascading density weighting The general interpolation approach is cascading density weighting (CDW). In CDW, the target year's densities guide the interpolation of the preceding year's densities, which in turn guide the interpolation of the next preceding year's densities, and so on^2. CDW interpolation from 1990 blocks to 2010 units has 3 main steps: 1. Use 2010 block densities to guide the allocation of 2000 counts among 2000-2010 block intersections 2. Use the estimated 2000 densities to guide the allocation of 1990 counts among 1990-2000-2010 block intersections 3. Sum the allocated 1990 counts within each target 2010 unit to produce final estimates For the first step, we use the same interpolation model that we use to produce block-based 2000 data standardized to 2010 geography. This model primarily uses 2010 block population and housing densities to guide interpolation, but for any inhabited 2000 block that is split by target 2010 units, the model also makes use of 2001 land cover and 2010 road data to refine the modeled For the second step, we use no additional land cover or road data to refine the model. In this regard, the model for 1990 interpolation is simpler than the model for 2000 interpolation. It would be possible to integrate land cover or road data from circa 1990 to improve the model, but evidence from an assessment of NHGIS's 2000 model (Schroeder 2017) indicates that applying this kind of "dasymetric refinement" to a density-weighting approach for block interpolation offers relatively small advantages compared to using simpler total-land-area density-weighting. By using the hybrid 2000-2010 interpolation model in Step 1 of the 1990 CDW model, some dasymetric refinement is still employed, though it uses 2001 land cover and 2010 road data, and it is limited to cases where an inhabited 2000 block is split by target 2010 units. Specification: Interpolation via balanced overlay Our execution of CDW uses the following sequence of operations: 1. Georectify Hawaii's 2000 TIGER/Line polygons to 2010 TIGER/Line features in order to accommodate a systematic change in the coordinate system used to represent Hawaii features between the two TIGER/Line versions. □ In this one instance, our model entails some degree of "conflation". 2. Separately apply IO (indirect overlay) and CDO (constrained direct overlay) to estimate z[00], the sum of 2000 population and housing units^3, for each 1990-2000-2010 block intersection, refining the IO and CDO operations by using the NHGIS hybrid model for 2000-2010 block interpolation rather than simple area weighting alone: □ IO(z[00]) = (2000 block population + housing units) * (expected proportion of 2000 block count in 2010 block, from NHGIS 2000-to-2010 block crosswalk) * (proportion of 2000 block's area^4 in 1990 block, from 2000 TIGER/Line files) □ CDO(z[00]) = (2000 block population + housing units) * (expected proportion of 2000 block count in 2010 block, from NHGIS 2000-to-2010 block crosswalk) * (proportion of 2000-2010 block intersection's "valid" area—the area^4 within both 2000 and 2010 TIGER/Line representations of the 2000 block—that intersects 1990-2000 block intersection's 3. If there is no valid intersection between the 2010 TIGER/Line 2000-2010 block intersection and the 2000 TIGER/Line 2000 block polygon, then the CDO(z[00]) estimates within the 2000-2010 block intersection are arbitrarily set to zero with no effect on final estimates, as the balancing equation in Step 5 will assign all weight in this area to the IO estimates. 4. For each 1990-2000-2010 block intersection, compute four measures of block area mismatch (α[1], α[2], β[1], and β[2]) indicating relative suitability of IO and CDO 5. Compute geometric means of α measures and β measures to produce single score for each factor: □ α = (α[1]α[2])^1/2 ☆ α is large only where both the 1990 block and 2010 block comprise a large portion of the 2000 block ☆ α is small if either the 1990 block or 2010 block comprise a small portion of the 2000 block □ β = (β[1]β[2])^1/2 ☆ β is large only if the 2000 block's intersections with both the 1990 block and 2010 block lie mainly within the 2000 block's valid area (the area lying in both the 2000 and 2010 TIGER/ Line representations of the 2000 block) ☆ β is small if the 2000 block's intersections with either the 1990 block or 2010 block lie mainly outside of the 2000 block's valid area ☆ β is zero if the 2000 block's intersections with either the 1990 block or 2010 block lie entirely outside of the 2000 block's valid area 6. Combine mismatch factor scores into single indicator of suitability of CDO relative to IO: □ w[CDO] = (1 - α^12)β^α ☆ w[CDO] is generally large where α is small or β is large ☆ w[CDO] is small where α is large and β is small ☆ w[CDO] is zero where α is one or β is zero 7. Produce final estimate of z[00] for each 1990-2000-2010 block intersection through balanced overlay (BO), combining the IO and CDO estimates through a context-dependent weighted average: □ BO(z[00]) = w[CDO]*CDO(z[00]) + (1 - w[CDO])*IO(z[00]) 8. Sum BO(z[00]) for each 1990 block 9. Estimate the proportion of each 1990 block's population and housing in each 1990-2000-2010 block intersection as:^5 □ ([BO(z[00]) for the intersection] / [sum of BO(z[00]) for the 1990 block] 10. Use the estimated proportions to allocate 1990 block counts to 2010 blocks 11. Sum the estimated 1990 counts within each target 2010 unit to produce final estimates for NHGIS time series A file containing the final interpolation weights (the estimated proportion of each 1990 block's characteristics lying in each 2010 block) is available through the NHGIS Geographic Crosswalks page. Lower and upper bounds In NHGIS time series standardized to 2010 geography, the lower and upper bounds on 1990 estimates are based on IO (indirect overlay) alone. In effect, the assumption is that, given all of the 2000 blocks with which a 1990 block shares area^6 (according to 2000 TIGER/Line data), if there is more than one 2010 block that also shares area with any of those 2000 blocks (according to 2010 TIGER/ Line data), then it is possible that either all or none of the 1990 block's characteristics were located in any one of the "indirectly" associated 2010 blocks (regardless of whether the 1990 and 2010 blocks intersect via direct overlay). 1. ^ These examples assume that the topological relationships among boundaries within each TIGER/Line version are accurate, which is reasonable given the essential priority given to topological accuracy throughout the history of the TIGER/Line program. 2. ^ For a complete specification and assessment of cascading density weighting, see Chapter 3 in Schroeder (2009). 3. ^ As in the 2000-2010 block interpolation model, we use the sum of population and housing units, rather than population or housing units alone, to guide 1990-2010 block interpolation. A model that uses total population alone as a guide would be problematic for interpolating counts of housing units in areas with large group-quarters populations (which do not reside in housing units) or large numbers of vacant housing units. 4. ^a b We restrict the area measurement to land area if the 2000 block includes any land area; otherwise, we use total area including water. 5. ^ In cases where BO(z[00]), the estimated count of 2000 population and housing units, equals zero for an entire 1990 block, we use a balanced areal weighting model to generate interpolation weights. Balanced areal weighting uses balanced overlay to estimate areas of intersection between 1990 and 2010 blocks, restricting area measures to land areas in blocks that have land area. It then allocates 1990 counts to 2010 blocks in proportion to the estimated areas. The equation balancing IO and CDO areal weighting uses measures of α and β based on only 1990-2000 block relationships (α[1] and β[1]) and not 2000-2010 block relationships (α[2] and β[2]). 6. ^ We restrict the area measurement to land area if the 1990 block includes any land area; otherwise, we use total area including water.
{"url":"https://www.nhgis.org/1990-block-data-standardized-2010-geography","timestamp":"2024-11-12T02:40:59Z","content_type":"text/html","content_length":"59753","record_id":"<urn:uuid:de6d947b-b018-475e-b1e2-b7a89861c813>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00594.warc.gz"}