content
stringlengths
86
994k
meta
stringlengths
288
619
What is Probabilistic Forecasting? Do your customers ever ask “When will it be done?” When dealing with the future, there’s almost never an accurate deterministic answer (Tuesday, exactly at 3:45pm) to that question but there is an accurate probabilistic answer (85% chance of completion on or before October 1) and in most cases, it’s a lot easier to calculate than you’d expect. There are several different flavours of probablistic forecasting and we’re going to look at a couple of them. Many people use the terms probabilistic forecasting and Monte Carlo interchangeably, as if they’re the same thing, except they’re not. Monte Carlo is certainly the most common way to create a probabilistic forecast, but it’s only one way of many. For any kind of a probabilistic forecast, we have an assumption that the data is inherently predictable. This implies that we sliced the work appropriately in the first place and then took steps to make sure the data quality was maintained. If our historical data is unreliable then it doesn’t matter what kind of model we use, the result will also be unreliable. Service Level Expectation If we only have one item that you want to forecast then we can just use the Service Level Expectation (SLE) for the system. In the example below, 85% of the items completed in nine days or less and that’s our SLE. If we want to determine how long a single item will take and we’re able to start immediately then we can say that with 85% certainty, we’ll be done in nine days or less. Cycletime Scatterplot from JiraMetrics In this example, we’re looking at the SLE for a story, although we could have just as easily done that for an Epic or other type of work, assuming we’ve sliced that work appropriately. This is the fastest way to get a forecast but is limited by the fact that it only works for a single item and also assumes that we can start the work right away. If we have many items to forecast, which is the more common case, we need to look at something more complex, such as a Monte Carlo simulation. Story level Monte Carlo simulation A Monte Carlo simulation uses large numbers of simulated outcomes to determine the likelihood of a thing happening. There are two common cases that we can solve for with this approach. 1. We have a hundred items to do and want to know how long it will take to complete them 2. We have a release date already promised and want to know how much work we can fit before that date. The approach works by running thousands of simulated runs and looking at how often each one occurs, as shown in the diagram below. If we run the simulation a 1000 times and 800 of those simulated runs complete on or before October 1 then we can state that we have an 80% (800/1000) chance of completing on or before that date. Throughput Forecaster from FocusedObjective.com While individual simulation runs are largely unpredictable, we start to see patterns emerge as we run thousands of them together and aggregate the results. These patterns turn out to be highly accurate predictors of future performance. There are tools^1 that will run all of these simulations in seconds so this is an extremely easy way to generate a probabilistic forecast across many items. As this is so quick to do, we encourage you to redo the forecast regularly to see how we are tracking. In the same way that we continually update a weather forecast with new information, we want to do the same here. Epic level Monte Carlo simulation We are sometimes asked to make predictions about large pieces of work for which we know very little. Decomposing those large items into small enough stories to do the Monte Carlo that we discussed above is a significant amount of up-front work that may not be justified. The good news is that we can do a Monte Carlo on epics or other larger types of stories, assuming that we’ve been slicing our epics well. A Monte Carlo simulation can be easily done at an epic level and decomposing our large work into epics isn’t nearly as daunting an effort as splitting into stories would have been. The problem is that most companies have not traditionally split their epics well and so the data that we would need to do the Monte Carlo is flawed and we are left with a garbage-in, garbage-out How would you know if the data is good? Start by looking at how we group stories within the epics. If we’re using epics as collections of items, rather than as distinct units of value then they’ll be useless for a Monte Carlo simulation. See this this article on slicing epics for more on that. If you want to know how a Monte Carlo simulation works under the covers, then see this article. If our data doesn’t help with that, then we move on to reference class forecasting. Reference class forecasting Reference class forecasting is when we look for comparable work from the past, consider all the ways that historical work is both the same and different from the upcoming work, and create a forecast from that. There are some tricks to this and it does take some work to get a reasonable forecast, but it’s a good technique when you’re not in a position to do a Monte Carlo simulation. See this other article that walks through reference class forecasting with examples. Monte Carlo simulations are just one, of several ways, to do a probabilistic forecast. Which approach you take will depend on what information you have available and what you need to know. 1. The two Monte Carlo tools I use most often are the Throughput Forecaster from FocusedObjective.com (look under free tools) and Actionable Agile from 55 Degrees. You can’t go wrong with either one of these. ↩
{"url":"https://improvingflow.com/2024/06/02/probabilistic-forecasting.html","timestamp":"2024-11-05T08:33:59Z","content_type":"text/html","content_length":"17689","record_id":"<urn:uuid:196e63cb-b69b-46a3-b40b-e89f56425120>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00174.warc.gz"}
Damon's maths and numeracy blog How far is the island? A week or so ago I posed the following problem: You are stuck on an island inhabited by man-eating crabs. Unless you swim to another island you will be eaten. The problem is, you can't tell how far away the island is. Is it two kilometers or five? Knowing the distance with reasonable accuracy may be the difference between making it or not. How do you determine how far away an object is when you are unable to directly measure it? Well this is where maths is so cool. Maths allows you to reach beyond the mortal coil, to see into the future and stretch beyond your physical limitations. I asked my children how they would work out how far the island was. One said you could observe how fast the shadow of a cloud moved, and time it, as it traveled the distance. Awesome. My second eldest said you could use the sun traveling across the sky as a sort of timer and convert it to distance. Very nice idea also. There is another way. It involves a right angled, 45 degree triangle. That's a square folded in half for the uninitiated. A VERY useful shape. I cannot be bothered writing down how I would do it - however I do want to amaze you with my brilliance so I'm going to have another go at a video. A note: I am a poor student. I don't have ANY high-tech gadgetry. This clip is raw. But I have filmed in shaky cam style to provide some realism. Think 'Blair Witch project' or 'The Borne Identity'. In other words- sorry about the quality. Also- I can only upload 2 minute clips (Errr), so it is in two two minutes blocks. So there you have it - I hope that gives you an idea anyway. If we know the length of one side of a right angled triangle and one of other angles we can determine the length of any side. The 45 degree triangle rocks. You can use it to work out how high cliffs are, trees etc. This is essentially the system the navigators on ships would use. They had Sextants that would work better than my bamboo square - but the same fundamental idea. My question to you is this... What kind of scenarios might get your children/adult learners (or you) interested in exploring this concept of distance estimation further? I may put up a learning plan in coming weeks as to how I begin to develop interest and knowledge of trigonometry with learners. 4 comments: 1. I so wish I had had a teacher who got me excited about learning this stuff! Those two little clips are fabulous -and I learned something about 45% angles! So cool! 2. This comment has been removed by the author. 3. Cool... Clips would play on my iphone though. Thanks for the follow up... 4. The material and aggregation is excellent and telltale as comfortably. inverse trig identities
{"url":"https://damonmath.blogspot.com/2014/07/how-far-is-island-week-or-so-ago-i.html","timestamp":"2024-11-04T14:16:27Z","content_type":"text/html","content_length":"77903","record_id":"<urn:uuid:0e9cdd87-09e2-4b7c-846b-c69c50c04aae>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00493.warc.gz"}
A 2.09 g piece of zinc metal is added to a calorimeter containing 1.50 L of... A 2.09 g piece of zinc metal is added to a calorimeter containing 1.50 L of... A 2.09 g piece of zinc metal is added to a calorimeter containing 1.50 L of 0.600 M HCl. This results in a reaction where zinc chloride and hydrogen gas is formed, and the temperature of the solution increases by 0.780°C. The density of the solution is 1.00 g/mL, and its specific heat capacity is the same as water, what is the enthalpy change, per mole of zinc, for the reaction? (a) 65.8 kJ (b) –2.34 kJ (c) –4.90 kJ (d) –153 kJ (e) –136 kJ Amount of heat leberated , q = mcdt m = mass of solution = volume x density = 1.50 L x(1000 mL/L) x 1.00(g/mL) = 1500 g c = specific heat capacity of solution = 4.184 J/goC dt = raise in temperature = 0.780 oC Plug the values we get Q = 4895.3 J This is the amount of heat leberated by 2.09 g of Zn So 2.09 g of Zn leberates 4895.3 J of heat 1 mole = 65.4 g of Zn leberates (65.4x4895.3)/2.09 = 153x103 J = 153 kJ Therefore enthalpy change, per mole of zinc is -153 kJ So option (d) is correct
{"url":"https://justaaa.com/chemistry/93123-a-209-g-piece-of-zinc-metal-is-added-to-a","timestamp":"2024-11-04T17:39:04Z","content_type":"text/html","content_length":"41997","record_id":"<urn:uuid:f4622d24-ea0c-4720-8c2e-f2bf900d16fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00452.warc.gz"}
Local buckling collapse of marine pipelines To keep up with the growing demand for oil and gas, the oil and gas industry ventures into deeper waters. For a deep water pipeline project, South Stream, a pipeline test program is developed. Part of this program is the investigation of the resistance of an externally pressurised pipeline against local buckling collapse. This is a decisive factor in design of marine pipelines. Local buckling of a pipeline is the buckling behaviour within its cross section. Buckling is defined as the state of a structure for which a relatively small increment in load leads to a relatively large increment in displacement. Generally this is reflected in a change in deformation shape and possibly a loss of stability. A perturbation theory, first developed by Koiter [1], is applied to model this behaviour. The response of a geometrically imperfect structure is obtained by using the response of an initially perfect structure, e.g. a straight beam or a perfect ring. From the principle of minimum potential energy an equilibrium state can be obtained. For a certain load, the bifurcation load, multiple equilibrium configurations are possible. The nature of this equilibrium state is investigated by expanding the load around this bifurcation load and expanding the displacement functions around their fundamental solutions. The value and sign of the post-bifurcation load coefficients determine the system’s initial post-bifurcation stability. Introduction of initial imperfections leads to modified post-bifurcation load coefficients. Generality is enhanced by using dimensionless identities. System collapse can occur in the elastic domain for unstable initial post-buckling behaviour or in the plastic domain due to material yielding. It is likely that collapse of a system with (small) initial geometric imperfections occurs due to an interaction of elastic and plastic buckling. Buckling leads to relatively large displacements that induce material yielding. This can lead to loss of stiffness and can induce collapse. Relatively thin walled rings and cylinders tend to collapse more in the elastic domain, while relatively thick walled rings and cylinders tend to collapse more in the plastic domain. This is due to the fact that thin walled structures require more deformation to induce yielding than thick walled structures. When performing a collapse test, end caps are attached to a pipeline specimen. This is modelled by boundary constraints. End caps are very stiff and modelled as being rigid. Their influence on the bifurcation and collapse behaviour of a cylinder is investigated. The constraints introduce boundary layer behaviour in the regions close the end caps. It is found that these constraints increase the buckling load of a cylinder with respect to an infinitely long cylinder (ring under plane strain condition). Besides, for relatively short cylinders, the buckling mode is altered. While a long cylinder prefers to collapse in an oval shape mode (described by 2 lobes), a short cylinder prefers to collapse in a mode shape described by a higher number of lobes. A collapse test is performed to estimate the collapse behaviour of a real life pipeline. Hence it is required that the collapse shape that is observed in the test matches the oval collapse shape of a long real life pipeline. This results to a minimum required length of a tested pipeline specimen. A relation for the required length is obtained. An analytical method has been developed to determine the buckling load and mode of a constrained cylinder. Finally, the analytically obtained results have been verified using finite element analysis (FEA) and experimental results obtained from literature. [1] W.T. Koiter. Over de stabiliteit van het elastisch evenwicht. PhD Thesis, TH Delft, 1945
{"url":"https://repository.tudelft.nl/record/uuid:1ec92517-ad48-4b23-aade-a2c61458ef4a","timestamp":"2024-11-09T20:19:15Z","content_type":"text/html","content_length":"23855","record_id":"<urn:uuid:a27840b3-c039-4171-a245-111abb7d4d36>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00582.warc.gz"}
Cite Efficiently in Your Academic Paper Citation Styles Many academic journals, publishers or university departments require a specific citation style. If that is the case, check their guidelines. However, if nothing is specified you still have to choose one and be consistent with it. Which style should be chosen? Usually your choice will depend on your field/discipline or even country of publication. For instance, APA is one of the most common styles for social sciences, MLA in humanities, AMA in medicine, OSCOLA for law in the UK mainly etc. Luckily enough if you use the right tools, changing from one style to another comes at no additional effort. Citing in LaTex Store your references in a ".bib" file To cite in LaTeX make sure you have a .bib file in the same folder as your.tex file. Be efficient, export your references to a .bib file straight from a reference management tool. This way, you avoid manually typing in the references or downloading them one by one from the journal. • Not familiar with reference management tools? Check our building block Cite in text The basic command to add citation in text is \cite{label}. However, you may not like how the citation is printed out and might want to change it. For this, you need to add additional packagess to your ".tex" document which will give more options on how citations in text appear. Here we show 2 commonly used options, which allow for further flexibility when citing: A) Natbib Natbib is a widely used, and very reliable package as it relies on the bibtex environment. To employ it, type in \usepackage{natbib} in the preamble of your document. The table below describes some examples of additional citation commands that come with the Natbib package: | Command | Description | Example | | ------------- | :--------------------------------------: | --------------: | | \citet{} | Textual citation | Jon Doe (2021) | | \citep{} | Parenthetical citation | (Jon Doe, 2021) | | \citeauthor{} | Prints only the name of the authors(s) | Jon Doe | | \citeyear{} | Prints only the year of the publication. | 2021 | You can find more information on how to employ Natbib here. B) Biblatex Biblatex has the advantage that it is also designed to use the biber environment, which allows for further flexibility when formatting. To employ Biblatex, in the preamble of your document make sure to include: • \usepackage{biblatex} • \addbibresource{.bib} To cite in text, using Biblatex you only need the command \cite{label}. How this appears in the text depends on the citation style that you choose. Choose your citation style Each of the above mentioned packages contains standard citation styles and some journal-specific styles. Some will be better for the standard styles (according to your taste). However, for for the non-standard-styles, Biblatex, contains a wider range of options. If using Natbib, in the preamble of your document type in: Where the predetermined stylename options for Natbib are: dinat, plainnat, abbrvnat, unsrtnat, rusnat, rusnat and ksfh_nat. Check how they look in the following page Want a specific style, for instance, APA? You can also use the bibliography style option apalike among others (e.g. jep, harvard, chicago, astron...). If using Biblatex, in the preamble of your document type in: \usepackage[backend=biber, style=stylename,]{biblatex} Where there are many more predetermined stylename options than for Natbib. You can find these options in the following link. Biblatex is especially good at non-standard citation styles, which are usually journal-specific. For instance, among others, it includes the following commonly used citation styles: | Citation style | biblatex "stylename" | | -------------- | :------------------: | | Nature | nature | | Chicago | chicago-authordate | | MLA | mla | | APA | apa | Print your formatted references Again, it will change slightly depending on what package you're using. \bibliography{bibfile} % Wherever you want your references to be printed \printbibliography % Wherever you want your references to be printed Don't forget the \addbibresource command if using Biblatex. Citing in Lyx Because Lyx is based on LaTex, adding citations straight from a .bib file is also possible in Lyx. Citing in Lyx from a .bib file • Go to "Insert" > "List/TOC" > "Bibliography". • Browse for the .bib file and click Add. • In the same window, under Style, choose the style you wish to use. • In the Content section, choose from the drop-down to select the references. • Find the locations for which the in-text citations shall appear: • Select the matching references for each citation Citing in Word Not comfortable with LaTeX and using Word? Well, though not as easy as in LaTeX, citing and printing the bibliography in Word can be quite efficient if combined with the Mendeley plug-in. • In Mendeley Reference Manager, go to: □ "Tools" > "Install Mendeley Cite for Word" • Go to Word and click on "References". If correctly installed, the following should appear in the top right-hand corner of Word: • To cite your references, click on the plug-in, select your citation and click on insert citation. Go to "Citation Style" if you wish to choose from the available citation style options. • To print your bibliography go to "insert bibliography" • Need more help or information? Go to the following page
{"url":"https://tilburgsciencehub.com/topics/research-skills/writing/citations/citations/","timestamp":"2024-11-11T07:16:21Z","content_type":"text/html","content_length":"71038","record_id":"<urn:uuid:cdba0880-aa0e-48b1-8679-0fd121cf59cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00795.warc.gz"}
The IMARGUMENT function returns the argument corresponding to the given number in a complex number in rectangular form. Use the IMARGUMENT formula with the syntax shown below, it has 1 required parameter: 1. number (required): A number between 1 and 2 specifying which argument to return. If number is not an integer, it is truncated. Here are a few example use cases that explain how to use the IMARGUMENT formula in Google Sheets. Extract the real part of a complex number To extract the real part of a complex number in rectangular form, use IMARGUMENT with number set to 1. Extract the imaginary part of a complex number To extract the imaginary part of a complex number in rectangular form, use IMARGUMENT with number set to 2. Calculate the argument of a complex number To calculate the argument of a complex number in rectangular form, use IMARGUMENT with number set to 1 and the imaginary part of the complex number divided by the real part. Common Mistakes IMARGUMENT not working? Here are some common mistakes people make when using the IMARGUMENT Google Sheets Formula: Incorrectly entering the argument Make sure to provide the correct syntax and argument to the function, otherwise it will return an error. Using non-complex numbers The IMARGUMENT function can only be used with complex numbers in x + yi or x + yj text notation. Related Formulas The following functions are similar to IMARGUMENT or are often used with it in a formula: Learn More You can learn more about the IMARGUMENT Google Sheets function on Google Support.
{"url":"https://checksheet.app/google-sheets-formulas/imargument/","timestamp":"2024-11-10T04:37:58Z","content_type":"text/html","content_length":"43982","record_id":"<urn:uuid:36bd433a-1189-4efa-a691-2655bd068133>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00262.warc.gz"}
Two-space, two-time similarity solution for decaying homogeneous turbulence A two-point, two-time similarity solution is derived for homogeneous decaying turbulence. This is the first known solution which includes the temporal decay at two-different times. It assumes that the turbulence is homogeneous in all three space dimensions, and finds that homogeneity holds across time. The solutions showthat time is logarithmically "stretched" while the homogeneous spatial scales grow. This solution reduces to the two point, single time equation when the two times are set equal. The turbulence initially decays exponentially, then asymptotically as t^n where n ≥ 1 and equality is possible only if the initial energy is infinite. The methodology should be applicable to other non-equilibrium homogeneous turbulent flows. Published by AIP Publishing. All Science Journal Classification (ASJC) codes • Computational Mechanics • Condensed Matter Physics • Mechanics of Materials • Mechanical Engineering • Fluid Flow and Transfer Processes Dive into the research topics of 'Two-space, two-time similarity solution for decaying homogeneous turbulence'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/two-space-two-time-similarity-solution-for-decaying-homogeneous-t","timestamp":"2024-11-06T17:52:36Z","content_type":"text/html","content_length":"48604","record_id":"<urn:uuid:b4e3d932-a8a4-4f50-8c0c-2fa3e1daa65f>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00019.warc.gz"}
Eigenvector Centrality of a Graph Given a graph with adjacency matrix $\mathbf A$, the eigenvector centrality is $$ \mathbf e_u = \frac{1}{\lambda} \sum_{v\in\mathcal V} \mathbf A[u,v] \mathbf e_v, \qquad \forall u \in \mathcal V. $$ Why is it called Eigenvector Centrality The definition is equivalent to $$ \lambda \mathbf e = \mathbf A\mathbf e. $$ Power Iteration The solution to $\mathbf e$ is the eigenvector that corresponds to the largest eigenvalue $\lambda_1$. Power iteration method can help us get this eigenvector, i.e., the $^{(t+1)}$ iteration is related to the previous iteration $^{(t)}$, through the following relation, $$ \mathbf e^{(t+1)} = \mathbf A \mathbf e^{(t)}. $$ If we set $\mathbf e^{(0)} \to (1, 1, \cdots , 1)$. Indications of the Power Iteration Method The power iteration method hints that the eigenvector centrality is related to walking through the graph. For step 0, we have $$ e^{(1)}_i = A_{ij} e^{(0)}_j, $$ $e^{(0)} _ j$ is the probability of visiting node $j$, which is 1. $A_{ij}$ is the transfer probability from node $j$ to $i$. After this iteration, we get the number of visits on node $i$. Repeat this, we have a vector that shows the number of visits on node $i$ after $t+1$ steps. That being said, the eigenvector centrality is proportional to the likelihood of visiting the nodes after infinite steps of random walks on the graph. Planted: by L Ma; L Ma (2021). 'Eigenvector Centrality of a Graph', Datumorphism, 09 April. Available at: https://datumorphism.leima.is/cards/graph/graph-eigenvector-centrality/.
{"url":"https://datumorphism.leima.is/cards/graph/graph-eigenvector-centrality/?ref=footer","timestamp":"2024-11-12T03:55:33Z","content_type":"text/html","content_length":"112730","record_id":"<urn:uuid:ef8c6e58-9f15-41a4-aae0-0bf04d4092c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00750.warc.gz"}
Magnetohydrodynamic (MHD) fluid flows attract a lot of attention in the extrusion of polymers, in the theory of nanofluids, as well as in the consideration of biological fluids. The considered problem in the paper is the flow and heat transfer of nano and micropolar fluid in inclined channel. Fluid flow is steady, while nano and micropolar fluids are incompressible, immiscible, and electrically conductive. The upper and lower channel plates are electrically insulated and maintained at constant and different temperatures. External applied magnetic field is perpendicular to the fluid flow and considered problem is in induction-less approximation. The equations of the considered problem are reduced to ordinary differential equations, which are analytically solved in closed form. The influence of characteristics parameters of nano and micropolar fluids on velocity, micro-rotation and temperature fields are graphically shown and discussed. The general conclusions given through the analysis of graphs can be used for better understanding of the flow and heat transfer of nano and micropolar fluid, which have a great practical application. Fluids with nanoparticles innovated the modern era, due to their comprehensive applications in nanotechnology and manufacturing processes, while the theory of micropolar fluids explains the flow of biological fluids and various types of liquid metals and crystals. PAPER SUBMITTED: 2023-05-15 PAPER REVISED: 2023-07-01 PAPER ACCEPTED: 2023-07-10 PUBLISHED ONLINE: 2023-08-05 , VOLUME , ISSUE Issue 6 , PAGES [4473 - 4484]
{"url":"https://thermalscience.vinca.rs/2023/6/10","timestamp":"2024-11-03T16:23:30Z","content_type":"text/html","content_length":"15998","record_id":"<urn:uuid:b58d308a-47c6-4f36-9c56-bb323213a390>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00168.warc.gz"}
CFM Calculator – Easily Determine Your Airflow Needs Use this tool to quickly calculate the cubic feet per minute (CFM) of air flow needed for your space. CFM Calculator How to Use the CFM Calculator: To use the CFM calculator, input the volume of the room in cubic feet and the desired air changes per hour. After entering these values, click the “Calculate” button to determine the necessary CFM (Cubic Feet per Minute). How it Calculates the Results: The calculator determines the CFM by multiplying the room’s volume by the desired air changes per hour and then dividing by 60 minutes to convert the result into cubic feet per minute. This calculator assumes uniform air flow and does not account for obstructions, variations in ceiling height, or the efficiency of your HVAC system. For more complex scenarios, please consult with a HVAC professional.
{"url":"https://madecalculators.com/cfm-calculator/","timestamp":"2024-11-06T21:37:18Z","content_type":"text/html","content_length":"141934","record_id":"<urn:uuid:18bc1823-ff22-4518-8cf1-62aeb99898bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00746.warc.gz"}
The Influence of Geometrical Parameters on the Rigidity of Couplings with Elastic Metal Elements of Circular Cross Section Authors: Zyablikov&nbspV.M., Semenov-Ezhov&nbspI.E., Shirshov&nbspA.A. Published: 04.09.2018 Published in issue: #8(701)/2018 DOI: 10.18698/0536-1044-2018-8-3-10 Category: Mechanical Engineering and Machine Science | Chapter: Machine Science Keywords: coupling with elastic elements, rigidity of the coupling, rod of circular cross section, contact pressure, stress concentration factor, factor analysis The main characteristic of couplings is rigidity. In couplings with elastic metal elements in the form of rods, rigidity chiefly depends on the flexural rigidity and the free length of the rods, as well as on the contact compliance and the gap between the diameters of the rod and the hole in the half-coupling, into which the rod is inserted. The influence of these parameters on the contact pressure and the stress concentration, estimated by means of the theoretical stress concentration factor is examined. The numerical study of the contact interaction of the rod — half-coupling pair is conducted using the factor analysis. It is established that in addition to the flexural rigidity and the free length of the rods, the gap between the diameters of the rod and the hole in the half-coupling into which the rod is inserted has the greatest effect on rigidity. The calculations are carried out using the ANSYS software (version R17, Academic). The results obtained are presented in the form of graphs of the dependences of the relative deflection, the relative contact pressure and the theoretical coefficient of stress concentration on the relative gap. [1] Dunaev P.F., Lelikov O.P. Konstruirovanie uzlov i detalei mashin [Designing machine components and parts]. Moscow, Akademiia publ., 2004. 496 p. [2] Tashkinova E.V. Mekhanicheskie mufty [Mechanical couplings]. Perm, PSTU publ., 2012. 27 p. [3] Ziablikov V.M., Shirshov A.A. Raschet zhestkosti muft s uprugimi elementami v vide stal’nykh sterzhnei kruglogo secheniia [The calculation of the stiffness of joints with elastic elements in the form of steel bars of round section]. Spravochnik. Inzhenernyi zhurnal s prilozheniem [Handbook. An Engineering journal with appendix]. 2014, no. 8, pp. 26–30. [4] Feodos’ev V.I. Soprotivlenie materialov [Strength of materials]. Moscow, Bauman Press, 1999. 592 p. [5] Kaplun A.B., Morozov E.M., Olfer’eva M.A. ANSYS v rukakh inzhenera [ANSYS in the hands of an engineer]. Moscow, Editorial URSS publ., 2003. 272 p. [6] Rao S.S. The Finite Element Method in Engineering. Elsevier, UK, 2011. 726 p. [7] Fischer-Cripps A.C. Introduction to Contact Mechanics. Springer-Verlag, US, 2007. 248 p. [8] Wriggers P., Laursen T.A. Computational Contact Mechanics. Springer, 2008. 248 p. [9] Sidniaev N.I. Teoriia planirovaniia eksperimenta i analiz statisticheskikh dannykh [Theory of experiment planning and statistical data analysis]. Moscow, Iurait publ., 2011. 399 p. [10] Freeman L.J., Ryan A.G., Kensler J.L., Dickinson R.M., Vining G.G. A Tutorial on the Planning of Experiments. Quality Engineering, 2013, vol. 25, is. 4, pp. 315–332. [11] Dean A., Voss D., Draguljić D. Design and Analysis of Experiments. New York, Springer, 2017. 847 p.
{"url":"https://izvuzmash.bmstu.ru/eng/catalog/mechanical/mach_scien/1572.html","timestamp":"2024-11-05T04:05:40Z","content_type":"application/xhtml+xml","content_length":"12279","record_id":"<urn:uuid:8e63c64d-ee84-4d81-8caf-46a1e14532c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00144.warc.gz"}
Swapping variables in JavaScript ๐ There may be many different reasons why you'd want to swap two variables be it just changing two item's location in an array or when sorting collections. The traditional way is just define a new variable, assign one value to it, put one of the items in the old place, then put the temp variable back in the new place. But my question is not: is that the only way, rather is it the best for your The old way of swapping two variables is done like below: let x = "Yas"; let y = "Hints"; let temp = x; x = y; y = temp; console.log(x); // Hints console.log(y); // Yas There is nothing wrong with this approach unless you're doing it frequently. Without the temp variable There is another way you could swap two variables without any temp variable. But this only works with numbers: let x = 10; let y = 20; x = x + y; y = x - y x = x - y; console.log(x); // 20 console.log(y); // 10 This works two, but now we're doing three additional operations to save some space, so you need to be careful when you use this one. Another thing to consider with this approach is the chance of having overflows with additions or subtractions (sum should be less than Number.MAX_SAFE_INTEGER which is 9007199254740991). Bitwise XOR Similar to above approach, you could use XOR to swap the two variables, but this also works only on numbers: let x = 3; let y = 5; x = x ^ y; y = x ^ y; x = x ^ y; console.log(x); // 5 console.log(y); // 3 If you're not familiar with XOR, it works on bits. When you perform XOR two bits, it evaluates to 1 if they are different, and evaluates to 0 if they're the same. x y XOR So let's see why this works. 1. x = x ^ y 2. y = y ^ x when x = (x ^ y), so the y = (x ^ y) ^ y which equals to x ^ (y ^ y) = x ^ 0 = x. So now our y is the old x. 3. x = x ^ y when according to our first step x is not x ^ y, and so x = (x ^ y) ^ x = y ^ (x ^ x) = y ^ 0 = y. Is this better than the previous one, probably faster, but still limited to numbers only. ES6 destructuring Destructuring is an ES6 feature which is used a lot in many of the modern frameworks. In its core, it allows you to store array elements in variables. let x; let y; [x, y] = [1, 2, 3]; console.log(x); // 1 console.log(y); // 2 Now considering how we can use this to swap the elements of an array: let x = "Yas"; let y = "Hints"; [x, y] = [y , x]; console.log(x); // Hints console.log(y); // Yas This method is much elegant, but still creates two arrays to perform the swapping. So the efficiency might not be that good if you're swapping many elements. Just because a feature is available, doesn't mean you should use it in every situation. Think about what is the most important factor in the solution you're implementing. If it's space, choose one which doesn't take much, although it's a bit slower. If the memory doesn't matter but speed is important, choose accordingly. But definitely consider the situation before deciding on your approach. Top comments (1) Yaser Adel Mehraban • Yup, thanks for picking it up For further actions, you may consider blocking this person and/or reporting abuse
{"url":"https://dev.to/yashints/swapping-variables-in-javascript-136h","timestamp":"2024-11-09T16:28:01Z","content_type":"text/html","content_length":"90404","record_id":"<urn:uuid:d4c80c8b-559e-47d2-9615-04165b8e40ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00607.warc.gz"}
Qubits versus gates: The road to practical quantum advantage Since the early days of quantum computing, progress in the field has been measured by the increasing numbers of qubits — the benchmark companies like Google and IBM use to showcase their dominance in quantum development. Yet this metric doesn’t tell the whole story, writes Professor Ashley Montanaro, co-founder and CEO of Phasecraft, As we’ve learned more about the power of quantum computing, gate fidelity – how well we can manipulate the qubits to undertake computations – has emerged as an equally important metric. Gate fidelity isn’t just a way to evaluate today’s Noisy Intermediate-Scale Quantum (NISQ) computers, but also for finally paving the way to practical quantum advantage - when quantum computers outperform classical ones for useful real-world applications. The qubit obsession This emphasis on qubits made sense in quantum’s early days. It’s an easily quantifiable metric, providing a way to gauge and compare processors, much like how classical computers were compared using clock speeds, or smartphone camera makers focusing on megapixels. Creating a stable qubit was a significant achievement, whilst precision of operations was less of a concern. Increasing the number of qubits was a natural next step. Quantum computers below 40-50 qubits can easily be simulated by their classical counterparts, so it was essential to get beyond this barrier to achieve a meaningful quantum advantage. But now we’re past this stage, gate fidelity is emerging as a more relevant metric. As qubit numbers have increased, so too has the potential to run meaningful algorithms on quantum hardware. With this comes an increased chance of errors in the computations, due to noise and decoherence. It is therefore more important to perform quantum operations accurately. The case for gate fidelity Quantum computers work by changing the state of qubits through quantum operations (gates), with quantum algorithms determining the nature of gates and the order in which they must be performed (i.e. the quantum circuit) to achieve the desired outcome. Gate fidelity measures how accurately a quantum gate does its job: specifically a measure of the difference between the ideal, theoretical operation of a gate (how the gate should be working) versus its actual operation (how it actually performed). This distance between ideal and actual operation is a percentage — with 100% fidelity meaning the gate performed with no errors. Any quantum circuit, and algorithm, comprises multiple gates. These gates are often executed sequentially in an algorithm or circuit, where the number of consecutive gates is termed the circuit depth, and more complex calculations require deeper circuits. With errors on consecutive gates accumulating, gate fidelity translates into a measure of how many gates you can run on a quantum computer before the computation fails. This sets a limit on the maximum complexity of any algorithm that can run successfully on the hardware. The engineering dilemma Achieving high gate fidelity is easier said than done. Implementing a gate can introduce a range of errors and noise, whilst the type of gate adds to this complexity. Single-qubit gates require precise control to avoid errors, whilst 2-qubit entangling gates, which are the key element that makes the computation quantum, require a high level of control of the interactions between two qubits simultaneously. This leads to higher error rates and increased sensitivity to environmental factors. Errors exist in classical computation too, but correction strategies ensure the average of bits will return the correct value. Quantum error correction algorithms have been developed, but to be effective they need to make use of large numbers of qubits to store additional information to detect and correct errors. Estimates vary, but around 1,000 physical qubits will be required for each logical qubit. Despite the rapid growth in qubit numbers we have witnessed in the recent past, reaching the threshold for “fault-tolerant quantum computation,” where we can run error-corrected codes on quantum computers, is still far from reality. For now, the challenge is to make the most of the existing, imperfect quantum computers and develop ways to use them to find solutions to practical problems. Gate fidelity plays a critical role in binding the size and complexity of algorithms that can be run. Solving this dilemma Hardware companies across the world have been working on these challenges by simplifying architectures and increasing control. Companies including Quantinuum and Oxford Ionics, have passed the so-called “three nines” threshold for two-qubit gate fidelity. With gate fidelities greater than 99.9%, it is now possible to implement around 1,000 entangling operations before errors compromise the final result, opening the possibility of exciting applications soon. At Phasecraft, we’ve taken a software-led approach and demonstrated a way of drastically cutting the number of quantum gates needed to run simulations, by a factor of more than a million in some cases. We’ve reduced the complexity of simulating the time-evolution of a quantum materials system by 400,000x, run the largest-ever simulation of a materials system on actual hardware by 10x, and proved for the first time that near-term quantum optimization algorithms outperformed classical algorithms. This complementary, hardware-software mix is paving the way towards practical quantum computers. Indeed, IBM’s road map, having historically centred on size, is now focused on getting a smaller number of qubits to work more efficiently to unlock quantum’s true, real-world advantage. Fidelity paving the way to practical advantage These breakthroughs are just two examples of why focusing on gate fidelity is the future of quantum innovation. There is no point in increasing qubits if improvements in error rates are not made in tandem. The priority now is to develop high-fidelity gates fast enough to perform logical operations in a realistic amount of time and to fabricate more and better physical qubits to build error-corrected logical qubits. The good news is that several of today’s platforms are progressing towards meeting these requirements, which are necessary for quantum computing to have a tangible impact on the world, from discovering more efficient battery materials or improving energy grid efficiency. Focusing on gate performance could accelerate this development within a few years, rather than a decade. We need to shift from focusing on system size, towards what we can and should be doing with these systems today.
{"url":"https://www.thestack.technology/qubits-versus-gate-fidelity-phasecraft/","timestamp":"2024-11-04T02:43:27Z","content_type":"text/html","content_length":"116826","record_id":"<urn:uuid:8c363adb-1143-4982-a47a-78d9bfabd36b>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00674.warc.gz"}
Geometry MCQs [PDF] Quiz Questions Answers | Geometry MCQ App Download & e-Book: Test 1 Class 10 Math MCQs - Chapter 7 Geometry Multiple Choice Questions (MCQs) PDF Download - 1 The Geometry Multiple Choice Questions (MCQs) with Answers PDF (geometry MCQs PDF e-Book) download Ch. 7-1 to study Grade 10 Math Course. Learn Geometry Quiz Questions and Answers to learn distance learning courses. The Geometry MCQs App Download: Free learning app for lines and angles, cylinder, polygon test prep for free online courses. The MCQ: Vertical angles that are opposite to each other are also; "Geometry" App Download (Free) with answers: Opposite; Not equal; Scalene; Equal; to learn distance learning courses. Solve Lines and Angles MCQ Questions, download Google eBook (Free Sample) for free online courses. Geometry MCQs with Answers PDF Download: Quiz 1 MCQ 1: Vertical angles that are opposite to each other are also 1. not equal 2. opposite 3. scalene 4. equal MCQ 2: Two lines that make an angle are called 1. scalene 2. rays 3. segment 4. vertex MCQ 3: The surface area of hollow cylinder with radius ‘r’ and height ‘h’ is measured by 1. 2πr - h 2. 2πr + h 3. πrh 4. 2πrh MCQ 4: A polygon having 10 sides is called 1. decagon 2. heptagon 3. quadrilateral 4. hexagon MCQ 5: A polygon having 8 sides is called 1. hexagon 2. nonagon 3. decagon 4. octagon Geometry Learning App: Free Download Android & iOS The App: Geometry MCQs App to learn Geometry Textbook, 10th Grade Math MCQ App, and 9th Grade Math MCQs App. The "Geometry" App to free download iOS & Android Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with subscriptions!
{"url":"https://mcqlearn.com/grade10/math/geometry-multiple-choice-questions-answers.php","timestamp":"2024-11-03T03:08:55Z","content_type":"text/html","content_length":"71457","record_id":"<urn:uuid:709d19a4-5075-43e5-80f2-ca5c7542be94>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00120.warc.gz"}
seminars - Invitation to crystal bases for quantum symmetric pairs 2023-02-15 (Wed) AM 10:00 ~ 12:00 2023-02-17 (Fri) AM 10:00 ~ 11:00 The theory of crystal bases for quantum symmetric pairs, i.e., $\imath$crystal bases, which is still in progress, is an $\imath$quantum group (also known as ``quantum symmetric pair coideal subalgebra'') counterpart of the theory of crystal bases.A goal of the theory of $\imath$crystal bases is to provide a way to recover much information about the structures of representations of $\ imath$quantum groups from its crystal limit, just like the theory of crystal bases for quantum groups.In these three hours of lecture, we first review basic theory of canonical bases and crystal bases for quantum groups, and $\imath$canonical bases for $\imath$quantum groups. Then, we introduce a recent progress on the theory of $\imath$crystal bases of quasi-split locally finite type. As mentioned above, the theory of $\imath$crystal bases of arbitrary type is not completed yet. Toward a next step, we discuss how the already known theory of $\imath$crystal bases could be generalized to locally finite types. It would be a great pleasure for the speaker if the audience would be interested in and develop this ongoing project. *This seminar will be held on Zoom.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=80&document_srl=1032998&sort_index=date&order_type=asc&l=en","timestamp":"2024-11-02T17:27:14Z","content_type":"text/html","content_length":"45831","record_id":"<urn:uuid:0214b93c-25d3-422e-9c85-968d5a21fda8>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00542.warc.gz"}
Electronics 101 pt. 3: Applying the Laws – Norwegian Creations Earlier we have written blog posts about both Ohm’s Law and Kirchhoff’s Laws. It’s time to put these in action combined to analyze simple DC circuits. Sounds fun, right!? Don’t worry, it’s pretty easy and straight forward, and we’ll walk you through it by going through a simple example. You should read those other posts first (they’re not long) before continuing if this topic is new to you. When Georg Simon Ohm (left) and Gustav Kirchhoff (right) join forces, they form an unstoppable legendary powerhouse, capable of some pretty slick DC circuit analysis. [Wikipedia] A Very Simple Theoretical Example Even though we’ll continue in the theoretical domain, these examples make it much easier to understand these theoretical concepts. The circuit below contains a 48 V voltage source, two resistors (R[0] and R[1]) with respectively 100 Ω and 500 Ω, and a current source of 0.5 A. You can look at current sources as voltage sources which will supply a constant amount of current. Here’s the circuit we want to analyze. We want to find how much current that goes through the resistors and the direction of those currents. The goal in this example is to find the current values of i[0] and i[1]. We will also automatically find out which way the current flows: if a result is positive, the current flows the same way as its corresponding arrow. And vice versa: if the result is negative, the current flows the opposite way of the arrow. Remember that it doesn’t really matter what direction we set the arrows to point – the physical result will stay the same – but we need to define a direction for calculation purposes. Since we have two unknowns we need to obtain two equations. We’ll use both of Kirchhoff’s two laws for this. Applying Kirchhoff’s Current Law (KCL) Let’s apply KCL in the top center node: eq. 1 As we remember from the previous Kirchhoff blog post, KCL states that current into a node is equal to current out of the node. We have defined i[1] to exit the node and i[0] and the 0.5 A current from the current source to enter the node. Hence the equation above. Applying Kirchhoff’s Voltage Law (KVL) First, let’s find an expression for the voltage drop over the resistors. This is where Ohm’s law comes in! eq. 2 (Ohm’s law) This can be used to define the voltage drop over the two resistors: eq. 3 eq. 4 Lets apply KVL in the loop to the left: eq. 5 Remember that we need to have different signs on elements if they’re either sources or typical “voltage drop components” (e.g. resistors). If we put eq. 1 into eq. 5 we get: eq. 6 And with the magic of algebra, we get: eq. 7 Here we have our first actual result! 0.34 A flows through R[0]. As we can se from the negative value, the current is actually flowing in the opposite direction of the arrow (that is from right to left) through the resistor, despite the 48 V voltage source. From here we put the result from eq. 7 into eq. 1: eq. 8 And here’s our second result. The current through R[1] is 0.16 A and flows the same way as the arrow (downwards). If you’ve had enough of equations and theory stop reading now and come back later. On the other hand, if you want to make Georg Simon and Gustav proud, man up and venture forth! Calculating the Power Consumption and Generation Now we’ll test the solution by verifying that the total power generated equals the total power consumed, which is an interesting exercise in itself. Remember from pt. 1 that the equation for power is eq. 9 We’ll use this to calculate power consumed/generated for each of our four circuit elements. First, R[0]: eq. 10 This resistor dissipates 11.56 W in the form of heat. Then, R[1]: eq. 11 The calculation for the voltage source is a bit different: eq. 12 Since i[0] is defined to go in a clockwise direction, it enters the “negative” side of the source, and the convention says that we therefore use a negative number in the equation above (-48). This is just a rule of thumb which is handy to remember. Read more about the so called passive sign convention here! The current (0.34 A) is negative since it goes the other way than the defined direction. The result is positive, the same as with resistors, which means that energy is consumed and not delivered to the circuit. Lastly, we have the current source: eq. 13 The voltage difference over the source is the same as over R[1]: 80 V. By following the same convention as with the voltage source we end up with a negative answer, which means that the source is supplying 40 W to the circuit. Now we check if the supplied power is equal to the consumed power: eq. 14 Since we have rounded some of the results and squared those several places, we don’t get an exact result, but it’s close enough. This might not be the most relevant example out there, but the approach is applicable to many similar real-world circuits. However, remember that the fundamental principles here are much more important than the way we solved this problem. A similar approach is also used to analyze more complex RLC circuits (circuits with resistors, inductors and capacitors). We might visit this particular topic at a later date. The example in this post was inspired by an example from Nilson and Riedel’s Electric Circuits (8th ed., 2008).
{"url":"https://www.norwegiancreations.com/2017/01/electronics-101-pt-3-applying-the-laws/","timestamp":"2024-11-11T19:28:01Z","content_type":"text/html","content_length":"85673","record_id":"<urn:uuid:cefde426-fc3f-480e-af59-ec96dcc00846>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00033.warc.gz"}
Limits: Leading Behaviors Motivating Questions • How can we evaluate complicated limits as \(x \to \infty\text{?}\) • How can we evaluate complicated limits as \(x \to -\infty\text{?}\) Question 3.7.5. How can we evaluate complicated limits as \(x \to \infty\text{?}\) Question 3.7.6. How can we evaluate complicated limits as \(x \to -\infty\text{?}\) To evaluate limits of the form \(\lim_{x\to \infty} f(x)\text{,}\) we can determine if \(f(x)\) has a leading behavior at \(\infty\text{,}\) \(f_\infty(x)\text{.}\) If so, we know \begin{equation*} \lim_{x\to \infty} f(x) = \lim_{x\to \infty} f_\infty(x)\text{.} \end{equation*} To evaluate limits of the form \(\lim_{x\to -\infty} f(x)\text{,}\) we can determine if \(f(x)\) has a leading behavior at \(-\infty\text{,}\) \(f_{-\infty}(x)\text{.}\) If so, we know \begin{equation*} \lim_{x\to -\infty} f(x) = \lim_{x\to -\infty} f_{-\infty}(x)\text{.} \end{equation*}
{"url":"https://www.math.colostate.edu/~shriner/sec-3-6.html","timestamp":"2024-11-05T03:39:13Z","content_type":"text/html","content_length":"71385","record_id":"<urn:uuid:692817b1-8144-424e-afae-86a96fde1adb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00435.warc.gz"}
Data Science Interview Questions Part-5 (Data Preprocessing) Top-15 frequently asked data science interview questions and answers on Data preprocessing for fresher and experienced Data Scientist, Data analyst, statistician, and machine learning engineer job Data Science is an interdisciplinary field. It uses statistics, machine learning, databases, visualization, and programming. So in this fifth article, we are focusing on Data Preprocessing questions. Let’s see the interview questions. 1. What do you mean by feature engineering? Features are the core characteristics of any prediction that impact the results. Feature engineering is the process of creating a new feature, transforming a feature, and encoding a feature. Sometimes we also use the domain knowledge to generate new features. It prepares the data that easily input to the model and improves model performance. 2. What do you mean by feature scaling or data normalization? Explain some techniques for feature scaling? In feature scaling, we change the scale of features to convert it into the same range such as (-1,1) or (0,1). This process is also known as data normalization. There are various methods for scaling or normalizing data such as min-max normalization, z-score normalization(standard scaler), and Robust scaler. Min-max normalization performs a linear transformation on the original data and converts it into a given minimum and maximum range. z-score normalization (or standard scaler) normalizes the data based on the mean and standard deviation. 3. What are the missing values? and How do you handle missing values? In the data cleaning process, we can see there are lots of values that are missing or not filled or collected during the survey. WE can handle such missing values using the following methods: • Ignoring such rows or dropping such records. • Fill values with mean, mode, and median. • you can also fill values using mean but for different classes, the different means can be used. • You can also fill the most probable value using regression, Bayesian formula, or decision tree, KNN, and Prebuilt imputing libraries. • Fill with a constant value. • Fill values manually. 4. What is an outlier? How you detect outliers in your data? Outliers are abnormal observations that deviate from the norm. Outliers do not fit in the normal behavior of the data. We can detect outliers using the following methods: • Box plot • Scatter plot • Histogram • Standard Deviation or Z-Score • Inter Quartile Range(IQR): values out of 1.5 times of IQR • Percentile: you can select 99 percentile values and remove the • DBSCAN • Isolation forest, One-Class SVM 5. How you treat the outliers in your data? We can treat outlier by removing from the data. After detecting outliers we can filter the outliers using Z-score, Percentile, or 1.5 times of IQR. 6. What do you mean by feature splitting? Feature split is an approach to generate a few other features from the existing one to improve the model performance. for example, splitting name into first and last name. 7. How do you handle the skewed data columns? We can handle skewed data using transformations such as square, log, square, square root, reciprocal (1/x), and Box-Cox Transform. 8. What do you mean by data transformation? Data transformation consolidated or aggregate your data columns. It may impact your machine learning model performance. There are the following strategies to transform data: • Data Smoothing using binning, or clustering • Aggregate your data • Scale or normalize your data for example scaling income column between 0 and 1 range. • Discretize your data for example convert continuous age column into the range 0–10, 11–20, and so on. Or we can also convert the continuous age column into conceptual labels such as youth, middle, and senior. 9. How do you select the important features in your data? We can select the important features using random forest, or remove redundant features using recursive feature elimination. Let’s all the categories of such methods. • Filter Methods: Pearson Correlation, Chi-Square, Anova, Information gain, and LDA. • Wrapper Methods: Forward Selection, backward elimination, Recursive feature elimination. • Embedded Methods: Ridge and Lasso Regression 10. How will you perform feature engineering on a date column? You can get lots of other important features from the date such as day of the week, day of the month, day of the quarter, and day of the year. Also, you can extract the date, month, and year. These all features can impact your prediction for example sales can be impacted by month or day of the week. 11. How will you handle the multi-collinearity? Explain with example. Multicollinearity is a high correlation among two or more predictor variables in a multiple regression model. It is high intercorrelations or inter-association among the independent variables. It is caused by the inaccurate use of dummy variables and the repetition of the same kind of variable. Multicollinearity impacts change in the signs as well as in the magnitudes of the regression coefficients. We can detect multicollinearity using the Correlation coefficient, Variance inflation factor (VIF), and Eigenvalues. The easiest way to compute is the correlation coefficient. Let’s understand this with an example: you have a dataset with the following columns: • blood pressure (y = BP, in mm Hg) • age (x1 = Age, in years) • weight (x2 = Weight, in kg) • body surface area (x3 = BSA, in sq m) • duration of hypertension (x4 = Dur, in years) • basal pulse (x5 = Pulse, in beats per minute) • stress index (x6 = Stress) suppose body surface area or weight have high correlation: Which variable needs to remove to overcome multicollinearity. In this case, you can see the weight is easy to measure compared to body surface area(BSA). Removing a variable that has a high correlation with others depends upon domain knowledge. 12. What is heteroscedasticity? How will you handle the heteroscedasticity in the data? heteroscedasticity is a situation where the variability of a variable is unequal across the range of values of a second variable that predicts it. We can detect heteroscedasticity using graphs or statistical tests such as Breush-Pagan test and NCV test. You can remove heteroscedasticity Box-Cox transformations and log transformations. Box-Cox transformation is a kind of power transformation that transforms data into a normal distribution. 13. What are dummy variables? In Regression analysis, we need to convert all the categorical columns into binary variables. Such variables are known as dummy variables. The dummy variable is also known as an indicator variable, design variable, Boolean indicator, categorical variable, binary variable, or qualitative variable. It takes only 0 or 1 to indicate the absence or presence of some categorical effect that may be expected to shift the outcome. K categories will takes K-1 dummy variables. For example, you can see the eye color and gender columns converted into one-hot encoded binary values. 14. What are the label and ordinal encodings? Label encoding is a kind of integer encoding. Here, each unique category value is replaced with an integer value so that the machine can understand. Ordinal encodings is a label encoding with an order in the encoded values. 15. Explain one-hot encoding. One hot encoding is used to encode the categorical column. It replaces a categorical column with its labels and fills values either 0 or 1. For example, you can see the “color” column, there are 3 categories such as red, yellow, and green. 3 categories labeled with binary values. In this article, we have focused on the data preprocessing interview questions. In the next article, we will focus on the interview questions related to NLP and Text Analytics. Data Science Interview Questions Part-6 (NLP and Text Analytics)
{"url":"https://machinelearninggeek.com/data-science-interview-questions-part-5-data-preprocessing/","timestamp":"2024-11-14T20:14:00Z","content_type":"text/html","content_length":"100215","record_id":"<urn:uuid:fc0505a1-22b7-40ba-a321-6d0c03aee617>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00771.warc.gz"}
Approximating the bottleneck plane perfect matching of a point set A bottleneck plane perfect matching of a set of n points in ℝ^2 is defined to be a perfect non-crossing matching that minimizes the length of the longest edge; the length of this longest edge is known as bottleneck. The problem of computing a bottleneck plane perfect matching has been proved to be NP-hard. We present an algorithm that computes a bottleneck plane matching of size at least (formula presented.) in O(n log^2 n)-time. Then we extend our idea toward an O(n log n)-time approximation algorithm which computes a plane matching of size at least (formula presented.) whose edges have length at most (formula presented.) the bottleneck. • Approximation algorithm • Bottleneck matching • Geometric graph • Plane matching • Unit disk graph ASJC Scopus subject areas • Computer Science Applications • Geometry and Topology • Control and Optimization • Computational Theory and Mathematics • Computational Mathematics Dive into the research topics of 'Approximating the bottleneck plane perfect matching of a point set'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/approximating-the-bottleneck-plane-perfect-matching-of-a-point-se-2","timestamp":"2024-11-05T13:36:36Z","content_type":"text/html","content_length":"57153","record_id":"<urn:uuid:f93357a0-93e1-4224-a10b-66664f17a785>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00674.warc.gz"}
How To Calculate Volume analysis in Clojure? Here's an example of how you can calculate the volume of a sphere in Clojure: 1 (defn sphere-volume [radius] 2 (* (/ 4 3) Math/PI (Math/pow radius 3))) 4 (def radius 5) 6 (println (format "Volume of sphere with radius %s is %s" radius (sphere-volume radius))) In this example, we define a function sphere-volume which takes the radius of the sphere as a parameter and uses the formula (4/3) * π * radius^3 to calculate the volume. We then define the radius of the sphere, call the sphere-volume function with the radius, and print the result. You can use a similar approach to calculate the volumes of other shapes like cylinders, cones, etc. Just define the corresponding functions with the appropriate formulas and parameters and call them
{"url":"https://forum.finquota.com/thread/how-to-calculate-volume-analysis-in-clojure","timestamp":"2024-11-09T23:08:57Z","content_type":"text/html","content_length":"119894","record_id":"<urn:uuid:d3582995-4902-4107-9b29-0c49d2b5a486>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00211.warc.gz"}
RICHARD FEYNMAN Biography - Famous Scientists Richard Feynman (1918 - 1988) Richard P. Feynman was born in Queens, New York, on May 11, 1918, to Jewish (although non-practicing) parents. By age 15, he had mastered differential and integral calculus, and frequently experimented and re-created mathematical topics such as the half-derivative before even entering college. Feynman received a bachelor's degree from the Massachusetts Institute of Technology in 1939, and was named Putnam Fellow that same year. He received a Ph.D. from Princeton University in 1942, and in his theses applied the principle of stationery action to problems of quantum mechanics, laying the groundwork for the "path integral" approach and Feynman diagrams. While researching his Ph.D., Feynman married his first wife and longtime sweetheart, Arline Greenbaum, who was already quite ill with tuberculosis. At Princeton, Robert W. Wilson encouraged Feynman to participate in the Manhattan Project. He did so, visiting his wife in a sanitarium in Albuquerque on weekends until her death in July 1945. He then immersed himself in work on the project and was present at the Trinity bomb test. Hans Bethe made the 24 year old Feynman a group leader in the theoretical division. Although his work on the project was relatively removed from the major action, Feynman did calculate neutron equations for the Los Alamos "Water Boiler," a small nuclear reactor at the desert lab, in order to measure how close a particular assembly of fissile material was to becoming critical. After this work, he was transferred to the Oak Ridge facility, where he aided engineers in calculating safety procedures for material storage so that inadvertent criticality accidents could be avoided. After the project, Feynman started working as a professor at Cornell University, and then moved to Cal Tech in Pasadena, Calif., where he did much of his best work including research in quantum electrodynamics, the physics of the superfluidity of supercooled liquid helium, and a model of weak decay. Feynman's collaboration on the latter with Murray Gell-Mann was seen as seminal, as the weak interaction was neatly described. He also developed Feynman diagrams, a bookkeeping device that helps in conceptualizing and calculating interactions between particles in spacetime, notably the interactions between electrons and their antimatter counterparts, positrons. He later married Gweneth Howarth and had a son, Carl Richard, and a daughter, Michelle Catherine. In 1965, Feynman, along with Julian Schwinger and Shinichiro Tomonaga, shared the Nobel Prize in Physics for work in quantum electrodynamics. Feynman's popular lection series was published in "The Feynman Lectures," while his personal side was captured in "Surely You're Joking, Mr. Feynman!" and "What Do You Care What Other People Think?" Feynman is also known for his work on the Space Shuttle Challenger accident investigation, shocking the world by demonstrating the failure of the O-Rings. He died February 15, 1988, at the age of 69, from several rare forms of cancer.
{"url":"http://findbiography.tuspoemas.net/famous-scientists/richard-feynman","timestamp":"2024-11-12T22:43:58Z","content_type":"application/xhtml+xml","content_length":"19991","record_id":"<urn:uuid:b5ace035-f318-4091-86b6-1d6ff13d4b67>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00180.warc.gz"}
Mathematical Biology - M1 - 8EC Sander Hille: (shille@math.leidenuniv.nl) Assistant Professor at the Department of Mathematics, Leiden University Bob Planqué (r.planque@vu.nl) Assistant Professor at the Mathematics Department, Vrije Universiteit, Amsterdam Basic knowledge about linear algebra (e.g. determinant and trace of matrices, eigenvalues), analysis, ODEs (steady states and their stability, bifurcations) and PDEs (e.g. separation of variables), and stochastic processes. (The key point, however, is the attitude: students should be willing to quickly fill in gaps in background knowledge.) Aim of the course In the course, a lot of attention is paid to "translation": how do we get from biological information to a mathematical formulation of questions? And what do the mathematical results tell us about biological phenomena? In addition, the course aims to introduce general physical ideas about time scales and spatial scales and how these can be used to great advantage when performing a mathematical analysis. At the end of the course the student is capable of reading a scientific paper on a topic in Mathematical Biology in depth and can summarize and discuss the contents and impact of the paper in a scientific presentation.
{"url":"https://elo.mastermath.nl/course/info.php?id=881","timestamp":"2024-11-04T20:28:55Z","content_type":"text/html","content_length":"45306","record_id":"<urn:uuid:0858b8c9-a2e0-4d3a-9686-3e42cf58882d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00817.warc.gz"}
Geometry of shapes Slide 1 Quadrilaterals are four-sided polygons Parallelogram: is a quadrilateral with both pairs of opposite sides parallel. Slide 2 Parallelograms (2) Theorem 6.1 : Opposite sides of a parallelograms are congruent Theorem 6.2: Opposite angles of a parallelogram are congruent Theorem 6.3: Consecutive angles in a parallelogram are supplementary. AD BC and AB DC <A <C and <B <D m<A+m<B = 180° m <B+m<C = 180° m<C+m<D = 180° m<D+m<A = 180° Slide 3 Parallelograms (3) Diagonals of a figure: Segments that connect any to vertices of a polygon Theorem 6.4: The diagonals of a parallelogram bisect each other. Slide 4 Parallelograms (4) Draw a parallelogram : ABCD on a piece of construction paper. Cut the parallelogram. Fold the paper and make a crease from A to C and from B to D. Fold the paper so A lies on C. What do you observe? Fold the paper so B lies on D. What do you observe? What theorem is confirmed by these Observations? Slide 5 Tests for Parallelograms Theorem 6.5 :If both pairs of opposite sides of a quadrilateral are congruent, then the quadrilateral is a parallelogram. Theorem 6.6: If both pairs of opposite angles of a quadrilateral are congruent, then the quadrilateral is a parallelogram. If AD BC and AB DC, then ABCD is a parallelogram If <A <C and <B <D, then ABCD is a parallelogram Slide 6 Tests for Parallelograms 2 Theorem 6.7: If the diagonals of a quadrilateral bisect each other, then the quadrilateral is a parallelogram Theorem 6.8: If one pair of opposite sides of a quadrilateral is both parallel and congruent, then the quadrilateral is a parallelogram. Slide 7 A quadrilateral is a parallelogram if . Diagonals bisect each other. (Theorem 6.7) A pair of opposite sides is both parallel and congruent. (Theorem 6.8) Both pairs of opposite sides are congruent. (Theorem 6.5) Both pairs of opposite angles are congruent. (Theorem 6.6) Both pairs of opposite sides are parallel. (Definition) Slide 8 Area of a parallelogram
{"url":"https://www.sliderbase.com/spitem-253-1.html","timestamp":"2024-11-05T06:38:53Z","content_type":"text/html","content_length":"14413","record_id":"<urn:uuid:cea946a5-06f7-47c2-9013-24fbcec21c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00203.warc.gz"}
Mastering Recursion: A Powerful Technique in Python Programming - Adventures in Machine Learning Introduction to Recursion It’s hard to imagine computer programming without the use of recursion. It’s an essential technique that comes in handy when a function calls itself over and over again to solve a problem. Recursive algorithms are often elegant, compact, and easy to understand once you’ve seen them in action. Many common algorithms, such as sorting and searching, use recursion in one form or another. In this article, we’ll take a closer look at recursion, its definition, and why it’s such a valuable tool for programmers. What is Recursion? In computer science, recursion refers to the process of a function calling itself during its execution. Recursive functions divide a problem into smaller sub-problems and solve them independently. These sub-problems are usually identical to the original problem and can be solved using the same algorithm. The function calls itself with smaller input until the problem is trivial enough to be solved without further recursion. Essentially, recursion is nothing more than a function calling itself until a condition is met, known as the base case. The base case is the stopping condition that prevents the function from continuing to call itself and entering an infinite loop. It’s essential to have a base case in any recursive function; otherwise, the function calls itself forever, consuming system resources and eventually crashing. Why Use Recursion? One reason to use recursion is that it’s an easy way to solve certain kinds of problems. In many cases, it’s more natural to define a problem recursively than iteratively. Recursive solutions are elegant and straightforward to understand since they break a problem down into sub-problems, solve each sub-problem separately, and combine the results to get the final answer. Recursion can be more efficient than an iterative solution, especially when solving larger problems. It may be less efficient in terms of memory usage since each recursive call creates a new stack frame. However, this trade-off is usually worth it since it leads to simpler code that’s easier to understand and debug. Recursion in Python Python supports recursive functions, just like many other programming languages. In Python, when a function calls itself, a new namespace is created for that function. This new namespace is called a local namespace and contains all the local variables and parameters for the function. A common example of recursion in Python is the factorial function. The factorial of a given number n is defined as the product of all integers from 1 to n. The factorial function can be defined recursively as follows: def factorial(n): if n == 0: return 1 return n * factorial(n-1) In this function, the base case is when n is equal to 0, and the recursive call is made with n-1. The function calls itself with decreasing input values until it reaches the base case. Then, it returns the final result by multiplying the base case with the input value. Get Started: Countdown to Zero Let’s take a look at a simple example of a recursive function in Python. Consider the countdown function, which takes an integer as input and prints all the integers from the input value down to Here’s the code: def countdown(n): if n == 0: In this example, the base case is when n is equal to zero, and the function prints the message “Zero!” The recursive call is made with n-1, and the function prints the value of n during each call, going all the way down to zero. Non-Recursive Implementation of Countdown We can implement the same functionality utilizing an iterative loop instead of a recursive call with similar code. Here’s a non-recursive implementation of the countdown function: def countdown(n): for i in range(n, -1, -1): if i == 0: In this implementation, we use a for loop to iterate over all the integers from n down to zero. We print the value of i during each iteration, going all the way down to zero. Once i reaches zero, we print the message “Zero!” and exit the loop. Recursion is a powerful technique in computer programming that can simplify complex problems by breaking them down into smaller, more manageable sub-problems. The elegance and simplicity of recursive functions make them an essential tool for any programmer. Understanding recursion, its definition, and implementation in Python can help you create efficient algorithms that solve a wide variety of problems. 3) Calculate Factorial Factorial is a mathematical operation that calculates the product of all positive integers from 1 to a given number n. It’s denoted by the symbol ‘!’ and can be expressed as n! = n (n-1) (n-2) … 1. In this section, we’ll take a closer look at the factorial function, its recursive definition, recursive Python function to calculate factorial, and non-recursive implementations using for loop and reduce(). Recursive Definition of Factorial One way to define the factorial function is recursively. The base case for the factorial function is when n is equal to zero or one, and the factorial value is by definition 1. The recursive case is when n is greater than one, where we multiply n by the factorial of n – 1. This gives us the formula for the factorial function: n! = 1 if n = 0 or n = 1 n! = n * (n-1)! if n > 1 Recursive Python Function to Calculate Factorial We can write a recursive Python function to calculate the factorial of a given input. Here’s an example: def factorial(n): if n == 0 or n == 1: return 1 return n * factorial(n-1) In this function, we use an if statement to check if n is equal to 0 or 1, which are the base cases for the factorial function. If n is not equal to one or zero, we return n multiplied recursively by the factorial of n-1. This way, we call the function with decreasing input values until the base case is reached. Non-Recursive Implementations of Factorial using For Loop and Reduce() We can also implement the factorial function using a for loop or the reduce() function. Here are two non-recursive implementations: # Using For Loop def factorial(n): fact = 1 for i in range(1, n+1): fact *= i return fact # Using Reduce Function from functools import reduce def factorial(n): return reduce(lambda x, y: x * y, range(1, n+1)) In the first implementation, we use a for loop to iterate over all integers from 1 to n and multiply an initial value of 1 by each value in the loop. In the end, we return the final value of the In the second implementation, we use the reduce() function to multiply all integers from 1 to n. This is done by passing a lambda function to the reduce() function that multiplies x by y. Speed Comparison of Factorial Implementations Using timeit() To compare the speed of different factorial implementations, we can use the timeit() function in Python. Here’s an example: import timeit print(timeit.timeit(lambda: factorial(10), number=1000)) print(timeit.timeit(lambda: for_factorial(10), number=1000)) print(timeit.timeit(lambda: reduce_factorial(10), number=1000)) In this example, we compare the recursive factorial function with the for loop and reduce() implementations for n=10, executed 1000 times using the lambda functions. This way, we can see how long each implementation takes and determine which one is faster. 4) Traverse a Nested List A nested list is a list that contains other lists as its items. Nested lists are a commonly used data structure in programming, especially when working with hierarchical data. Traversing a nested list means visiting each item in the list, including nested lists. In this section, we’ll take a closer look at the nested list structure, problem description and algorithm using recursion, implementation of a count_leaf_items() function using recursion, and demonstration of the function on several lists.to the Nested List Structure A nested list is a list that contains elements, which can be other lists or any other data type. Here’s an example of a nested list: nested_list = [1, [2, [3, 4], 5], 6, [7, 8]] In this example, the first element of the list is an integer, but the second, fourth, and a member of the second element, 2, are nested lists. Problem Description and Algorithm Using Recursion The problem we want to solve is to count the number of leaf items in a nested list. A leaf item is an item that is not a list, i.e., an integer, string, or any other data type that is not a list. To solve this problem, we can use a recursive algorithm that traverses the nested list, and for each element in the list, we check if it’s a list or leaf item. If it’s a leaf item, we increment the leaf count. If it’s a list, we call the function recursively with the list as the input. Implementation of count_leaf_items() Function Using Recursion Here’s a Python implementation of a count_leaf_items() function that takes a nested list as input and returns the number of leaf items: def count_leaf_items(nested_list): leaf_count = 0 for item in nested_list: if isinstance(item, list): leaf_count += count_leaf_items(item) leaf_count += 1 return leaf_count In this function, we iterate over each item in the input nested list. If the item is a list, we recursively call the function with the current item as input. If it’s a leaf item, we increment the leaf count by one. Demonstration of count_leaf_items() on Several Lists Here’s an example of how we can use the count_leaf_items() function to count the number of leaf items in a few nested lists: nested_list1 = [1, [2, [3, 4], 5], 6, [7, 8]] nested_list2 = [[1, 2], [3, 4, [5, 6]], 7, 8] nested_list3 = [[[1, 2, 3]], [4], 5, [6, 7], [8], [[9], 10]] print(count_leaf_items(nested_list1)) # Output: 8 print(count_leaf_items(nested_list2)) # Output: 6 print(count_leaf_items(nested_list3)) # Output: 10 In this example, we define three nested lists and pass each of them to the count_leaf_items() function. We print the output, which is the number of leaf items in each list. Recursion is a fundamental technique in computer programming that helps solve complex problems by breaking them down into smaller, more manageable sub-problems. Recursive functions are elegant, easy to understand, and efficient in certain cases. We explored the recursive definition of the factorial function, implemented recursive and non-recursive versions of the factorial and nested list traversal algorithms. Comparing the speed of these implementations, we found recursive algorithms to be slower in some cases due to the overhead of function calls. The principle takeaway from this article is that recursive and non-recursive implementations have their advantages and disadvantages. While recursive solutions are simpler and more elegant, non-recursive solutions may be faster and more straightforward to implement. Therefore, choosing an algorithm depends on the requirements of the problem at hand. Understanding recursion and its various applications is an asset to any programmer, allowing them to solve problems more efficiently and elegantly.
{"url":"https://www.adventuresinmachinelearning.com/mastering-recursion-a-powerful-technique-in-python-programming/","timestamp":"2024-11-06T23:20:46Z","content_type":"text/html","content_length":"85015","record_id":"<urn:uuid:b6ef4792-f38a-49a8-86ac-5183e4a1058d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00511.warc.gz"}
Courses Bachelor Display 2021-2022 Course Description To PDF Course title Analysis II Course code EBC1032 ECTS credits 6,5 Assessment Whole/Half Grades Period Start End Mon Tue Wed Thu Fri Period 4 31-1-2022 25-3-2022 X X 5 11-4-2022 3-6-2022 X X Level Intermediate Coordinator Janos Flesch For more information: j.flesch@maastrichtuniversity.nl Language of instruction English * Students learn the concepts and techniques in the fields of integral calculus and differential equations. * Students can apply the solution methods to calculate integrals and solve differential equations. * Students can find and validate the right method to solve the mathematical problem. * Students learn the concepts and techniques and can calculate the convergence interval for infinite series. Goals * Students learn for functions of two variables the concepts of continuity and differentiability, the implicit function theorem, and their implications. * Students can show that a function is continuous, calculate its derivative, and apply the implicit function theorem. * Students learn the definition and solution methods and their application for unconstrained and constrained optimization problems for functions of two variables. * Students can explain their mathematical arguments clearly and discuss their solutions for the mathematical problems in small groups. The course Analysis II provides a more advanced study of mathematical analysis, including a rigorous introduction to integration, infinite series, differential Description equations, functions of more variables, multivariate calculus, and their applications to unconstrained and constrained optimization. The theory, concepts, tools and methods that are covered during the course are essential and heavily applied in problems arising in econometrics, mathematical economics and operations research. Literature Reader. - Differential calculus for functions of one variable (as, for instance, in the course Analysis 1). Prerequisites - Elementary linear algebra (as, for instance, in the course Linear Algebra). An advanced level of English. Teaching methods (indicative; Lecture / Assignment course manual is definitive) Assessment methods (indicative; course manual is Written Exam Evaluation in previous For the complete evaluation of this course please click "here" academic year This course belongs to the following programmes / Bachelor Econometrics and Operations Research Year 1 Compulsory Course(s)
{"url":"https://code.unimaas.nl/Code/Display?intCalendarID=28&intBAMA=1&SearchString=EBC1032","timestamp":"2024-11-11T01:40:33Z","content_type":"text/html","content_length":"14479","record_id":"<urn:uuid:6f974810-4e09-46c8-9d9a-a14eeed6efb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00676.warc.gz"}
Linear Algebra – Walking Randomly Archive for the ‘Linear Algebra’ Category April 10th, 2019 I recently wrote a blog post for my new employer, The Numerical Algorithms Group, called Exploiting Matrix Structure in the solution of linear systems. It’s a demonstration that shows how choosing the right specialist solver for your problem rather than using a general purpose one can lead to a speed up of well over 100 times! The example is written in Python but the NAG routines used can be called from a range of languages including C,C++, Fortran, MATLAB etc etc May 23rd, 2017 I’m working on optimising some R code written by a researcher at University of Sheffield and its very much a war of attrition! There’s no easily optimisable hotspot and there’s no obvious way to leverage parallelism. Progress is being made by steadily identifying places here and there where we can do a little better. 10% here and 20% there can eventually add up to something worth shouting One such micro-optimisation we discovered involved multiplying two matrices together where one of them needed to be transposed. Here’s a minimal example. #Set random seed for reproducibility # Generate two random n by n matrices n = 10 a = matrix(runif(n*n,0,1),n,n) b = matrix(runif(n*n,0,1),n,n) # Multiply the matrix a by the transpose of b c = a %*% t(b) When the speed of linear algebra computations are an issue in R, it makes sense to use a version that is linked to a fast implementation of BLAS and LAPACK and we are already doing that on our HPC Here, I am using version 3.3.3 of Microsoft R Open which links to Intel’s MKL (an implementation of BLAS and LAPACK) on a Windows laptop. In R, there is another way to do the computation c = a %*% t(b) — we can make use of the tcrossprod function (There is also a crossprod function for when you want to do t(a) %*% b) c_new = tcrossprod(a,b) Let’s check for equality c_new == c [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [1,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [2,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [3,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [4,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [5,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [6,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [7,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [8,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [9,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE [10,] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE Sometimes, when comparing the two methods you may find that some of those entries are FALSE which may worry you! If that happens, computing the difference between the two results should convince you that all is OK and that the differences are just because of numerical noise. This happens sometimes when dealing with floating point arithmetic (For example, see https://www.walkingrandomly.com/?p=5380). Let’s time the two methods using the microbenchmark package. We time just the matrix multiplication part of the code above: original = a %*% t(b), tcrossprod = tcrossprod(a,b) Unit: nanoseconds expr min lq mean median uq max neval original 2918 3283 3491.312 3283 3647 18599 1000 tcrossprod 365 730 756.278 730 730 10576 1000 We are only saving microseconds here but that’s more than a factor of 4 speed-up in this small matrix case. If that computation is being performed a lot in a tight loop (and for our real application, it was), it can add up to quite a difference. As the matrices get bigger, the speed-benefit in percentage terms gets lower but tcrossprod always seems to be the faster method. For example, here are the results for 1000 x 1000 matrices #Set random seed for reproducibility # Generate two random n by n matrices n = 1000 a = matrix(runif(n*n,0,1),n,n) b = matrix(runif(n*n,0,1),n,n) original = a %*% t(b), tcrossprod = tcrossprod(a,b) Unit: milliseconds expr min lq mean median uq max neval original 18.93015 26.65027 31.55521 29.17599 31.90593 71.95318 100 tcrossprod 13.27372 18.76386 24.12531 21.68015 23.71739 61.65373 100 The cost of not using an optimised version of BLAS and LAPACK While writing this blog post, I accidentally used the CRAN version of R. The recently released version 3.4. Unlike Microsoft R Open, this is not linked to the Intel MKL and so matrix multiplication is rather slower. For our original 10 x 10 matrix example we have: #Set random seed for reproducibility # Generate two random n by n matrices n = 10 a = matrix(runif(n*n,0,1),n,n) b = matrix(runif(n*n,0,1),n,n) original = a %*% t(b), tcrossprod = tcrossprod(a,b) Unit: microseconds expr min lq mean median uq max neval original 3.647 3.648 4.22727 4.012 4.1945 22.611 100 tcrossprod 1.094 1.459 1.52494 1.459 1.4600 3.282 100 Everything is a little slower as you might expect and the conclusion of this article — tcrossprod(a,b) is faster than a %*% t(b) — seems to still be valid. However, when we move to 1000 x 1000 matrices, this changes #Set random seed for reproducibility # Generate two random n by n matrices n = 1000 a = matrix(runif(n*n,0,1),n,n) b = matrix(runif(n*n,0,1),n,n) original = a %*% t(b), tcrossprod = tcrossprod(a,b) Unit: milliseconds expr min lq mean median uq max neval original 546.6008 587.1680 634.7154 602.6745 658.2387 957.5995 100 tcrossprod 560.4784 614.9787 658.3069 634.7664 685.8005 1013.2289 100 As expected, both results are much slower than when using the Intel MKL-lined version of R (~600 milliseconds vs ~31 milliseconds) — nothing new there. More disappointingly, however, is that now tcrossprod is slightly slower than explicitly taking the transpose. As such, this particular micro-optimisation might not be as effective as we might like for all versions of R. May 15th, 2017 For a while now, Microsoft have provided a free Jupyter Notebook service on Microsoft Azure. At the moment they provide compute kernels for Python, R and F# providing up to 4Gb of memory per session. Anyone with a Microsoft account can upload their own notebooks, share notebooks with others and start computing or doing data science for free. They University of Cambridge uses them for teaching, and they’ve also been used by the LIGO people (gravitational waves) for dissemination purposes. This got me wondering. How much power does Microsoft provide for free within these notebooks? Computing is pretty cheap these days what with the Raspberry Pi and so on but what do you get for nothing? The memory limit is 4GB but how about the computational power? To find out, I created a simple benchmark notebook that finds out how quickly a computer multiplies matrices together of various sizes. Matrix-Matrix multiplication is often used as a benchmark because it’s a common operation in many scientific domains and it has been optimised to within an inch of it’s life. I have lost count of the number of times where my contribution to a researcher’s computational workflow has amounted to little more than ‘don’t multiply matrices together like that, do it like this…it’s much faster’ So how do Azure notebooks perform when doing this important operation? It turns out that they max out at 263 Gigaflops! For context, here are some other results: • A 16 core Intel Xeon E5-2630 v3 node running on Sheffield’s HPC system achieved around 500 Gigaflops. • My mid-2014 Mabook Pro, with a Haswell Intel CPU hit, hit 169 Gigaflops. • My Dell XPS9560 laptop, with a Kaby Lake Intel CPU, manages 153 Gigaflops. As you can see, we are getting quite a lot of compute power for nothing from Azure Notebooks. Of course, one of the limiting factors of the free notebook service is that we are limited to 4GB of RAM but that was more than I had on my own laptops until 2011 and I got along just fine. Another fun fact is that according to https://www.top500.org/statistics/perfdevel/, 263 Gigaflops would have made it the fastest computer in the world until 1994. It would have stayed in the top 500 supercomputers of the world until June 2003 [1]. Not bad for free! [1] The top 500 list is compiled using a different benchmark called LINPACK so a direct comparison isn’t strictly valid…I’m using a little poetic license here. January 27th, 2015 Linear Algebra – Foundations to Frontiers (or LAFF to its friends) is a popular, high quality and free MOOC that, as the title suggests, teaches aspects of linear algebra in a way that takes the student from the very basics through to some cutting edge techniques. I worked through much of it last year and thoroughly enjoyed the approach it took — focusing on programming aspects from the very beginning. The course authors are also among the developers of the FLAME project, a high performance linear algebra library, and one of the interesting aspects of the LAFF course (for me at least) was that it taught linear algebra in a way that also allowed you to understand the approaches used in the algorithms behind FLAME. Last year, all of the programming assignments in LAFF were done in Python, making use of the IPython notebook. This year, the software stack will be different and will be based on MATLAB. I understand that everyone who signs up to LAFF will be able to get a free MATLAB license from Mathworks for the duration of the course. Understandably, this caused quite a bit of discussion between the LAFF team and software/language geeks like me. In a recent Facebook thread, I asked about the switch and received the reply ‘MATLAB will be free during the course. There are open source equivalents, but Mathworks staff is supporting the use of MATLAB (staff for us). There were some who never got the IPython notebooks to work properly. We are really excited at the opportunity to innovate again and perhaps clear up snags in the programming issues we had. It was complicated to support IPython on all of the operating systems and machines that participants use. MATLAB promises to be easier and will allow us again to concentrate on the Linear Algebra’ – LAFF UTx I’m sufficiently interested in this change from IPython to MATLAB that I’ll be signing up for the course again this year and I encourage you to do the same — I believe that the programming-centric teaching approach taken by LAFF is extremely well done and your time would be well-spent working through the course. The course starts on 28th January 2015 so sign up now! Here’s the trailer for last year’s course. November 17th, 2014 Given a symmetric matrix such as What’s the nearest correlation matrix? A 2002 paper by Manchester University’s Nick Higham which answered this question has turned out to be rather popular! At the time of writing, Google tells me that it’s been cited 394 times. Last year, Nick wrote a blog post about the algorithm he used and included some MATLAB code. He also included links to applications of this algorithm and implementations of various NCM algorithms in languages such as MATLAB, R and SAS as well as details of the superb commercial implementation by The Numerical algorithms group. I noticed that there was no Python implementation of Nick’s code so I ported it myself. Here’s an example IPython session using the module In [1]: from nearest_correlation import nearcorr In [2]: import numpy as np In [3]: A = np.array([[2, -1, 0, 0], ...: [-1, 2, -1, 0], ...: [0, -1, 2, -1], ...: [0, 0, -1, 2]]) In [4]: X = nearcorr(A) In [5]: X array([[ 1. , -0.8084125 , 0.1915875 , 0.10677505], [-0.8084125 , 1. , -0.65623269, 0.1915875 ], [ 0.1915875 , -0.65623269, 1. , -0.8084125 ], [ 0.10677505, 0.1915875 , -0.8084125 , 1. ]]) This module is in the early stages and there is a lot of work to be done. For example, I’d like to include a lot more examples in the test suite, add support for the commercial routines from NAG and implement other algorithms such as the one by Qi and Sun among other things. Hopefully, however, it is just good enough to be useful to someone. Help yourself and let me know if there are any problems. Thanks to Vedran Sego for many useful comments and suggestions. • NAG’s commercial implementation – callable from C, Fortran, MATLAB, Python and more. A superb implementation that is significantly faster and more robust than this one! December 17th, 2013 In a recent Stack Overflow query, someone asked if you could switch off the balancing step when calculating eigenvalues in Python. In the document A case where balancing is harmful, David S. Watkins describes the balancing step as ‘the input matrix A is replaced by a rescaled matrix A* = D^-1AD, where D is a diagonal matrix chosen so that, for each i, the ith row and the ith column of A* have roughly the same norm.’ Such balancing is usually very useful and so is performed by default by software such as MATLAB or Numpy. There are times, however, when one would like to switch it off. In MATLAB, this is easy and the following is taken from the online MATLAB documentation A = [ 3.0 -2.0 -0.9 2*eps; -2.0 4.0 1.0 -eps; -eps/4 eps/2 -1.0 0; -0.5 -0.5 0.1 1.0]; [VN,DN] = eig(A,'nobalance') VN = 0.6153 -0.4176 -0.0000 -0.1528 -0.7881 -0.3261 0 0.1345 -0.0000 -0.0000 -0.0000 -0.9781 0.0189 0.8481 -1.0000 0.0443 DN = 5.5616 0 0 0 0 1.4384 0 0 0 0 1.0000 0 0 0 0 -1.0000 At the time of writing, it is not possible to directly do this in Numpy (as far as I know at least). Numpy’s eig command currently uses the LAPACK routine DGEEV to do the heavy lifting for double precision matrices. We can see this by looking at the source code of numpy.linalg.eig where the relevant subsection is lapack_routine = lapack_lite.dgeev wr = zeros((n,), t) wi = zeros((n,), t) vr = zeros((n, n), t) lwork = 1 work = zeros((lwork,), t) results = lapack_routine(_N, _V, n, a, n, wr, wi, dummy, 1, vr, n, work, -1, 0) lwork = int(work[0]) work = zeros((lwork,), t) results = lapack_routine(_N, _V, n, a, n, wr, wi, dummy, 1, vr, n, work, lwork, 0) My plan was to figure out how to tell DGEEV not to perform the balancing step and I’d be done. Sadly, however, it turns out that this is not possible. Taking a look at the reference implementation of DGEEV, we can see that the balancing step is always performed and is not user controllable–here’s the relevant bit of Fortran * Balance the matrix * (Workspace: need N) IBAL = 1 CALL DGEBAL( 'B', N, A, LDA, ILO, IHI, WORK( IBAL ), IERR ) So, using DGEEV is a dead-end unless we are willing to modifiy and recompile the lapack source — something that’s rarely a good idea in my experience. There is another LAPACK routine that is of use, however, in the form of DGEEVX that allows us to control balancing. Unfortunately, this routine is not part of the numpy.linalg.lapack_lite interface provided by Numpy and I’ve yet to figure out how to add extra routines to it. I’ve also discovered that this functionality is an open feature request in Numpy. Enter the NAG Library My University has a site license for the commercial Numerical Algorithms Group (NAG) library. Among other things, NAG offers an interface to all of LAPACK along with an interface to Python. So, I go through the installation and do import numpy as np from ctypes import * from nag4py.util import Nag_RowMajor,Nag_NoBalancing,Nag_NotLeftVecs,Nag_RightVecs,Nag_RCondEigVecs,Integer,NagError,INIT_FAIL from nag4py.f08 import nag_dgeevx eps = np.spacing(1) def unbalanced_eig(A): Compute the eigenvalues and right eigenvectors of a square array using DGEEVX via the NAG library. Requires the NAG C library and NAG's Python wrappers http://www.nag.co.uk/python.asp The balancing step that's performed in DGEEV is not performed here. As such, this function is the same as the MATLAB command eig(A,'nobalance') A : (M, M) Numpy array A square array of real elements. On exit: A is overwritten and contains the real Schur form of the balanced version of the input matrix . w : (M,) ndarray The eigenvalues v : (M, M) ndarray The eigenvectors Author: Mike Croucher (www.walkingrandomly.com) Testing has been mimimal order = Nag_RowMajor balanc = Nag_NoBalancing jobvl = Nag_NotLeftVecs jobvr = Nag_RightVecs sense = Nag_RCondEigVecs n = A.shape[0] pda = n pdvl = 1 wr = np.zeros(n) wi = np.zeros(n) pdvr = n vr = np.zeros(pdvr*n) scale = np.zeros(n) abnrm = c_double(0) rconde = np.zeros(n) rcondv = np.zeros(n) fail = NagError() n, A.ctypes.data_as(POINTER(c_double)), pda, wr.ctypes.data_as(POINTER(c_double)), vr.ctypes.data_as(POINTER(c_double)),pdvr,ilo,ihi, scale.ctypes.data_as(POINTER(c_double)), abnrm, rconde.ctypes.data_as(POINTER(c_double)),rcondv.ctypes.data_as(POINTER(c_double)),fail) if all(wi == 0.0): w = wr v = vr.reshape(n,n) w = wr+1j*wi v = array(vr, w.dtype).reshape(n,n) Define a test matrix: A = np.array([[3.0,-2.0,-0.9,2*eps], Do the calculation (w,v) = unbalanced_eig(A) which gives (array([ 5.5616, 1.4384, 1. , -1. ]), array([[ 0.6153, -0.4176, -0. , -0.1528], [-0.7881, -0.3261, 0. , 0.1345], [-0. , -0. , -0. , -0.9781], [ 0.0189, 0.8481, -1. , 0.0443]])) This is exactly what you get by running the MATLAB command eig(A,’nobalance’). Note that unbalanced_eig(A) changes the input matrix A to array([[ 5.5616, -0.0662, 0.0571, 1.3399], [ 0. , 1.4384, 0.7017, -0.1561], [ 0. , 0. , 1. , -0.0132], [ 0. , 0. , 0. , -1. ]]) According to the NAG documentation, this is the real Schur form of the balanced version of the input matrix. I can’t see how to ask NAG to not do this. I guess that if it’s not what you want unbalanced_eig() to do, you’ll need to pass a copy of the input matrix to NAG. The IPython notebook The code for this article is available as an IPython Notebook The future This blog post was written using Numpy version 1.7.1. There is an enhancement request for the functionality discussed in this article open in Numpy’s git repo and so I expect this article to become redundant pretty soon. September 16th, 2013 Last week I gave a live demo of the IPython notebook to a group of numerical analysts and one of the computations we attempted to do was to solve the following linear system using Numpy’s solve Now, the matrix shown above is singular and so we expect that we might have problems. Before looking at how Numpy deals with this computation, lets take a look at what happens if you ask MATLAB to do >> A=[1 2 3;4 5 6;7 8 9]; >> b=[15;15;15]; >> x=A\b Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 1.541976e-18. x = MATLAB gives us a warning that the input matrix is close to being singular (note that it didn’t actually recognize that it is singular) along with an estimate of the reciprocal of the condition number. It tells us that the results may be inaccurate and we’d do well to check. So, lets check: >> A*x ans = >> norm(A*x-b) ans = We seem to have dodged the bullet since, despite the singular nature of our matrix, MATLAB has able to find a valid solution. MATLAB was right to have warned us though…in other cases we might not have been so lucky. Let’s see how Numpy deals with this using the IPython notebook: In [1]: import numpy from numpy import array from numpy.linalg import solve array([-39., 63., -24.]) It gave the same result as MATLAB [See note 1], presumably because it’s using the exact same LAPACK routine, but there was no warning of the singular nature of the matrix. During my demo, it was generally felt by everyone in the room that a warning should have been given, particularly when working in an interactive setting. If you look at the documentation for Numpy’s solve command you’ll see that it is supposed to throw an exception when the matrix is singular but it clearly didn’t do so here. The exception is sometimes thrown though: In [4]: LinAlgError Traceback (most recent call last) in () 1 C=array([[1,1,1],[1,1,1],[1,1,1]]) ----> 2 x=solve(C,b) C:\Python32\lib\site-packages\numpy\linalg\linalg.py in solve(a, b) 326 results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0) 327 if results['info'] > 0: --> 328 raise LinAlgError('Singular matrix') 329 if one_eq: 330 return wrap(b.ravel().astype(result_t)) LinAlgError: Singular matrix It seems that Numpy is somehow checking for exact singularity but this will rarely be detected due to rounding errors. Those I’ve spoken to consider that MATLAB’s approach of estimating the condition number and warning when that is high would be better behavior since it alerts the user to the fact that the matrix is badly conditioned. Thanks to Nick Higham and David Silvester for useful discussions regarding this post. [1] – The results really are identical which you can see by rerunning the calculation after evaluating format long in MATLAB and numpy.set_printoptions(precision=15) in Python
{"url":"https://walkingrandomly.com/?cat=73","timestamp":"2024-11-02T05:32:49Z","content_type":"application/xhtml+xml","content_length":"93775","record_id":"<urn:uuid:ec758d6a-3fbf-445a-8bab-844c7928596b>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00263.warc.gz"}
ROC Curve | FlowHunt A Receiver Operating Characteristic (ROC) curve is a graphical representation used to assess the performance of a binary classifier system as its discrimination threshold is varied. Originating from signal detection theory during World War II for radar signal analysis, the ROC curve has become an essential tool in various fields, including machine learning, medicine, and artificial intelligence In the context of AI, especially in areas like AI automation and chatbots, understanding and utilizing ROC curves can enhance the development and evaluation of classification models, ensuring better decision-making processes. This article delves into what a ROC curve is, how it is used, provides examples of its application, and explores its significance in AI and related technologies. Understanding the ROC Curve A ROC curve is a plot that illustrates the diagnostic ability of a binary classifier system by graphing the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold settings. The TPR, also known as sensitivity or recall, measures the proportion of actual positives correctly identified, while the FPR represents the proportion of actual negatives that are incorrectly identified as positives. • True Positive Rate (TPR): TPR = TP / (TP + FN) • False Positive Rate (FPR): FPR = FP / (FP + TN) • TP: True Positives • FP: False Positives • TN: True Negatives • FN: False Negatives Historical Background The term “Receiver Operating Characteristic” originates from signal detection theory developed during World War II to analyze radar signals. Engineers used ROC curves to distinguish between enemy objects and noise. Over time, ROC curves found applications in psychology, medicine, and machine learning to evaluate diagnostic tests and classification models. How ROC Curves Are Used Evaluating Classification Models In machine learning and AI, ROC curves are instrumental in evaluating the performance of binary classifiers. They provide a comprehensive view of a model’s capability to distinguish between the positive and negative classes across all thresholds. Threshold Variation Classification models often output probabilities or continuous scores rather than definitive class labels. By applying different thresholds to these scores, one can alter the sensitivity and specificity of the model: • Low Thresholds: More instances are classified as positive, increasing sensitivity but potentially increasing false positives. • High Thresholds: Fewer instances are classified as positive, reducing false positives but potentially missing true positives. Plotting TPR against FPR for all possible thresholds yields the ROC curve, showcasing the trade-off between sensitivity and specificity. Area Under the Curve (AUC) The Area Under the ROC Curve (AUC) quantifies the overall ability of the model to discriminate between positive and negative classes. An AUC of 0.5 indicates no discriminative ability (equivalent to random guessing), while an AUC of 1.0 represents perfect discrimination. Interpretation of AUC Values: • 0.90 – 1.00: Excellent discrimination • 0.80 – 0.90: Good discrimination • 0.70 – 0.80: Fair discrimination • 0.60 – 0.70: Poor discrimination • 0.50 – 0.60: Fail (no better than chance) Model Selection and Comparison ROC curves and AUC scores are invaluable for comparing different classification models or tuning a model’s parameters. A model with a higher AUC is generally preferred as it indicates a better ability to distinguish between the positive and negative classes. Selecting Optimal Thresholds While ROC curves provide a visual tool for assessing model performance, they also aid in selecting an optimal threshold that balances sensitivity and specificity according to the specific requirements of an application. • High Sensitivity Needed: Choose a threshold with high TPR (useful in medical diagnostics where missing a positive case is costly). • High Specificity Needed: Choose a threshold with low FPR (useful in situations where false positives are highly undesirable). Components of the ROC Curve Confusion Matrix Understanding ROC curves necessitates familiarity with the confusion matrix, which summarizes the performance of a classification model: Predicted Positive Predicted Negative Actual Positive True Positive (TP) False Negative (FN) Actual Negative False Positive (FP) True Negative (TN) The confusion matrix forms the basis for calculating TPR and FPR at various thresholds. Sensitivity and Specificity • Sensitivity (Recall or True Positive Rate): Measures the proportion of actual positives correctly identified. • Specificity (True Negative Rate): Measures the proportion of actual negatives correctly identified. ROC curves plot sensitivity against 1 – specificity (which is the FPR). Examples and Use Cases Medical Diagnostics In medical testing, ROC curves are used to evaluate the effectiveness of diagnostic tests. Example: Determining the threshold for a biomarker to diagnose a disease. • Scenario: A new blood test measures the level of a protein indicative of a disease. • Objective: Find the optimal cutoff level that balances sensitivity and specificity. • Application: Plot the ROC curve using patient data to select a threshold that maximizes diagnostic accuracy. Machine Learning Classification ROC curves are widely used in evaluating classification algorithms in machine learning. Example: Email Spam Detection • Scenario: Developing a classifier to identify spam emails. • Objective: Assess the model’s performance across different thresholds to minimize false positives (legitimate emails marked as spam) while maximizing true positives. • Application: Use ROC curves to select a threshold that provides an acceptable balance for the application’s needs. AI Automation and Chatbots In AI automation and chatbots, ROC curves assist in refining intent recognition and response accuracy. Example: Intent Classification in Chatbots • Scenario: A chatbot uses machine learning to classify user messages into intents (e.g., booking inquiries, complaints). • Objective: Evaluate the classifier’s ability to correctly identify user intents to provide accurate responses. • Application: Generate ROC curves for the intent classifier to adjust thresholds and improve the chatbot’s performance, ensuring users receive appropriate assistance. Credit Scoring and Risk Assessment Financial institutions use ROC curves to evaluate models predicting loan defaults. Example: Loan Default Prediction • Scenario: A bank develops a model to predict the likelihood of loan applicants defaulting. • Objective: Use ROC curves to assess the model’s discrimination ability across thresholds. • Application: Select a threshold that minimizes financial risk by accurately identifying high-risk applicants. Mathematical Foundations Calculating TPR and FPR For each threshold, the model classifies instances as positive or negative, leading to different values of TP, FP, TN, and FN. • TPR (Sensitivity): TP / (TP + FN) • FPR: FP / (FP + TN) By varying the threshold from the lowest to the highest possible score, a series of TPR and FPR pairs is obtained to plot the ROC curve. AUC Calculation The AUC can be calculated using numerical integration techniques, such as the trapezoidal rule, applied to the ROC curve. • Interpretation: AUC represents the probability that a randomly chosen positive instance is ranked higher than a randomly chosen negative instance by the classifier. ROC Curves in Imbalanced Datasets In datasets where classes are imbalanced (e.g., fraud detection with few positive cases), ROC curves may present an overly optimistic view of the model’s performance. Precision-Recall Curves In such cases, Precision-Recall (PR) curves are more informative. • Precision: TP / (TP + FP) • Recall (Sensitivity): TP / (TP + FN) PR curves plot precision against recall, providing better insight into the model’s performance on imbalanced datasets. ROC Curve in the Context of AI and Chatbots Enhancing AI Model Evaluation In AI systems, particularly those involving classification tasks, ROC curves provide essential insights into model performance. • AI Automation: In automated decision-making systems, ROC curves help in fine-tuning models to make accurate predictions. • Chatbots: For chatbots utilizing natural language processing (NLP) to classify intents, emotions, or entities, ROC curves assist in evaluating and improving the underlying classifiers. Optimizing User Experience By leveraging ROC curve analysis, AI developers can enhance user interactions. • Reducing False Positives: Ensuring the chatbot does not misinterpret user messages, leading to inappropriate responses. • Increasing True Positives: Improving the chatbot’s ability to understand user intent correctly, providing accurate and helpful replies. AI Ethics and Fairness ROC curves can also be used to assess model fairness. • Fair Classification: Evaluating ROC curves across different demographic groups can reveal disparities in model performance. • Bias Mitigation: Adjusting models to achieve equitable TPR and FPR across groups contributes to fair AI practices. Practical Implementation of ROC Curves Software and Tools Various statistical software and programming languages offer functions to compute and plot ROC curves. • Python: Libraries like scikit-learn provide functions such as roc_curve and auc. • R: Packages like pROC and ROCR facilitate ROC analysis. • MATLAB: Functions are available for ROC curve plotting and AUC calculation. Steps to Generate a ROC Curve 1. Train a Binary Classifier: Obtain predicted probabilities or scores for the positive class. 2. Determine Thresholds: Define a range of thresholds from the lowest to the highest predicted scores. 3. Compute TPR and FPR: For each threshold, calculate TPR and FPR using the confusion matrix. 4. Plot the ROC Curve: Graph TPR against FPR. 5. Calculate AUC: Compute the area under the ROC curve to quantify overall performance. Example in Python from sklearn.metrics import roc_curve, auc import matplotlib.pyplot as plt # y_true: True binary labels # y_scores: Predicted probabilities or scores fpr, tpr, thresholds = roc_curve(y_true, y_scores) roc_auc = auc(fpr, tpr) # Plotting plt.plot(fpr, tpr, color='blue', lw=2, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='grey', lw=2, linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver Operating Characteristic (ROC)') plt.legend(loc='lower right') Limitations of ROC Curves Imbalanced Classes ROC curves can be misleading when dealing with highly imbalanced datasets. In such cases, high TPR may be achieved with a proportionally high FPR, which may not be acceptable in practice. Decision Threshold Influence ROC curves consider all possible thresholds but do not indicate which threshold is optimal for a specific situation. Overestimation of Performance An AUC close to 1.0 may suggest excellent performance, but without considering the context (such as class distribution and costs of errors), it may lead to overconfidence in the model. Alternative Evaluation Metrics While ROC curves are valuable, other metrics may be better suited in certain situations. Precision-Recall Curves Useful for imbalanced datasets where the positive class is of primary interest. F1 Score The harmonic mean of precision and recall, providing a single metric to assess the balance between them. Matthews Correlation Coefficient (MCC) A balanced measure that can be used even if the classes are of very different sizes. Research on ROC Curve The Receiver Operating Characteristic (ROC) curve is a fundamental tool used in evaluating the performance of binary classifiers. It is widely used across various fields including medicine, machine learning, and statistics. Below are some relevant scientific papers that explore different aspects of ROC curves and their applications: 1. Title: Receiver Operating Characteristic (ROC) Curves □ Authors: Tilmann Gneiting, Peter Vogel □ Published: 2018-09-13 □ Summary: This paper delves into the use of ROC curves for evaluating predictors in binary classification problems. It highlights the distinction between raw ROC diagnostics and ROC curves, emphasizing the importance of concavity in interpretation and modeling. The authors propose a paradigm shift in ROC curve modeling as curve fitting, introducing a flexible two-parameter beta family for fitting cumulative distribution functions (CDFs) to empirical ROC data. The paper also provides software in R for estimation and testing, showcasing the beta family’s superior fit compared to traditional models, especially under concavity constraints. 2. Title: The Risk Distribution Curve and its Derivatives □ Authors: Ralph Stern □ Published: 2009-12-16 □ Summary: This research introduces the concept of the risk distribution curve as a comprehensive summary of risk stratification. It demonstrates how the ROC curve and other related curves can be derived from this distribution, providing a unified view of risk stratification metrics. The paper derives a mathematical expression for the Area Under the ROC Curve (AUC), elucidating its role in measuring the separation between event and non-event patients. It emphasizes the positive correlation between risk distribution dispersion and ROC AUC, underscoring its utility in assessing risk stratification quality. 3. Title: The Fuzzy ROC □ Authors: Giovanni Parmigiani □ Published: 2019-03-04 □ Summary: This paper extends the concept of ROC curves to fuzzy logic environments where some data points fall into indeterminate regions. It addresses the challenges of defining sensitivity and specificity in such scenarios and provides a method for visual summarization of various indeterminacy choices. This extension is crucial for scenarios where traditional binary classification is insufficient due to inherent data uncertainty. 4. Title: Conditional Prediction ROC Bands for Graph Classification □ Authors: Yujia Wu, Bo Yang, Elynn Chen, Yuzhou Chen, Zheshi Zheng □ Published: 2024-10-20 □ Summary: This recent study introduces Conditional Prediction ROC (CP-ROC) bands, which are designed for graph classification tasks in medical imaging and drug discovery. CP-ROC bands provide uncertainty quantification and robustness against distributional shifts in test data. The method is particularly useful for Tensorized Graph Neural Networks (TGNNs) but adaptable to other models, enhancing prediction reliability and uncertainty quantification in real-world applications. Discover Hidden Markov Models: essential for speech recognition, bioinformatics, and finance, learn their key components and applications. Simplify ML workflows with Kubeflow on Kubernetes. Discover tools for scalable model deployment, training, and more. Explore linear regression, a fundamental tool in statistics & machine learning for modeling relationships. Learn key concepts & applications. Optimize your machine learning models with Gradient Descent. Learn techniques, applications, and challenges. Visit now!
{"url":"https://www.flowhunt.io/glossary/roc-curve/","timestamp":"2024-11-03T06:42:20Z","content_type":"text/html","content_length":"113189","record_id":"<urn:uuid:c8138b0b-b6e7-4d3b-a948-98ad008d95c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00477.warc.gz"}
Into Math Grade 4 Module 11 Review Answer Key We included HMH Into Math Grade 4 Answer Key PDF Module 11 Review to make students experts in learning maths. HMH Into Math Grade 4 Module 11 Review Answer Key common factor Choose the correct term from the Vocabulary box to complete the sentence. Question 1. When two or more fractions represent the same number of equal parts of the same-sized whole, they have a ______. When two or more fractions represent the same number of equal parts of the same-sized whole, they have a multiple. Question 2. You can use a ___ to divide the numerator and the denominator to find an equivalent fraction. You can use a common factor to divide the numerator and the denominator to find an equivalent fraction. Concepts and Skills Use multiplication or division to generate an equivalent fraction. Question 3. \(\frac{3}{5}\) = \(\frac{3}{5}\) × \(\frac{3}{3}\) = \(\frac{9}{15}\) \(\frac{3}{5}\) = \(\frac{3}{5}\) × \(\frac{5}{5}\) = \(\frac{15}{25}\) Question 4. \(\frac{2}{4}\) = \(\frac{2}{4}\) × \(\frac{2}{2}\) = \(\frac{4}{18}\) \(\frac{2}{4}\) = \(\frac{2}{4}\) × \(\frac{4}{4}\) = \(\frac{8}{16}\) Question 5. \(\frac{4}{8}\) = \(\frac{4}{8}\) ÷ \(\frac{4}{4}\) = \(\frac{1}{2}\) \(\frac{4}{8}\) = \(\frac{4}{8}\) ÷ \(\frac{8}{8}\) = \(\frac{1}{2}\) ÷ 1 = \(\frac{1}{2}\) Question 6. \(\frac{6}{10}\) = \(\frac{6}{10}\) × \(\frac{6}{6}\) = 1 ÷ \(\frac{10}{6}\) = \(\frac{10}{6}\) = 1 × \(\frac{6}{10}\) = \(\frac{6}{10}\) \(\frac{6}{10}\) = \(\frac{6}{10}\) × \(\frac{10}{10}\) = \(\frac{6}{10}\) ÷ 1 = \(\frac{6}{10}\) Question 7. Use Tools Nate and Tameeka have two same-sized canvases. Nate paints \(\frac{5}{8}\) of his canvas blue. Tameeka paints \(\frac{7}{12}\) of her canvas blue. Who paints more of the canvas blue? Tell what strategy or tool you will use to solve the problem, explain your choice, and then find the answer. Nate paints more of the canvas blue. We used division method to solve the problem. Length of Nate paints the blue canvas = \(\frac{5}{8}\) = 0.625. Length of Tameeka paints the blue canvas = \(\frac{7}{12}\) = 0.583. Compare. Write >, < or =. Question 8. \(\frac{2}{3}\) > \(\frac{1}{8}\). \(\frac{2}{3}\) = 0.67. \(\frac{1}{8}\) = 0.125. Question 9. \(\frac{2}{6}\) < \(\frac{5}{10}\). \(\frac{2}{6}\) = 0.33. \(\frac{5}{10}\) = 0.5. Question 10. \(\frac{6}{12}\) < \(\frac{3}{8}\). \(\frac{6}{12}\) = 0.5. \(\frac{3}{8}\) = 0.375. Write the pair of fractions as a pair of fractions with a common denominator. Question 11. \(\frac{1}{4}\) < \(\frac{5}{6}\) – not having common denominator. \(\frac{1}{4}\) = \(\frac{1}{4}\) = 0.25. \(\frac{5}{6}\) = \(\frac{5}{6}\) = 0.83. Question 12. \(\frac{3}{5}\) > \(\frac{4}{10}\) – are having common denominator. \(\frac{3}{5}\) = \(\frac{3}{5}\) = 0.6. \(\frac{4}{10}\) = \(\frac{2}{5}\) = 0.4. Question 13. Write the fractions in order, from greatest to least: \(\frac{1}{3}\), \(\frac{3}{10}\), \(\frac{2}{8}\). The fractions in order, from greatest to least: \(\frac{2}{8}\), \(\frac{3}{10}\), \(\frac{1}{3}\) The fractions in order, from greatest to least: \(\frac{1}{3}\) = 0.33. \(\frac{3}{10}\) = 0.3. \(\frac{2}{8}\) = 0.25. Question 14. Which fractions are to select all the correct answers. (A) \(\frac{2}{4}\) (B) \(\frac{3}{4}\) (C) \(\frac{3}{16}\) (D) \(\frac{8}{16}\) (E) \(\frac{12}{16}\) (F) \(\frac{16}{18}\) (B) \(\frac{3}{4}\) and(E) \(\frac{12}{16}\) fractions are equivalent. (A) \(\frac{2}{4}\) and (D) \(\frac{8}{16}\) fractions are equivalent. Fractions are equivalent: (A) \(\frac{2}{4}\) = \(\frac{1}{2}\) (B) \(\frac{3}{4}\) = \(\frac{3}{4}\) (C) \(\frac{3}{16}\) = \(\frac{3}{16}\) (D) \(\frac{8}{16}\) = \(\frac{1}{2}\) (E) \(\frac{12}{16}\) = \(\frac{3}{4}\) (F) \(\frac{16}{18}\) = \(\frac{8}{9}\) Question 15. Mariah and Ken make sandwiches that are the same size. Mariah eats \(\frac{2}{3}\) of her sandwich. Ken eats \(\frac{8}{12}\) of his. Did they eat the same amount of sandwich? • Use multiplication to explain your answer. • Shade the visual model and complete the comparison to justify your answer. They both ate the same amount of sandwich. Quantity of her sandwich Mariah eats = \(\frac{2}{3}\) Quantity of his sandwich Ken eats = \(\frac{8}{12}\) = \(\frac{2}{3}\) Leave a Comment You must be logged in to post a comment.
{"url":"https://ccssmathanswers.com/into-math-grade-4-module-11-review-answer-key/","timestamp":"2024-11-07T02:37:28Z","content_type":"text/html","content_length":"260061","record_id":"<urn:uuid:ba981d3d-6206-4eac-a227-45bffdc49888>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00576.warc.gz"}
Samacheer Kalvi 10th Science Guide Chapter 3 Thermal Physics Students can download 10th Science Chapter 3 Thermal Physics Questions and Answers, Notes, Samacheer Kalvi 10th Science Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus, helps students complete homework assignments and to score high marks in board exams. Tamilnadu Samacheer Kalvi 10th Science Solutions Chapter 3 Thermal Physics Samacheer Kalvi 10th Science Thermal Physics Text Book Back Questions and Answers I. Choose the correct answer. Question 1. The value of universal gas constant: (a) 3.81 mol^-1 K^-1 (b) 8.03 mol^-1 K^-1 (c) 1.38 mol^-1 K^-1 (d) 8.31 mol^-1 K^-1 (d) 8.31 mol^-1 K^-1 Question 2. If a substance is heated or cooled, the change in mass of that substance is: (a) positive (b) negative (c) zero (d) none of the above (b) negative Question 3. If a substance is heated or cooled, the linear expansion occurs along the axis of ______. (a) X or -X (b) Y or -Y (c) both (a) and (b) (d) either (a) or (b). (c) both (a) and (b) Hint: When a substance is heated its expansion is positive i,e, can be taken along either +X or +Y direction. But when substance is cooled it’s either length or area or volume decreases i.e. with respect expansion, it is opposite direction i.e. either -X or -Y direction respectively. Question 4. Temperature is the average of the molecules of a substance. (a) difference in K.E and P.E (b) sum of P.E and K.E (c) difference in T.E and P.E (d) difference in K.E and T.E (b) sum of P.E and K.E Question 5. In the Given diagram, the possible direction of heat energy transformation is: (a) A ← B, A ← C, B ← C (b) A → B, A → C, B → C (c) A → B, A ← C, B → C (d) A ← B, A → C, B ← C (a) A ← B, A ← C, B ← C II. Fill in the blanks. 1. The value of Avogadro number ……….. 2. The temperature and heat are ……….. quantities. 3. One calorie is the amount of heat energy required to raise the temperature of ……….. of water through 4. According to Boyle’s law, the shape of the graph between pressure and reciprocal of volume is ………… 1. 6.023 × 10^23 2. Inter convertible 3. 1 gram, 1°C 4. A straight line III. State whether the following statements are true or false, if false explain why? 1. For a given heat in liquid, the apparent expansion is more than that of real expansion. 2. Thermal energy always flows from a system at higher temperature to a system at lower temperature. 3. According to Charles’s law, at constant pressure, the temperature is inversely proportional to volume. 1. True 2. True 3. False – According to Charles law, at constant pressure, the volume is directly proportional to temperature. IV. Match the items in column-I to the items in column-II A. (s) B. (t) C. (p) D. (q) E. (r) V. Assertion and Reason type questions. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. 1. Assertion: There is no effects on other end when one end of the rod is only heated. Reason: Heat always flows from a region of lower temperature to higher temperature of the rod. 2. Assertion: Gas is highly compressible than solid and liquid Reason: Interatomic or intermolecular distance in the gas is comparably high. 1. (b) 2. (a) VI. Answer in briefly. Question 1. Define one calorie. One calorie is defined as the amount of heat energy required to rise the temperature of 1 gram of water through 1°C. Question 2. Distinguish between linear and superficial areal expansion. Question 3. What is the coefficient of cubical expansion? The ratio of increase in the volume of the body per degree rise in temperature to its unit volume is called a coefficient of cubical expansion. Question 4. State Boyle’s law When the temperature of a gas is kept constant, the volume of a fixed mass of gas is inversely proportional to its pressure. P ∝ 1 / V Question 5. State-the law of volume. When the pressure of a gas is kept constant, the volume of a gas is directly proportional to the temperature of the gas. i.e., V ∝ T. \(\frac{\mathrm{V}}{\mathrm{T}}\) = constant. Question 6. Distinguish between ideal gas and real gas. Question 7. What is co-efficient of real expansion? Coefficient of real expansion is defined as the ratio of the true rise in the volume of the liquid per degree rise in temperature to its unit volume. The SI unit of coefficient of real expansion is the K^-1. Question 8. What is the coefficient of apparent expansion? Coefficient of apparent expansion is defined as the ratio of the apparent rise in the volume of the liquid per degree rise in temperature to its unit volume. The SI unit of the coefficient of apparent expansion is K^-1. VII. Numerical problems. Question 1. Find the final temperature of a copper rod whose area of cross section changes from 10 m² to 11 m² due to heating. The copper rod is initially kept at 90 K. (Coefficient of superficial expansion is 0.0021 /K). Change in area ΔA = 11 – 10 = 1 m² Initial temperature T[1] = 90 K Let Final temperature be T[2]K A[0] = 10 m² Coefficient of superficial expansion is α[A] = 0.0021 / k \(\frac{ΔA}{A_0}\) = α[A]ΔT \(\frac{1}{10}\) = 0.0021 ΔT ∴ ΔT = 0.0021 × 10 = 0.021 T[2] – T[1] = 0.021 T[2] – 90 = 0.021 ∴ Final temperature T[2] = 90.021 K Question 2. Calculate the coefficient of cubical expansion of a zinc bar. Whose volume is increased 0.25 m³ from 0.3 m³ due to the change in its temperature of 50 K. Initial volume V[0] = 0.25 m³ Final volume = 0.30 m³ Change in volume ΔV = 0.3 – 0.25 = 0.05 m³ Temperature ΔT = 50K Coefficient of cubical expansion is ∴ Coefficient of Cubical expansion α[v] = 0.004 /K VIII. Answer in detail. Question 1. Derive the ideal gas equation. The ideal gas equation is an equation, which relates all the properties of an ideal gas. An ideal gas obeys Boyle’s law and Charles’s law and Avogadro’s law. According to Boyle’s law, PV = constant ………. (1) According to Charles’s law, \(\frac{V}{T}\) = constant ……… (2) According to Avogadro’s law, \(\frac{V}{T}\) = constant …….. (3) After combining equations (1), (2) and (3), you equation. can get the following \(\frac{V}{nT}\) = constant ……. (4) The above relation is called the combined law of gases. If you consider a gas, which contains µ moles of the gas, the number of atoms contained will be equal to µ times the Avogadro number, N[0]. i.e., n = µN[A] Using equation (5), in equation (4) can be written as \(\frac{PV}{µN_{A}T}\) = constant The value of the constant in the above equation is taken to be K[B], which is called as Boltzmann constant (1.38 × 10^-23 JK^-1). Hence, we have the following equation: \(\frac{PV}{µN_{A}T}\) = K[B] PV = µN[A]K[B]T µN[A]K[B] = R which is termed as universal gas constant whose value is 8.31 J mol^-1 K^-1. PV = RT Ideal gas equation is also called as equation of state because it gives the relation between the state variables and it is used to describe the state of any gas. Question 2. Explain the experiment of measuring the real and apparent expansion of a liquid with a neat diagram. To start with, the liquid whose real and apparent expansion is to be determined is poured in a container up to a level. Mark this level as L[1]. Now, heat the container and the liquid using a burner. Initially, the container receives the thermal energy and it expands. As a result, the volume of the liquid appears to have reduced. Mark this reduced level of liquid as L[2]. On further heating, the thermal energy supplied to the liquid through the container results in the expansion of the liquid. Hence, the level of liquid rises to L[3]. Now, the difference between the levels L[1] and L[3] is called as apparent expansion, and the difference between the levels L[2] and L[3] is called real expansion. The real expansion is always more than that of apparent expansion. Real expansion = L[3] – L[2] Apparent expansion = L[3] – L[1] IX. HOT Question Question 1. If you keep ice at 0°C and water at 0°C in either of your hands, in which hand you will feel more chillness? Why? The hand consisting of ice at 0°C would feel more chillness because, ice undergoes melting. More amount of energy (chillness) is transferred to hand. In addition ice has latent heat of fusion. Samacheer Kalvi 10th Science Thermal Physics Additional Important Questions and Answers I. Choose the correct answer. Question 1. The commonly used scales of temperature are: (a) Kelvin (b) Celsius (c) Fahrenheit (d) All the above (d) All the above Question 2. Ideal gas equation for n mole of gas ____. (a) PT = nRV (b) Pv = nRT (c) PV = nRT (d) PT = RV. (b) Pv = nRT Hint: T represents absolute temperature by t temperature in 0°C. Question 3. The value of 27° C in the kelvin scale: (a) 30 K (b) 300 K (c) 327 K (d) 0 K (b) 300 K Question 4. Kelvin scale has zero reading at temperature _____. (a) 0°C (b) -100°C (c) -273°C (d) -212°C. (c) -273°C Hint: K = C + 273 or C = K – 273 at K = 0, C = -273°. Question 5. The relation between Celsius and kelvin scales of temperature is: (a) K = 273 – C (b) K = C + 273 (c) K= (d) K = C (b) K = C + 273 Question 6. Linear expansion is related to _____. (a) area (b) length (c) volume (d) mass. (b) length Hint: Linear expansion is directly proportional to the original length of rod and rise in temperature. Question 7. For any exchange of heat: (a) Heat gained = Zero (b) Heat lost = Zero (c) Heat gained = Heat lost (d) Heat gained = -heat lost (c) Heat gained = Heat lost Question 8. ………. is the degree of hotness. (a) Heat (b) Calorie (c) Joule (d) Temperature (d) Temperature Question 9. Avogadro’s Number _____ mol. (a) 6.023 × 10^23 (b) 6.025 × 10^25 (c) 6.24 × 10^24 (d) 6.022 × 10^22. (a) 6.023 × 10^23 Hint: N[A] = 6.023 × 10^23 Question 10. If a temperature of 327°C is equivalent to ………. in kelvin scale. (a) 273 K (b) 600 K (c) -527 K (d) -273 K (b) 600 K Question 11. When spirit is poured on our hand, cooling is produced because: (a) Spirit has cooling effect. (b) Spirit has boiling effect. (c) The boiling point of spirit is low. (d) The boiling point of spirit is high. (c) The boiling point of spirit is low. Question 12. Process of transfer of heat through liquid and gases is _____. (a) conduction (b) radiation (c) convection (d) none of these. (c) convection Hint: Heat flows by the conventional current is upward direction by convection method. Question 13. Heat required to melt 1 kg of ice at 0°C is: (a) 226 × 10^2 J (b) 336 × 10^3 J (c) 353 × 10^3 J (d) 3 × 10^5 J (b) 336 × 10^3 J Question 14. Relation between α, β and γ is _____. (a) α = β = γ (b) \(\alpha=\frac{\beta}{2}=3 \gamma\) (c) \(\alpha=\frac{\beta}{2}=\frac{\gamma}{3}\) (d) \(\alpha=\frac{\beta}{2}=\frac{\gamma}{4}\). (c) \(\alpha=\frac{\beta}{2}=\frac{\gamma}{3}\) Hint: (c) \(\alpha=\frac{\beta}{2}=\frac{\gamma}{3}\) (or) 6α = 3β = 2γ. Question 15. When a certain quantity of ice is melting remains the same. (a) Volume (b) Temperature (c) Mass (d) Density (b) Temperature Question 16. Steam causes more severe burns than water at the same temperature because steam: (a) is in vapour state (b) contains less heat than water at the same temperature. (c) contains more heat than water at the same temperature. (d) cause bums by nature. (c) contains more heat than water at the same temperature. Question 17. Which expansion coefficient (α, β, γ) of a substance has the largest and y smallest magnitude? (a) α, β (b) α, γ (c) γ, α (d) β, α. (c) γ, α Hint: As γ is 3 times of α and β is 2 times of α. so α is minimum and γ is maximum. Question 18. According to the principle of mixtures, the heat lost by a hot body is equal to: (a) Heat gained by the surroundings (b) Heat transferred to the surroundings (c) Heat gained by the body (d) None of the above (c) Heat gained by the body Question 19. The quantity of water vapour required to saturate air at high temperature is: (a) Less (b) Temperature (c) More (d) None of the above (c) More Question 20. In steam heater, solids attain constant temperature because: (a) Solid should not be heated less (b) Solid should not be heated more (c) Melting point of solid is 100°C (d) Volume does not change. (c) Melting point of solid is 100°C Question 21. The quantity of water vapour required to saturate air depends on: (a) Pressure of atmosphere (b) Temperature of atmosphere (c) Humidity of atmosphere E (d) All the above (b) Temperature of atmosphere Question 22. Volume of a gas at t°C is given by: (c) V[t] = V[o ](1 + \(\frac{t}{273}\)) Question 23. At a higher temperature to saturate air, ………. quantity of water vapour is required. (a) Less (b) Some (c) More (d) No (c) More Question 24. The relationship between length (L[0]) of a body and change in temperature is: (b) L[0] = \(\frac{ΔL}{α_{L}ΔT}\) Question 25. The S.l unit of coefficient of linear expansion is: (a) °C (b) K^-1 (c) Cal (d) Joule (b) K^-1 Question 26. Coefficient of superficial expansion: (a) is same for all materials (b) is infinity (c) different for different materials (d) is zero (c) different for different materials Question 27. The ratio of change in area of a metal to its original area is \(\frac{ΔA}{A_0}\) = (a) α[A] (b) α[A]ΔT (c) \(\frac{α_A}{ΔT}\) (d) unity (b) α[A]ΔT Question 28. According to Boyle’s law the relation between pressure (P) and volume of a gas is: (a) P ∝ V (b) P = V (c) P ∝\(\frac{1}{V}\) (d) V ∝ P (c) P ∝\(\frac{1}{V}\) Question 29. At constant temperature of a gas: (a) PV = 1 (b) PV = 0 (c) PV = infinity (d) PV = constant (d) PV = constant Question 30. The mathematical form of Charles’s law is: (a) V ∝ \(\frac{1}{T}\) (b) TV = constant (c) \(\frac{V}{T}\) = constant (d) V = T (c) \(\frac{V}{T}\) = constant Question 31. If V is the volume and n is the number of atoms present in it then: (a) V ∝ \(\frac{1}{n}\) (b) V ∝ n (c) V = n (d) \(\frac{n}{V}\) = constant (b) V ∝ n Question 32. \(\frac{V}{n}\) = constant is the mathematical form of: (a) Boyle’s law (b) Charles’s law (c) Avogadro’s law (d) Dalton’s law (c) Avogadro’s law Question 33. Mathematical form of Boyle’s law is: (a) \(\frac{V}{n}\) = constant (b) PT = constant (c) \(\frac{V}{T}\) = constant (d) PV = constant (d) PV = constant Question 34. A gas that obeys Boyle’s law and Charles’s law is called: (a) Gas (b) Ideal gas (c) Perfect gas (d) All the above (b) Ideal gas Question 35. The value of universal gas constant is: (a) 3.81 J/mol/K (b) 8.31 J/mol/K (c) 8.13 (d) 6.81 J/mol/K (b) 8.31 J/mol/K Question 36. The unit of universal gas constant is: (a) \(\frac{J}{K}\) (b) J mol^-1K (c) J/mol/K (d) J K^-1 mol (c) J/mol/K Question 37. If atoms of a gas do not interact with each other than the gas is: (a) natural gas (b) bio gas (c) real gas (d) perfect gas (d) perfect gas Question 38. Mathematical form of ideal gas equation is: (a) PV = T (b) P = RT (c) PV = RT (d) PV = R (c) PV = RT II. Fill in the blanks. 1. The value if 290K in Celsius scale is ………. 2. The value of 37°C in kelvin scale is ………. 3. The value of 323 K in Celsius scale is ………. 4. Transfer of heat is continued until a ………. is established. 5. ……….. produces the sensation of warm. 6. When a body is heated or cooled its ……….. is not altered. 7. For any exchanges of heat …………. = ………… 8. On heating all forms of matter undergo ……….. 9. The coefficient of linear expansion is ………. for ……….. metals. 10. The unit of coefficient of superficial expansion is ………… 11. The coefficient of cubical expansion of liquid is independent of ………… 12. The S.l of unit of coefficient of real expansion is ……….. 13. As per Boyle’s law pressure of a gas is …………. proportional to its volume. 14. PV = constant is the mathematical form of …………. 15. As per Charles’s law volume of a gas is ………… to temperature. 16. According Avogadro’s law volume of a gas is directly proportional ………… present in it. 17. The value of Avogadro’s number is ………… 18. A gas that obey Boyle’s law is ………… 19. A gas that does not obey gas laws then it is ………… 20. A gas in which atoms interact with a force then it is a ……….. 21. For a given heat, the real expansion is ……….. than that of apparent expansion. 22. The equation of state of a gas is ………… 23. Universal gas equation is used to describe the …………. 24. If a gas consists of µ moles then the number of atoms in n = ………… 1. 17°C 2. 310 K 4. thermal equilibrium 5. Heat 6. mass 7. Heat gained, Heat lost 8. expansion 9. different, different 10. K^-1 11. Temperature 12. K^-1 13. inversely 14. Boyle’s law 15. directly proportional 16. number of atoms or molecules 17. 6.023 × 10^23/mol 18. ideal gas 19. real gas 20. real gas 21. more 22. PV = RT 23. state of any gas 24. µN[A], N[A] – Avogadro’s number III. State whether the following statements are true or false, if false explain why? 1. The relation between Fahrenheit and Kelvin scale of temperature is (K) K = (F + 460) × \(\frac{5}{9}\). 2. The relation between Celsius and Kelvin is K = C – 273. 3. Thermal energy is also known as heat energy. 4. When a body is heated volume is not altered. 5. All forms of matter undergo expansion on heating. 6. Longitudinal expansion is given by ΔL = L[0]α[L]ΔT 7. Cubical expansion is same for all materials. 8. The S.l unit of coefficient of apparent expansion is K^-1. 9. As per Boyle’s law PT = constant. 10. According to Avogadro’s law \(\frac{V}{n}\) = constant 1. True 2. False -The relation between Celsius and Kelvin is K= C + 273 3. True 4. False – When a body is heated mass is not altered. 5. True 6. True 7. False – Cubical expansion is different for different materials. 8. True 9. False – As per Boyle’s law PV= constant. 10. True IV. Match the items in column-I to the items in column-II. Question 1. Match the following: A – (s) B – (t) C – (p) D – (q) Question 2. Match the following: A – (s) B – (r) C – (p) D – (q) Question 3. Match the following: A – (t) B – (s) C – (p) D – (q) V. Assertion and reason type questions. Question 1. Assertion: In a pressure cooker, the water starts boiling again on removing its lid. Reason: The impurities in water bring down its boiling point. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the ’ correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (c) The assertion is true but the reason is false. Question 2. Assertion: Air at some distance above the fire is hotter than same distance below it. Reason: Air surrounding the fire carries heat upwards. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. Question 3. Assertion: Woolen clothes keys the body warm in winter. Reason: Air a poor conducts of heat. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. Question 4. Assertion: Temperature near the sea coast is moderate. Reason: Water has a high thermal conductivity. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. Question 5. Assertion: It is hotter over the top of fire than at the same distance on the sides. Reason: Air surrounding the fire conducts more heat upwards. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (c) The assertion is true but the reason is false. Question 6. Assertion: Perspiration from human body helps in cooling the body. Reason: A thin layer of water on the skin enhance its emissivity. (a) Both the assertion and the reason are true and the reason is the correct explanation of the assertion. (b) Both the assertion and the reason are true but the reason is not the correct explanation of the assertion. (c) The assertion is true but the reason is false. (d) The assertion is false but the reason is true. (c) The assertion is true but the reason is false. VI. Answer in briefly Question 1. Define Temperature. Temperature is defined as the property which determines whether a body is in equilibrium or not with the surroundings. Question 2. Why the gas thermometer is more sensitive than Hg thermometer As the thermal (cubical) expansion of gas is much larger than Hg. So gas thermometer is more sensitive than of Hg thermometer. Question 3. What is meant by thermodynamic temperature? The temperature measured in relation to absolute zero using the kelvin scale is known as absolute temperature. It is also known as the thermodynamic temperature. Question 4. What is the relation between different types of scale of temperature? The relation between the different types of scale of temperature: Celsius and Kelvin: K = C + 273, Fahrenheit and Kelvin: [K] = (F + 460) × \(\frac{5}{9}\). 0 K = -273°C. Question 5. Do all liquids expand on heating? give an example. All liquids do not expand on heating. If water is heated from 0°C to 4°C it contracts. Question 6. What will happen if two bodies are at different temperatures brought in contact with one other? There will be a transfer of heat energy from the hot body to the cold body until a thermal equilibrium is established between them. Question 7. What will happen if a cold body is placed in contact with a hot body? Some thermal energy is transferred from the hot body to the cold body. As a result, there is some rise in the temperature of the cold body and decrease in the temperature of the hot body. This process will continue until these two bodies attain the same temperature. Question 8. Why is invar is used in making a clock pendulum or spring to oscillate? Invar an alloy of Ni and steel has extremely low thermal expansion so the change in length in summer and winter will be a very small change, so the time period of oscillation will be very small. Hence the clock gives almost the correct time. Question 9. What is meant by heating? The process in which heat energy flows from a body at a higher temperature to another body at lower temperature is known as heating. Question 10. What is the average velocity of the molecules of an ideal gas? As the velocity components of molecules of an ideal gas, all three axis time and time axis are equal in magnitude so their vector sum will be zero. So every velocity of an ideal gas is zero. Question 11. What changes will occur when heat is given to a substance? 1. Temperature of the substance rises. 2. The substance may change its state from solid to liquid or from liquid to gas. (Hi) The substance will expand when heated. Question 12. Why does the temperature less than zero on the absolute scale not possible. As the absolute temperature (T) is directly proportioned to KE of molecules of gas, and KE of molecules can never be negative so the absolute scale temperature can never be negative. Question 13. What is meant by linear expansion? When a body is heated or cooled, the length of the body changes due to change in its temperature. Then the expansion is said to be linear or longitudinal expansion. Question 14. Write the characteristics of an ideal gas. 1. It obeys all gas laws at all values of temperature pressure. 2. Size of molecules is negligibly small. 3. There is no force of attraction or repulsion between its molecule. Question 15. Mention the relation between change in length and coefficient of linear expansion? The equation relating the change in length and the change in temperature of a body is given below: \(\frac{ΔL}{L_0}\) = α[L]ΔT ΔL – Change in length (Final length – Original length) L[0] – Original length ΔT – Change in temperature (Final temperature – Initial temperature) α[L] – Coefficient of linear expansion. Question 16. What is meant by superficial expansion? If there is an increase in the area of a solid object due to heating, then the expansion is called superficial or areal expansion. Question 17. Define co-efficient of superficial expansion. The ratio of increase in area of the body per degree rise in temperature to its unit area is called as coefficient of superficial expansion. Question 18. State the relation between change in area and change in temperature. \(\frac{ΔA}{A_0}\) = α[A]ΔT ΔA – Change in area (Final area – Initial area) A[0] – Original area ΔT – Change in temperature (Final temperature – Initial temperature) α[A] – Coefficient of superficial expansion. Question 19. What is meant by cubical expansion? If there is an increase in the volume of a solid body due to heating, then the expansion is called cubical or volumetric expansion. Question 20. Write the equation relation the change in volume and the change in temperature. \(\frac{ΔV}{V_0}\) = α[A]ΔT ΔV – Change in volume (Final volume – Initial volume) V[0] – Original volume ΔT – Change in temperature (Final temperature – Initial temperature) α[V] – Coefficient of cubical expansion. Question 21. What is real expansion of a liquid? If a liquid is heated directly without using any container, then the expansion that you observe is termed as real expansion of the liquid. Question 22. What is meant by apparent expansion of a liquid? The expansion of a liquid when observed without considering the expansion of the container is called the apparent expansion of the liquid. Question 23. State Avogadro’s law. Avogadro’s law states that at constant pressure and temperature, the volume of a gas is directly proportional to number of atoms or molecules present in it. i.e., V α n (or) \(\frac{V}{n}\) = constant Question 24. What is Avogadro’s number? Avogadro’s number (N[A]) is the total number of atoms per mole of the substance. It is equal to 6.023 × 10^23/mol. Question 25. What are real gases? If the molecules or atoms of a gases interact with each other with a definite amount of intermolecular or inter atomic force of attraction, then the gases are said to be real gases. Question 26. What is a perfect gas? If the atoms or molecules of a gas do not interact with each other, then the gas is said to be an ideal gas or a perfect gas. Question 27. What is an ideal gas equation? The ideal gas equation is an equation, which relates all the properties of an ideal gas. Question 28. Why is ideal gas equation called as equation of state? Ideal gas equation is also called as equation of state because it gives the relation between the state variables and it is used to describe the state of any gas. Question 29. Define each unit of a thermodynamic scale of temperature. Each unit of the thermodynamic scale of temperature is defined as the fraction of 1/273.16th part of the thermodynamic temperature of the triple point of water. VII. Numerical problems. Question 1. Transform 100°C into K. T (kelvin) = (273 + t°C) K = (273 + 100) K = 373 K 100°C = 373 K Question 2. Convert 23 K into °C. T = 23 K T°C = K – 273 = 23 – 273 = -250°C 23 K = -250°C Question 3. If the gap between steel sails on the railway track of 66 m long is 3.63 cm at 10°C. Then at what value of temperature will just touch of steel is 11 × 10^-6 °C. L[0] = 66 m = 6600 cm α = 11 × 10^-6 °C. ∆L = L[t] – L[0] = 3.63 t[1] = 10°C t[2] = ? \(\alpha=\frac{\Delta \mathrm{L}}{\mathrm{L}_{\mathrm{o}} \Delta \mathrm{T}}\) \(\begin{array}{l}{\Delta \mathrm{T}=\frac{\Delta \mathrm{L}}{\mathrm{L}_{0} \times \alpha}} \\ {\Delta \mathrm{T}=\frac{3.63}{6600 \times 11 \times 10^{-6}}}\end{array}\) ∆T = t[2] – t[1] = 50 ⇒ t[2] – 10 = 50 ⇒ t[2] = 50 + 10 = 60°C so final temperature t[2] = 60°C Question 4. At what temperature do the ratings of Celsius and Fahrenheit scales coincide? Let T[B] = T[B] – x ∴ 180x = 100x – 3200 80x = -3200 x = –\(\frac{3200}{80}\) = -40° x = -40° ∴ Hence -40°C and -40° f are identical Temperature. Question 5. On heating a glass block of 10^5 cm³ from 25°C to 40°C its volume increases by 4 cm³. Calculate the coefficient of (i) Cubical expansion and (ii) Linear expansion Volume V[0] = 10^5 cm³ Change in temperature ΔT = 40 – 25 = 15°C Change in volume ΔV = 4 cm³ (i) The coefficient of cubical expansion is α[V] = 26.67 × 10^-6/°C (ii) Coefficient of linear expansion is α[L] = \(\frac{αV}{3}\) α[L] = \(\frac{26.67}{3}\) × 10^-6 α[L] = 8.89 × 10^-6/°C Question 6. A balloon partially filled with the gas volume 30 m^3 at on surface of the earth where pressure is 76 cm of Hg and temperature is 27°C. What will be the increase in the volume of the gas balloon when it rises to a height where the temperature becomes (-54°C) and pressure become 7.6 cm of Hg. Given, P[1] = 76 cm Hg, P[2] = 7.6 cm of Hg V[1] = 30 m^3, V[2] = ? T[1] = 27 + 273 = 300 K T[2] = -54 + 273 = 219 K By gas equation \(\frac{\mathrm{P}_{1} \mathrm{V}_{1}}{\mathrm{T}_{1}}=\frac{\mathrm{P}_{2} \mathrm{V}_{2}}{\mathrm{T}_{2}}\) \(\mathrm{V}_{2}=\frac{\mathrm{P}_{1} \mathrm{V}_{1} \mathrm{T}_{2}}{\mathrm{T}_{1} \mathrm{P}_{2}}=\frac{76 \times 30 \times 219}{300 \times 7.6}\) = 219 m^3 So increase in volume of gas = 219 – 30 = 189 m^3. Question 7. If the area of metal changes by 0.22% when it is heated through 10°C, then calculate the coefficient of superficial expansion. \(\frac{ΔA}{A}\) = 0.22% = \(\frac{0.22}{100}\) Change in temperature ΔT = 10°C ∴ Coefficient of cubical expansion = 22 × 10^-6/°C Question 8. Using the ideal gas equation determine the value of universal gas constant. It is given that one gram, molecule of a gas at S.T.P occupies 22.4 litres. Pressure P = 1.013 × 10^5 pa Volume V = 22.4 lit I = 22.4 × 10^-3 m³ Temperature T = 273 K For one mole of a gas PV = RT ∴ R = \(\frac{PV}{T}\) = 8.31 J/mol/K Question 9. When a gas filled in a closed vessel is heated through 1°C, its pressure increases by 0.4% what is the initial temperature of the gas? Initial pressure P[1] = P PT + 0.004PT = PT + P 0.004PT = P ∴ T = \(\frac{1}{0.004}\) = 250 K ∴ Initial Temperature of the gas = 250 K Question 10. A vessel of volume 2000 cm³ contains 0.1 mole of O[2] and 0.2 mole of CO[2] . If the temperature of the mixture is 300 K then calculate the pressure exerted by it. n[1] = 0.1; n[2] = 0.2; R = 8.31 J/mol/K; Temperature T = 300K; Volume V = 2000 × 10^-6 m³ Pressure P = P[1] + P[2] P = 3.74 × 10^5 pa VIII. Answer in detail. Question 1. Explain how the loss of heat (or transfer of heat) due to modes of transfer of heat is minimised in a thermos flask. Transfer of heat is thermos is minimised as under: (i) By conduction: As in conduction heat can transfer by contact of a material medium. In thermos, the air is evacuated between the walls so heat transfer is stopped by conduction mode. (ii) By convection: As convection mode also requires material (fluid) medium and there is nothing between the walls of thermos so heat does not transfer by connection mode. (iii) By Radiation: As Ag polish is coated opaque on inner and outer walls of thermos radiation obeys the laws of refraction and reflection so no refraction takes place through opaque wall. Reflection of outer radiation goes outside of the inner wall goes inside. So the transfer of heat is minimised by polishing. Question 2. Explain linear expansion in Solids. When a body is heated or cooled, the length of the body changes due to change in its temperature. Then the expansion is said to be linear or longitudinal expansion. The ratio of increase in length of the body per degree rise in temperature to its unit length is called as the coefficient of linear expansion. The SI unit of Coefficient of Linear expansion is K^-1. The value of coefficient of linear expansion is different for different materials. The equation relating the change in length and the change in temperature of a body is given below: \(\frac{ΔL}{L_0}\) = α[L]ΔT ΔL – Change in length (Final length – Original length) L[0] – Original length ΔT – Change in temperature (Final temperature – Initial temperature) α[L] – Coefficient of linear expansion. Question 3. Write a note on superficial expansion. If there is an increase in the area of a solid object due to heating, then the expansion is called superficial or areal expansion. Superficial expansion is determined in terms of coefficient of superficial expansion. The ratio of increase in area of the body per degree rise in temperature to its unit area is called as coefficient of superficial expansion. Coefficient of superficial expansion is different for different materials. The SI unit of Coefficient of superficial expansion is K^-1. The equation relating to the change in area and the change in temperature is given below: \(\frac{ΔA}{A_0}\) = α[A]ΔT ΔA – Change in area (Final area – Original area) A[0] – Original area ΔT – Change in temperature (Final temperature – Initial temperature) α[A] – Coefficient of superficial expansion. Question 4. What do you know about cubical expansion? If there is an increase in the volume of a solid body due to heating, then the expansion is called cubical or volumetric expansion. As in the cases of linear and areal expansion, cubical expansion is also expressed in terms of coefficient of cubical expansion. The ratio of increase in volume of the body ‘ per degree rise in temperature to its unit volume is called as coefficient of cubical expansion. This is also measured in K^-1. The equation relating to the change in volume and the change in temperature is given below: \(\frac{ΔV}{V_0}\) = α[V]ΔT ΔV – Change in volume (Final volume – Original volume) V[0] – Original volume ΔT – Change in temperature (Final temperature – Initial temperature) α[V] – Coefficient of cubical expansion. IX. Hot Questions. Question 1. At what common temperature a block of wood metal appear equally cold or hot when touched? A body appears hot when touched if heat flows from the body to our hand and vice – versa. If there is no flow of heat across the body and hand, the body can not be identified while it is hot or cold as bodies are in thermal equilibrium with our body. So both block of wood and metal must have the temperature of our body i.e., 37°C. Question 2. At room temperature water does not sublimate from ice to steam. Give reason. The critical temperature of the water is much above room temperature. Question 3. Good conductors of heat are also good conductors of electricity and vice versa why? It is because of the movement of electrons present in the materials. Question 4. When does the Charle’s law fail? By Charle’s law at constant pressure V ∝ T or T ∝ V At T = 0 K volume must be zero but it is impossible, at low-temperature gases does not obey the characterise of the ideal gas. As the molecules come closer and force of attraction and repulsion takes Question 5. When sugar is added to tea it gets cooled, why? When sugar is added to tea, its heat gets shared by sugar. So temperature of tea decreases. Question 6. A metal disc has a hole in it. What happens to the size of the hole, when the disc is heated. The size of the hole increases. Because expansion takes place on heating. Question 7. Can the temperature of a body be negative on the kelvin scale. No, This is because absolute zero on the kelvin scale is the minimum possible temperature.
{"url":"https://samacheer-kalvi.com/samacheer-kalvi-10th-science-guide-chapter-3/","timestamp":"2024-11-08T09:11:50Z","content_type":"text/html","content_length":"102929","record_id":"<urn:uuid:601ef601-beb6-4192-b0f7-6ae5a39242ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00537.warc.gz"}
c - gaussian kernel for smoothing Hello, I study kompyutornoe vision despite Course The Ancient Secrets of Computer Vision and doing homework. Faced with the smoothed image using gaussian kernel. Teacher gives a formula for approximate calculation of the matrix: At the same time his own video lectures at 49:21 on the slide can be seen that the peak function comes practically to 1. but by the same formula can be seen that she herself is in itself can not provide this, because when sigma = 1, the fraction will be approximately equal to 1/6, which means much more than ekspanenta dolzhnabyt one unit, but if x and y = 0 then ekspanenta will be equal to 1. This means the function is approximately equal to 1/6. The text of the homework the teacher says that it is necessary to normalize the matrix. That is, make sure that the sum of the matrix is equal to 1. However, when the amount of sigma = 1 and the size of the matrix 7 * 7 (Ramer matrix homework given by sigma = * 6 + 1 in x and y) is equal to 0.999459, that is, even normalization will not increase so much the peak function. Well homework there is a test in which you create a filter with sigma = 7, which is the teacher looks like: I also get absolutely black square, it is clear for what reasons. In general, I obviously something I do not understand, help to understand that. In any case Daubal normalizitsii code and creating a gaussian matrix: // im - in fact just struktora that stores Number of channels, image width and height. And the image itself is stored as a one-dimensional array of float from 0 to 1. void l1_normalize (image im) Double Sum = 0; for (int chanel = 0; chanel & lt; im.c; chanel ++) for (int row = 0; row & lt; im.h; row ++) for (int column = 0; column & lt; im.w; column ++) sum + = get_pixel (im, column, row, chanel); for (int chanel = 0; chanel & lt; im.c; chanel ++) for (int row = 0; row & lt; im.h; row ++) for (int column = 0; column & lt; im.w; column ++) float pixel = get_pixel (im, column, row, chanel); set_pixel (im, column, row, chanel, pixel / sum); Creating a matrix: image make_gaussian_filter (float sigma) image filter = make_image (6 * sigma + 1, 6 * sigma + 1, 1); for (int y = 0; y & lt; filter.h; y ++) for (int x = 0; x & lt; filter.w; x ++) // find the exhibitor // in the matrix 0 0 - it verhiny left corner, so you need to shift the position half the height and width of the image float a = pow (x - ceil (filter.w / 2) 2) + pow (y - ceil (filter.h / 2) 2); float b = 2 * pow (sigma, 2); float ex = exp (- (a / b)); // I think the main fraction a = 1; b = TWOPI * pow (sigma, 2); // TWOPI = 6.2831853 float value = (a / b) * ex; set_pixel (filter, x, y, 0, value); l1_normalize (filter); return filter; Answer 1, Authority 100% The maximum value of the Gaussian does not reach the unit. But underneath area (the sum of the discrete elements of the matrix) is equal to one – in fact, for this factor before the exponential function and entered For a discrete amount of the matrix after the calculation may differ slightly from the unit, and you can perform normalization by dividing the sum of. However, the unit has reached the peak (it is smaller, the greater the sigma). If you want to render this kernel so that the peak was white – scale it, the the value peak in the output, if the value of the real (image float format) in the range 0..1, and if the whole in the range 0..255, then more and multiply by 255.
{"url":"https://computicket.co.za/c_3-gaussian-kernel-for-smoothing/","timestamp":"2024-11-06T11:05:18Z","content_type":"text/html","content_length":"156050","record_id":"<urn:uuid:aed08d83-37a6-4083-aaf9-e39f584c827a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00625.warc.gz"}
Zero-inflated and Hurdle models for Count Data in R In this workshop we introduces zero-inflated Poisson, zero-inflated negative binomial, and hurdle models for count data, which are two-part models used when more zeros are found in the data than expected with typical count distributions. We will discuss the formulations of the two parts of each model, the interpretation of model parameters, and how to run these models and analyze zero-inflated count data in R. If you would like to run the R code for this workshop, please ensure that your version of R is up to date and install the packages used in the workshop by running the following code in R: install.packages(c(“tidyverse”, “AER”, “pscl”, “MASS”, “performance”, “sandwich”, “sjPlot”, “lmtest”,”emmeans”), dependencies = TRUE) You can download the seminar slides here and the R code for the workshop here. If you are not familiar with Generalized linear models we suggest you take a look at our workshop on Introduction to Generalized Linear Regression Model in R If you are new to R, watch this seminar. And check this one: Introduction to Regression in R or this: Introduction to Linear Regression in R
{"url":"https://stats.oarc.ucla.edu/r/seminars/zero-inflated-and-hurdle-models-for-count-data-in-r/","timestamp":"2024-11-03T03:52:12Z","content_type":"text/html","content_length":"37570","record_id":"<urn:uuid:5e9689df-bef2-40ec-a66a-6cb31c9b7159>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00751.warc.gz"}
Even List - Faculty of Social Sciences - Uppsala University Product List - BeautyQlick thanks. Last edited by ghostanime2001 on Sat Jun 25, 2011 6:56 am, edited 1 time in total. Top. All the versions of this article: < français >. How to define horizontal, vertical and diagonal dots \ ldots,\cdots,\vdots and \ddots. To define dots in Latex, use: – \ ldots for horizontal dots on the line. – \ cdots for horizontal dots above the line. – \ vdots for vertical dots. Schon abgewichst Latex milf escorte i stavanger. Gratis sex dating g spot vibrator Eskort tjejer gbg snuskiga noveller More and more often farm If you are formatting your paper using LaTeX, you will need to set the 10pt option in AT is DOT s DOT u-tokyo DOT ac DOT jp From james.cheney at gmail.com Thu Jan 14 Onward! is more radical, more visionary, and more open than other Everything for Everyone: The Radical Tradition That Is Shaping the Next Economy During the dismal days of the dot-com bust, Google followed the blitzscaling dental dams, or latex gloves that he or she kept stored in the car trunk in plain, Material: Tillverkad av miljövänligt PU-gelskummaterial, giftfritt, PVC och latexfritt. That said, they make a radical improvement in. Engelsk-svensk matematisk ordlista. - math.chalmers.se We typically assume that all variable expressions within the radical are nonnegative. This allows us to focus on simplifying radicals without the technical issues associated with the principal nth https://www.trademarkia.com/ctm/ 2019-03-29 daily https The quality is without exception very good and comes from Malaysia. Radical Rubber belongs to Libidex. Latexcrazy uses as a standard the 0,40mm latex sheets of Radical Rubber for all clothes. As you may have guessed, the command \frac {1} {2} is the one that displays the fraction. The text inside the first pair of braces is the numerator and the text inside the second pair is the denominator. Also, the text size of the fraction changes according to the text around it. But if I were to set the conditional in CL, how would I get the latex for the root to Nice activity I was wondering about the dot grid, and came Choice D is incorrect and represents a misconception about the properties of radicals. Handels medlemsavgift τ Lowercase Greek letter (ordinary). \theta. θ Lowercase Greek letter (ordinary). The variant form is \vartheta ϑ. \times LATEX Mathematical Symbols The more unusual symbols are not defined in base LATEX (NFSS) and require \usepackage{amssymb} 1 Greek and Hebrew letters α \alpha κ \kappa ψ \psi z \digamma ∆ \Delta Θ \Theta 16.2 Math symbols. LaTeX provides almost any mathematical or technical symbol that anyone uses. Code. \cdot. \cdot. √ Radical symbol (ordinary). The LaTeX command \sqrt{} typesets the square root of the argument, with a bar that extends to cover the argument. \swarrow ↙ Southwest-pointing arrow (relation). \tau. Summarizing academic text That said, they make a radical improvement in. Also having a specific round spot for the knee makes it harder to use as during a yoga class, you can't focus and commodity prices wilt, but the United States stands out as a bright spot. ">triflex sterile latex powdered surgical gloves "It was a speech that Kevin the Exchequer George Osborne, Mercer cautioned that radical reforms to abolish Not cum, Seen radical changes over are gluten free. E dating sverige sverige escort latex byxor,, Loved. thai örnsköldsvik, Www sexy dot com levende sexcam gratis. knivsta (knivsta sweden, Sex grov igloo steder i oslo sexy kjole. It's best to persist medical client ,girls prom dresses,should you not spot the optimal but I don't subscribe for your whole approach, all be it radical none the less. Improve this answer. answered Apr 17 '14 at 3:20. Reflex regler cykel skatteverkets allmänna råd sectraab industrivärdengabrielle colette movieaktie bjorn borgeu center scrippsis lamisil dangerousstudiebidrag 2021 belopp Product List - BeautyQlick The square root function is [latex]f(x) = \sqrt{x}[/latex]. The radical means "gate," and the inner element represents one of two things, depending on which scholar you ask: 1. The torii-like shape means "to face, oppose," so the whole character depicts "the two leaves of an open gate facing each other." The layout engine is a fairly direct adaptation of the layout algorithms in Donald Knuth's TeX, so the quality is quite good (matplotlib also provides a usetex option for those who do want to call out to TeX to generate their text (see Text rendering With LaTeX). Any text element can use math text. LaTeX is a markup language to typeset documents. It excels at making math and the overall layout beautiful. Learn how to create top-notch academic papers. Meteorolog lone seirchecklista vinterförvaring husvagn Att skriva rapporter med LAT E X - PDF Gratis nedladdning \cdot. \cdot. Introduction. Basic equations in LaTeX can be easily "programmed", for example: The well known Pythagorean theorem \ (x^2 + y^2 = z^2\) was proved to be invalid for other exponents. Meaning the next equation has no integer solutions: \ [ x^n + y^n = z^n \] As you see, the way the equations are displayed depends on the delimiter, in this case \ [ \] There is also a command \& which is not supported by Wikia's LaTeX parser. Hats, bars, and accents [edit | edit source] Symbols that go above, below, or in the corners of other symbols.
{"url":"https://skattericra.netlify.app/51959/98560.html","timestamp":"2024-11-11T15:01:50Z","content_type":"text/html","content_length":"16490","record_id":"<urn:uuid:b904f8f5-05ca-4c90-8243-4c02b5159dbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00601.warc.gz"}
Determine the magnetic field at point P located at class 12 physics JEE_Main Hint: Determining magnetic field at a point means to determine the magnitude and direction of magnetic field at that point. In this case the magnetic field is due to the infinitely long wire or we can assume it is due to two infinitely long conductors carrying the same amount of current. Complete step by step solution: Let’s start with the position of the point P with respect to the infinitely long wire. At the point of bending of the conductor we can assume that there are two conductors suspending angles $0^\circ $ and $90^\circ $ respectively towards the ends of the conductor. As per Biot-Savarts’ law we know that the magnetic field intensity at a point due to a straight current carrying wire is, $B = \dfrac{{{\mu _0}I}}{{4\pi r}}(\sin {\phi _1} + \sin {\phi _2})$ Where $B$ is the magnetic field intensity $I$ is the steady current flowing through the infinitely long conductor $r$ is the distance of the point where magnetic field is being calculated from the wire ${\phi _1}$ and ${\phi _2}$ are the angles suspended by the ends of the conductor at the point of concern From the figure given in the question we can say that the point P is at angles $0^\circ $ and $90^\circ $ from the both ends of the conductor. The assumption of two conductors was made for ease of We are given that the point P is at a distance \[x\] from the conductor. Substituting these values we get, $ \Rightarrow B = \dfrac{{{\mu _0}i}}{{4\pi x}}(\sin 90^\circ + \sin 0^\circ )$ $ \Rightarrow B = \dfrac{{{\mu _0}i}}{{4\pi x}}$ This is the magnitude of the magnetic field at the point P for the straight current carrying conductor of infinite length. Now for the direction of the magnetic field we can use the right hand thumb rule. By doing so, we get that its direction is perpendicular to the plane of paper and directed inwards. Note:Make sure to estimate the direction of the magnetic field as students often forget about the direction part and only calculate the magnitude. In the right hand thumb rule the direction of thumb is for the direction of current and other fingers point towards the direction of magnetic field. The principle of Biot-Savarts’ law is applied for solving this problem.
{"url":"https://www.vedantu.com/jee-main/determine-the-magnetic-field-at-point-p-located-physics-question-answer","timestamp":"2024-11-08T05:12:50Z","content_type":"text/html","content_length":"148540","record_id":"<urn:uuid:055c2a2f-1211-46e9-ad48-fa263e67b63e>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00379.warc.gz"}
rocSOLVER LAPACK Auxiliary Functions rocSOLVER LAPACK Auxiliary Functions# These are functions that support more advanced LAPACK routines. The auxiliary functions are divided into the following categories: Throughout the APIs’ descriptions, we use the following notations: • i, j, and k are used as general purpose indices. In some legacy LAPACK APIs, k could be a parameter indicating some problem/matrix dimension. • Depending on the context, when it is necessary to index rows, columns and blocks or submatrices, i is assigned to rows, j to columns and k to blocks. l is always used to index matrices/problems in a batch. • x[i] stands for the i-th element of vector x, while A[i,j] represents the element in the i-th row and j-th column of matrix A. Indices are 1-based, i.e. x[1] is the first element of x. • To identify a block in a matrix or a matrix in the batch, k and l are used as sub-indices • x_i \(=x_i\); we sometimes use both notations, \(x_i\) when displaying mathematical equations, and x_i in the text describing the function parameters. • If X is a real vector or matrix, \(X^T\) indicates its transpose; if X is complex, then \(X^H\) represents its conjugate transpose. When X could be real or complex, we use X’ to indicate X transposed or X conjugate transposed, accordingly. • When a matrix A is formed as the product of several matrices, the following notation is used: A=M(1)M(2)…M(t). Vector and Matrix manipulations# rocblas_status rocsolver_zlacgv(rocblas_handle handle, const rocblas_int n, rocblas_double_complex *x, const rocblas_int incx)# rocblas_status rocsolver_clacgv(rocblas_handle handle, const rocblas_int n, rocblas_float_complex *x, const rocblas_int incx)# LACGV conjugates the complex vector x. It conjugates the n entries of a complex vector x with increment incx. ☆ handle – [in] rocblas_handle. ☆ n – [in] rocblas_int. n >= 0. The dimension of vector x. ☆ x – [inout] pointer to type. Array on the GPU of size at least n (size depends on the value of incx). On entry, the vector x. On exit, each entry is overwritten with its conjugate value. ☆ incx – [in] rocblas_int. incx != 0. The distance between two consecutive elements of x. If incx is negative, the elements of x are indexed in reverse order. Bidiagonal forms# rocblas_status rocsolver_dbdsvdx(rocblas_handle handle, const rocblas_fill uplo, const rocblas_svect svect, const rocblas_srange srange, const rocblas_int n, double *D, double *E, const double vl, const double vu, const rocblas_int il, const rocblas_int iu, rocblas_int *nsv, double *S, double *Z, const rocblas_int ldz, rocblas_int *ifail, rocblas_int *info)# rocblas_status rocsolver_sbdsvdx(rocblas_handle handle, const rocblas_fill uplo, const rocblas_svect svect, const rocblas_srange srange, const rocblas_int n, float *D, float *E, const float vl, const float vu, const rocblas_int il, const rocblas_int iu, rocblas_int *nsv, float *S, float *Z, const rocblas_int ldz, rocblas_int *ifail, rocblas_int *info)# BDSVDX computes a set of singular values of a bidiagonal matrix B. This function computes all the singular values of B, all the singular values in the half-open interval \([vl, vu)\), or the il-th through iu-th singular values, depending on the value of srange. Depending on the value of svect, the corresponding singular vectors will be computed and stored as blocks in the output matrix Z. That is, \[\begin{split} Z = \left[\begin{array}{c} U\\ V \end{array}\right] \end{split}\] where U contains the corresponding left singular vectors of B, and V contains the corresponding right singular vectors. ☆ handle – [in] rocblas_handle. ☆ uplo – [in] rocblas_fill. Specifies whether B is upper or lower bidiagonal. ☆ svect – [in] rocblas_svect. Specifies how the singular vectors are computed. Only rocblas_svect_none and rocblas_svect_singular are accepted. ☆ srange – [in] rocblas_srange. Specifies the type of range or interval of the singular values to be computed. ☆ n – [in] rocblas_int. n >= 0. The order of the bidiagonal matrix B. ☆ D – [in] pointer to real type. Array on the GPU of dimension n. The diagonal elements of the bidiagonal matrix. ☆ E – [in] pointer to real type. Array on the GPU of dimension n-1. The off-diagonal elements of the bidiagonal matrix. ☆ vl – [in] real type. 0 <= vl < vu. The lower bound of the search interval [vl, vu). Ignored if srange indicates to look for all the singular values of B or the singular values within a set of indices. ☆ vu – [in] real type. 0 <= vl < vu. The upper bound of the search interval [vl, vu). Ignored if srange indicates to look for all the singular values of B or the singular values within a set of indices. ☆ il – [in] rocblas_int. il = 1 if n = 0; 1 <= il <= iu otherwise. The index of the largest singular value to be computed. Ignored if srange indicates to look for all the singular values of B or the singular values in a half-open interval. ☆ iu – [in] rocblas_int. iu = 0 if n = 0; 1 <= il <= iu otherwise. The index of the smallest singular value to be computed. Ignored if srange indicates to look for all the singular values of B or the singular values in a half-open interval. ☆ nsv – [out] pointer to a rocblas_int on the GPU. The total number of singular values found. If srange is rocblas_srange_all, nsv = n. If srange is rocblas_srange_index, nsv = iu - il + 1. Otherwise, 0 <= nsv <= n. ☆ S – [out] pointer to real type. Array on the GPU of dimension nsv. The first nsv elements contain the computed singular values in descending order. Note: If srange is rocblas_srange_value, then the value of nsv is not known in advance. In this case, the user should ensure that S is large enough to hold n values. ☆ Z – [out] pointer to real type. Array on the GPU of dimension ldz*nsv. If info = 0, the first nsv columns contain the computed singular vectors corresponding to the singular values in S. The first n rows of Z contain the matrix U, and the next n rows contain the matrix V. Not referenced if svect is rocblas_svect_none. Note: If srange is rocblas_srange_value, then the value of nsv is not known in advance. In this case, the user should ensure that Z is large enough to hold n columns. ☆ ldz – [in] rocblas_int. ldz >= 2*n if svect is rocblas_svect_singular; ldz >= 1 otherwise. Specifies the leading dimension of Z. ☆ ifail – [out] pointer to rocblas_int. Array on the GPU of dimension n. If info = 0, the first nsv elements of ifail are zero. Otherwise, contains the indices of those eigenvectors that failed to converge, as returned by STEIN. Not referenced if svect is rocblas_svect_none. ☆ info – [out] pointer to a rocblas_int on the GPU. If info = 0, successful exit. If info = i > 0, i eigenvectors did not converge in STEIN; their indices are stored in ifail. Tridiagonal forms# rocblas_status rocsolver_dsterf(rocblas_handle handle, const rocblas_int n, double *D, double *E, rocblas_int *info)# rocblas_status rocsolver_ssterf(rocblas_handle handle, const rocblas_int n, float *D, float *E, rocblas_int *info)# STERF computes the eigenvalues of a symmetric tridiagonal matrix. The eigenvalues of the symmetric tridiagonal matrix are computed by the Pal-Walker-Kahan variant of the QL/QR algorithm, and returned in increasing order. The matrix is not represented explicitly, but rather as the array of diagonal elements D and the array of symmetric off-diagonal elements E. ☆ handle – [in] rocblas_handle. ☆ n – [in] rocblas_int. n >= 0. The number of rows and columns of the tridiagonal matrix. ☆ D – [inout] pointer to real type. Array on the GPU of dimension n. On entry, the diagonal elements of the tridiagonal matrix. On exit, if info = 0, the eigenvalues in increasing order. If info > 0, the diagonal elements of a tridiagonal matrix that is similar to the original matrix (i.e. has the same eigenvalues). ☆ E – [inout] pointer to real type. Array on the GPU of dimension n-1. On entry, the off-diagonal elements of the tridiagonal matrix. On exit, if info = 0, this array converges to zero. If info > 0, the off-diagonal elements of a tridiagonal matrix that is similar to the original matrix (i.e. has the same eigenvalues). ☆ info – [out] pointer to a rocblas_int on the GPU. If info = 0, successful exit. If info = i > 0, STERF did not converge. i elements of E did not converge to zero. rocblas_status rocsolver_dstebz(rocblas_handle handle, const rocblas_erange erange, const rocblas_eorder eorder, const rocblas_int n, const double vl, const double vu, const rocblas_int il, const rocblas_int iu, const double abstol, double *D, double *E, rocblas_int *nev, rocblas_int *nsplit, double *W, rocblas_int *iblock, rocblas_int *isplit, rocblas_int *info)# rocblas_status rocsolver_sstebz(rocblas_handle handle, const rocblas_erange erange, const rocblas_eorder eorder, const rocblas_int n, const float vl, const float vu, const rocblas_int il, const rocblas_int iu, const float abstol, float *D, float *E, rocblas_int *nev, rocblas_int *nsplit, float *W, rocblas_int *iblock, rocblas_int *isplit, rocblas_int *info)# STEBZ computes a set of eigenvalues of a symmetric tridiagonal matrix T. This function computes all the eigenvalues of T, all the eigenvalues in the half-open interval (vl, vu], or the il-th through iu-th eigenvalues, depending on the value of erange. The eigenvalues are returned in increasing order either for the entire matrix, or grouped by independent diagonal blocks (if they exist), depending on the value of eorder. ☆ handle – [in] rocblas_handle. ☆ erange – [in] rocblas_erange. Specifies the type of range or interval of the eigenvalues to be computed. ☆ eorder – [in] rocblas_eorder. Specifies whether the computed eigenvalues will be ordered by their position in the entire spectrum, or grouped by independent diagonal (split off) blocks. ☆ n – [in] rocblas_int. n >= 0. The order of the tridiagonal matrix T. ☆ vl – [in] real type. vl < vu. The lower bound of the search interval (vl, vu]. Ignored if erange indicates to look for all the eigenvalues of T or the eigenvalues within a set of indices. ☆ vu – [in] real type. vl < vu. The upper bound of the search interval (vl, vu]. Ignored if erange indicates to look for all the eigenvalues of T or the eigenvalues within a set of indices. ☆ il – [in] rocblas_int. il = 1 if n = 0; 1 <= il <= iu otherwise. The index of the smallest eigenvalue to be computed. Ignored if erange indicates to look for all the eigenvalues of T or the eigenvalues in a half-open interval. ☆ iu – [in] rocblas_int. iu = 0 if n = 0; 1 <= il <= iu otherwise. The index of the largest eigenvalue to be computed. Ignored if erange indicates to look for all the eigenvalues of T or the eigenvalues in a half-open interval. ☆ abstol – [in] real type. The absolute tolerance. An eigenvalue is considered to be located if it lies in an interval whose width is <= abstol. If abstol is negative, then machine-epsilon times the 1-norm of the tridiagonal form of A will be used as tolerance. If abstol=0, then the tolerance will be set to twice the underflow threshold; this is the tolerance that could get the most accurate results. ☆ D – [in] pointer to real type. Array on the GPU of dimension n. The diagonal elements of the tridiagonal matrix. ☆ E – [in] pointer to real type. Array on the GPU of dimension n-1. The off-diagonal elements of the tridiagonal matrix. ☆ nev – [out] pointer to a rocblas_int on the GPU. The total number of eigenvalues found. ☆ nsplit – [out] pointer to a rocblas_int on the GPU. The number of split off blocks in the matrix. ☆ W – [out] pointer to real type. Array on the GPU of dimension n. The first nev elements contain the computed eigenvalues. (The remaining elements can be used as workspace for internal ☆ iblock – [out] pointer to rocblas_int. Array on the GPU of dimension n. The block indices corresponding to each eigenvalue. When matrix T has split off blocks (nsplit > 1), then if iblock [i] = k, the eigenvalue W[i] belongs to the k-th diagonal block from the top. ☆ isplit – [out] pointer to rocblas_int. Array on the GPU of dimension n. The splitting indices that divide the tridiagonal matrix into diagonal blocks. The k-th block stretches from the end of the (k-1)-th block (or the top left corner of the tridiagonal matrix, in the case of the 1st block) to the isplit[k]-th row/column. ☆ info – [out] pointer to a rocblas_int on the GPU. If info = 0, successful exit. If info = 1, the bisection did not converge for some eigenvalues, i.e. the returned values are not as accurate as the given tolerance. The non-converged eigenvalues are flagged by negative entries in iblock. Orthonormal matrices# rocblas_status rocsolver_dorg2r(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorg2r(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORG2R generates an m-by-n Matrix Q with orthonormal columns. (This is the unblocked version of the algorithm). The matrix Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k). \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQRF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQRF, with the Householder vectors in the first k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. rocblas_status rocsolver_dorgqr(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorgqr(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORGQR generates an m-by-n Matrix Q with orthonormal columns. (This is the blocked version of the algorithm). The matrix Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQRF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQRF, with the Householder vectors in the first k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. rocblas_status rocsolver_dorgl2(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorgl2(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORGL2 generates an m-by-n Matrix Q with orthonormal rows. (This is the unblocked version of the algorithm). The matrix Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GELQF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. 0 <= m <= n. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= m. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GELQF, with the Householder vectors in the first k rows. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. rocblas_status rocsolver_dorglq(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorglq(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORGLQ generates an m-by-n Matrix Q with orthonormal rows. (This is the blocked version of the algorithm). The matrix Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GELQF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. 0 <= m <= n. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= m. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GELQF, with the Householder vectors in the first k rows. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. rocblas_status rocsolver_dorg2l(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorg2l(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORG2L generates an m-by-n Matrix Q with orthonormal columns. (This is the unblocked version of the algorithm). The matrix Q is defined as the last n columns of the product of k Householder reflectors of order m \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQLF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQLF, with the Householder vectors in the last k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. rocblas_status rocsolver_dorgql(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorgql(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORGQL generates an m-by-n Matrix Q with orthonormal columns. (This is the blocked version of the algorithm). The matrix Q is defined as the last n column of the product of k Householder reflectors of order m \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQLF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQLF, with the Householder vectors in the last k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. rocblas_status rocsolver_dorgbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorgbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv)# ORGBR generates an m-by-n Matrix Q with orthonormal rows or columns. If storev is column-wise, then the matrix Q has orthonormal columns. If m >= k, Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k) \] If m < k, Q is defined as the product of Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(m-1) \] On the other hand, if storev is row-wise, then the matrix Q has orthonormal rows. If n > k, Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)H(k-1)\cdots H(1) \] If n <= k, Q is defined as the product of Householder reflectors of order n \[ Q = H(n-1)H(n-2)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEBRD in its arguments A and tauq or taup. ☆ handle – [in] rocblas_handle. ☆ storev – [in] rocblas_storev. Specifies whether to work column-wise or row-wise. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. If row-wise, then min(n,k) <= m <= n. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. If column-wise, then min(m,k) <= n <= m. ☆ k – [in] rocblas_int. k >= 0. The number of columns (if storev is column-wise) or rows (if row-wise) of the original matrix reduced by GEBRD. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the Householder vectors as returned by GEBRD. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension min(m,k) if column-wise, or min(n,k) if row-wise. The Householder scalars as returned by GEBRD. rocblas_status rocsolver_dorgtr(rocblas_handle handle, const rocblas_fill uplo, const rocblas_int n, double *A, const rocblas_int lda, double *ipiv)# rocblas_status rocsolver_sorgtr(rocblas_handle handle, const rocblas_fill uplo, const rocblas_int n, float *A, const rocblas_int lda, float *ipiv)# ORGTR generates an n-by-n orthogonal Matrix Q. Q is defined as the product of n-1 Householder reflectors of order n. If uplo indicates upper, then Q has the form \[ Q = H(n-1)H(n-2)\cdots H(1) \] On the other hand, if uplo indicates lower, then Q has the form \[ Q = H(1)H(2)\cdots H(n-1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by SYTRD in its arguments A and tau. ☆ handle – [in] rocblas_handle. ☆ uplo – [in] rocblas_fill. Specifies whether the SYTRD factorization was upper or lower triangular. If uplo indicates lower (or upper), then the upper (or lower) part of A is not used. ☆ n – [in] rocblas_int. n >= 0. The number of rows and columns of the matrix Q. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the Householder vectors as returned by SYTRD. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= n. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension n-1. The Householder scalars as returned by SYTRD. rocblas_status rocsolver_dorm2r(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sorm2r(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORM2R multiplies a matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2) \cdots H(k) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QR factorization GEQRF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQRF in the first k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, or lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dormqr(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sormqr(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORMQR multiplies a matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QR factorization GEQRF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQRF in the first k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, or lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dorml2(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sorml2(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORML2 multiplies a matrix Q with orthonormal rows by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the LQ factorization GELQF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*m if side is left, or lda*n if side is right. The Householder vectors as returned by GELQF in the first k rows of its argument A. ☆ lda – [in] rocblas_int. lda >= k. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dormlq(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sormlq(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORMLQ multiplies a matrix Q with orthonormal rows by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the LQ factorization GELQF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*m if side is left, or lda*n if side is right. The Householder vectors as returned by GELQF in the first k rows of its argument A. ☆ lda – [in] rocblas_int. lda >= k. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dorm2l(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sorm2l(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORM2L multiplies a matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QL factorization GEQLF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQLF in the last k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dormql(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sormql(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORMQL multiplies a matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QL factorization GEQLF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQLF in the last k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dormbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sormbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORMBR multiplies a matrix Q with orthonormal rows or columns by a general m-by-n matrix C. If storev is column-wise, then the matrix Q has orthonormal columns. If storev is row-wise, then the matrix Q has orthonormal rows. The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] The order q of the orthogonal matrix Q is q = m if applying from the left, or q = n if applying from the right. When storev is column-wise, if q >= k, then Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k), \] and if q < k, then Q is defined as the product \[ Q = H(1)H(2)\cdots H(q-1). \] When storev is row-wise, if q > k, then Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k), \] and if q <= k, Q is defined as the product \[ Q = H(1)H(2)\cdots H(q-1). \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors and scalars as returned by GEBRD in its arguments A and tauq or taup. ☆ handle – [in] rocblas_handle. ☆ storev – [in] rocblas_storev. Specifies whether to work column-wise or row-wise. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0. The number of columns (if storev is column-wise) or rows (if row-wise) of the original matrix reduced by GEBRD. ☆ A – [in] pointer to type. Array on the GPU of size lda*min(q,k) if column-wise, or lda*q if row-wise. The Householder vectors as returned by GEBRD. ☆ lda – [in] rocblas_int. lda >= q if column-wise, or lda >= min(q,k) if row-wise. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least min(q,k). The Householder scalars as returned by GEBRD. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_dormtr(rocblas_handle handle, const rocblas_side side, const rocblas_fill uplo, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, double *A, const rocblas_int lda, double *ipiv, double *C, const rocblas_int ldc)# rocblas_status rocsolver_sormtr(rocblas_handle handle, const rocblas_side side, const rocblas_fill uplo, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, float *A, const rocblas_int lda, float *ipiv, float *C, const rocblas_int ldc)# ORMTR multiplies an orthogonal matrix Q by a general m-by-n matrix C. The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^TC & \: \text{Transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^T & \: \text {Transpose from the right.} \end{array} \end{split}\] The order q of the orthogonal matrix Q is q = m if applying from the left, or q = n if applying from the right. Q is defined as a product of q-1 Householder reflectors. If uplo indicates upper, then Q has the form \[ Q = H(q-1)H(q-2)\cdots H(1). \] On the other hand, if uplo indicates lower, then Q has the form \[ Q = H(1)H(2)\cdots H(q-1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors and scalars as returned by SYTRD in its arguments A and tau. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ uplo – [in] rocblas_fill. Specifies whether the SYTRD factorization was upper or lower triangular. If uplo indicates lower (or upper), then the upper (or lower) part of A is not used. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ A – [in] pointer to type. Array on the GPU of size lda*q. On entry, the Householder vectors as returned by SYTRD. ☆ lda – [in] rocblas_int. lda >= q. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least q-1. The Householder scalars as returned by SYTRD. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. Unitary matrices# rocblas_status rocsolver_zung2r(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cung2r(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNG2R generates an m-by-n complex Matrix Q with orthonormal columns. (This is the unblocked version of the algorithm). The matrix Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQRF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQRF, with the Householder vectors in the first k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. rocblas_status rocsolver_zungqr(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cungqr(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGQR generates an m-by-n complex Matrix Q with orthonormal columns. (This is the blocked version of the algorithm). The matrix Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k) \] Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQRF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQRF, with the Householder vectors in the first k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. rocblas_status rocsolver_zungl2(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cungl2(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGL2 generates an m-by-n complex Matrix Q with orthonormal rows. (This is the unblocked version of the algorithm). The matrix Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)^HH(k-1)^H\cdots H(1)^H \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GELQF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. 0 <= m <= n. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= m. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GELQF, with the Householder vectors in the first k rows. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. rocblas_status rocsolver_zunglq(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cunglq(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGLQ generates an m-by-n complex Matrix Q with orthonormal rows. (This is the blocked version of the algorithm). The matrix Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)^HH(k-1)^H\cdots H(1)^H \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GELQF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. 0 <= m <= n. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= m. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GELQF, with the Householder vectors in the first k rows. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. rocblas_status rocsolver_zung2l(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cung2l(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNG2L generates an m-by-n complex Matrix Q with orthonormal columns. (This is the unblocked version of the algorithm). The matrix Q is defined as the last n columns of the product of k Householder reflectors of order m \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQLF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQLF, with the Householder vectors in the last k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. rocblas_status rocsolver_zungql(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cungql(rocblas_handle handle, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGQL generates an m-by-n complex Matrix Q with orthonormal columns. (This is the blocked version of the algorithm). The matrix Q is defined as the last n columns of the product of k Householder reflectors of order m \[ Q = H(k)H(k-1)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEQLF. ☆ handle – [in] rocblas_handle. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. ☆ n – [in] rocblas_int. 0 <= n <= m. The number of columns of the matrix Q. ☆ k – [in] rocblas_int. 0 <= k <= n. The number of Householder reflectors. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the matrix A as returned by GEQLF, with the Householder vectors in the last k columns. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. rocblas_status rocsolver_zungbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cungbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGBR generates an m-by-n complex Matrix Q with orthonormal rows or columns. If storev is column-wise, then the matrix Q has orthonormal columns. If m >= k, Q is defined as the first n columns of the product of k Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(k) \] If m < k, Q is defined as the product of Householder reflectors of order m \[ Q = H(1)H(2)\cdots H(m-1) \] On the other hand, if storev is row-wise, then the matrix Q has orthonormal rows. If n > k, Q is defined as the first m rows of the product of k Householder reflectors of order n \[ Q = H(k)H(k-1)\cdots H(1) \] If n <= k, Q is defined as the product of Householder reflectors of order n \[ Q = H(n-1)H(n-2)\cdots H(1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by GEBRD in its arguments A and tauq or taup. ☆ handle – [in] rocblas_handle. ☆ storev – [in] rocblas_storev. Specifies whether to work column-wise or row-wise. ☆ m – [in] rocblas_int. m >= 0. The number of rows of the matrix Q. If row-wise, then min(n,k) <= m <= n. ☆ n – [in] rocblas_int. n >= 0. The number of columns of the matrix Q. If column-wise, then min(m,k) <= n <= m. ☆ k – [in] rocblas_int. k >= 0. The number of columns (if storev is column-wise) or rows (if row-wise) of the original matrix reduced by GEBRD. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the Householder vectors as returned by GEBRD. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension min(m,k) if column-wise, or min(n,k) if row-wise. The Householder scalars as returned by GEBRD. rocblas_status rocsolver_zungtr(rocblas_handle handle, const rocblas_fill uplo, const rocblas_int n, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv)# rocblas_status rocsolver_cungtr(rocblas_handle handle, const rocblas_fill uplo, const rocblas_int n, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv)# UNGTR generates an n-by-n unitary Matrix Q. Q is defined as the product of n-1 Householder reflectors of order n. If uplo indicates upper, then Q has the form \[ Q = H(n-1)H(n-2)\cdots H(1) \] On the other hand, if uplo indicates lower, then Q has the form \[ Q = H(1)H(2)\cdots H(n-1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors \(v_i\) and scalars \(\text{ipiv}[i]\), as returned by HETRD in its arguments A and tau. ☆ handle – [in] rocblas_handle. ☆ uplo – [in] rocblas_fill. Specifies whether the HETRD factorization was upper or lower triangular. If uplo indicates lower (or upper), then the upper (or lower) part of A is not used. ☆ n – [in] rocblas_int. n >= 0. The number of rows and columns of the matrix Q. ☆ A – [inout] pointer to type. Array on the GPU of dimension lda*n. On entry, the Householder vectors as returned by HETRD. On exit, the computed matrix Q. ☆ lda – [in] rocblas_int. lda >= m. Specifies the leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension n-1. The Householder scalars as returned by HETRD. rocblas_status rocsolver_zunm2r(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunm2r(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNM2R multiplies a complex matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QR factorization GEQRF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQRF in the first k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, or lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunmqr(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunmqr(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNMQR multiplies a complex matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QR factorization GEQRF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQRF in the first k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, or lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQRF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunml2(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunml2(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNML2 multiplies a complex matrix Q with orthonormal rows by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)^HH(k-1)^H\cdots H(1)^H \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the LQ factorization GELQF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*m if side is left, or lda*n if side is right. The Householder vectors as returned by GELQF in the first k rows of its argument A. ☆ lda – [in] rocblas_int. lda >= k. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunmlq(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunmlq(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNMLQ multiplies a complex matrix Q with orthonormal rows by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)^HH(k-1)^H\cdots H(1)^H \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the LQ factorization GELQF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*m if side is left, or lda*n if side is right. The Householder vectors as returned by GELQF in the first k rows of its argument A. ☆ lda – [in] rocblas_int. lda >= k. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GELQF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunm2l(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunm2l(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNM2L multiplies a complex matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the unblocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QL factorization GEQLF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQLF in the last k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunmql(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A , const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunmql(rocblas_handle handle, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNMQL multiplies a complex matrix Q with orthonormal columns by a general m-by-n matrix C. (This is the blocked version of the algorithm). The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] Q is defined as the product of k Householder reflectors \[ Q = H(k)H(k-1)\cdots H(1) \] of order m if applying from the left, or n if applying from the right. Q is never stored, it is calculated from the Householder vectors and scalars returned by the QL factorization GEQLF. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0; k <= m if side is left, k <= n if side is right. The number of Householder reflectors that form Q. ☆ A – [in] pointer to type. Array on the GPU of size lda*k. The Householder vectors as returned by GEQLF in the last k columns of its argument A. ☆ lda – [in] rocblas_int. lda >= m if side is left, lda >= n if side is right. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least k. The Householder scalars as returned by GEQLF. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunmbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunmbr(rocblas_handle handle, const rocblas_storev storev, const rocblas_side side, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, const rocblas_int k, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNMBR multiplies a complex matrix Q with orthonormal rows or columns by a general m-by-n matrix C. If storev is column-wise, then the matrix Q has orthonormal columns. If storev is row-wise, then the matrix Q has orthonormal rows. The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] The order q of the unitary matrix Q is q = m if applying from the left, or q = n if applying from the right. When storev is column-wise, if q >= k, then Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k), \] and if q < k, then Q is defined as the product \[ Q = H(1)H(2)\cdots H(q-1). \] When storev is row-wise, if q > k, then Q is defined as the product of k Householder reflectors \[ Q = H(1)H(2)\cdots H(k), \] and if q <= k, Q is defined as the product \[ Q = H(1)H(2)\cdots H(q-1). \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors and scalars as returned by GEBRD in its arguments A and tauq or taup. ☆ handle – [in] rocblas_handle. ☆ storev – [in] rocblas_storev. Specifies whether to work column-wise or row-wise. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ k – [in] rocblas_int. k >= 0. The number of columns (if storev is column-wise) or rows (if row-wise) of the original matrix reduced by GEBRD. ☆ A – [in] pointer to type. Array on the GPU of size lda*min(q,k) if column-wise, or lda*q if row-wise. The Householder vectors as returned by GEBRD. ☆ lda – [in] rocblas_int. lda >= q if column-wise, or lda >= min(q,k) if row-wise. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least min(q,k). The Householder scalars as returned by GEBRD. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C. rocblas_status rocsolver_zunmtr(rocblas_handle handle, const rocblas_side side, const rocblas_fill uplo, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, rocblas_double_complex *A, const rocblas_int lda, rocblas_double_complex *ipiv, rocblas_double_complex *C, const rocblas_int ldc)# rocblas_status rocsolver_cunmtr(rocblas_handle handle, const rocblas_side side, const rocblas_fill uplo, const rocblas_operation trans, const rocblas_int m, const rocblas_int n, rocblas_float_complex *A, const rocblas_int lda, rocblas_float_complex *ipiv, rocblas_float_complex *C, const rocblas_int ldc)# UNMTR multiplies a unitary matrix Q by a general m-by-n matrix C. The matrix Q is applied in one of the following forms, depending on the values of side and trans: \[\begin{split} \begin{array}{cl} QC & \: \text{No transpose from the left,}\\ Q^HC & \: \text{Conjugate transpose from the left,}\\ CQ & \: \text{No transpose from the right, and}\\ CQ^H & \: \ text{Conjugate transpose from the right.} \end{array} \end{split}\] The order q of the unitary matrix Q is q = m if applying from the left, or q = n if applying from the right. Q is defined as a product of q-1 Householder reflectors. If uplo indicates upper, then Q has the form \[ Q = H(q-1)H(q-2)\cdots H(1). \] On the other hand, if uplo indicates lower, then Q has the form \[ Q = H(1)H(2)\cdots H(q-1) \] The Householder matrices \(H(i)\) are never stored, they are computed from its corresponding Householder vectors and scalars as returned by HETRD in its arguments A and tau. ☆ handle – [in] rocblas_handle. ☆ side – [in] rocblas_side. Specifies from which side to apply Q. ☆ uplo – [in] rocblas_fill. Specifies whether the HETRD factorization was upper or lower triangular. If uplo indicates lower (or upper), then the upper (or lower) part of A is not used. ☆ trans – [in] rocblas_operation. Specifies whether the matrix Q or its conjugate transpose is to be applied. ☆ m – [in] rocblas_int. m >= 0. Number of rows of matrix C. ☆ n – [in] rocblas_int. n >= 0. Number of columns of matrix C. ☆ A – [in] pointer to type. Array on the GPU of size lda*q. On entry, the Householder vectors as returned by HETRD. ☆ lda – [in] rocblas_int. lda >= q. Leading dimension of A. ☆ ipiv – [in] pointer to type. Array on the GPU of dimension at least q-1. The Householder scalars as returned by HETRD. ☆ C – [inout] pointer to type. Array on the GPU of size ldc*n. On entry, the matrix C. On exit, it is overwritten with Q*C, C*Q, Q’*C, or C*Q’. ☆ ldc – [in] rocblas_int. ldc >= m. Leading dimension of C.
{"url":"https://rocm.docs.amd.com/projects/rocSOLVER/en/docs-6.2.1/reference/auxiliary.html","timestamp":"2024-11-02T14:15:29Z","content_type":"text/html","content_length":"626300","record_id":"<urn:uuid:68f79bb3-894a-4deb-9fc2-01ef60cde593>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00508.warc.gz"}
SQL Transitivity -- Community Note Virtuoso SQL supports tree and graph data structures represented as relations operated on through the use of Transitive SQL Subqueries. A derived table (i.e., the product of a SELECT query within a FROM clause) can be declared to be transitive. This is achieved by applying the TRANSITIVE modifier to the SELECT keyword alongside other conventional modifiers such as DISTINCT or TOP. The syntax of this modifier is as follow: transitive_decl ::= TRANSITIVE <trans_option>[, ...] trans_opt :: = | T_MIN ( intnum ) | T_MAX ( intnum ) | T_DISTINCT | T_EXISTS | T_NO_CYCLES | T_CYCLES_ONLY | T_NO_ORDER | T_SHORTEST_ONLY | T_IN <position_list> | T_OUT ( <position_list ) | T_END_FLAG ( intnum ) | T_FINAL_AS name | T_STEP ( <position_or_path_spec> ) | T_DIRECTION (intnum) position_list ::= intnum [,...] position_or_path_spec ::= intnum | 'step_no' | 'path_id' A transitive derived table is a projection (also known as a query solution) that may comprise four different types of columns: • Input • Output • Step Data • Special When a transitive derived table occurs in a query, the enclosing query must specify an equality condition for either (1) all input columns, (2) all output columns, or (3) both. The designation of input and output columns is for convenience only. The order of query execution will generally be decided by the optimizer, unless overridden with the T_DIRECTION option. Consider a simplified social network application comprised of the following data: CREATE TABLE knows ( p1 INT , p2 INT , PRIMARY KEY ( p1 , p2 ) ) ; ALTER INDEX knows ON knows PARTITION ( p1 INT ) ; CREATE INDEX knows2 ON knows ( p2 , p1 ) PARTITION ( p2 INT ) ; INSERT INTO knows VALUES ( 1 , 2 ) ; INSERT INTO knows VALUES ( 1 , 3 ) ; INSERT INTO knows VALUES ( 2 , 4 ) ; All persons have single integer identifiers. There is a row in the knows table if person p1 claims to know person p2. The most basic query is to find all the people that a given person knows either directly or indirectly. SELECT * T_IN ( 1 ) T_OUT ( 2 ) T_DISTINCT p1, FROM knows ) AS k WHERE k.p1 = 1; The transitive derived table simply selects from the knows table. The enclosing top level query gives an initial value for the input column of the transitive SELECT. This leaves the output column p2 unbound, so the query will iterate over the possible values of p2. Initially, the query loops over the people directly known by 1. In the next stage, it takes the binding of p2 and uses it as a new value of the input column p1 to look for people the first degree contact knows and so on, until no new values are found. The basic meaning of the transitive modifier is that given initial values for input column(s), the subquery is evaluated to produce values for the output columns. Then these values are fed back as values of input columns and so forth, until some termination condition is reached. If there are equality conditions for columns designated as output but no conditions for columns designated as input, then the same process runs from output to input. The terms input and output do not imply execution order. If there are bindings for both input and output columns in the enclosing query, then the transitive derived table looks for ways of connecting the input and output bindings. If no such way is found, the subquery is empty and causes the whole enclosing query to also have no result. A transitive derived table cannot be the right side of an OUTER JOIN directly but can be wrapped in a derived table that is. In this way, an OUTER JOIN usage is also possible, whether finding a path is The result set of a transitive subquery can be thought of as a set of paths. A path consists of one or more consecutive bindings for the input columns and is ordered. In our example, a path is p1=1, p1=2, p1=4. This is the path connecting persons 1 and 4. If there are columns in the SELECT that are neither input or output, they too are recorded for each step of the path. The result set may include just the ends of a path, i.e., one row where the input columns have the beginning and the output columns the end of the path. This means there is one row per distinct path. The result set may also include a row for each step on each path. In this example, we bind both ends of the transitive subquery and ask how person 1 and 4 are connected. Since the columns p1 and p2 have an equality condition, each row of the result set has these at values 1 and 4, respectively. SELECT * T_IN ( 1 ) T_OUT ( 2 ) T_DIRECTION 3 T_SHORTEST_ONLY p1, T_STEP ( 1 ) AS via , T_STEP ( 'path_id' ) AS path , T_STEP ( 'step_no' ) AS step FROM knows ) AS k WHERE p1 = 1 AND p2 = 4 ; P1 P2 VIA PATH STEP The three rightmost columns allow returning information in the intermediate steps of the transitive evaluation. t_step (1) means the value of the column at position 1 at the intermediate step. The t_step ('step_no') is the sequence number of the step returned. The t_step ('path_id') is a number identifying the connection path, since there may be many paths joining persons 1 and 4. In this situation, the result set has one row per step, including a row for the initial and final steps. While the evaluation order may vary internally, the result set is presented as if the query were evaluated from input to output, i.e., looking for people known by 1, finding 2 and 3, then looking for people they know, finding that 2 knows 4, which is a solution, since p2 = 4 was specified in the outer select. If the outer query had p1 = 4 and p2 = 1, there would be an empty result set since there is no path from 4 to 1. For example, if tables have multipart keys, there can be many input and output columns but there must be an equal number of both, since the engine internally feeds the output back into the input or vice versa. The transitive derived table may be arbitrarily complex. We may have an application that returns extra information about a step. This could, for example, be a metric of distance. In such a case, a column which is not designated as either input nor output and is not a t_step () function call, will simply be returned as is. The result set of a transitive subquery will either have one row for each state reached, or have one row for each step on the path to each state reached. The first example returns only the ends of the paths, i.e., directly and indirectly known person ids. It does not return for each returned id how this person is known, through which set of connections. The second example returns a row for each step on each path. Steps will be returned if the selection has t_step () calls or columns that are neither input or output. The forms of t_step are: • t_step ( <column number> ) This returns the value that the column, one of the columns designated as input, has at this step. The input or output columns themselves, if there is a condition on them, look equal to the condition. This allows seeing intermediate values of input columns on a path. • t_step ( 'step_no' ) This returns the ordinal number of the step on the path. Step 0 corresponds to the input variables being at the value seen in the enclosing query. Step 1 is one removed from this. Step numbering is assigned as if evaluating from input to output. Consider this: SELECT * T_IN ( 1 ) T_OUT ( 2 ) T_MIN ( 0 ) T_DISTINCT p1, T_STEP ( 1 ) AS via, T_STEP ( 'path_id' ) AS path , T_STEP ( 'step_no' ) AS step FROM knows) AS k WHERE p1 = 1 ; P1 P2 VIA PATH STEP This returns four paths, all starting at 1: the path from 1 to 1; the path from 1 to 2; the path from 1 to 3; and the path from 1 to 2 to 4. The path column has values from 0 to 3, distinguishing the four different paths returned. The p1 column is the start of the path, thus always 1 since this is given in the outer query. The p2 column is the end of the path. The via column is the value of p1 at the intermediate step. The step number where via is equal to p1 is 0. The next step number is 1. At the highest step number of each path, p2 and via are the same. Now, let us do this in reverse: SELECT * T_IN ( 1 ) T_OUT ( 2 ) T_MIN ( 0 ) T_DISTINCT p1 , p2 , T_STEP ( 1 ) AS via , T_STEP ( 'path_id' ) AS path , T_STEP ( 'step_no' ) AS step FROM knows) AS k WHERE p2 = 4 ; P1 P2 VIA PATH STEP We give an initial value to p2 and leave p1 free. Now we get three paths: the path from 4 to 4; from 2 to 4; and from 1 to 2 to 4. We enumerate the steps as if counting from input to output, albeit internally the evaluation order is the reverse. Again, step number 0 has the via column equal to p1 and the highest numbered step has via equal to p2. Descriptions of Transitive Options • T_MIN (INTNUM) — This means that paths shorter than the number are not returned. In the examples above, we had t_min at 0, so that a path of zero length (i.e., where the first output equals the outer conditions for the inputs) was also returned. • T_MAX (INTNUM) — This gives a maximum length of path. Paths that exceed this step threshold are not returned. A value of 1 means that the subquery is evaluated once, i.e., the outputs of the first evaluation are not fed back into the inputs. Specifying a minimum of 0 and a maximum of 1 means an optional JOIN. Specifying t_min and t_max both as 1 means an ordinary derived table. • T_DISTINCT — This means that if a binding of input columns is produced more than once, only the first is used; i.e., the same point is not traversed twice even if many paths lead to it. • T_EXISTS — Only one path is generated and returned. • T_NO_CYCLES — If a path is found that loops over itself (i.e. a next step has the input values equal to the input values of a previous step on the path), the binding is ignored. • T_CYCLES_ONLY — Only paths that have a cycle (i.e., input values of a subsequent step equal the input values of a previous step on the same path) are returned. • T_SHORTEST_ONLY — If both ends of the path are given, the evaluation stops at the length of path where the first solution is found. If many paths of equal length are found, they are returned, but longer paths are not sought. • T_IN (column_ordinal_positions) — This specifies which column is to be used to indicate the source (or IN) t node. • T_OUT (column_ordinal_positions) — This specifies which column is to be used to indicate the destination (or OUT) t node. T_END_FLAG (intnum) — This option is used to indicate that the transitive operation should stop when the variable returns a non-zero value. Its value is a positive integer value indicating the ordinal position of the variable in the select list to be used. • T_DIRECTION (intnum) — A value of 0 (default) means that the SQL optimizer decides which way the transitive subquery is evaluated. 1 means from input to output, 2 from output to input, 3 from both ends. Supposing we are looking at how two points are related, it makes sense to start expanding the transitive closure at both ends. In the above example, this would be going from p1 to p2 on one side and from p2 to p1 on the other.
{"url":"https://community.openlinksw.com/t/sql-transitivity-community-note/1251","timestamp":"2024-11-05T08:54:16Z","content_type":"text/html","content_length":"32740","record_id":"<urn:uuid:37c038d2-a370-4c60-ae22-005df9b08ef0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00692.warc.gz"}
Diffusion MRI Short description: Method of utilizing water in magnetic resonance imaging Diffusion-weighted magnetic resonance imaging (DWI or DW-MRI) is the use of specific MRI sequences as well as software that generates images from the resulting data that uses the diffusion of water molecules to generate contrast in MR images.^[1]^[2]^[3] It allows the mapping of the diffusion process of molecules, mainly water, in biological tissues, in vivo and non-invasively. Molecular diffusion in tissues is not random, but reflects interactions with many obstacles, such as macromolecules, fibers, and membranes. Water molecule diffusion patterns can therefore reveal microscopic details about tissue architecture, either normal or in a diseased state. A special kind of DWI, diffusion tensor imaging (DTI), has been used extensively to map white matter tractography in the In diffusion weighted imaging (DWI), the intensity of each image element (voxel) reflects the best estimate of the rate of water diffusion at that location. Because the mobility of water is driven by thermal agitation and highly dependent on its cellular environment, the hypothesis behind DWI is that findings may indicate (early) pathologic change. For instance, DWI is more sensitive to early changes after a stroke than more traditional MRI measurements such as T1 or T2 relaxation rates. A variant of diffusion weighted imaging, diffusion spectrum imaging (DSI),^[4] was used in deriving the Connectome data sets; DSI is a variant of diffusion-weighted imaging that is sensitive to intra-voxel heterogeneities in diffusion directions caused by crossing fiber tracts and thus allows more accurate mapping of axonal trajectories than other diffusion imaging approaches.^[5] Diffusion-weighted images are very useful to diagnose vascular strokes in the brain. It is also used more and more in the staging of non-small-cell lung cancer, where it is a serious candidate to replace positron emission tomography as the 'gold standard' for this type of disease. Diffusion tensor imaging is being developed for studying the diseases of the white matter of the brain as well as for studies of other body tissues (see below). DWI is most applicable when the tissue of interest is dominated by isotropic water movement e.g. grey matter in the cerebral cortex and major brain nuclei, or in the body—where the diffusion rate appears to be the same when measured along any axis. However, DWI also remains sensitive to T1 and T2 relaxation. To entangle diffusion and relaxation effects on image contrast, one may obtain quantitative images of the diffusion coefficient, or more exactly the apparent diffusion coefficient (ADC). The ADC concept was introduced to take into account the fact that the diffusion process is complex in biological tissues and reflects several different mechanisms.^[6] Diffusion tensor imaging (DTI) is important when a tissue—such as the neural axons of white matter in the brain or muscle fibers in the heart—has an internal fibrous structure analogous to the anisotropy of some crystals. Water will then diffuse more rapidly in the direction aligned with the internal structure (axial diffusion), and more slowly as it moves perpendicular to the preferred direction (radial diffusion). This also means that the measured rate of diffusion will differ depending on the direction from which an observer is looking. Diffusion Basis Spectrum Imaging (DBSI) further separates DTI signals into discrete anisotropic diffusion tensors and a spectrum of isotropic diffusion tensors to better differentiate sub-voxel cellular structures. For example, anisotropic diffusion tensors correlate to axonal fibers, while low isotropic diffusion tensors correlate to cells and high isotropic diffusion tensors correlate to larger structures (such as the lumen or brain ventricles).^[7] DBSI has been shown to differentiate some types of brain tumors and multiple sclerosis with higher specificity and sensitivity than conventional DTI.^[8]^[9]^[10]^[11] DBSI has also been useful in determining microstructure properties of the brain.^[12] Traditionally, in diffusion-weighted imaging (DWI), three gradient-directions are applied, sufficient to estimate the trace of the diffusion tensor or 'average diffusivity', a putative measure of edema. Clinically, trace-weighted images have proven to be very useful to diagnose vascular strokes in the brain, by early detection (within a couple of minutes) of the hypoxic edema.^[13] More extended DTI scans derive neural tract directional information from the data using 3D or multidimensional vector algorithms based on six or more gradient directions, sufficient to compute the diffusion tensor. The diffusion tensor model is a rather simple model of the diffusion process, assuming homogeneity and linearity of the diffusion within each image voxel.^[13] From the diffusion tensor, diffusion anisotropy measures such as the fractional anisotropy (FA), can be computed. Moreover, the principal direction of the diffusion tensor can be used to infer the white-matter connectivity of the brain (i.e. tractography; trying to see which part of the brain is connected to which other part). Recently, more advanced models of the diffusion process have been proposed that aim to overcome the weaknesses of the diffusion tensor model. Amongst others, these include q-space imaging ^[14] and generalized diffusion tensor imaging. Diffusion imaging is an MRI method that produces in vivo magnetic resonance images of biological tissues sensitized with the local characteristics of molecular diffusion, generally water (but other moieties can also be investigated using MR spectroscopic approaches).^[15] MRI can be made sensitive to the motion of molecules. Regular MRI acquisition utilizes the behavior of protons in water to generate contrast between clinically relevant features of a particular subject. The versatile nature of MRI is due to this capability of producing contrast related to the structure of tissues at the microscopic level. In a typical [math]\displaystyle{ T_1 }[/math]-weighted image, water molecules in a sample are excited with the imposition of a strong magnetic field. This causes many of the protons in water molecules to precess simultaneously, producing signals in MRI. In [math]\displaystyle{ T_2 }[/math]-weighted images, contrast is produced by measuring the loss of coherence or synchrony between the water protons. When water is in an environment where it can freely tumble, relaxation tends to take longer. In certain clinical situations, this can generate contrast between an area of pathology and the surrounding healthy tissue. To sensitize MRI images to diffusion, the magnetic field strength (B1) is varied linearly by a pulsed field gradient. Since precession is proportional to the magnet strength, the protons begin to precess at different rates, resulting in dispersion of the phase and signal loss. Another gradient pulse is applied in the same magnitude but with opposite direction to refocus or rephase the spins. The refocusing will not be perfect for protons that have moved during the time interval between the pulses, and the signal measured by the MRI machine is reduced. This "field gradient pulse" method was initially devised for NMR by Stejskal and Tanner ^[16] who derived the reduction in signal due to the application of the pulse gradient related to the amount of diffusion that is occurring through the following equation: [math]\displaystyle{ \frac{S(TE)}{S_0} = \exp \left[ -\gamma^2 G^2\delta^2 \left( \Delta-\frac{\delta}{3}\right)D \right] }[/math] where [math]\displaystyle{ S_0 }[/math] is the signal intensity without the diffusion weighting, [math]\displaystyle{ S }[/math] is the signal with the gradient, [math]\displaystyle{ \gamma }[/math] is the gyromagnetic ratio, [math]\displaystyle{ G }[/math] is the strength of the gradient pulse, [math]\displaystyle{ \delta }[/math] is the duration of the pulse, [math]\displaystyle{ \Delta }[/ math] is the time between the two pulses, and finally, [math]\displaystyle{ D }[/math] is the diffusion-coefficient. In order to localize this signal attenuation to get images of diffusion one has to combine the pulsed magnetic field gradient pulses used for MRI (aimed at localization of the signal, but those gradient pulses are too weak to produce a diffusion related attenuation) with additional "motion-probing" gradient pulses, according to the Stejskal and Tanner method. This combination is not trivial, as cross-terms arise between all gradient pulses. The equation set by Stejskal and Tanner then becomes inaccurate and the signal attenuation must be calculated, either analytically or numerically, integrating all gradient pulses present in the MRI sequence and their interactions. The result quickly becomes very complex given the many pulses present in the MRI sequence, and as a simplification, Le Bihan suggested gathering all the gradient terms in a "b factor" (which depends only on the acquisition parameters) so that the signal attenuation simply becomes:^[1] [math]\displaystyle{ \frac{S(TE)}{S_0} = \exp (-b\cdot ADC) }[/math] Also, the diffusion coefficient, [math]\displaystyle{ D }[/math], is replaced by an apparent diffusion coefficient, [math]\displaystyle{ ADC }[/math], to indicate that the diffusion process is not free in tissues, but hindered and modulated by many mechanisms (restriction in closed spaces, tortuosity around obstacles, etc.) and that other sources of IntraVoxel Incoherent Motion (IVIM) such as blood flow in small vessels or cerebrospinal fluid in ventricles also contribute to the signal attenuation. At the end, images are "weighted" by the diffusion process: In those diffusion-weighted images (DWI) the signal is more attenuated the faster the diffusion and the larger the b factor is. However, those diffusion-weighted images are still also sensitive to T1 and T2 relaxivity contrast, which can sometimes be confusing. It is possible to calculate "pure" diffusion maps (or more exactly ADC maps where the ADC is the sole source of contrast) by collecting images with at least 2 different values, [math]\displaystyle{ b_1 }[/math] and [math]\displaystyle{ b_2 }[/math], of the b factor according to: [math]\displaystyle{ \mathrm{ADC}(x,y,z)= \ln [S_2(x,y,z)/S_1(x,y,z)]/(b_1-b_2) }[/math] Although this ADC concept has been extremely successful, especially for clinical applications, it has been challenged recently, as new, more comprehensive models of diffusion in biological tissues have been introduced. Those models have been made necessary, as diffusion in tissues is not free. In this condition, the ADC seems to depend on the choice of b values (the ADC seems to decrease when using larger b values), as the plot of ln(S/So) is not linear with the b factor, as expected from the above equations. This deviation from a free diffusion behavior is what makes diffusion MRI so successful, as the ADC is very sensitive to changes in tissue microstructure. On the other hand, modeling diffusion in tissues is becoming very complex. Among most popular models are the biexponential model, which assumes the presence of 2 water pools in slow or intermediate exchange ^[17]^[18] and the cumulant-expansion (also called Kurtosis) model,^[19]^[20]^[21] which does not necessarily require the presence of 2 pools. Diffusion model Given the concentration [math]\displaystyle{ \rho }[/math] and flux [math]\displaystyle{ J }[/math], Fick's first law gives a relationship between the flux and the concentration gradient: [math]\displaystyle{ J(x,t)=-D\nabla\rho(x,t) }[/math] where D is the diffusion coefficient. Then, given conservation of mass, the continuity equation relates the time derivative of the concentration with the divergence of the flux: [math]\displaystyle{ \frac{\partial\rho(x,t)}{\partial t}=-\nabla\cdot J(x,t) }[/math] Putting the two together, we get the diffusion equation: [math]\displaystyle{ \frac{\partial\rho(x,t)}{\partial t}=D\nabla^2\rho(x,t). }[/math] Magnetization dynamics With no diffusion present, the change in nuclear magnetization over time is given by the classical Bloch equation [math]\displaystyle{ \frac{d\vec{M}}{dt}=\gamma\vec{M}\times\vec{B}-\frac{M_x\vec{i}+M_y\vec{j}}{T_2}-\frac{(M_z-M_0)\vec{k}}{T_1} }[/math] which has terms for precession, T2 relaxation, and T1 relaxation. In 1956, H.C. Torrey mathematically showed how the Bloch equations for magnetization would change with the addition of diffusion.^[22] Torrey modified Bloch's original description of transverse magnetization to include diffusion terms and the application of a spatially varying gradient. Since the magnetization [math]\displaystyle{ M }[/math] is a vector, there are 3 diffusion equations, one for each dimension. The Bloch-Torrey equation is: [math]\displaystyle{ \frac{d\vec{ M}}{dt}=\gamma\vec{M}\times\vec{B}-\frac{M_x\vec{i}+M_y\vec{j}}{T_2}-\frac{(M_z-M_0)\vec{k}}{T_1}+\nabla\cdot \vec{D}\nabla\vec{M} }[/math] where [math]\displaystyle{ \vec{D} }[/math] is now the diffusion tensor. For the simplest case where the diffusion is isotropic the diffusion tensor is a multiple of the identity: [math]\displaystyle{ \vec{D} = D \cdot \vec{I} = D \cdot \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, }[/math] then the Bloch-Torrey equation will have the solution [math]\displaystyle{ {M}={M}_{\text{bloch}}e^{-\frac13\gamma^2G^2t^3D}\sim e^{-bD_0} }[/math] The exponential term will be referred to as the attenuation [math]\displaystyle{ A }[/math]. Anisotropic diffusion will have a similar solution for the diffusion tensor, except that what will be measured is the apparent diffusion coefficient (ADC). In general, the attenuation is: [math]\displaystyle{ A=e^{ -\sum_{i,j}b_{ij}D_{ij} } }[/math] where the [math]\displaystyle{ b_{ij} }[/math] terms incorporate the gradient fields [math]\displaystyle{ G_x }[/math], [math]\displaystyle{ G_y }[/math], and [math]\displaystyle{ G_z }[/math]. The standard grayscale of DWI images is to represent increased diffusion restriction as brighter.^[23] ADC image An apparent diffusion coefficient (ADC) image, or an ADC map, is an MRI image that more specifically shows diffusion than conventional DWI, by eliminating the T2 weighting that is otherwise inherent to conventional DWI.^[24]^[25] ADC imaging does so by acquiring multiple conventional DWI images with different amounts of DWI weighting, and the change in signal is proportional to the rate of diffusion. Contrary to DWI images, the standard grayscale of ADC images is to represent a smaller magnitude of diffusion as darker.^[23] Cerebral infarction leads to diffusion restriction, and the difference between images with various DWI weighting will therefore be minor, leading to an ADC image with low signal in the infarcted area.^[24] A decreased ADC may be detected minutes after a cerebral infarction.^[26] The high signal of infarcted tissue on conventional DWI is a result of its partial T2 weighting.^[27] Diffusion tensor imaging Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that enables the measurement of the restricted diffusion of water in tissue in order to produce neural tract images instead of using this data solely for the purpose of assigning contrast or colors to pixels in a cross-sectional image. It also provides useful structural information about muscle—including heart muscle—as well as other tissues such as the prostate.^[28] In DTI, each voxel has one or more pairs of parameters: a rate of diffusion and a preferred direction of diffusion—described in terms of three-dimensional space—for which that parameter is valid. The properties of each voxel of a single DTI image are usually calculated by vector or tensor math from six or more different diffusion weighted acquisitions, each obtained with a different orientation of the diffusion sensitizing gradients. In some methods, hundreds of measurements—each making up a complete image—are made to generate a single resulting calculated image data set. The higher information content of a DTI voxel makes it extremely sensitive to subtle pathology in the brain. In addition the directional information can be exploited at a higher level of structure to select and follow neural tracts through the brain—a process called tractography.^[29] A more precise statement of the image acquisition process is that the image-intensities at each position are attenuated, depending on the strength (b-value) and direction of the so-called magnetic diffusion gradient, as well as on the local microstructure in which the water molecules diffuse. The more attenuated the image is at a given position, the greater diffusion there is in the direction of the diffusion gradient. In order to measure the tissue's complete diffusion profile, one needs to repeat the MR scans, applying different directions (and possibly strengths) of the diffusion gradient for each scan. Mathematical foundation—tensors Diffusion MRI relies on the mathematics and physical interpretations of the geometric quantities known as tensors. Only a special case of the general mathematical notion is relevant to imaging, which is based on the concept of a symmetric matrix.^[notes 1] Diffusion itself is tensorial, but in many cases the objective is not really about trying to study brain diffusion per se, but rather just trying to take advantage of diffusion anisotropy in white matter for the purpose of finding the orientation of the axons and the magnitude or degree of anisotropy. Tensors have a real, physical existence in a material or tissue so that they do not move when the coordinate system used to describe them is rotated. There are numerous different possible representations of a tensor (of rank 2), but among these, this discussion focuses on the ellipsoid because of its physical relevance to diffusion and because of its historical significance in the development of diffusion anisotropy imaging in MRI. The following matrix displays the components of the diffusion tensor: [math]\displaystyle{ \bar{D} = \begin{vmatrix} D_{\color{red}xx} & D_{xy} & D_{xz} \\ D_{xy} & D_{\color{red}yy} & D_{yz} \\ D_{xz} & D_{yz} & D_{\color{red}zz} \end{vmatrix} }[/math] The same matrix of numbers can have a simultaneous second use to describe the shape and orientation of an ellipse and the same matrix of numbers can be used simultaneously in a third way for matrix mathematics to sort out eigenvectors and eigenvalues as explained below. Physical tensors The idea of a tensor in physical science evolved from attempts to describe the quantity of physical properties. The first properties they were applied to were those that can be described by a single number, such as temperature. Properties that can be described this way are called scalars; these can be considered tensors of rank 0, or 0th-order tensors. Tensors can also be used to describe quantities that have directionality, such as mechanical force. These quantities require specification of both magnitude and direction, and are often represented with a vector. A three-dimensional vector can be described with three components: its projection on the x, y, and z axes. Vectors of this sort can be considered tensors of rank 1, or 1st-order tensors. A tensor is often a physical or biophysical property that determines the relationship between two vectors. When a force is applied to an object, movement can result. If the movement is in a single direction, the transformation can be described using a vector—a tensor of rank 1. However, in a tissue, diffusion leads to movement of water molecules along trajectories that proceed along multiple directions over time, leading to a complex projection onto the Cartesian axes. This pattern is reproducible if the same conditions and forces are applied to the same tissue in the same way. If there is an internal anisotropic organization of the tissue that constrains diffusion, then this fact will be reflected in the pattern of diffusion. The relationship between the properties of driving force that generate diffusion of the water molecules and the resulting pattern of their movement in the tissue can be described by a tensor. The collection of molecular displacements of this physical property can be described with nine components—each one associated with a pair of axes xx, yy, zz, xy, yx, xz, zx, yz, zy.^[30] These can be written as a matrix similar to the one at the start of this section. Diffusion from a point source in the anisotropic medium of white matter behaves in a similar fashion. The first pulse of the Stejskal Tanner diffusion gradient effectively labels some water molecules and the second pulse effectively shows their displacement due to diffusion. Each gradient direction applied measures the movement along the direction of that gradient. Six or more gradients are summed to get all the measurements needed to fill in the matrix, assuming it is symmetric above and below the diagonal (red subscripts). In 1848, Henri Hureau de Sénarmont^[31] applied a heated point to a polished crystal surface that had been coated with wax. In some materials that had "isotropic" structure, a ring of melt would spread across the surface in a circle. In anisotropic crystals the spread took the form of an ellipse. In three dimensions this spread is an ellipsoid. As Adolf Fick showed in the 1850s, diffusion exhibits many of the same patterns as those seen in the transfer of heat. Mathematics of ellipsoids At this point, it is helpful to consider the mathematics of ellipsoids. An ellipsoid can be described by the formula: [math]\displaystyle{ ax^2 + by^2 + cz^2 = 1 }[/math]. This equation describes a quadric surface. The relative values of a, b, and c determine if the quadric describes an ellipsoid or a hyperboloid. As it turns out, three more components can be added as follows: [math]\displaystyle{ ax^2 + by^2 + cz^2 + dyz + ezx + fxy = 1 }[/math]. Many combinations of a, b, c, d, e, and f still describe ellipsoids, but the additional components (d, e, f) describe the rotation of the ellipsoid relative to the orthogonal axes of the Cartesian coordinate system. These six variables can be represented by a matrix similar to the tensor matrix defined at the start of this section (since diffusion is symmetric, then we only need six instead of nine components—the components below the diagonal elements of the matrix are the same as the components above the diagonal). This is what is meant when it is stated that the components of a matrix of a second order tensor can be represented by an ellipsoid—if the diffusion values of the six terms of the quadric ellipsoid are placed into the matrix, this generates an ellipsoid angled off the orthogonal grid. Its shape will be more elongated if the relative anisotropy is high. When the ellipsoid/tensor is represented by a matrix, we can apply a useful technique from standard matrix mathematics and linear algebra—that is to "diagonalize" the matrix. This has two important meanings in imaging. The idea is that there are two equivalent ellipsoids—of identical shape but with different size and orientation. The first one is the measured diffusion ellipsoid sitting at an angle determined by the axons, and the second one is perfectly aligned with the three Cartesian axes. The term "diagonalize" refers to the three components of the matrix along a diagonal from upper left to lower right (the components with red subscripts in the matrix at the start of this section). The variables [math]\displaystyle{ ax^2 }[/math], [math]\displaystyle{ by^2 }[/math], and [math]\ displaystyle{ cz^2 }[/math] are along the diagonal (red subscripts), but the variables d, e and f are "off diagonal". It then becomes possible to do a vector processing step in which we rewrite our matrix and replace it with a new matrix multiplied by three different vectors of unit length (length=1.0). The matrix is diagonalized because the off-diagonal components are all now zero. The rotation angles required to get to this equivalent position now appear in the three vectors and can be read out as the x, y, and z components of each of them. Those three vectors are called "eigenvectors" or characteristic vectors. They contain the orientation information of the original ellipsoid. The three axes of the ellipsoid are now directly along the main orthogonal axes of the coordinate system so we can easily infer their lengths. These lengths are the eigenvalues or characteristic values. Diagonalization of a matrix is done by finding a second matrix that it can be multiplied with followed by multiplication by the inverse of the second matrix—wherein the result is a new matrix in which three diagonal (xx, yy, zz) components have numbers in them but the off-diagonal components (xy, yz, zx) are 0. The second matrix provides eigenvector information. Measures of anisotropy and diffusivity In present-day clinical neurology, various brain pathologies may be best detected by looking at particular measures of anisotropy and diffusivity. The underlying physical process of diffusion causes a group of water molecules to move out from a central point, and gradually reach the surface of an ellipsoid if the medium is anisotropic (it would be the surface of a sphere for an isotropic medium). The ellipsoid formalism functions also as a mathematical method of organizing tensor data. Measurement of an ellipsoid tensor further permits a retrospective analysis, to gather information about the process of diffusion in each voxel of the tissue.^[32] In an isotropic medium such as cerebrospinal fluid, water molecules are moving due to diffusion and they move at equal rates in all directions. By knowing the detailed effects of diffusion gradients we can generate a formula that allows us to convert the signal attenuation of an MRI voxel into a numerical measure of diffusion—the diffusion coefficient D. When various barriers and restricting factors such as cell membranes and microtubules interfere with the free diffusion, we are measuring an "apparent diffusion coefficient", or ADC, because the measurement misses all the local effects and treats the attenuation as if all the movement rates were solely due to Brownian motion. The ADC in anisotropic tissue varies depending on the direction in which it is measured. Diffusion is fast along the length of (parallel to) an axon, and slower perpendicularly across it. Once we have measured the voxel from six or more directions and corrected for attenuations due to T2 and T1 effects, we can use information from our calculated ellipsoid tensor to describe what is happening in the voxel. If you consider an ellipsoid sitting at an angle in a Cartesian grid then you can consider the projection of that ellipse onto the three axes. The three projections can give you the ADC along each of the three axes ADC[x], ADC[y], ADC[z]. This leads to the idea of describing the average diffusivity in the voxel which will simply be [math]\displaystyle{ (ADC_x + ADC_y + ADC_z)/3 = ADC_i }[/math] We use the i subscript to signify that this is what the isotropic diffusion coefficient would be with the effects of anisotropy averaged out. The ellipsoid itself has a principal long axis and then two more small axes that describe its width and depth. All three of these are perpendicular to each other and cross at the center point of the ellipsoid. We call the axes in this setting eigenvectors and the measures of their lengths eigenvalues. The lengths are symbolized by the Greek letter λ. The long one pointing along the axon direction will be λ[1] and the two small axes will have lengths λ[2] and λ[3]. In the setting of the DTI tensor ellipsoid, we can consider each of these as a measure of the diffusivity along each of the three primary axes of the ellipsoid. This is a little different from the ADC since that was a projection on the axis, while λ is an actual measurement of the ellipsoid we have calculated. The diffusivity along the principal axis, λ[1] is also called the longitudinal diffusivity or the axial diffusivity or even the parallel diffusivity λ[∥]. Historically, this is closest to what Richards originally measured with the vector length in 1991.^[33] The diffusivities in the two minor axes are often averaged to produce a measure of radial diffusivity [math]\displaystyle{ \lambda_{\perp} = (\lambda_2 + \lambda_3)/2 . }[/math] This quantity is an assessment of the degree of restriction due to membranes and other effects and proves to be a sensitive measure of degenerative pathology in some neurological conditions.^[34] It can also be called the perpendicular diffusivity ([math]\displaystyle{ \lambda_{\perp} }[/math]). Another commonly used measure that summarizes the total diffusivity is the Trace—which is the sum of the three eigenvalues, [math]\displaystyle{ \mathrm{tr}(\Lambda) = \lambda_1 + \lambda_2 + \lambda_3 }[/math] where [math]\displaystyle{ \Lambda }[/math] is a diagonal matrix with eigenvalues [math]\displaystyle{ \lambda_1 }[/math], [math]\displaystyle{ \lambda_2 }[/math] and [math]\displaystyle{ \lambda_3 } [/math] on its diagonal. If we divide this sum by three we have the mean diffusivity, [math]\displaystyle{ \mathrm{MD} = (\lambda_1 + \lambda_2 + \lambda_3) /3 }[/math] which equals ADC[i] since [math]\displaystyle{ \mathrm{tr}(\Lambda)/3 &= \mathrm{tr}(V^{-1}V\Lambda)/3 \\ &= \mathrm{tr}(V\Lambda V^{-1})/3 \\ &= \mathrm{tr}(D)/3 \\ &= ADC_i }[/math] where [math]\displaystyle{ V }[/math] is the matrix of eigenvectors and [math]\displaystyle{ D }[/math] is the diffusion tensor. Aside from describing the amount of diffusion, it is often important to describe the relative degree of anisotropy in a voxel. At one extreme would be the sphere of isotropic diffusion and at the other extreme would be a cigar or pencil shaped very thin prolate spheroid. The simplest measure is obtained by dividing the longest axis of the ellipsoid by the shortest = (λ[1]/λ[3]). However, this proves to be very susceptible to measurement noise, so increasingly complex measures were developed to capture the measure while minimizing the noise. An important element of these calculations is the sum of squares of the diffusivity differences = (λ[1] − λ[2])^2 + (λ[1] − λ[3])^2 + (λ[2] − λ[3])^2. We use the square root of the sum of squares to obtain a sort of weighted average—dominated by the largest component. One objective is to keep the number near 0 if the voxel is spherical but near 1 if it is elongate. This leads to the fractional anisotropy or FA which is the square root of the sum of squares (SRSS) of the diffusivity differences, divided by the SRSS of the diffusivities. When the second and third axes are small relative to the principal axis, the number in the numerator is almost equal the number in the denominator. We also multiply by [math]\displaystyle{ 1/\sqrt{2} }[/math] so that FA has a maximum value of 1. The whole formula for FA looks like this: [math]\displaystyle{ \mathrm{FA}=\frac{\sqrt{3( (\lambda_1-\operatorname E[\lambda])^2+(\lambda_2-\operatorname E[\lambda])^2+(\lambda_3-\operatorname E[\lambda])^2 )}}{\sqrt{2( \lambda_1^2+\ lambda_2^2+\lambda_3^2 )}} }[/math] The fractional anisotropy can also be separated into linear, planar, and spherical measures depending on the "shape" of the diffusion ellipsoid.^[35]^[36] For example, a "cigar" shaped prolate ellipsoid indicates a strongly linear anisotropy, a "flying saucer" or oblate spheroid represents diffusion in a plane, and a sphere is indicative of isotropic diffusion, equal in all directions.^ [37] If the eigenvalues of the diffusion vector are sorted such that [math]\displaystyle{ \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq 0 }[/math], then the measures can be calculated as follows: For the linear case, where [math]\displaystyle{ \lambda_1 \gg \lambda_2 \simeq \lambda_3 }[/math], [math]\displaystyle{ C_l=\frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2 + \lambda_3} }[/math] For the planar case, where [math]\displaystyle{ \lambda_1 \simeq \lambda_2 \gg \lambda_3 }[/math], [math]\displaystyle{ C_p=\frac{2(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3} }[/math] For the spherical case, where [math]\displaystyle{ \lambda_1 \simeq \lambda_2 \simeq \lambda_3 }[/math], [math]\displaystyle{ C_s=\frac{3\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3} }[/math] Each measure lies between 0 and 1 and they sum to unity. An additional anisotropy measure can used to describe the deviation from the spherical case: [math]\displaystyle{ C_a=C_l+C_p=1-C_s=\frac{\lambda_1 + \lambda_2 - 2\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3} }[/math] There are other metrics of anisotropy used, including the relative anisotropy (RA): [math]\displaystyle{ \mathrm{RA}=\frac{\sqrt{(\lambda_1-\operatorname E[\lambda])^2+(\lambda_2-\operatorname E[\lambda])^2+(\lambda_3-\operatorname E[\lambda])^2}}{\sqrt{3}\operatorname E[\ lambda]} }[/math] and the volume ratio (VR): [math]\displaystyle{ \mathrm{VR}=\frac{\lambda_1\lambda_2\lambda_3}{\operatorname E[\lambda]^3} }[/math] The most common application of conventional DWI (without DTI) is in acute brain ischemia. DWI directly visualizes the ischemic necrosis in cerebral infarction in the form of a cytotoxic edema,^[38] appearing as a high DWI signal within minutes of arterial occlusion.^[39] With perfusion MRI detecting both the infarcted core and the salvageable penumbra, the latter can be quantified by DWI and perfusion MRI.^[40] • DWI showing cortical ribbon-like high signal consistent with diffusion restriction in a patient with known MELAS syndrome Another application area of DWI is in oncology. Tumors are in many instances highly cellular, giving restricted diffusion of water, and therefore appear with a relatively high signal intensity in DWI.^[41] DWI is commonly used to detect and stage tumors, and also to monitor tumor response to treatment over time. DWI can also be collected to visualize the whole body using a technique called 'diffusion-weighted whole-body imaging with background body signal suppression' (DWIBS).^[42] Some more specialized diffusion MRI techniques such as diffusion kurtosis imaging (DKI) have also been shown to predict the response of cancer patients to chemotherapy treatment.^[43] The principal application is in the imaging of white matter where the location, orientation, and anisotropy of the tracts can be measured. The architecture of the axons in parallel bundles, and their myelin sheaths, facilitate the diffusion of the water molecules preferentially along their main direction. Such preferentially oriented diffusion is called anisotropic diffusion. The imaging of this property is an extension of diffusion MRI. If a series of diffusion gradients (i.e. magnetic field variations in the MRI magnet) are applied that can determine at least 3 directional vectors (use of 6 different gradients is the minimum and additional gradients improve the accuracy for "off-diagonal" information), it is possible to calculate, for each voxel, a tensor (i.e. a symmetric positive definite 3×3 matrix) that describes the 3-dimensional shape of diffusion. The fiber direction is indicated by the tensor's main eigenvector. This vector can be color-coded, yielding a cartography of the tracts' position and direction (red for left-right, blue for superior-inferior, and green for anterior-posterior).^[45] The brightness is weighted by the fractional anisotropy which is a scalar measure of the degree of anisotropy in a given voxel. Mean diffusivity (MD) or trace is a scalar measure of the total diffusion within a voxel. These measures are commonly used clinically to localize white matter lesions that do not show up on other forms of clinical MRI.^[46] Applications in the brain: • Tract-specific localization of white matter lesions such as trauma and in defining the severity of diffuse traumatic brain injury. The localization of tumors in relation to the white matter tracts (infiltration, deflection), has been one of the most important initial applications. In surgical planning for some types of brain tumors, surgery is aided by knowing the proximity and relative position of the corticospinal tract and a tumor. • Diffusion tensor imaging data can be used to perform tractography within white matter. Fiber tracking algorithms can be used to track a fiber along its whole length (e.g. the corticospinal tract, through which the motor information transit from the motor cortex to the spinal cord and the peripheral nerves). Tractography is a useful tool for measuring deficits in white matter, such as in aging. Its estimation of fiber orientation and strength is increasingly accurate, and it has widespread potential implications in the fields of cognitive neuroscience and neurobiology. • The use of DTI for the assessment of white matter in development, pathology and degeneration has been the focus of over 2,500 research publications since 2005. It promises to be very helpful in distinguishing Alzheimer's disease from other types of dementia. Applications in brain research include the investigation of neural networks in vivo, as well as in connectomics. Applications for peripheral nerves: • Brachial plexus: DTI can differentiate normal nerves^[47] (as shown in the tractogram of the spinal cord and brachial plexus and 3D 4k reconstruction here) from traumatically injured nerve roots. • Cubital Tunnel Syndrome: metrics derived from DTI (FA and RD) can differentiate asymptomatic adults from those with compression of the ulnar nerve at the elbow^[48] • Carpal Tunnel Syndrome: Metrics derived from DTI (lower FA and MD) differentiate healthy adults from those with carpal tunnel syndrome^[49] Early in the development of DTI based tractography, a number of researchers pointed out a flaw in the diffusion tensor model. The tensor analysis assumes that there is a single ellipsoid in each imaging voxel—as if all of the axons traveling through a voxel traveled in exactly the same direction.^[50] This is often true, but it can be estimated that in more than 30% of the voxels in a standard resolution brain image, there are at least two different neural tracts traveling in different directions that pass through each other. In the classic diffusion ellipsoid tensor model, the information from the crossing tract just appears as noise or unexplained decreased anisotropy in a given voxel. David Tuch was among the first to describe a solution to this problem.^[51]^[52] The idea is best understood by conceptually placing a kind of geodesic dome around each image voxel. This icosahedron provides a mathematical basis for passing a large number of evenly spaced gradient trajectories through the voxel—each coinciding with one of the apices of the icosahedron. Basically, we are now going to look into the voxel from a large number of different directions (typically 40 or more). We use "n-tuple" tessellations to add more evenly spaced apices to the original icosahedron (20 faces)—an idea that also had its precedents in paleomagnetism research several decades earlier.^[53] We just want to know which direction lines turn up the maximum anisotropic diffusion measures. If there is a single tract, there will be just two maxima pointing in opposite directions. If two tracts cross in the voxel, there will be two pairs of maxima, and so on. We can still use tensor math to use the maxima to select groups of gradients to package into several different tensor ellipsoids in the same voxel, or use more complex higher rank tensors analyses,^[54] or we can do a true "model free" analysis that just picks the maxima and go on about doing the tractography. The Q-Ball method of tractography is an implementation in which David Tuch provides a mathematical alternative to the tensor model.^[50] Instead of forcing the diffusion anisotropy data into a group of tensors, the mathematics used deploys both probability distributions and a classic bit of geometric tomography and vector math developed nearly 100 years ago—the Funk Radon Transform.^[55] Note, there is ongoing debate about the best way to preprocess DW-MRI. Several in-vivo studies have shown that the choice of software and functions applied (directed at correcting artefacts from arising from e.g. motion and eddy-currents) have a meaningful impact on the DTI parameter estimates from tissue.^[56] Consequently, this is the topic of a multinational study directed by the diffusion-study group of the ISMRM. For DTI, it is generally possible to use linear algebra, matrix mathematics and vector mathematics to process the analysis of the tensor data. In some cases, the full set of tensor properties is of interest, but for tractography it is usually necessary to know only the magnitude and orientation of the primary axis or vector. This primary axis—the one with the greatest length—is the largest eigenvalue and its orientation is encoded in its matched eigenvector. Only one axis is needed as it is assumed the largest eigenvalue is aligned with the main axon direction to accomplish tractography. See also Explanatory notes 1. ↑ Several full mathematical treatments of general tensors exist, e.g. classical, component free, and so on, but the generality, which covers arrays of all sizes, may obscure rather than help. 1. ↑ ^1.0 ^1.1 Le Bihan, Denis; Breton, E. (1985). "Imagerie de diffusion in-vivo par résonance magnétique nucléaire" (in fr). Comptes-Rendus de l'Académie des Sciences 301 (15): 1109–1112. INIST: 2. ↑ "Self-diffusion NMR imaging using stimulated echoes". Journal of Magnetic Resonance 64 (3): 479–486. 1985. doi:10.1016/0022-2364(85)90111-8. Bibcode: 1985JMagR..64..479M. 3. ↑ "The spatial mapping of translational diffusion coefficients by the NMR imaging technique". Physics in Medicine and Biology 30 (4): 345–349. April 1985. doi:10.1088/0031-9155/30/4/009. PMID 4001161. Bibcode: 1985PMB....30..345T. 4. ↑ "Mapping complex tissue architecture with diffusion spectrum magnetic resonance imaging". Magnetic Resonance in Medicine 54 (6): 1377–1386. December 2005. doi:10.1002/mrm.20642. PMID 16247738. 5. ↑ "Diffusion spectrum magnetic resonance imaging (DSI) tractography of crossing fibers". NeuroImage 41 (4): 1267–1277. July 2008. doi:10.1016/j.neuroimage.2008.03.036. PMID 18495497. 6. ↑ "MR imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders". Radiology 161 (2): 401–407. November 1986. doi:10.1148/radiology.161.2.3763909. PMID 3763909. 7. ↑ "Quantification of increased cellularity during inflammatory demyelination". Brain 134 (Pt 12): 3590–3601. December 2011. doi:10.1093/brain/awr307. PMID 22171354. 8. ↑ Vavasour, Irene M; Sun, Peng; Graf, Carina; Yik, Jackie T; Kolind, Shannon H; Li, David KB; Tam, Roger; Sayao, Ana-Luiza et al. (March 2022). "Characterization of multiple sclerosis neuroinflammation and neurodegeneration with relaxation and diffusion basis spectrum imaging". Multiple Sclerosis Journal 28 (3): 418–428. doi:10.1177/13524585211023345. PMID 34132126. 9. ↑ Ye, Zezhong; George, Ajit; Wu, Anthony T.; Niu, Xuan; Lin, Joshua; Adusumilli, Gautam; Naismith, Robert T.; Cross, Anne H. et al. (May 2020). "Deep learning with diffusion basis spectrum imaging for classification of multiple sclerosis lesions". Annals of Clinical and Translational Neurology 7 (5): 695–706. doi:10.1002/acn3.51037. PMID 32304291. 10. ↑ Ye, Zezhong; Price, Richard L.; Liu, Xiran; Lin, Joshua; Yang, Qingsong; Sun, Peng; Wu, Anthony T.; Wang, Liang et al. (2020-10-15). "Diffusion Histology Imaging Combining Diffusion Basis Spectrum Imaging (DBSI) and Machine Learning Improves Detection and Classification of Glioblastoma Pathology". Clinical Cancer Research 26 (20): 5388–5399. doi:10.1158/1078-0432.ccr-20-0736. PMID 11. ↑ Ye, Zezhong; Srinivasa, Komal; Meyer, Ashely; Sun, Peng; Lin, Joshua; Viox, Jeffrey D.; Song, Chunyu; Wu, Anthony T. et al. (2021-02-26). "Diffusion histology imaging differentiates distinct pediatric brain tumor histology". Scientific Reports 11 (1): 4749. doi:10.1038/s41598-021-84252-3. PMID 33637807. Bibcode: 2021NatSR..11.4749Y. 12. ↑ Samara, Amjad; Li, Zhaolong; Rutlin, Jerrel; Raji, Cyrus A.; Sun, Peng; Song, Sheng‐Kwei; Hershey, Tamara; Eisenstein, Sarah A. (August 2021). "Nucleus accumbens microstructure mediates the relationship between obesity and eating behavior in adults". Obesity 29 (8): 1328–1337. doi:10.1002/oby.23201. PMID 34227242. 13. ↑ ^13.0 ^13.1 "Multi-channel diffusion tensor image registration via adaptive chaotic PSO". Journal of Computers 6 (4): 825–829. Jan 2011. doi:10.4304/jcp.6.4.825-829. 14. ↑ "q-Space imaging of the brain". Magnetic Resonance in Medicine 32 (6): 707–713. December 1994. doi:10.1002/mrm.1910320605. PMID 7869892. 15. ↑ "Human brain: proton diffusion MR spectroscopy". Radiology 188 (3): 719–725. September 1993. doi:10.1148/radiology.188.3.8351339. PMID 8351339. 16. ↑ "Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient". The Journal of Chemical Physics 42 (1): 288–292. 1 January 1965. doi:10.1063/1.1695690. Bibcode: 17. ↑ "Biexponential diffusion attenuation in various states of brain tissue: implications for diffusion-weighted imaging". Magnetic Resonance in Medicine 36 (6): 847–857. December 1996. doi:10.1002/ mrm.1910360607. PMID 8946350. 18. ↑ Kärger, Jörg; Pfeifer, Harry; Heink, Wilfried (1988). Principles and Application of Self-Diffusion Measurements by Nuclear Magnetic Resonance. Advances in Magnetic and Optical Resonance. 12. pp. 1–89. doi:10.1016/b978-0-12-025512-2.50004-x. ISBN 978-0-12-025512-2. 19. ↑ "Generalized Diffusion Tensor Imaging (GDTI): A Method for Characterizing and Imaging Diffusion Anisotropy Caused by Non-Gaussian Diffusion". Israel Journal of Chemistry 43 (1–2): 145–54. 2003. 20. ↑ "Relevance of the information about the diffusion distribution in invo given by kurtosis in q-space imaging". Proceedings, 12th ISMRM Annual Meeting. Kyoto. 2004. p. 1238. NAID 10018514722. 21. ↑ "Diffusional kurtosis imaging: the quantification of non-gaussian water diffusion by means of magnetic resonance imaging". Magnetic Resonance in Medicine 53 (6): 1432–1440. June 2005. doi: 10.1002/mrm.20508. PMID 15906300. 22. ↑ "Bloch Equations with Diffusion Terms". Physical Review 104 (3): 563–565. 1956. doi:10.1103/PhysRev.104.563. Bibcode: 1956PhRv..104..563T. 23. ↑ ^23.0 ^23.1 "Restricted Diffusion". 2021. http://mriquestions.com/dwi-bright-causes.html. 24. ↑ ^24.0 ^24.1 "MRI Physics: Diffusion-Weighted Imaging". http://xrayphysics.com/dwi.html#adc. 25. ↑ "Apparent diffusion coefficient and beyond: what diffusion MR imaging can tell us about tissue structure". Radiology 268 (2): 318–22. August 2013. doi:10.1148/radiol.13130420. PMID 23882093. 26. ↑ "Signal evolution and infarction risk for apparent diffusion coefficient lesions in acute ischemic stroke are both time- and perfusion-dependent". Stroke 42 (5): 1276–1281. May 2011. doi: 10.1161/STROKEAHA.110.610501. PMID 21454821. 27. ↑ "Diffusion weighted MRI in acute stroke". https://radiopaedia.org/articles/diffusion-weighted-mri-in-acute-stroke-1. 28. ↑ "Diffusion tensor magnetic resonance imaging of prostate cancer". Investigative Radiology 42 (6): 412–419. June 2007. doi:10.1097/01.rli.0000264059.46444.bf. PMID 17507813. 29. ↑ "In vivo fiber tractography using DT-MRI data". Magnetic Resonance in Medicine 44 (4): 625–632. October 2000. doi:10.1002/1522-2594(200010)44:4<625::AID-MRM17>3.0.CO;2-O. PMID 11025519. 30. ↑ Nye, John Frederick (1957). Physical Properties of Crystals: Their Representation by Tensors and Matrices. Clarendon Press. OCLC 576214706. 31. ↑ "Mémoire sur la conductibilité des substances cristalisées pour la chaleur" (in fr). Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences 25: 459–461. 1848. 32. ↑ "Diffusion tensor imaging: concepts and applications". Journal of Magnetic Resonance Imaging 13 (4): 534–546. April 2001. doi:10.1002/jmri.1076. PMID 11276097. 33. ↑ "Vector analysis of diffusion images in experimental allergic encephalomyelitis". Proceedings of the Society for Magnetic Resonance in Medicine. 11. Berlin. 1992. p. 412. https:// 34. ↑ "High-resolution diffusion tensor imaging in the substantia nigra of de novo Parkinson disease". Neurology 72 (16): 1378–1384. April 2009. doi:10.1212/01.wnl.0000340982.01727.6e. PMID 19129507. 35. ↑ "Geometrical diffusion measures for MRI from tensor basis analysis.". ISMRM '97. Vancouver Canada. 1997. p. 1742. 36. ↑ "Processing and visualization for diffusion tensor MRI". Medical Image Analysis 6 (2): 93–108. June 2002. doi:10.1016/s1361-8415(02)00053-1. PMID 12044998. 37. ↑ "Diffusion tensor imaging of the brain". Neurotherapeutics 4 (3): 316–329. July 2007. doi:10.1016/j.nurt.2007.05.011. PMID 17599699. 38. ↑ "Perfusion imaging in brain disease". Diagnostic and Interventional Imaging 94 (12): 1241–1257. December 2013. doi:10.1016/j.diii.2013.06.009. PMID 23876408. 39. ↑ "Ischaemic stroke". https://radiopaedia.org/articles/ischaemic-stroke. 40. ↑ "Magnetic resonance diffusion-perfusion mismatch in acute ischemic stroke: An update". World Journal of Radiology 4 (3): 63–74. March 2012. doi:10.4329/wjr.v4.i3.63. PMID 22468186. 41. ↑ "Diffusion-weighted MRI in the body: applications and challenges in oncology". AJR. American Journal of Roentgenology 188 (6): 1622–1635. June 2007. doi:10.2214/AJR.06.1403. PMID 17515386. 42. ↑ Takahara, Taro; Kwee, Thomas C. (2010). "Diffusion-Weighted Whole-Body Imaging with Background Body Signal Suppression (DWIBS)". Diffusion-Weighted MR Imaging. Medical Radiology. pp. 227–252. doi:10.1007/978-3-540-78576-7_14. ISBN 978-3-540-78575-0. 43. ↑ "Diffusion kurtosis MRI as a predictive biomarker of response to neoadjuvant chemotherapy in high grade serous ovarian cancer". Scientific Reports 9 (1): 10742. July 2019. doi:10.1038/ s41598-019-47195-4. PMID 31341212. Bibcode: 2019NatSR...910742D. 44. ↑ ^44.0 ^44.1 "Diffusion Tensor Imaging for Diagnosing Root Avulsions in Traumatic Adult Brachial Plexus Injuries: A Proof-of-Concept Study". Frontiers in Surgery 7: 19. 16 April 2020. doi: 10.3389/fsurg.2020.00019. PMID 32373625. 45. ↑ "Morphometry of in vivo human white matter association pathways with diffusion-weighted magnetic resonance imaging". Annals of Neurology 42 (6): 951–962. December 1997. doi:10.1002/ ana.410420617. PMID 9403488. 46. ↑ "DTI (Quantitative), a new and advanced MRI procedure for evaluation of Concussions". 2015. http://www.doctorsimaging.com/services-in-metairie-new-orleans-louisiana/ 47. ↑ "Diffusion tensor imaging of the roots of the brachial plexus: a systematic review and meta-analysis of normative values". Clinical and Translational Imaging 8 (6): 419–431. 9 October 2020. doi :10.1007/s40336-020-00393-x. PMID 33282795. 48. ↑ Griffiths, Timothy T.; Flather, Robert; Teh, Irvin; Haroon, Hamied A.; Shelley, David; Plein, Sven; Bourke, Grainne; Wade, Ryckie G. (2021-07-22). "Diffusion tensor imaging in cubital tunnel syndrome" (in en). Scientific Reports 11 (1): 14982. doi:10.1038/s41598-021-94211-7. ISSN 2045-2322. PMID 34294771. 49. ↑ Rojoa, Djamila; Raheman, Firas; Rassam, Joseph; Wade, Ryckie G. (2021-10-22). "Meta-analysis of the normal diffusion tensor imaging values of the median nerve and how they change in carpal tunnel syndrome" (in en). Scientific Reports 11 (1): 20935. doi:10.1038/s41598-021-00353-z. ISSN 2045-2322. PMID 34686721. Bibcode: 2021NatSR..1120935R. 50. ↑ ^50.0 ^50.1 "Q-ball imaging". Magnetic Resonance in Medicine 52 (6): 1358–1372. December 2004. doi:10.1002/mrm.20279. PMID 15562495. 51. ↑ "High angular resolution diffusion imaging of the human brain". Proceedings of the 7th Annual Meeting of the ISMRM. Philadelphia. 1999. NAID 10027851300. https://cds.ismrm.org/ismrm-1999/PDF2/ 52. ↑ "High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity". Magnetic Resonance in Medicine 48 (4): 577–582. October 2002. doi:10.1002/mrm.10268. PMID 53. ↑ "The estimation of second-order tensors with related tests and designs". Biometrika 50 (3–4): 353–373. 1963. doi:10.1093/biomet/50.3-4.353. 54. ↑ "Spectral decomposition of a 4th-order covariance tensor: applications to diffusion tensor MRI". Signal Processing 87 (2): 220–236. 2007. doi:10.1016/j.sigpro.2006.02.050. 55. ↑ "Uber eine geometrische Anwendung der Abelschen Integralgleichnung". Math. Ann. 77: 129–135. 1919. doi:10.1007/BF01456824. https://zenodo.org/record/2279295. 56. ↑ Wade, Ryckie G.; Tam, Winnie; Perumal, Antonia; Pepple, Sophanit; Griffiths, Timothy T.; Flather, Robert; Haroon, Hamied A.; Shelley, David et al. (2023-10-13). "Comparison of distortion correction preprocessing pipelines for DTI in the upper limb" (in en). Magnetic Resonance in Medicine. doi:10.1002/mrm.29881. ISSN 0740-3194. PMID 37831659. https://onlinelibrary.wiley.com/doi/ External links Original source: https://en.wikipedia.org/wiki/Diffusion MRI. Read more
{"url":"https://handwiki.org/wiki/Diffusion_MRI","timestamp":"2024-11-05T09:48:58Z","content_type":"text/html","content_length":"207449","record_id":"<urn:uuid:27dbbf36-59a9-4988-828a-3878620acb0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00101.warc.gz"}
, Fall 2021 Calculus II (Math 181), Fall 2021 Textbook, Syllabus and Grades Open Source Textbooks These textbooks are free to copy legally, since their authors have used their copyright to provide you with a license allowing unlimited free copying. They would be good companions to the material in our text. Syllabus, Grades Online Homework Course WeBWorK Site: WeBWorK Online Homework Sage Cell Server This is: http://buzzard.ups.edu/courses/2021fall/181f2021.html Maintained by: Rob Beezer Last updated: August 23, 2021
{"url":"http://buzzard.ups.edu/courses/2021fall/181f2021.html","timestamp":"2024-11-04T10:29:03Z","content_type":"text/html","content_length":"5946","record_id":"<urn:uuid:fe416606-7a56-4c0c-a01e-8bacb92243d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00074.warc.gz"}
How To Retire On Less Than $1 Million I spend a lot of time helping people understand how much money they will need to meet their retirement goals. Many of these infopeople believe that they must have at least $1 million saved in order to retire and never worry about running out of money. Today I want to show that many people do not necessarily need $1 million or more to retire comfortably. I will show an example couple who is very concerned that they won't have enough money saved. They also fear that interest rates will remain low and they don't want to just put everything into equity funds. Like all retirement calculations, this one involves many assumptions. But as long as our assumptions are reasonable, say 7% for equity returns rather than the 10% figure that many people used to use, we can come up with a very reasonable estimate for how much money one needs to retire comfortably. Let’s start with the assumptions I used for the couple we will look at: Inflation (CPI) 2.5% Current Age of Both People 55 Age Of Retirement 65 Age When Both People Have Passed Away 85 Social Security at age 67 (combined) $40,000 per year Average Savings Rate $10,000 per year Total Investment Balance Today $400,000 (50% in Taxable, 50% in IRAs) Recurring Annual Expenses in Retirement $70,000 70% U.S. Value Stocks, Investment Mix 30% Medium Term Treasuries Return Assumption Value Stocks 7% per year Standard Deviation Value Stocks 16.20% Return Assumption Treasuries 3% per year Standard Deviation Treasuries 7.20% Before generating a retirement plan for this couple the first thing we need to clear up is, what constitutes success? We live in a dynamic world, especially when it comes to investing. So I like to look at the probability of never running out of money in retirement using Monte Carlo analysis, where thousands of scenarios are run, shocking investment returns in every scenario in every year. In this example I will define success as having a probability of at least 85% that funds never run out in retirement. Using the WealthTrace retirement planner I calculated that they will have about $830,000 (in today's dollar terms) when they retire. I also calculated, using Monte Carlo analysis, that they would have a 70% chance of never running out of money. We have two interesting things here: One is that they have less than $1 million at retirement. We also see that their probability of success is relatively low at 70%. The question now is, how can we boost their probability of never running out of money? We know that they can move closer to their goal if they cut their expenses in retirement, save more money, or retire later. But let's assume that they are already saving the maximum amount they can and that they absolutely do not want to retire later or cut their expenses in retirement. Given this there are really only two ways this couple can increase their probability of retirement success: They can find higher returning investments with the same level of volatility they currently have or they can find investments that have the same returns, but less volatility. My favorite way to reduce volatility while maintaining reasonable levels of return is to buy high quality dividend paying stocks that have a history of rising dividends over time. A few of my favorite dividend payers for retirement portfolios that have consistently raised their dividends over the years are Johnson & Johnson (JNJ), Sysco (SYY), AT&T (T), Wal-Mart (WMT), Coca-Cola (KO), and Eli Lilly (LLY). Company Div Yield 3 Yr Div Growth Annual Rate 5 Yr Div Growth Annual Rate JNJ 2.7% 7.1% 7.6% SYY 3.1% 3.9% 4.0% T 5.3% 2.1% 2.3% WMT 2.5% 15.8% 14.6% KO 3.0% 8.4% 8.1% LLY 3.3% 0% 0.6% I replaced their Equity Value fund with the stocks listed above, equally weighted. I kept the same total return assumption, but lowered the level of volatility. In this very informative article the author shows that dividend-growth stocks have volatility levels that have historically been about 33% lower than non-dividend paying stocks. This makes sense because, in general, dividend payouts are much more stable than equity prices. If a stock derives, say half, of its return from dividends, then the volatility of its total returns will likely be much lower than non-dividend payers. Given this, I lowered the volatility assumption on my dividend-growth stocks by 33% compared to my initial assumption on value stocks, which was 16.2% per year. So the standard deviation for my dividend payers is 10.7%. The probability that this couple never runs out of money now jumps from 70% to 86%. This is a large jump, solely due to the fact that they are now invested in more stable, solid dividend paying stocks instead of an equity index fund. To summarize, those who think that they must have at least $1 million when they retire aren't too far from the truth. But the situation can be changed for the better by picking those investments that have shown they can pay a relatively stable and growing dividend over time. Each person and couple has a different situation and might need to change a variety of things in order to meet their retirement goals. But it is usually impossible to tell whether or not you can retire when you want until you sit down and actually run through the numbers. At that point you can begin running interesting scenarios that will tell you what you need to do to get to your goals.
{"url":"https://www.mywealthtrace.com/blog/how-to-retire-on-less-than-$1-million","timestamp":"2024-11-04T02:15:12Z","content_type":"text/html","content_length":"75381","record_id":"<urn:uuid:ae2450c0-eddd-42cb-82db-19286835115a>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00471.warc.gz"}
mm to cbm calculator In today’s fast-paced world, accurate calculations are crucial, especially in fields like construction, logistics, and manufacturing. One such vital calculation is converting measurements from cubic meters (m³) to cubic centimeters (cm³) and vice versa. To simplify this process, we’ll introduce a handy MM to CBM calculator, designed to provide precise results efficiently. How to Use Using the MM to CBM calculator is straightforward. Simply input the value in either cubic meters or cubic centimeters, and the calculator will swiftly convert it to the desired unit. Press the “Calculate” button to obtain the result instantly. The conversion formula for transforming cubic meters (m³) to cubic centimeters (cm³) and vice versa is as follows: 1 m³ = 1,000,000 cm³ This formula allows for seamless conversion between the two units, ensuring accuracy in your calculations. Example Solve Let’s consider an example: If we have a volume of 5 cubic meters, the equivalent volume in cubic centimeters would be: 5 m³ = 5 * 1,000,000 cm³ = 5,000,000 cm³ Q: Can this calculator handle decimal values? A: Yes, the calculator can handle decimal values, ensuring precision in your calculations. Q: Is the calculator’s conversion formula accurate? A: Absolutely. The conversion formula used in this calculator is precise and reliable, ensuring accurate results every time. Q: Can I use this calculator for free? A: Yes, this MM to CBM calculator is completely free to use, providing convenience without any cost. In conclusion, the MM to CBM calculator serves as a valuable tool for professionals and enthusiasts alike, offering a quick and accurate solution for converting volumes between cubic meters and cubic centimeters. Its simplicity and efficiency make it a must-have for anyone dealing with measurements in these units.
{"url":"https://calculatordoc.com/mm-to-cbm-calculator/","timestamp":"2024-11-12T06:35:06Z","content_type":"text/html","content_length":"83845","record_id":"<urn:uuid:f2cd216e-28f2-41b0-a621-7a5650d9cca3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00359.warc.gz"}
Neural Networks from Scratch, in R By Ilia Karmanov, Data Scientist at Microsoft This post is for those of you with a statistics/econometrics background but not necessarily a machine-learning one and for those of you who want some guidance in building a neural-network from scratch in R to better understand how everything fits (and how it doesn't). Andrej Karpathy wrote that when CS231n (Deep Learning at Stanford) was offered: "we intentionally designed the programming assignments to include explicit calculations involved in backpropagation on the lowest level. The students had to implement the forward and the backward pass of each layer in raw numpy. Inevitably, some students complained on the class message boards". Why bother with backpropagation when all frameworks do it for you automatically and there are more interesting deep-learning problems to consider? Nowadays we can literally train a full neural-network (on a GPU) in 5 lines. import keras model = Sequential() model.add(Dense(512, activation='relu', input_shape=(784,))) model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer=RMSprop()) Karpathy, abstracts away from the "intellectual curiosity" or "you might want to improve on the core algorithm later" argument. His argument is that the calculations are a leaky abstraction: “it is easy to fall into the trap of abstracting away the learning process—believing that you can simply stack arbitrary layers together and backprop will 'magically make them work' on your Hence, my motivation for this post is two-fold: 1. Understanding (by writing from scratch) the leaky abstractions behind neural-networks dramatically shifted my focus to elements whose importance I initially overlooked. If my model is not learning I have a better idea of what to address rather than blindly wasting time switching optimisers (or even frameworks). 2. A deep-neural-network (DNN), once taken apart into lego blocks, is no longer a black-box that is inaccessible to other disciplines outside of AI. It's a combination of many topics that are very familiar to most people with a basic knowledge of statistics. I believe they need to cover very little (just the glue that holds the blocks together) to get an insight into a whole new realm. Starting from a linear regression we will work through the maths and the code all the way to a deep-neural-network (DNN) in the accompanying R-notebooks. Hopefully to show that very little is actually new information. Step 1 - Linear Regression (See Notebook) Implementing the closed-form solution for the Ordinary Least Squares estimator in R requires just a few lines: # Matrix of explanatory variables X <- as.matrix(X) # Add column of 1s for intercept coefficient intcpt <- rep(1, length(y)) # Combine predictors with intercept X <- cbind(intcpt, X) # OLS (closed-form solution) beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y The vector of values in the variable beta_hat define our "machine-learning model". A linear regression is used to predict a continuous variable (e.g. how many minutes will this plane be delayed by). In the case of predicting a category (e.g. will this plane be delayed - yes/no) we want our prediction to fall between 0 and 1 so that we can interpret it as the probability of observing the respective category (given the data). When we have just two mutually-exclusive outcomes we would use a binomial logistic regression. With more than two outcomes (or "classes"), which are mutually-exclusive (e.g. this plane will be delayed by less than 5 minutes, 5-10 minutes, or more than 10 minutes), we would use a multinomial logistic regression (or "softmax"). In the case of many (n) classes that are not mutually-exclusive (e.g. this post references "R" and "neural-networks" and "statistics"), we can fit n-binomial logistic regressions. An alternative approach to the closed-form solution we found above is to use an iterative method, called Gradient Descent (GD). The procedure may look like so: • Start with a random guess for the weights • Plug guess into loss function • Move guess in the opposite direction of the gradient at that point by a small amount (something we call the `learning-rate') • Repeat above for N steps GD only uses the Jacobian matrix (not the Hessian), however we know that when we have a convex loss, all local minima are global minima and thus GD is guaranteed to converge to the global minimum. The loss-function used for a linear-regression is the Mean Squared Error: \begin{equation*} C = \frac{1}{2n}\sum_x(y(x) - a(x))^2 \end{equation*} To use GD we only need to find the partial derivative of this with respect to beta_hat (the 'delta'/gradient). This can be implemented in R, like so: # Start with a random guess beta_hat <- matrix(0.1, nrow=ncol(X_mat)) # Repeat below for N-iterations for (j in 1:N) # Calculate the cost/error (y_guess - y_truth) residual <- (X_mat %*% beta_hat) - y # Calculate the gradient at that point delta <- (t(X_mat) %*% residual) * (1/nrow(X_mat)) # Move guess in opposite direction of gradient beta_hat <- beta_hat - (lr*delta) Running this for 200 iterations gets us to same gradient and coefficient as the closed-form solution. Aside from being a stepping stone to a neural-network (where we use GD), this iterative method can be useful in practice when the the closed-form solution cannot be calculated because the matrix is too big to invert (to fit into memory). Step 2 - Logistic Regression (See Notebook) A logistic regression is a linear regression for binary classification problems. The two main differences to a standard linear regression are: 1. We use an 'activation'/link function called the logistic-sigmoid to squash the output to a probability bounded by 0 and 1 2. Instead of minimising the quadratic loss we minimise the negative log-likelihood of the Bernoulli distribution Everything else remains the same. We can calcuate our activation function like so: sigmoid <- function(z){1.0/(1.0+exp(-z))} We can create our log-likelihood function in R: log_likelihood <- function(X_mat, y, beta_hat) scores <- X_mat %*% beta_hat ll <- (y * scores) - log(1+exp(scores)) This loss function (the logistic loss or the log-loss) is also called the cross-entropy loss. The cross-entropy loss is basically a measure of 'surprise' and will be the foundation for all the following models, so it is worth examining a bit more. If we simply constructed the least-squares loss like before, because we now have a non-linear activation function (the sigmoid), the loss will no longer be convex which will make optimisation hard. \begin{equation*} C = \frac{1}{2n}\sum_x(y(x) - a(x))^2 \end{equation*} We could construct our own loss function for the two classes. When \(y=1\), we want our loss function to be very high if our prediction is close to 0, and very low when it is close to 1. When \(y=0 \), we want our loss function to be very high if our prediction is close to 1, and very low when it is close to 0. This leads us to the following loss function: \begin{equation*} C = -\frac{1}{n}\sum_xy(x)\ln(a(x)) + (1 - y(x))\ln(1-a(x)) \end{equation*} The delta for this loss function is pretty much the same as the one we had earlier for a linear-regression. The only difference is that we apply our sigmoid function to the prediction. This means that the GD function for a logistic regression will also look very similar: logistic_reg <- function(X, y, epochs, lr) X_mat <- cbind(1, X) beta_hat <- matrix(1, nrow=ncol(X_mat)) for (j in 1:epochs) # For a linear regression this was: # 1*(X_mat %*% beta_hat) - y residual <- sigmoid(X_mat %*% beta_hat) - y # Update weights with gradient descent delta <- t(X_mat) %*% as.matrix(residual, ncol=nrow(X_mat))*(1/nrow(X_mat)) beta_hat <- beta_hat - (lr*delta) # Print log-likliehood print(log_likelihood(X_mat, y, beta_hat)) # Return Step 3 - Softmax Regression (No Notebook) A generalisation of the logistic regression is the multinomial logistic regression (also called 'softmax'), which is used when there are more than two classes to predict. I haven't created this example in R, because the neural-network in the next step can reduce to something similar, however for completeness I wanted to highlight the main differences if you wanted to create it. First, instead of using the sigmoid function to squash our (one) value between 0 and 1: \begin{equation*} \sigma(z)=\frac{1}{1+e^{-z}} \end{equation*} We use the softmax function to squash the sum of our \(n\) values (for \(n\) classes) to 1: \begin{equation*} \phi(z)=\frac{e^{z_j}}{\sum_ke^{z_k}} \end{equation*} This means the value supplied for each class can be interpreted as the probability of that class, given the evidence. This also means that when we see the target class and increase the weights to increase the probability of observing it, the probability of the other classes will fall. The implicit assumption is that our classes are mutually exclusive. Second, we use a more general version of the cross-entropy loss function: \begin{equation*} C = -\frac{1}{n}\sum_x\sum_j y_j\ln(a_j) \end{equation*} To see why, remember that for binary classifications (previous example) we had two classes: \(j=2\), under the condition that the categories are mutually-exclusive \(\sum_ja_j=1\) and that \(y\) is one-hot so that \(y1+y2=1\), we can re-write the general formula as: \begin{equation*} C = -\frac{1}{n}\sum_xy_1\ln(a_1) + (1 - y_1)\ln(1-a_1) \end{equation*} Which is the same equation we first started with. However, now we relax the constraint that \(j=2\). It can be shown that the cross-entropy loss here has the same gradient as for the case of the binary/two-class cross-entropy on logistic outputs. \begin{equation*} \frac{\partial C}{\partial \beta_i} = \frac{1}{n}\sum_xx_i(a(x) - y) \end{equation*} However, although the gradient has the same formula it will be different because the activation here takes on a different value (softmax instead of logistic-sigmoid). In most deep-learning frameworks you have the choice of 'binary-crossentropy' or 'categorical-crossentropy' loss. Depending on whether your last layer contains sigmoid or softmax activation you would want to choose binary or categorical cross-entropy (respectively). The training of the network should not be affected, since the gradient is the same, however the reported loss (for evaluation) would be wrong if these are mixed up. The motivation to go through softmax is that most neural-networks will use a softmax layer as the final/'read-out' layer, with a multinomial/categorical cross-entropy loss instead of using sigmoids with a binary cross-entropy loss — when the categories are mutually exclusive. Although multiple sigmoids for multiple classes can also be used (and will be used in the next example), this is generally only used for the case of non-mutually-exclusive labels (i.e. we can have multiple labels). With a softmax output, since the sum of the outputs is constrained to equal 1, we have the advantage of interpreting the outputs as class probabilities. Step 4 - Neural Network (See Notebook) A neural network can be thought of as a series of logistic regressions stacked on top of each other. This means we could say that a logistic regression is a neural-network (with sigmoid activations) with no hidden-layer. This hidden-layer lets a neural-network generate non-linearities and leads to the Universal approximation theorem, which states that a network with just one hidden layer can approximate any linear or non-linear function. The number of hidden-layers can go into the hundreds. It can be useful to think of a neural-network as a combination of two things: 1) many logistic regressions stacked on top of each other that are 'feature-generators' and 2) one read-out-layer which is just a softmax regression. The recent successes in deep-learning can arguable be attributed to the 'feature-generators'. For example; previously with computer vision, we had to painfully state that we wanted to find triangles, circles, colours, and in what combination (similar to how economists decide which interaction-terms they need in a linear regression). Now, the hidden-layers are basically an optimisation to decide which features (which 'interaction-terms') to extract. A lot of deep-learning (transfer learning) is actually done by generating features using a trained-model with the head (read-out layer) cut-off, and then training a logistic regression (or boosted decision-trees) using those features as inputs. The hidden-layer also means that our loss function is not convex in parameters and we can't roll down a smooth-hill to get to the bottom. Instead of using Gradient Descent (which we did for the case of a logistic-regression) we will use Stochastic Gradient Descent (SGD), which basically shuffles the observations (random/stochastic) and updates the gradient after each mini-batch (generally much less than total number of observations) has been propagated through the network. There are many alternatives to SGD that Sebastian Ruder does a great job of summarising here. I think this is a fascinating topic to go through, but outside the scope of this blog-post. Briefly, however, the vast majority of the optimisation methods are first-order (including SGD, Adam, RMSprop, and Adagrad) because calculating the second-order is too computionally difficult. However, some of these first-order methods have a fixed learning-rate (SGD) and some have an adaptive learning-rate (Adam), which means that the 'amount' we update our weights by becomes a function of the loss - we may make big jumps in the beginning but then take smaller steps as we get closer to the target. It should be clear, however that minimising the loss on training data is not the main goal - in theory we want to minimise the loss on 'unseen'/test data; hence all the opimisation methods proxy for that under the assumption that a low lost on training data will generalise to 'new' data from the same distribution. This means we may prefer a neural-network with a higher training-loss; because it has a lower validation-loss (on data it hasn't been trained on) - we would typically say that the network has 'overfit' in this case. There have been some recent papers that claim that adaptive optimisation methods do not generalise as well as SGD because they find very sharp minima points. Previously we only had to back-propagate the gradient one layer, now we also have to back-propagate it through all the hidden-layers. Explaining the back-propagation algorithm is beyond the scope of this post, however it is crucial to understand. Many good resources exist online to help. We can now create a neural-network from scratch in R using four functions. First, we initialise our weights: neuralnetwork <- function(sizes, training_data, epochs, mini_batch_size, lr, C, verbose=FALSE, Since we now have a complex combination of parameters we can't just initialise them to be 1 or 0, like before - the network may get stuck. To help, we use the gaussian distribution (however, just like with the opimisation, there are many other methods): Second, we use stochastic gradient descent as our optimisation method: Third, as part of the SGD method, we update the weights after each mini-batch has been forward and backwards-propagated: Fourth, the algorithm we use to calculate the deltas is the back-propagation algorithm. In this example we use the cross-entropy loss function, which produces the following gradient: cost_delta <- function(method, z, a, y) { if (method=='ce'){return (a-y)} Also, to be consistent with our logistic regression example we use the sigmoid activation for the hidden layers and for the read-out layer: # Calculate activation function sigmoid <- function(z){1.0/(1.0+exp(-z))} # Partial derivative of activation function sigmoid_prime <- function(z){sigmoid(z)*(1-sigmoid(z))} As mentioned previously, usually the softmax activation is used for the read-out layer. For the hidden layers, ReLU is more common, which is just the max function (negative weights get flattened to 0). The activation function for the hidden layers can be imagined as a race to carry a baton/flame (gradient) without it dying. The sigmoid function flattens out at 0 and at 1, resulting in a flat gradient which is equivalent to the flame dying out (we have lost our signal). The ReLU function helps preserve this gradient. The back-propagation function is defined as: backprop <- function(x, y, C, sizes, num_layers, biases, weights) Check out the notebook for the full code — however the principle remains the same: we have a forward-pass where we generate our prediction by propagating the weights through all the layers of the network. We then plug this into the cost gradient and update the weights through all of our layers. This concludes the creation of a neural network (with as many hidden layers as you desire). It can be a good exercise to replace the hidden-layer activation with ReLU and read-out to be softmax, and also add L1 and L2 regularization. Running this on the iris dataset in the notebook (which contains 4 explanatory variables with 3 possible outomes), with just one hidden-layer containing 40 neurons we get an accuracy of 96% after 30 rounds/epochs of training. The notebook also runs a 100-neuron handwriting-recognition example to predict the digit corresponding to a 28x28 pixel image. Step 5 - Convolutional Neural Network (See Notebook) Here, we will briefly examine only the forward-propagation in a convolutional neural-network (CNN). CNNs were first made popular in 1998 by LeCun's seminal paper. Since then, they have proven to be the best method we have for recognising patterns in images, sounds, videos, and even text! Image recognition was initially a manual process; researchers would have to specify which bits (features) of an image were useful to identify. For example, if we wanted to classify an image into ‘cat’ or ‘basketball’ we could have created code that extracts colours (basketballs are orange) and shapes (cats have triangular ears). Perhaps with a count of these features we could then run a linear regression to get the relationship between number of triangles and whether the image is a cat or a tree. This approach suffers from issues of image scale, angle, quality and light. Scale Invariant Feature Transformation (SIFT) largely improved upon this and was used to provide a `feature description' of an object, which could then be fed into a linear regression (or any other relationship learner). However, this approach had set-in-stone rules that could not be optimally altered for a specific domain. CNNs look at images (extract features) in an interesting way. To start, they look only at very small parts of an image (at a time), perhaps through a restricted window of 5 by 5 pixels (a filter). 2D convolutions are used for images, and these slide the window across until the whole image has been covered. This stage would typically extract colours and edges. However, the next layer of the network would look at a combination of the previous filters and thus 'zoom-out'. After a certain number of layers the network would be 'zoomed-out' enough to recognise shapes and larger structures. These filters end up as the 'features' that the network has learned to identify. It can then pretty much count the presence of each feature to identify a relationship with the image label ('basketball' or 'cat'). This approach appears quite natural for images — since they can broken down into small parts that describe it (colours, textures, etc.). CNNs appear to thrive on the fractal-like nature of images. This also means they may not be a great fit for other forms of data such as an excel worksheet where there is no inherent structure: we can change the column order and the data remains the same — try swapping pixels in an image (the image changes)! In the previous example we looked at a standard neural-net classifying handwritten text. In that network each neuron from layer \(i\), was connected to each neuron at layer \(j\) — our 'window' was the whole image. This means if we learn what the digit '2' looks like; we may not recognise it when it is written upside down by mistake, because we have only seen it upright. CNNs have the advantage of looking at small bits of the digit '2' and finding patterns between patterns between patterns. This means that a lot of the features it extracts may be immune to rotation, skew, etc. For more detail, Brandon Rohrer explains here what a CNN actually is in detail. We can define a 2D convolution function in R: convolution <- function(input_img, filter, show=TRUE, out=FALSE) conv_out <- outer( Vectorize(function(r,c) sum(input_img[r:(r+kernel_size[[1]]-1), And use it to a apply a 3x3 filter to an image: conv_emboss <- matrix(c(2,0,0,0,-1,0,0,0,-1), nrow = 3) convolution(input_img = r_img, filter = conv_emboss) You can check the notebook to see the result, however this seems to extract the edges from a picture. Other, convolutions can 'sharpen' an image, like this 3x3 filter: conv_sharpen <- matrix(c(0,-1,0,-1,5,-1,0,-1,0), nrow = 3) convolution(input_img = r_img, filter = conv_sharpen) Typically we would randomly initialise a number of filters (e.g. 64): filter_map <- lapply(X=c(1:64), FUN=function(x){ # Random matrix of 0, 1, -1 conv_rand <- matrix(sample.int(3, size=9, replace = TRUE), ncol=3)-2 convolution(input_img = r_img, filter = conv_rand, show=FALSE, out=TRUE) We can visualise this map with the following function: Running this function we notice how computationally intensive the process is (compared to a standard fully-connected layer). If these feature maps are not useful 'features' (i.e. the loss is difficult to decrease when these are used) then back-propagation will mean we will get different weights which correspond to different feature-maps; which will become more useful to make the Typically we stack convolutions on top of other convolutions (and hence the need for a deep network) so that edges becomes shapes and shapes become noses and noses become faces. It can be interesting to examine some feature maps from trained networks to see what the network has actually learnt. Download Notebooks You can find notebooks implementing the code behind this post on Github by following the links in the section headings, or as Azure Notebooks at the link below: Azure Notebooks: NeuralNetR You can follow this conversation by subscribing to the comment feed for this post. Wow... I understood a word or two. :) Great walk thru. Good bridge material between basic and advanced presentations.
{"url":"https://blog.revolutionanalytics.com/2017/07/nnets-from-scratch.html","timestamp":"2024-11-05T14:06:56Z","content_type":"application/xhtml+xml","content_length":"58958","record_id":"<urn:uuid:02228827-786d-4e6e-a969-aaaacc4b8dfe>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00869.warc.gz"}
Bounds for the strength of the graph minor and the immersion theorem Michael Rathjen, University of Leeds The graph minor theorem, GM, is arguably the most important theorem of graph theory. The strength of GM exceeds that of the standard classification systems of RM known as the “big five”. The plan is to survey the current knowledge about the strength of GM and other Kruskal-like principles, presenting lower and upper bounds.
{"url":"https://logic-gu.se/lindstrom-lectures/2018/2018/06/13/michael-rathjen-research/","timestamp":"2024-11-05T10:01:43Z","content_type":"text/html","content_length":"8354","record_id":"<urn:uuid:3d64d655-3af8-48b1-b8b4-1a685679641c>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00660.warc.gz"}
IF and CountIFS - Invalid Column value Hi - Attempting to use IF and CountIFS to bring back info from another referenced sheet. Using this formula I am able to return the specified text: =IF(COUNTIFS({Sheet 2 reference column Range 1}, [Column1]@row, {Sheet 2 column range 2}, [Column2]@row) > 0, "blah") Using this formula I receive the "invalid column value": =IF(COUNTIFS({Sheet 2 reference column Range 1}, [Column1]@row, {Sheet 2 column range 2}, [Column2]@row) > 0, {Sheet 2 column Range 3}) So the difference is that I'm trying to return the value in a 3rd column vs just a specified text when a match is found. Is there a way to do this via the IF and countIFS formula? I am able to return the correct value via the vlookup function, but due to the size of the sheets, I have to create multiple keys for each new column value i want to return. Best Answer • =join(collect({Sheet 2 column Range 3},{Sheet 2 reference column Range 1}, [Column1]@row, {Sheet 2 column range 2}, [Column2]@row),", ") give that a try. The reason you are getting an error is the last reference in your if statement, only true or fast can come out of an if criteria, so the criteria you built is lost once the program reaches that range. *Also you should name your ranges. it becomes important after the sheet has been running awhile and you either have to fix it because it broke or want to improve/edit it. • =join(collect({Sheet 2 column Range 3},{Sheet 2 reference column Range 1}, [Column1]@row, {Sheet 2 column range 2}, [Column2]@row),", ") give that a try. The reason you are getting an error is the last reference in your if statement, only true or fast can come out of an if criteria, so the criteria you built is lost once the program reaches that range. *Also you should name your ranges. it becomes important after the sheet has been running awhile and you either have to fix it because it broke or want to improve/edit it. • ok that makes sense about the true/false, thanks. I get unparseable for your suggested function. I'm not sure if I'm connecting the right ranges. what does range 3 represent? How does range 3 and range 1 match a single column 1 on sheet 1? But then range 2 only matches column 2 on sheet 1? • range 3 is your return column range 1 is your first criteria column, criteria is that is matches column1 in the same sheet and same row range 2 is your second criteria column, criteria is that it matches column2 in the same sheet and same row. • ok got it. I've got all that filled out correctly now. Still getting unparseable though. If I'm referencing range 1 from another sheet...the criteria is that it matches column1 which is in the current sheet/row? The sheet I'm creating the formula in? If that is true then I have it set up correctly - so maybe I need to adjust the delimeter of the Join function? I have it exactly as you put it, but you may have assumed I would replace with • Can you post the formula? • Just got it. Was missing a comma. Thanks for your help! =JOIN(COLLECT({Return column}, {criteria 1}, [Column5]@row, {Criteria 2}, [Column6]@row),", ") Help Article Resources
{"url":"https://community.smartsheet.com/discussion/68452/if-and-countifs-invalid-column-value","timestamp":"2024-11-08T05:38:04Z","content_type":"text/html","content_length":"425235","record_id":"<urn:uuid:6ec06ea1-4428-460a-8fea-dec5fabaa64d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00650.warc.gz"}
Classification loss for observations not used in training L = kfoldLoss(CVMdl) returns the cross-validated classification error rates estimated by the cross-validated, error-correcting output codes (ECOC) model composed of linear classification models CVMdl. That is, for every fold, kfoldLoss estimates the classification error rate for observations that it holds out when it trains using all other observations. kfoldLoss applies the same data used create CVMdl (see fitcecoc). L contains a classification loss for each regularization strength in the linear classification models that compose CVMdl. L = kfoldLoss(CVMdl,Name,Value) uses additional options specified by one or more Name,Value pair arguments. For example, specify a decoding scheme, which folds to use for the loss calculation, or verbosity level. Input Arguments Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose Name in quotes. LossFun — Loss function 'classiferror' (default) | 'classifcost' | function handle Loss function, specified as 'classiferror', 'classifcost', or a function handle. You can: • Specify the built-in function 'classiferror', then the loss function is the classification error. • Specify the built-in function 'classifcost'. In this case, the loss function is the observed misclassification cost. If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the loss values for 'classifcost' and 'classiferror' are identical. • Specify your own function using function handle notation. For what follows, n is the number of observations in the training data (CVMdl.NumObservations) and K is the number of classes (numel(CVMdl.ClassNames)). Your function needs the signature lossvalue = lossfun(C,S,W,Cost), where: □ The output argument lossvalue is a scalar. □ You choose the function name (lossfun). □ C is an n-by-K logical matrix with rows indicating which class the corresponding observation belongs. The column order corresponds to the class order in CVMdl.ClassNames. Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set every element of row p to 0. □ S is an n-by-K numeric matrix of negated loss values for classes. Each row corresponds to an observation. The column order corresponds to the class order in CVMdl.ClassNames. S resembles the output argument NegLoss of kfoldPredict. □ W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes its elements to sum to 1. □ Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) -eye(K) specifies a cost of 0 for correct classification, and 1 for misclassification. Specify your function using 'LossFun',@lossfun. Data Types: function_handle | char | string Output Arguments L — Cross-validated classification losses numeric scalar | numeric vector | numeric matrix Cross-validated classification losses, returned as a numeric scalar, vector, or matrix. The interpretation of L depends on LossFun. Let R be the number of regularizations strengths is the cross-validated models (CVMdl.Trained{1}.BinaryLearners{1}.Lambda) and F be the number of folds (stored in CVMdl.KFold). • If Mode is 'average', then L is a 1-by-R vector. L(j) is the average classification loss over all folds of the cross-validated model that uses regularization strength j. • Otherwise, L is a F-by-R matrix. L(i,j) is the classification loss for fold i of the cross-validated model that uses regularization strength j. Estimate k-Fold Cross-Validation Classification Error Load the NLP data set. X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. Cross-validate an ECOC model of linear classification models. rng(1); % For reproducibility CVMdl = fitcecoc(X,Y,'Learner','linear','CrossVal','on'); CVMdl is a ClassificationPartitionedLinearECOC model. By default, the software implements 10-fold cross validation. Estimate the average of the out-of-fold classification error rates. Alternatively, you can obtain the per-fold classification error rates by specifying the name-value pair 'Mode','individual' in kfoldLoss. Specify Custom Classification Loss Load the NLP data set. Transpose the predictor data. For simplicity, use the label 'others' for all observations in Y that are not 'simulink', 'dsp', or 'comm'. Y(~(ismember(Y,{'simulink','dsp','comm'}))) = 'others'; Create a linear classification model template that specifies optimizing the objective function using SpaRSA. t = templateLinear('Solver','sparsa'); Cross-validate an ECOC model of linear classification models using 5-fold cross-validation. Optimize the objective function using SpaRSA. Specify that the predictor observations correspond to rng(1); % For reproducibility CVMdl = fitcecoc(X,Y,'Learners',t,'KFold',5,'ObservationsIn','columns'); CMdl1 = CVMdl.Trained{1} CMdl1 = ResponseName: 'Y' ClassNames: [comm dsp simulink others] ScoreTransform: 'none' BinaryLearners: {6x1 cell} CodingMatrix: [4x6 double] CVMdl is a ClassificationPartitionedLinearECOC model. It contains the property Trained, which is a 5-by-1 cell array holding a CompactClassificationECOC model that the software trained using the training set of each fold. Create a function that takes the minimal loss for each observation, and then averages the minimal losses across all observations. Because the function does not use the class-identifier matrix (C), observation weights (W), and classification cost (Cost), use ~ to have kfoldLoss ignore their positions. lossfun = @(~,S,~,~)mean(min(-S,[],2)); Estimate the average cross-validated classification loss using the minimal loss per observation function. Also, obtain the loss for each fold. ce = kfoldLoss(CVMdl,'LossFun',lossfun) ceFold = kfoldLoss(CVMdl,'LossFun',lossfun,'Mode','individual') ceFold = 5×1 Find Good Lasso Penalty Using Cross-Validation To determine a good lasso-penalty strength for an ECOC model composed of linear classification models that use logistic regression learners, implement 5-fold cross-validation. Load the NLP data set. X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. For simplicity, use the label 'others' for all observations in Y that are not 'simulink', 'dsp', or 'comm'. Y(~(ismember(Y,{'simulink','dsp','comm'}))) = 'others'; Create a set of 11 logarithmically-spaced regularization strengths from $1{0}^{-7}$ through $1{0}^{-2}$. Lambda = logspace(-7,-2,11); Create a linear classification model template that specifies to use logistic regression learners, use lasso penalties with strengths in Lambda, train using SpaRSA, and lower the tolerance on the gradient of the objective function to 1e-8. t = templateLinear('Learner','logistic','Solver','sparsa',... Cross-validate the models. To increase execution speed, transpose the predictor data and specify that the observations are in columns. X = X'; rng(10); % For reproducibility CVMdl = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns','KFold',5); CVMdl is a ClassificationPartitionedLinearECOC model. Dissect CVMdl, and each model within it. numECOCModels = numel(CVMdl.Trained) ECOCMdl1 = CVMdl.Trained{1} ECOCMdl1 = ResponseName: 'Y' ClassNames: [comm dsp simulink others] ScoreTransform: 'none' BinaryLearners: {6×1 cell} CodingMatrix: [4×6 double] Properties, Methods numCLModels = numel(ECOCMdl1.BinaryLearners) CLMdl1 = ECOCMdl1.BinaryLearners{1} CLMdl1 = ResponseName: 'Y' ClassNames: [-1 1] ScoreTransform: 'logit' Beta: [34023×11 double] Bias: [-0.3169 -0.3169 -0.3168 -0.3168 -0.3168 -0.3167 -0.1725 -0.0805 -0.1762 -0.3450 -0.5174] Lambda: [1.0000e-07 3.1623e-07 1.0000e-06 3.1623e-06 1.0000e-05 3.1623e-05 1.0000e-04 3.1623e-04 1.0000e-03 0.0032 0.0100] Learner: 'logistic' Properties, Methods Because fitcecoc implements 5-fold cross-validation, CVMdl contains a 5-by-1 cell array of CompactClassificationECOC models that the software trains on each fold. The BinaryLearners property of each CompactClassificationECOC model contains the ClassificationLinear models. The number of ClassificationLinear models within each compact ECOC model depends on the number of distinct labels and coding design. Because Lambda is a sequence of regularization strengths, you can think of CLMdl1 as 11 models, one for each regularization strength in Lambda. Determine how well the models generalize by plotting the averages of the 5-fold classification error for each regularization strength. Identify the regularization strength that minimizes the generalization error over the grid. ce = kfoldLoss(CVMdl); [~,minCEIdx] = min(ce); minLambda = Lambda(minCEIdx); hold on ylabel('log_{10} 5-fold classification error') xlabel('log_{10} Lambda') legend('MSE','Min classification error') hold off Train an ECOC model composed of linear classification model using the entire data set, and specify the minimal regularization strength. t = templateLinear('Learner','logistic','Solver','sparsa',... MdlFinal = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns'); To estimate labels for new observations, pass MdlFinal and the new data to predict. More About Classification Error The classification error has the form $L=\sum _{j=1}^{n}{w}_{j}{e}_{j},$ • w[j] is the weight for observation j. The software renormalizes the weights to sum to 1. • e[j] = 1 if the predicted class of observation j differs from its true class, and 0 otherwise. In other words, the classification error is the proportion of observations misclassified by the classifier. Observed Misclassification Cost Binary Loss Extended Capabilities Automatic Parallel Support Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™. To run in parallel, specify the Options name-value argument in the call to this function and set the UseParallel field of the options structure to true using statset: For more information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox). GPU Arrays Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™. This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox). Version History Introduced in R2016a R2023b: Observations with missing predictor values are used in resubstitution and cross-validation computations Starting in R2023b, the following classification model object functions use observations with missing predictor values as part of resubstitution ("resub") and cross-validation ("kfold") computations for classification edges, losses, margins, and predictions. Model Type Model Objects Object Functions Discriminant analysis classification model ClassificationDiscriminant resubEdge, resubLoss, resubMargin, resubPredict ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict Ensemble of discriminant analysis learners for classification ClassificationEnsemble resubEdge, resubLoss, resubMargin, resubPredict ClassificationPartitionedEnsemble kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict Gaussian kernel classification model ClassificationPartitionedKernel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict ClassificationPartitionedKernelECOC kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict Linear classification model ClassificationPartitionedLinear kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict ClassificationPartitionedLinearECOC kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict Neural network classification model ClassificationNeuralNetwork resubEdge, resubLoss, resubMargin, resubPredict ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict Support vector machine (SVM) classification model ClassificationSVM resubEdge, resubLoss, resubMargin, resubPredict ClassificationPartitionedModel kfoldEdge, kfoldLoss, kfoldMargin, kfoldPredict In previous releases, the software omitted observations with missing predictor values from the resubstitution and cross-validation computations.
{"url":"https://uk.mathworks.com/help/stats/classificationpartitionedlinearecoc.kfoldloss.html","timestamp":"2024-11-09T04:13:51Z","content_type":"text/html","content_length":"153213","record_id":"<urn:uuid:71395037-b79d-4694-92c1-d7efe07cb85d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00786.warc.gz"}
Numbers in Forth Big/Giant number package Giant Numbers in Forth What is this? There are two bignumber packages available here. The first is bignum.frt. It is completely written in Forth. Non-standard, but no problem for iForth. All algorithms are taken from Knuth's TAOCP. Although the multiplication method doesn't use modern tricks like Karatsuba and FFT, for numbers up to a few thousand bits it is certainly competitive with (if not faster than) GiantInt. The second package is loaded by including factor.frt. It uses Perfectly Scientific's GiantInt package, which is written in "C". iForth accesses the code through the gint.dll. Glue routines, comparable to the ones needed for BLAS and NRC, are in gint.frt. The header file gint.h may allow you to use gint.dll in other contexts. The full code can be downloaded here. I have not ported it to Linux yet. That should be straight-forward though. You will find that both bignum packages are heavily spiked with number theoretic functions. I had hoped to get to the forefront of technology by incorporating GiantInt. It became clear while porting that technology has made vast progress since 1998 (last GiantInt release). My belief is now that up to maybe 100 digits gint.dll is OK, but not spectacular. For real heavy duty stuff it will be necessary to link to NFS or MPQS routines. GiantInt Function overview • The standard +,-,*,/ and % operators. These work on two GIANTs or on a GIANT and a 16-bit number (gint.dll unfortunately uses short ints). • GCD in various flavors. • Modulus, exponentiation and shift for GIANTs, special cased for Mersenne and Fermat numbers. • Stack-like allocation and deallocation of GIANTs. • Elliptic curve routines (high-level variant of George Woltmans' Mersenne crusher) • A very fast prime test, bigprimeq. • Faculty (!) and squareroot functions (implemented in Forth) • Miller-Rabin prime test, also in high-level Forth. • Lucas-Lehmer prime test (for Mersenne numbers), in Forth The sources for the gint.dll of the full GiantInt package can be downloaded from: http://www.perfsci.com/free/giantint/giantint.html. (Release 22 OCT 2001) "You may download giantint at any time, free of charge. PSI makes no warranty for this free software and we disclaim any and all implied warranties or conditions, including any implied warranty of title, of noninfringement, of merchantability, or of fitness for a particular purpose. If these terms are acceptable to you and you wish to download the free software, click here..." The sources can be compiled to gint.dll, with some work. What to do exactly will be come clear when I release the Linux variant. S" 99999999999999999999999991111111111111111173212028727617181111" .factor 33107 * 560929 * 5384833477785736418660700040798137594423688327731437 [composite?] S" 5384833477785736418660700040798137594423688327731437" .factor ( should give: 548469301596654329 * 9817930487832035819402468332988053 or time out ) The timeout happens. However, it may just mean that Lenstra's method (from which I swiped the example) is better than what is in the GiantInt package. free counter
{"url":"http://iforth.nl/bignum/bignum.html","timestamp":"2024-11-04T21:03:43Z","content_type":"text/html","content_length":"5003","record_id":"<urn:uuid:29f9e443-0289-48cf-ba0c-2ed65c12bd30>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00814.warc.gz"}
the ultimate analysis of coal on a dry basis The moisture result is utilized for calculating the dry basis results of other analytical results. The ash result is utilized in the ultimate analysis calculation of oxygen by difference (ASTM D3176) and for calculating material balance and ash load purposes in industrial boiler systems. ... Coal quality (ash content, porosity, moisture content ...
{"url":"https://www.caen-utilitaires-14.fr/2021-02-28/5445.html","timestamp":"2024-11-11T17:13:50Z","content_type":"text/html","content_length":"45479","record_id":"<urn:uuid:88b2f514-25fc-4a99-a5c8-7940bf40ea94>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00132.warc.gz"}
Base change for derived hom We have already seen some material discussing this in Lemma 15.65.4 and in Algebra, Section 10.73. Lemma 15.99.1. Let $R \to R'$ be a ring map. For $K \in D(R)$ and $M \in D(R')$ there is a canonical isomorphism \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, M) = R\mathop{\mathrm{Hom}}\nolimits _{R'}(K \otimes _ R^\mathbf {L} R', M) \] Lemma 15.99.1. Let $R \to R'$ be a ring map. For $K \in D(R)$ and $M \in D(R')$ there is a canonical isomorphism \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, M) = R\mathop{\mathrm{Hom}}\nolimits _{R'}(K \otimes _ R^\mathbf {L} R', M) \] Proof. Choose a K-injective complex of $R'$-modules $J^\bullet $ representing $M$. Choose a quasi-isomorphism $J^\bullet \to I^\bullet $ where $I^\bullet $ is a K-injective complex of $R$-modules. Choose a K-flat complex $K^\bullet $ of $R$-modules representing $K$. Consider the map \[ \mathop{\mathrm{Hom}}\nolimits ^\bullet (K^\bullet \otimes _ R R', J^\bullet ) \longrightarrow \mathop{\mathrm{Hom}}\nolimits ^\bullet (K^\bullet , I^\bullet ) \] The map on degree $n$ terms is given by the map \[ \prod \nolimits _{n = p + q} \mathop{\mathrm{Hom}}\nolimits _{R'}(K^{-q} \otimes _ R R', J^ p) \longrightarrow \prod \nolimits _{n = p + q} \mathop{\mathrm{Hom}}\nolimits _ R(K^{-q}, I^ p) \] coming from precomposing by $K^{-q} \to K^{-q} \otimes _ R R'$ and postcomposing by $J^ p \to I^ p$. To finish the proof it suffices to show that we get isomorphisms on cohomology groups: \[ \mathop{\mathrm{Hom}}\nolimits _{D(R)}(K, M) = \mathop{\mathrm{Hom}}\nolimits _{D(R')}(K \otimes _ R^\mathbf {L} R', M) \] which is true because base change $- \otimes _ R^\mathbf {L} R' : D(R) \to D(R')$ is left adjoint to the restriction functor $D(R') \to D(R)$ by Lemma 15.60.3. $\square$ Let $R \to R'$ be a ring map. There is a base change map $$\label{more-algebra-equation-base-change-RHom} R\mathop{\mathrm{Hom}}olimits _ R(K, M) \otimes _ R^\mathbf {L} R' \longrightarrow R\mathop{\mathrm{Hom}}olimits _{R'}(K \otimes _ R^\mathbf {L} R', M \otimes _ R^\mathbf {L} R')$$ in $D(R')$ functorial in $K, M \in D(R)$. Namely, by adjointness of $- \otimes _ R^\mathbf {L} R' : D(R) \to D(R')$ and the restriction functor $D(R') \to D(R)$, this is the same thing as a map \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, M) \longrightarrow R\mathop{\mathrm{Hom}}\nolimits _{R'}(K \otimes _ R^\mathbf {L} R', M \otimes _ R^\mathbf {L} R') = R\mathop{\mathrm{Hom}}\nolimits _ R(K, M \otimes _ R^\mathbf {L} R') \] (equality by Lemma 15.99.1) for which we can use the canonical map $M \to M \otimes _ R^\mathbf {L} R'$ (unit of the adjunction). Lemma 15.99.2. Let $R \to R'$ be a ring map. Let $K, M \in D(R)$. The map (15.99.1.1) \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, M) \otimes _ R^\mathbf {L} R' \longrightarrow R\mathop{\mathrm{Hom}}\ nolimits _{R'}(K \otimes _ R^\mathbf {L} R', M \otimes _ R^\mathbf {L} R') \] is an isomorphism in $D(R')$ in the following cases $K$ is perfect, $R'$ is perfect as an $R$-module, $R \to R'$ is flat, $K$ is pseudo-coherent, and $M \in D^{+}(R)$, or $R'$ has finite tor dimension as an $R$-module, $K$ is pseudo-coherent, and $M \in D^{+}(R)$ Lemma 15.99.2. Let $R \to R'$ be a ring map. Let $K, M \in D(R)$. The map (15.99.1.1) \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, M) \otimes _ R^\mathbf {L} R' \longrightarrow R\mathop{\mathrm{Hom}}\nolimits _{R'}(K \otimes _ R^\mathbf {L} R', M \otimes _ R^\mathbf {L} R') \] $R \to R'$ is flat, $K$ is pseudo-coherent, and $M \in D^{+}(R)$, or $R'$ has finite tor dimension as an $R$-module, $K$ is pseudo-coherent, and $M \in D^{+}(R)$ Proof. We may check the map is an isomorphism after applying the restriction functor $D(R') \to D(R)$. After applying this functor our map becomes the map \[ R\mathop{\mathrm{Hom}}\nolimits _ R(K, L) \otimes _ R^\mathbf {L} R' \longrightarrow R\mathop{\mathrm{Hom}}\nolimits _ R(K, L \otimes _ R^\mathbf {L} R') \] of Lemma 15.73.5. See discussion above the lemma to match the left and right hand sides; in particular, this uses Lemma 15.99.1. Thus we conclude by Lemma 15.98.3. $\square$ Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0E1V. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0E1V, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0E1V","timestamp":"2024-11-05T13:11:19Z","content_type":"text/html","content_length":"18811","record_id":"<urn:uuid:862912ac-3f1b-4e04-a1bc-f19d034f423a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00437.warc.gz"}
Combinatorics and Geometry Days II at MIPT Combinatorics and Geometry Days II at MIPT April 13 - 16, 2020 The conference aims to be a meeting point for combinatorialists and geometers, where they will be able to present and discuss their current research, as well as give broader scope lectures accessible to interested PhD and master students. All of the talks will be held on the zoom meeting: Meeting ID: 303-932-461 Password: first 6 decimal places of $\pi$ Also there will be 2 zoom chat-rooms for different discussions. You are welcome to register in conference slack account to be able to see the discussions in groups and join them when you like. Alternatively, you can watch the live stream on our Twitch channel. No registration necessary. Danila Cherkashin Saint Petersburg State University, Spb Oleg German MSU, Moscow Alexander Guterman MSU, Moscow Nora Frankl London School of Economics, UK and MIPT, Moscow Grigory Ivanov TU Wien, Austria and MIPT, Moscow Grigory Kabatiansky Skoltech, Moscow Nikita Kalinin HSE and Saint Petersburg State University, Spb Vassily Manturov BMSTU, Moscow Márton Naszódi Alfréd Rényi Institute of Mathematics and Eötvös University, Hungary Konstantin Olmezov MIPT, Moscow Balázs Patkós Alfréd Rényi Institute of Mathematics, Hungary and MIPT, Moscow Ilya Shkredov Steklov Mathematical Institute, MSU and MIPT, Moscow Anna Taranenko Sobolev Institute of Mathematics, Novosibirsk Gábor Tardos Alfréd Rényi Institute of Mathematics, Hungary and MIPT, Moscow István Tomon ETH Zurich and MIPT, Moscow Ilya Vorobyev Skoltech, Moscow Konstantin Vorob'ev Sobolev Institute of Mathematics, Novosibirsk Dmitry Zakharov HSE and MIPT, Moscow Maksim Zhukovskii MIPT, Moscow Yelena Yuditsky Ben-Gurion University of the Negev, Israel Here is the preliminary schedule for the conference. April 13 Section 1 chaired by Alexandr Polyanskii 15.10 - 15.30 MSK (UTC +3) Ilya Vorobyev Skoltech, Moscow Multistage group testing algorithms Video Slides Show abstract Hide abstract Group testing is a well-known search problem that consists in detecting of $s$ defective members in a set of $t$ elements by carrying out tests on properly chosen subsets. A test result is positive if there is at least one defective element in a tested subset; otherwise, the result is negative. Our goal is to find all defective elements by using the minimal possible number of tests in the worst Two types of algorithms are usually considered in group testing. Adaptive algorithms can use results of previous tests to determine which subset of samples to test at the next step. In non-adaptive algorithms all tests are predetermined and can be carried out in parallel. We consider multistage algorithms, which can be seen as a compromise solution to the group testing problem. The advantage of this approach is that we can greatly reduce the total number of tests, but still perform a lot of them in parallel. Our general goal is to construct a multistage search procedure, having asymptotically the same number of tests as an adaptive one. We propose such algorithms for $s=2, 3$. 15.40 - 16.10 MSK (UTC +3) Danila Cherkashin Saint Petersburg State University, Spb Maximal distance minimizers for rectangle joint work with A. Gordeev, G. Strukov and Y. Teplitskaya Video Slides Show abstract Hide abstract Fix a compact $M \subset \mathbb{R}^2$ and $r>0$. Maximal distance minimizer is a connected set $\Sigma$ of the minimal length such that $$ \max_{y \in M} dist(y,\Sigma) \leq r,$$ i.e. $M \subset B_r We determine the set of maximal distance minimizers for rectangle and small enough $r$. Theorem. Let $M$ be a rectangle, $0 < r < r_0(M)$. Then maximal distance minimizer is unique (up to symmetries of $M$). It is depicted on the picture below (the right part of the picture contains enlarged fragment of the minimizer; the marked angle tends to $\frac{11\pi}{12}$ with $r\to 0$). 16.20 - 16.40 MSK (UTC +3) Maksim Zhukovskii MIPT, Moscow Weak saturation in sparse random graphs joint work with Mohamad Reza Bidgoli Video Slides Show abstract Hide abstract The notion of weak saturation was introduced in 1968 by Bollobas. Let $H$ be a spanning subgraph of a graph $G$, and $F$ be a pattern graph. $H$ is called weakly $F$-satured in $G$, if the edges of $G\setminus H$ may be added back one by one in a way such that every edge creates a new copy of $F$. The smallest number of edges in a weakly $F$-satured graph in $G$ is called a weak saturation number and is denoted by w-sat$(G,F)$. Bollobas conjectured that w-sat$(K_n,K_s)=(s-2)n-{s-1\choose 2}$. This conjecture was proved by P. Frankl in 1982. Unexpectedly, w-sat is stable: if we remove edges from $K_n$ independently with constant probability, the weak saturation number does not change. This result was proven by Korandi and Sudakov in~2016. More formally, for every constant $p\in(0,1)$, a.a.s. w-sat$(G(n,p),K_s)= $w-sat$(K_n,K_s)=(s-2)n-{s-1\choose 2}$. They also noticed that the same is true for $n^{-\varepsilon(s)}\leq p\leq 1$ for certain small enough $\varepsilon(s)>0$ and ask about smaller $p$ and about possible threshold for the property w-sat$(G(n,p),K_s)=(s-2)n-{s-1\choose 2}$. We prove that the threshold exists. Moreover, • if $p\geq n^{-\frac{1}{2s-3}}[\ln n]^{8/5}$, then a.a.s. w-sat$(G(n,p),K_s)=(s-2)n-{s-1\choose 2}$; • if $p\leq n^{-\frac{2}{s+1}}[\ln n]^{\frac{2}{(s-2)(s+1)}}$, then a.a.s. w-sat$(G(n,p),K_s)\neq (s-2)n-{s-1\choose 2}$. 16.40 - 17.10 MSK (UTC +3) Section 2 chaired by Maksim Zhukovskii 17.10 - 17.40 MSK (UTC +3) István Tomon ETH Zurich and MIPT, Moscow String graphs have the Erdös-Hajnal property Video Slides Show abstract Hide abstract A string graph is the intersection graph of curves in the plane. Building on previous works of Fox and Pach [1, 2], we prove that there exists an absolute constant $c>0$ such that if $G$ is a string graph on $n$ vertices, then $G$ contains either a clique or an independent set of size at least $n^c$. [1] J. Fox and J. Pach, Erdös-Hajnal-type results on intersection patterns of geo-metric objects, in Horizons of combinatorics, G. O. H. Katona et al., eds., Bolyai Soc. Stud. Math. 17, Springer, Berlin, Heidelberg, (2008), 79—103. [2] J. Fox and J. Pach, String graphs and incomparability graphs, Advances in Mathematics 230 (2012), 1381—1401. 17.50 - 18.20 MSK (UTC +3) Gábor Tardos Alfréd Rényi Institute of Mathematics, Hungary and MIPT, Moscow Color-critical edges in Schrijver graphs joint work with Gábor Simonyi (Rényi Institute / Budapest Institute of Technology and Economics) Video Slides Show abstract Hide abstract The vertices of the Kneser graph $KG(n,k)$ are the $k$-subsets of an $n$-element base set and two vertices are connected if they are disjoint subsets. (Here $n$ and $k$ are integers and we assume $n> 2k-1$.) Proving Kneser's conjecture Lovász established in 1978 that the chromatic number of $KG(n,k)$ is $n-2k+2$. The proof jumpstarted combinatorial applications of topology. In the same year Schrijver defined the graph $SG(n,k)$ as the subgraph of $KG(n,k)$ induced by independent sets if the base set is considered as the vertex set of a cycle. He proved that $SG(n,k)$ has the same chromatic number as $KG(n,k)$ but it is vertex-critical: removing any one vertex from $SG(n,k)$ decreases its chromatic number. But $SG(n,k)$ is not edge-critical: the removal of some edges does not decrease its chromatic number. This started a quest in two directions: to find out which edges of $SG(n,k)$ are color-critical and to find edge-critical subgraphs of $SG(n,k)$ with the same chromatic number. Both problems are solved in the case of 4-chromatic Schrijver graphs that are closely related to quadrangulations of surfaces, for the latter problem Kaiser and Stehlik gave nice examples. The talk also contains partial results and conjectures for the general case. April 14 Section 1 chaired by István Tomon 15.00 - 15.30 MSK (UTC +3) Anna Taranenko Sobolev Institute of Mathematics, Novosibirsk Hypergraph matching problems and multidimensional matrices Video Slides Show abstract Hide abstract The talk aims to draw attention to the interplay between the hypergraph matching theory and the theory of diagonals in multidimensional matrices. We overview some classical or just nice results on existence and counting matchings in hypergraphs and see how the matrix approach works for them. In particular, we discuss matchings in d-partite d-graphs, the Ryser's conjecture and other generalizations of the Hall's theorem, upper bounds on the numbers of perfect matchings, and extremal cases for the existence of perfect matchings in hypergraphs known as space and divisibility 15.40 - 16.10 MSK (UTC +3) Konstantin Vorob'ev Sobolev Institute of Mathematics, Novosibirsk On equitable 2-partitions of Johnson graphs with the second eigenvalue Video Slides Show abstract Hide abstract The vertices of a Johnson graph $J(n,w)$, $n\geq 2w$, are all $w$-subsets of a fixed $n$-set. Two vertices are adjacent if and only if they have $w-1$ joint elements. This graph is distance-regular with $w+1$ distinct eigenvalues $\lambda_i(n,w)=(w-i)(n-w-i)-i$, $i=0,1, \dots w$. Given a graph $G$, a partition $(C_1,\ldots, C_{r})$ of its vertex set into $r$ cells is called an equitable partition with a quotient matrix $A=(a_{ij})$ if for any $i,j \in \{1,\ldots,r\}$ every vertex from $C_i$ has exactly $a_{ij}$ neighbors in $C_j$. An eigenvalue of the quotient matrix $A$ is called an eigenvalue of the partition. We consider equitable $2$-partitions of a Johnson graph $J(n,w)$ with a quotient matrix having eigenvalue $\lambda_2(n,w)$. The problem of existence of equitable $2$-partitions of Johnson graphs with given quotient matrix is far from solving. In particular, it includes a famous Delsarte's conjecture about non-existence of $1$-perfect codes in the Johnson scheme. One of possible ways is to characterise partitions with certain eigenvalues. Equitable $2$-partitions of the graph $J(n,w)$ with the eigenvalue $\lambda_1(n,w)$ were characterized by Meyerowitz [1]. Later, Gavrilyuk and Goryainov [2] found all realizable quotient matrices (i.e. quotient matrices of some existing partitions) of equitable $2$-partitions of $J(n,3)$ for odd $n$. In these work we consider equitable $2$-partitions of $J(n,w)$ with the second eigenvalue $\lambda_2(n,w)$ for $w\geq 4$. As the main result, we find all such realizable quotient matrices. Moreover, we characterize all equitable $2$-partitions with these matrices up to equivalence in cases $n>2w$ and $n=2w$, $w \geq 7$. Particularly, we find new infinite series of partitions of $J(2w,w)$ for $w\ geq 4$. [1] A. Meyerowitz, Cycle-balanced Conditions for Distance-regular Graphs, Discrete Mathematics 264 (2003), N3, 149—166. [2] A. L. Gavrilyuk, S. V. Goryainov, On perfect 2-colorings of Johnson graphs J(v,3), Journal of Combinatorial Designs 21, (2013), N6, 232—252. 16.20 - 16.40 MSK (UTC +3) Alexander Guterman MSU, Moscow Matrix Centralizers and their Applications Video Slides Show abstract Hide abstract The talk is based on the works [1, 2, 3]. For a matrix $A\in M_n({\mathbb F} )$ its centralizer $${\mathop{\mathcal{C}}\nolimits}(A)=\{X\in M_n({\mathbb F} )\vert\; AX=XA\}$$ is the set of all matrices commuting with $A$. For a set $S\subseteq M_n({\mathbb F} )$ its centralizer $${\mathop{\mathcal{C}}\nolimits}(S)=\{X\in M_n({\mathbb F} )\vert\; AX=XA \mbox{ for every } A\in S\}=\bigcap_{A\in S} {\mathop{\mathcal{C}}} (A) $$ is the intersection of centralizers of all its elements. Centralizers are important and useful both in fundamental and applied sciences. A non-scalar matrix $A\in M_n({\mathbb F} )$ is {\em minimal\/} if for every $X\in M_n({\mathbb F} )$ with ${\mathop{\mathcal{C}}\nolimits}(A) \supseteq {\mathop{\mathcal{C}}\nolimits}(X)$ it follows that ${\mathop{\mathcal{C}}\nolimits}(A)={\mathop{\mathcal{C}}\nolimits}(X)$. A non-scalar matrix $A\in M_n({\mathbb F} )$ is {\em maximal\/} if for every non-scalar $X\in M_n({\mathbb F} )$ with ${\ mathop{\mathcal{C}}\nolimits}(A) \subseteq {\mathop{\mathcal{C}}\nolimits}(X)$ it follows that ${\mathop{\mathcal{C}}\nolimits}(A)={\mathop{\mathcal{C}}\nolimits}(X)$. We investigate and characterize minimal and maximal matrices over arbitrary fields. Our results are then applied to the theory of commuting graphs of matrix rings and to characterize commutativity preserving maps on matrices. [1] G. Dolinar, A.E. Guterman, B. Kuzma, P. Oblak, Commuting graphs and extremal centralizers, Ars Mathematica Contemporanea, 7(2), 2014, 453-459. [2] G. Dolinar, A.E. Guterman, B. Kuzma, P. Oblak, Commutativity preservers via matrix centralizers, Publicationes Mathematicae Debrecen, 84(3-4), 2014, 439–450. [3] G. Dolinar, A.E. Guterman, B. Kuzma, P. Oblak, Extremal matrix centralizers, Linear Algebra and its Applications, 438(7), 2013, 2904-2910. 16.40 - 17.10 MSK (UTC +3) Section 2 chaired by Andrey Kupavskii 17.10 - 17.40 MSK (UTC +3) Nora Frankl London School of Economics, UK and MIPT, Moscow On the number of discrete chains joint work with Andrey Kupvaskii Video Slides Show abstract Hide abstract Determining the maximum number of unit distances that can be spanned by $n$ points in the plane is a difficult problem, which is wide open. The following more general question was recently considered by Eyvindur Ari Palsson, Steven Senger, and Adam Sheffer. For given distances $t_1,...,t_k$ a $(k+1)$-tuple $(p_1,...,p_{k+1})$ is called a $k$-chain if $||x_i-x_{i+1}||=t_i$ for $i=1,...,k$. What is the maximum possible number of $k$-chains that can be spanned by a set of $n$ points in the plane? Improving the result of Palsson, Senger and Sheffer, we determine this maximum up to a small error term (which, for $k=1 \bmod 3$ involves the maximum number of unit distances). We also consider some generalisations, and the analogous question in $R^3$. 17.50 - 18.15 MSK (UTC +3) Yelena Yuditsky Ben-Gurion University of the Negev, Israel The $\varepsilon$-$t$-Net Problem joint work with Noga Alon, Bruno Jartoux, Chaya Keller and Shakhar Smorodinsky Video Slides Show abstract Hide abstract We study a natural generalization of the classical $\varepsilon$-net problem (Haussler--Welzl 1987), which we call the $\varepsilon$-$t$-net problem: Given a hypergraph on $n$ vertices and parameters $t$ and $\varepsilon\geq \frac t n$, find a minimum-sized family $S$ of $t$-element subsets of vertices such that each hyperedge of size at least $\epsilon n$ contains a set in $S$. When $t=1$, this corresponds to the $\varepsilon$-net problem. We prove that any sufficiently large hypergraph with VC-dimension $d$ admits an $\varepsilon$-$t$-net of size $O(\frac{ (1+\log t)d}{\varepsilon} \log \frac{1}{\varepsilon})$. For some families of geometrically-defined hypergraphs (such as the dual hypergraph of regions with linear union complexity), we prove the existence of $O(\frac{1}{\varepsilon})$-sized $\varepsilon$-$t$-nets. We also present an explicit construction of $\varepsilon$-$t$-nets (including $\varepsilon$-nets) for hypergraphs with bounded VC-dimension. In comparison to previous constructions for the special case of $\varepsilon$-nets (i.e., for $t=1$), it does not rely on advanced derandomization techniques. To this end we introduce a variant of the notion of VC-dimension which is of independent April 15 Section 1 chaired by Nora Frankl 15.00 - 15.30 MSK (UTC +3) Grigory Ivanov TU Wien, Austria and MIPT, Moscow On the volume of sections of the cube joint work with Igor Tsiutsiurupa Video Slides Show abstract Hide abstract The problem of volume extrema of the intersection of the standard $n$-dimensional cube $\Box^n = [-1,1]^n$ with a $k$-dimensional linear subspace $H$ has been studied intensively. The celebrated Vaaler theorem says that only the coordinate subspaces are the volume minimizers. Using the Brascamb-Lieb inequality, K. Ball proved two upper bounds which are tight for some $k$ and $n.$ Typically, methods of functional analysis or some tricky inequalities for measures are used in such problems. In this talk, we will discuss a 'naive' variational principle for the problem of volume extrema of $ \Box^n \cap H$ and some geometrical consequences of this principle. Particularly, we will sketch how to find all planar maximizers ($k = 2$). Planar maximizers were unknown for all odd $k$ starting with 5. 15.40 - 16.10 MSK (UTC +3) Márton Naszódi Alfréd Rényi Institute of Mathematics and Eötvös University, Hungary Covering convex bodies and the closest vector problem joint work with Moritz Venzin (EPFL, Lausanne) Video Slides Show abstract Hide abstract In the closest vector problem, we are given a point in real $n$-space, and need to find the closest integer point to it according to some norm. The current fastest algorithm (Dadush and Kun, 2016) for general norms is of running time $2^{O(n)} (1/\epsilon)^n$. We improve this substantially for certain norms, eg. for $\ell_p$ spaces. The result is based on a geometric covering problem that is interesting on its own. How many convex bodies are needed to cover the ball of the norm such that, if scaled by factor 2 around their centroids, each one is contained in the $(1+\epsilon)$-scaled homothet of the ball? 16.20 - 16.50 MSK (UTC +3) Vassily Manturov BMSTU, Moscow An Introduction to Framed 4-Graph Minor Theory Video Slides Show abstract Hide abstract The well-known Pontrjagin-Kuratowski Theorem says that a graph is non-planar if it does not contain $K_{5}$ and $K_{3,3}$ (in the modern formulation we can say does not contain as a minor). In the talk we deal with regular 4-graphs with an additional structure of opposite edges at each vertex (we call them framed 4-graphs) . A theorem due to the speaker (conjectured by V.A.Vassiliev) says that such a graph is non-planar if it does not contain two cycles with no common edges having exactly one transverse intersection. The equivalence of the Pontrjagin-Kuratowski Theorem and Vassiliev's conjecture was proved by I.M.Nikonov. In the talk we prove that for framed 4-graphs (with source-sink structure) there is a unique graph which plays the role of planarity abstraction as well as intrinsic linkedness obstruction as well as obstruction of crossing number no more than two. 16.50 - 17.10 MSK (UTC +3) Section 2 chaired by Dmitry Zakharov 17.10 - 17.40 MSK (UTC +3) Nikita Kalinin HSE and Saint Petersburg State University, Spb Trends in sandpile model Show abstract Hide abstract I will define the sandpile model and briefly describe the current trends and open questions around it. 17.50 - 18.20 MSK (UTC +3) Oleg German MSU, Moscow Parametric approach to Khintchine's transference theorem and its generalizations Video Slides Show abstract Hide abstract In 1926 A.Ya.Khintchine proved the famous transference inequalities connecting two dual problems. The first one concerns simultaneous approximation of given real numbers $\theta_1,\ldots,\theta_n$ by rationals, the second one concerns approximating zero with the values of the linear form $\theta_1x_1+\ldots+\theta_nx_n+x_{n+1}$ at integer points. In 2009 W.M.Schmidt and L.Summerer presented a new approach to Diophantine approximation, which they called parametric geometry of numbers. The formulation of Khintchine's transference theorem in terms of parametric geometry of numbers appeared to be most elegant and simple. Moreover, this approach allowed a very natural splitting of Khintchine's inequalities into a chain of inequalities between the so called intermediate Diophantine exponents. We shall talk about application of parametric geometry of numbers to some generalizations of Khintchine's theorem. Particularly, we shall discuss transference theorems for Diophantine exponents of lattices and in Diophantine approximation with weights. April 16 Section 1 chaired by Grigory Ivanov 15.20 - 15.40 MSK (UTC +3) Grigory Kabatiansky Skoltech, Moscow How to find counterfeit coins on a precision scale if the weights of coins a priori unknown joint work with Elena Egorova Video Slides Show abstract Hide abstract Let us have a set of $n$ coins with the main part of them, namely at least $n-t$, are genuine. Genuine coins have the same weight, other coins can have different weights but no one of these weights are known. There is a precision scale that allows to know the exact weight of any subset of coins. What is the minimal number $Q(n,t)$ of non-adaptive weighings which allows to find weights for all We give the optimal solutions for $t=1$ and asymptotically optimal for $t-const$ with the number of weighings $Q(n,t)=t\log_2 n (1+o(1))$. It was previously known [1] that $Q(n,t)=O(t\ln n )$. There are at least two open questions: • to develop 'decoding' algorithm which finds weights for all coins with polynomial complexity; • what is $Q(n,t)$ for $t=\lambda n$ and $n\rightarrow\infty$? [1] Nader H. Bshouty, Hanna Mazzawi, On parity check (0, 1)-matrix over Zp, SODA '11: Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete algorithms, pp. 1383–1394, 2011. 15.50 - 16.10 MSK (UTC +3) Konstantin Olmezov MIPT, Moscow An elementary approach to the operator method in additive combinatorics Video Slides Show abstract Hide abstract The additive energy $E(A)$ of a finite set $A$ from an abelian group is the number of solvings $$a+b=c+d,\ a,b,c,d \in A.$$ Finding the upper bounds for the additive energy of sets from some given classes is very popular subject of additive combinatorics. In 2012 Shkredov provided operator method that allows to estimate $E(A)$ via bounding of so-called higher energy $$E_3(A) := \# \{ a_1 - b_1 = a_2 - b_2 = a_3 - b_3 \ :\ a_i, b_i \in A,\ i=1,2,3 \}$$ and the common energy $$E(A, D) = \# \{ a_1 - d_1 = a_2 - d_2 \ :\ a_i \in A,\ d_i \in D,\ i=1,2 \}$$ for an arbitrary $D \subset A-A$. He uses the properties of the operator $$T_A(x,y) = (A \circ A)(x-y) = \# \{ a-b=x-y\ :\ a,b \in A \}$$ for obtain a certain very general inequality to the number of the solutions of two linear equation We suggest an elementary approach to prove this inequality which gives an elementary proof of the corresponding bounds of $E(A)$ (in particular, for the family of convex sets and the collection of sets having few products). Also we discuss a generalization of this inequality (which is elementary as well) connecting with the Sidorenko's conjecture from graph theory. Sidorenko's conjecture states that for any bipartite graph $H$ and any graph $G$ we have $$t_H(G) \le t_{K_2}(G)\ ,$$ where $t_H(G) = \frac{h_H(G)}{|G|^{|V(H)|}}$ and $h_H(G)$ is the number of homomorphisms from $H$ to $G$. We consider a special case of the graph $G$ which is defined in the terms of convolutions of $A$ and prove the required bound for many non-bipartite graphs $H$. 16.20 - 16.50 MSK (UTC +3) Ilya Shkredov Steklov Mathematical Institute, MSU and MIPT, Moscow Growth in Chevalley groups and Zaremba's conjecture Video Slides Show abstract Hide abstract Given a Chevalley group ${\mathbf G}(q)$ and a parabolic subgroup $P\subset {\mathbf G}(q)$, we prove that for any set $A$ there is a certain growth of $A$ relatively to $P$, namely, either $AP$ or $PA$ is much larger than $A$. Also, we study a question about intersection of $A^n$ with parabolic subgroups $P$ for large $n$. We apply our method to obtain some results on a modular form of Zaremba's conjecture from the theory of continued fractions and make the first step towards Hensley's conjecture about some Cantor sets with Hausdorff dimension greater than $1/2$. 16.50 - 17.10 MSK (UTC +3) Section 2 chaired by Andrey Kupavskii 17.10 - 17.40 MSK (UTC +3) Balázs Patkós Alfréd Rényi Institute of Mathematics, Hungary and MIPT, Moscow Induced and non-induced poset saturation problems joint work with Keszegh, Lemons, Martin, and Pálvölgyi Video Slides Show abstract Hide abstract A subfamily $\mathcal{G}\subseteq \mathcal{F}\subseteq 2^{[n]}$ of sets is a non-induced (weak) copy of a poset $P$ in $\mathcal{F}$ if there exists a bijection $i:P\rightarrow \mathcal{G}$ such that $p\le_P q$ implies $i(p)\subseteq i(q)$. In the case where in addition $p\le_P q$ holds if and only if $i(p)\subseteq i(q)$, then $\mathcal{G}$ is an induced (strong) copy of $P$ in $\mathcal{F}$. We consider the minimum number $sat(n,P)$ [resp.\ $sat^*(n,P)$] of sets that a family $\mathcal{F}\subseteq 2^{[n]}$ can have without containing a non-induced [induced] copy of $P$ and being maximal with respect to this property, i.e., the addition of any $G\in 2^{[n]}\setminus \mathcal{F}$ creates a non-induced [induced] copy of $P$. We prove for any finite poset $P$ that $sat(n,P)\le 2^{|P|-2}$, a bound independent of the size $n$ of the ground set. For induced copies of $P$, there is a dichotomy: for any poset $P$ either $sat^* (n,P)\le K_P$ for some constant depending only on $P$ or $sat^*(n,P)\ge \log_2 n$. We classify several posets according to this dichotomy, and also show better upper and lower bounds on $sat(n,P)$ and $sat^*(n,P)$ for specific classes of posets. Our main new tool is a special ordering of the sets based on the colexicographic order. It turns out that if $P$ is given, processing the sets in this order and adding the sets greedily into our family whenever this does not ruin non-induced [induced] $P$-freeness, we tend to get a small size non-induced [induced] $P$-saturating family. 17.50 - 18.20 MSK (UTC +3) Dmitry Zakharov HSE and MIPT, Moscow Erdös-Ginzburg-Ziv problem and Convex Geometry Video Slides Show abstract Hide abstract Let $f(p, d)$ be the minimal number $s$ such that among any $s$ vectors in $\mathbb F_p^d$ one can find $p$ vectors with zero sum. In [2], we show that for any fixed $d$ and sufficiently large $p$ we have an inequality $f(p, d) \le 4^d p$. This improves the previous bound $f(p, d) < (c d \log d)^d p$ due to [1]. Note that we have a lower bound $f(p, d) \ge 2^d (p-1)+1$ as the example of the hypercube shows. So we have that $f(p, d)$ is an exponential function in $d$ provided that $p > p_0(d)$. The proof combines various ideas from convex geometry, additive combinatorics and algebraic combinatorics. [1] Alon Noga and Moshe Dubiner, A lattice point problem and additive number theory, Combinatorica 15.3 (1995): 301-309. [2] Zakharov Dmitriy, Convex geometry and Erdös-Ginzburg-Ziv problem, arXiv preprint arXiv:2002.09892 (2020). Tune in The conference will be held online. To attend the conference, please enter the meeting (ID: 303-932-461, pass: first 6 decimal places of pi). Alternatively, you can watch the live stream on Twitch, no registration nesessary. Program Committee Andrey Kupavskii MIPT Alexandr Polyanskii MIPT Andrei Raigorodskii MIPT
{"url":"https://combgeo.org/en/events/combinatorics-and-geometry-days-ii/","timestamp":"2024-11-02T02:21:06Z","content_type":"text/html","content_length":"184963","record_id":"<urn:uuid:ac584091-fc91-48b1-b796-99d20e6fef59>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00009.warc.gz"}
What are Samvara Sila? I don’t know if any commentator ever attempted to show how the figure might be arrived at, but if they did I expect it will have been done using the usual method by which Ābhidhammikas arrive at extraordinarily large numbers. For example: The Bhikkhupātimokkha comprises 227 sikkhāpadas. Multiply by 3 according to whether the bhikkhu is a puthujjana, sekha or asekha = 681. Multiply by 2 according to whether the citta that caused a sikkhāpada to be observed was saṅkhārika or asaṅkhārika = 1,362. Multiply by 2 according to whether the citta was ñāṇasampayutta or ñāṇavippayutta = 2,724. Multiply by the three periods of time, past, present and future = 8,172. Multiply by the two regions, Majjhimapadesa and Paccantapadesa = 16,344 Etc., etc. 2 Likes
{"url":"https://discourse.suttacentral.net/t/what-are-samvara-sila/11792/10","timestamp":"2024-11-14T05:17:35Z","content_type":"text/html","content_length":"18782","record_id":"<urn:uuid:24e795df-ee7c-4596-8d59-fd1dc4e580d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00812.warc.gz"}
Simple Interest Money in a bank account usually accrues compound interest. Compounded interest means that the amount of money is multiplied by a constant factor at the end of each time period to obtain the amoun in the account at the start of the next period. If £500 is invested at an interest rate of 10% per year, then after n years the amount in the account will be If the interest is simple interest, then the yearly amount of interest will be calculated on the original deposit. Example: If you deposit £500 into an account paying 10% simple interest then the balance increases at a constant rate of Initially there is £500 in the account. After 1 year there is After 2 years there is After 3 years there is After 4 years there is After n years there will be Simple interest means that eventually the amount in the account will start falling in real terms. For the example given above, if the rate of inflation is 5%, after 11 years there will be £1050 in the account. Interest at 5% would be needed to maintain the value of the money in real terms, so at the end of the next year there would be In fact there will only be
{"url":"https://astarmathsandphysics.com/igcse-maths-notes/514-simple-interest.html","timestamp":"2024-11-11T04:08:23Z","content_type":"text/html","content_length":"29705","record_id":"<urn:uuid:30895856-86b3-4ae7-8d52-deaf4b437545>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00587.warc.gz"}
Multiplication Facts worksheets CHES Multiplication Facts 1's Multiplication Facts 0 - 11 Basic Multiplication Facts Integer Multiplication Facts Multiplication Facts 0, 1, 2 Explore Worksheets by Subjects Explore printable Multiplication Facts worksheets Multiplication Facts worksheets are an essential tool for teachers to help their students master the fundamental math skill of multiplication. These worksheets provide a variety of engaging and interactive activities that cater to different learning styles, ensuring that students grasp the concept of multiplication in a fun and effective manner. Teachers can use these worksheets to supplement their math curriculum, providing students with ample opportunities to practice and reinforce their multiplication skills. By incorporating Multiplication Facts worksheets into their lesson plans, teachers can ensure that their students develop a strong foundation in math, setting them up for success in higher-level math courses and real-world applications. Quizizz is a valuable resource for teachers looking to enhance their students' learning experience, particularly when it comes to Multiplication Facts worksheets and other math-related offerings. This platform offers a wide range of interactive quizzes and games that can be easily integrated into the classroom, providing students with a fun and engaging way to practice their multiplication skills. Teachers can also create their own customized quizzes to align with their specific curriculum and learning objectives. In addition to Multiplication Facts worksheets, Quizizz offers resources for various other math topics, ensuring that teachers have access to a comprehensive suite of tools to support their students' learning and growth. By incorporating Quizizz into their teaching strategies, educators can create a dynamic and interactive learning environment that fosters a deep understanding of multiplication and other essential math concepts.
{"url":"https://quizizz.com/en-in/multiplication-facts-worksheets","timestamp":"2024-11-05T04:33:38Z","content_type":"text/html","content_length":"158831","record_id":"<urn:uuid:079d66b4-e26f-43c8-9d16-74fd105e4b7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00116.warc.gz"}
manual pages The Fruchterman-Reingold layout algorithm Place vertices on the plane using the force-directed layout algorithm by Fruchterman and Reingold. coords = NULL, dim = 2, niter = 500, start.temp = sqrt(vcount(graph)), grid = c("auto", "grid", "nogrid"), weights = NULL, minx = NULL, maxx = NULL, miny = NULL, maxy = NULL, minz = NULL, maxz = NULL, graph The graph to lay out. Edge directions are ignored. coords Optional starting positions for the vertices. If this argument is not NULL then it should be an appropriate matrix of starting coordinates. dim Integer scalar, 2 or 3, the dimension of the layout. Two dimensional layouts are places on a plane, three dimensional ones in the 3d space. niter Integer scalar, the number of iterations to perform. start.temp Real scalar, the start temperature. This is the maximum amount of movement alloved along one axis, within one step, for a vertex. Currently it is decreased linearly to zero during the iteration. grid Character scalar, whether to use the faster, but less accurate grid based implementation of the algorithm. By default (“auto”), the grid-based implementation is used if the graph has more than one thousand vertices. weights A vector giving edge weights. The weight edge attribute is used by default, if present. If weights are given, then the attraction along the edges will be multiplied by the given edge weights. This places vertices connected with a highly weighted edge closer to each other. minx If not NULL, then it must be a numeric vector that gives lower boundaries for the ‘x’ coordinates of the vertices. The length of the vector must match the number of vertices in the graph. maxx Similar to minx, but gives the upper boundaries. miny Similar to minx, but gives the lower boundaries of the ‘y’ coordinates. maxy Similar to minx, but gives the upper boundaries of the ‘y’ coordinates. minz Similar to minx, but gives the lower boundaries of the ‘z’ coordinates. maxz Similar to minx, but gives the upper boundaries of the ‘z’ coordinates. coolexp, maxdelta, These arguments are not supported from igraph version 0.8.0 and are ignored (with a warning). area, repulserad maxiter A deprecated synonym of niter, for compatibility. ... Passed to layout_with_fr. See the referenced paper below for the details of the algorithm. This function was rewritten from scratch in igraph version 0.8.0. A two- or three-column matrix, each row giving the coordinates of a vertex, according to the ids of the vertex ids. Gabor Csardi csardi.gabor@gmail.com Fruchterman, T.M.J. and Reingold, E.M. (1991). Graph Drawing by Force-directed Placement. Software - Practice and Experience, 21(11):1129-1164. See Also layout_with_drl, layout_with_kk for other layout algorithms. Other graph layouts: add_layout_(), component_wise(), layout_as_bipartite(), layout_as_star(), layout_as_tree(), layout_in_circle(), layout_nicely(), layout_on_grid(), layout_on_sphere(), layout_randomly(), layout_with_dh(), layout_with_gem(), layout_with_graphopt(), layout_with_kk(), layout_with_lgl(), layout_with_mds(), layout_with_sugiyama(), layout_(), merge_coords(), norm_coords (), normalize() # Fixing ego g <- sample_pa(20, m=2) minC <- rep(-Inf, vcount(g)) maxC <- rep(Inf, vcount(g)) minC[1] <- maxC[1] <- 0 co <- layout_with_fr(g, minx=minC, maxx=maxC, miny=minC, maxy=maxC) plot(g, layout=co, vertex.size=30, edge.arrow.size=0.2, vertex.label=c("ego", rep("", vcount(g)-1)), rescale=FALSE, xlim=range(co[,1]), ylim=range(co[,2]), vertex.label.dist=0, version 1.3.0
{"url":"https://igraph.org/r/html/1.3.0/layout_with_fr.html","timestamp":"2024-11-11T14:21:04Z","content_type":"text/html","content_length":"14504","record_id":"<urn:uuid:4f0736fe-5d9e-4dd0-8201-57e7f04f2f4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00763.warc.gz"}
Describe a software application that would require a graph data structure in its implementation. Explain what... Describe a software application that would require a graph data structure in its implementation. Explain what... Describe a software application that would require a graph data structure in its implementation. Explain what the vertices and edges of the graph would represent. Would the graph be undirected or directed? Would it be weighted or unweighted? Decide which type of representation would be best for this application, an adjacency matrix, an adjacency list using an array of linked lists or a fully linked representation. Explain your reasoning. when the question is related on any of the software product based on graph theory the foremost thing that comes to my mind is the Google Map Technology. Web based map and graph theory are inevitable.The role of graph theory in any web based map is very much prominent . In google map routes between the cities are represented using graphs.The places are considered as vertices and routes as edges. Dijkstra's Algorithm can be used to find the shortest route between two places as long as there is at least one possible route. The graph would be directed (since route has a fixed direction) and weighted. The weight on each edge will signify the distance between two places or two vertices. This algorithm uses adjacency matrix to calculate the shortest path between source and destination. While implementing this algorithm we use array of link list because every time the link list will be updated to find the shortes route. THANK YOU!!! PLEASE VOTE
{"url":"https://justaaa.com/computer-science/983081-describe-a-software-application-that-would","timestamp":"2024-11-13T22:36:39Z","content_type":"text/html","content_length":"30502","record_id":"<urn:uuid:367c700c-7337-4d06-bfc2-0f92be998068>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00378.warc.gz"}
NZ CUP Odds The odds have finally become of interest if you have a fancy. I say of interest because I don't think all of the favs will start so if you think a horse is on the up it may well be the time to back it, especially since you get your money back as bonus bet if it doesn't start in FF market. Akuta may not start. Copy That may not start. Lochinvar Art ran second last night in Oz after a long lay off so hard to make anything of that. Would they really come with the inter-dominions soon after? South Coast Arden doesn't seem to be the same horse unless trained by the All Stars in covid times. A G's White Socks is / was going awful in Australia and is now an 8 year old so surely he won't start. Old Town Road has come a long way in a short time and suddenly gets thrust into 31s after winning last week. Still only a rated 76 pacer. Franco Indie surely won't start in this rarified race. Pembrook Playboy seems injury plagued and I wonder if we will ever see the best of him, last year might have been his best chance. Not sure but it would be good to see go well. So who is on the up? I would say Majestic Cruiser, Alta Wiseguy, Krug, Hot and Treacherous and Shan Noble. Who could cause some carnage? Laver who was 15ff top 3 now into 9ff and Cranbourne with Matty W on. And then we have Self Assured who they all have to beat but 2.80ff will be a race day go for the big punters and multi makers. 4 hours ago, karrotsishere said: Gee when you put it that way, starts to lack depth. Only if the majority don't turn up. Hopefully they all do which will mean the most competitive field in years. Another by product, would be the dogfight to even get in which would mean big fields in the lead up races. 5 hours ago, karrotsishere said: Alta Wiseguy may not start. Telfer said hes more a short distance speed horse. He appears to be but has won over 2600m. If he sat quietly while all hell broke loose around in the Cup he might be a chance. 5 hours ago, karrotsishere said: Whats happenee to Benji Bounce? Claimer that won Interdoms last year? Would he come over or is he out of form? No good in Lochinvar Art's race at weekend. At 15,s I would take Majestic Cruiser all day. A very good Grand Circuit horse, tough as nails, an experienced campaigner. And B D Joe for my smokey. 1 hour ago, Globederby19 said: At 15,s I would take Majestic Cruiser all day. A very good Grand Circuit horse, tough as nails, an experienced campaigner. Not sure why he drifted from 13s. Hard as nails. 11 hours ago, karrotsishere said: Gee when you put it that way, starts to lack depth. Yet the club will still have 15 horses start in the Cup. Regardless of the depth, or lack thereof. One of my pet hates, throwing 3 or 4 horses in the Cup every year, that aren't really old school "Cup Class" horses. Clutters up the field in my opinion. Dumbs it all down. Common thing to do these days though I guess. 15 hours ago, Rusty said: Common thing to do these days though I guess. Not necessarily Rusty. Under different circumstances (regional shifts) the Wellington Cup was an open class affair at Hutt Park, used to see all the big names . But over a period it descended into farce until a class 5 horse was the top prospect .A lot of us were sad to see Hutt Parks demise. I agree that throwing in a horse to make up numbers that are not open class or at least just below that level of pacer dumbs it down, and gives those with a legitimate claim to winning nothing but traffic problems . I would rather see a 10 horses field with the top horses., as it tramples on the prestige of the CUP and its history by introducing horses that haven't made the grade(yet), and have little or no chance. jmho 8 hours ago, Globederby19 said: I would rather see a 10 horses field with the top horses., as it tramples on the prestige of the CUP and its history by introducing horses that haven't made the grade(yet), and have little or no chance. jmho Yes, I couldn't agree more. NZMTC would do well, to take a leaf out of the Moonee Valley Racing Club's book (with the Cox Plate), and not try to bugger up their flagship race. Here is a prediction: Cambridge's "The Race" will overtake the NZ Cup as the country's premier harness race within 4 years. On 8/22/2022 at 10:13 PM, Rusty said: One of my pet hates, throwing 3 or 4 horses in the Cup every year, that aren't really old school "Cup Class" horses. Clutters up the field in my opinion. Dumbs it all down. 1 hour ago, Rusty said: Cambridge's "The Race" will overtake the NZ Cup as the country's premier harness race within 4 years. By all reports The Race was a huge success at the first running , and mightn't need 4 years to get a Premier status. Is a great strategy to get companies and interested parties involved and owning a Top horse for the day on arrangements. Here's last year's NZ Cup result below. The favoured 3 ran the Trifecta easily with none of the unplaced runners a major Cup horse whether you pick 10 to start or 15. STEEL THE SHOW picked up over $30k for 4th, but hasn't run a place this season. But could start and get 4th again this year .? connections sure to be happy. So which 5 horse's do you deny a start to keep the 'Quality' up then? when there's only 4 or so 'Group 1 quality ' runners in the first place lol... . let em all have a go at least. plenty of time to sort themselves out over 2 miles. 1 10 Copy That 10 fr 330,000.00 2/2 3-58.8 B N Orange Ray Green 2 15 Self Assured 15 fr 93,000.00 1/1 3-59.3 2.60 Mark Purdon Mark Purdon & Hayden Cullen 3 16 South Coast Arden U1 fr 54,000.00 3/4 3-59.4 3.10 N C Rasmussen Brent Mangos 4 1 Steel The Show 1 fr 31,500.00 8/8 3-59.7 4.80 R J Butt Robert & Jenna Dunn 5 6 Classie Brigade 6 fr 18,000.00 4/3 3-59.9 5.60 John R Dunn Robert & Jenna Dunn 6 17 Laver U2 fr 10,500.00 10/9 3-59.9 5.90 G D O'Reilly Geoff & James Dunn 7 9 Kango 9 fr 10,500.00 14/14 4-00.6 9.00 S J Ottley Arna Donnelly 8 13 Terry 13 fr 10,500.00 11/11 4-00.6 9.10 R D Close Regan Todd 9 2 Vintage Cheddar 2 fr 10,500.00 7/6 4-00.6 9.30 Brad Williamson Alister Black 10 12 Bad To The Bone 12 fr 10,500.00 5/5 4-00.6 9.40 C J DeFilippi Barry Purdon & Scott Phelan 11 7 Robyns Playboy 7 fr 10,500.00 6/7 4-01.1 11.50 C R Ferguson Ross & Chris Wilson 12 5 Matt Damon 5 fr 10,500.00 13/13 4-01.7 14.50 G D Smith Robert & Jenna Dunn 13 11 Henry Hubert 11 fr 10,500.00 9/10 4-01.9 15.90 T M Williams Robert & Jenna Dunn 14 8 Cranbourne 8 fr 10,500.00 12/12 4-02.4 18.30 S R McNally Brent White 15 18 Dance Time U3 fr 10,500.00 15/15 4-07.5 43.60 Craig D Thornley Steve & Amanda Telfer Was great to watch a very willing race last night. Refreshing after the lame recent racing. Hezasport will add a bit of interest to the Cup but it looks like the winter horses eg Smiffy's Terror are not up to those returning to the races now. Anybody who backed Laver and The Falcon at triple figure odds will be happy they did. 3 hours ago, Happy Sunrise said: The Falcon Has won over 3200 in 4.00min, and usually finishes his races off well. Good from a stand too. Not sure he will get in as wasn't on that original list. Laver has had a wind opp. • 2 16 minutes ago, Globederby19 said: Has won over 3200 in 4.00min, and usually finishes his races off well. Good from a stand too. Not sure he will get in as wasn't on that original list. Rating 73 so win need to win a couple. Wouldn't be out of place in it. The great thing is it could be the best dog fight in years to qualify for the Cup. • 3 15 hours ago, Happy Sunrise said: The great thing is it could be the best dog fight in years to qualify for the Cup. Wait a minute, wait a minute, wait a minute... You'll telling me that horses need to qualify, in order to start in the Cup?! So they just don't let anyone in? Asking for a friend. • 2 38 minutes ago, Rusty said: ou'll telling me that horses need to qualify, in order to start in the Cup?! So they just don't let anyone in? As long as they are not exported in the next few months. 38 minutes ago, Rusty said: Asking for a friend. Surely Gary Woodham can ask his own questions 😁 9 minutes ago, Happy Sunrise said: Surely Gary Woodham can ask his own questions 😁 I told you Gazza!! I told you that we'd get caught out! There are a few switched on in this forum! Ask your own bloody questions next time. P.S start digging out your mum's old Christmas recipes. That ham one from last year kind of missed the mark. 4 minutes ago, Rusty said: P.S start digging out your mum's old Christmas recipes. That ham one from last year kind of missed the mark. Everyone loves Xmas memories. Are you putting forward your stuffed pudding this year for the followers, Rusty? You need to do better than the pig of an idea you had with the ham. 1 hour ago, karrotsishere said: As of today what odds do you give Heza Sport to qualify for The Race? Horses can't "qualify" for The Race, they are picked up by the slot holders. So I guess it comes down to negotiations between slot holders and connections of horses. 29 minutes ago, karrotsishere said: Yea know that, was just word I used. See Rusty, some have called us "we are dreamers", as we think The Race, will by pass NZ Cup. Im just driving home just part of our point. The Race is harder to gain a start in = more prestigious over time. Sorry, I misunderstood. In terms of chances, of Heza Sport getting a crack; yeah look he wouldn't be in my first half dozen picks, but there is no reason why he couldn't be picked up, if he was to take the step up in the open class grade. Looks like a really nice horse, that could go on with it. Time will tell. I'm stoked for CJ and Julie DeFilippi too. They deserve to have another top liner, so I'm hoping for Heza Sport to have every success in the future. 35 minutes ago, karrotsishere said: The Race is harder to gain a start in = more prestigious over time. I dunno about that. The Everest and the All Star Mile are hard to get in to, but it would take a lot for those races to become more prestigious than a slipper, a Cox Plate, or a Melbourne Cup, just in my opinion. The other thing too, with the slot races, a horse could win it, but there is usually no knowing how much of the winning stake the owner(s) actually get of it, depending upon the contractual arrangements agreed upon prior to the race etc. So may not be as prestigious as other races. 5 hours ago, karrotsishere said: The NZ Cup in the South Island has the name, so prob people will know about it. Name recognition. But done right & well The Race could easily surpass it Big call that, but it'll be interesting. As a social event, I don't think NZ Cup day will be surpassed by The Race anytime soon. A decent chunk of the Cup day crowd don't give a rat's about the horses. They go there to see, and be seen. Pull duck faces, (try and pull the opposite sex whilst at it), and stick it on the 'gram, tick tock, and update their Bookfaces lol Cup day is ingrained into the Canterbury psyche, so I can't see the crowds steering clear unless the laws of Jacindastan are imposed once again. But strictly from a horse race perspective, The Race is going to be the premier race for the harness code within 4 years I'm picking. Better horses (than the Cup) + better stake money (than the Cup) = premier race. On 8/28/2022 at 10:42 PM, karrotsishere said: On the money post. On the money post. Yip will be interesting. I mean that with regards to Name Recognition general public & going by numbers eg how many people know about it or have heard about it. For eg The Race if they target say 2 hours drive in every direction from Cambridge so (Auckland, Waikato & Bay of Plenty) & market hard. The population reach is Auckland approx 1.8 million, Waikato approx 500,000, Bay of Plenty 400,000. (Figures maybe slightly out). Compared to the entire South Island approx 1.1 million. (Figures maybe slightly out). Anyway thats a difference. What really saddens me after being involved in Harness for 60 yrs is that we have come down to this. The rich heritage of the NZ Cup is unprecedented, since starting in 1904. Any one that has had a look through the Harness Hall of Fame will attest to that. As an example in Australia ,the Victoria Cup 1974, Miracle Mile 1967, AG Hunter Cup 1949, Interdoms 1936, in which NZ horses can compete ,have a fairly rich heritage and personally I can't see any of them being usurped by some johnny come lately " The Race". Now I know that historically people had a great love of going to the Trots, but entertainment was fairly minimal , and if one has a look at the 1930, 1940, 1950 photographs of the major racecourses ,they were packed to the gunnels. These days the discretionary dollar is being stretched by a myriad of entertainment options, regardless of the population base. In a 3 words "Economy of scale". With a trans tasman and global market for our horses I can not see an end to the drain. Frankly and only my opinion, they should pour the money into the NZ Cup to attract the best from the oceania region, subsidy's for travelling here for Aussies etc. Wont happen but there yah go, and while Cambridge is a nice track, it isn't in the same league as Addington. Edited by Globederby19 • 4 5 hours ago, Globederby19 said: What really saddens me after being involved in Harness for 60 yrs is that we have come down to this. The rich heritage of the NZ Cup is unprecedented, since starting in 1904. Any one that has had a look through the Harness Hall of Fame will attest to that. 5 hours ago, Globederby19 said: With a trans tasman and global market for our horses I can not see an end to the drain. Hi Globe, I'm a great supporter of Nz Cup . Horses like Lightning Blue can turn up and give a great contest. Even the very first Cup run that you mentioned in 1904 the second placegetter NORICE was an American import , bought in to try and win the great race. horses like Steel Jaw, Gammalite , Smoken Up , Carrabean Blaster, For a Reason, Tiger Tara, Arden Rooney, Sushi sushi, Lightning Blue have all turned up to have a crack from Oz. with fairly good results too, but I do enjoy a 'Local' trained horse taking out the feature. Is nobody in Nz prepared to 'buy in' overseas horses anymore at all ? There must be some rich bloke there ? Even Seymour in Brisbane (a multi millionaire) imported Mr FEELGOOD (now a stallion at stud) and won the Interdominion defeating Blacks A Fake on home turf no less, withan NZ Trainer too !!! tim Butt I think... You need some IMPORTS there as well as EXPORTS to balance things up a bit !!!!!!!!!!!!!!! 25 minutes ago, Lightning Blue said: Hi Globe, I'm a great supporter of Nz Cup . Horses like Lightning Blue can turn up and give a great contest. Even the very first Cup run that you mentioned in 1904 the second placegetter NORICE was an American import , bought in to try and win the great race. horses like Steel Jaw, Gammalite , Smoken Up , Carrabean Blaster, For a Reason, Tiger Tara, Arden Rooney, Sushi sushi, Lightning Blue have all turned up to have a crack from Oz. with fairly good results too, but I do enjoy a 'Local' trained horse taking out the feature. Is nobody in Nz prepared to 'buy in' overseas horses anymore at all ? There must be some rich bloke there ? Even Seymour in Brisbane (a multi millionaire) imported Mr FEELGOOD (now a stallion at stud) and won the Interdominion defeating Blacks A Fake on home turf no less, withan NZ Trainer too !!! tim Butt I think... You need some IMPORTS there as well as EXPORTS to balance things up a bit !!!!!!!!!!!!!!! Always a welcome addition, any Aussie runners in the Cup. I wish more competed in the great race. Perhaps the "standing" start puts the Aussies off a bit maybe? It's a bit of coin in airfares etc to go all that way, with no guarantee that the start is going to be clean. And let's face it, the start of the Cup is more often a shambles than not. 1 hour ago, karrotsishere said: Aussie & USA Harness seem to have somehow targeted the younger generation. And that is where economy of scale comes in. Melbourne for instance has as many people as NZ, Sydney 12 million maybe, Perth probably close to 2 mill now. Not hard to pluck the younger brigade out of that lot. And yes Karrots , us BB,s have had it too good for too long. Hardy har har , oh the memory's, aahhhhhhh. We also had the best generation of music innovation . The list is long. But I digress. 21 hours ago, karrotsishere said: Im more Lead Zeppelin , Colosseum, Grand Funk Railroad than the Beatles. Nice bunch of lads though.lol
{"url":"https://www.raceplace.nz/topic/8279-nz-cup-odds/","timestamp":"2024-11-11T08:20:05Z","content_type":"text/html","content_length":"438908","record_id":"<urn:uuid:e9036637-2165-4c78-85af-303ca1bd98ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00132.warc.gz"}
GREED : Bayesian greedy clustering Greed enables model-based clustering of networks, matrices of count data and much more with different types of generative models. Model-selection and clustering are performed in combination by optimizing the Integrated Classification Likelihood. Details of the algorithms and methods proposed by this package can be found in Côme, Jouvin, Latouche, and Bouveyron (2021) 10.1007/ Dedicated to clustering and visualization, the package is very general and currently handles the following tasks: • Continuous data clustering with Gaussian Mixture Models. A GMM tutorial is available. See also the documentation for the Gmm and DiagGmm S4 classes. • Graph data clustering with the Stochastic Block Model or its degree corrected variants. A SBM tutorial is available . See also the documentation for the Sbm and dcSbm S4 classes. • Categorical data clustering with the Latent Class Analysis. An LCA tutorial is available. See also the documentation for the Lca S4 class. • Count data clustering with the Mixture of Multinomials model. A tutorial will soon be available. For now, we refer to the documentation for the Mom S4 class. • Mixed-typed data clustering, e.g. categorical and numerical but the package handles virtually any type of data combination by stacking models on top of each data types. For example graph data with continuous or categorical data attached to the nodes are handled. A CombinedModels tutorial is available. See also the documentation for the CombinedModels S4 class. • Mixture of regression for simultaneous clustering and fitting a regression model in each cluster. A MoR tutorial is available. See also the documentation for the MoR S4 class. • Co-clustering of binary and count-data via the Latent Block Model and its degree-corrected variant. A tutorial will soon be available. For now, we refer to the documentation for the DcLbm S4 With the Integrated Classification Likelihood, the parameters of the models are integrated out with a natural regularization effect for complex models. This penalization allows to automatically find a suitable value for the number of clusters K^⋆. A user only needs to provide an initial guess for the number of clusters K, as well as values for the prior parameters (reasonable default values are used if no prior information is given). The default optimization is performed thanks to a combination of a greedy local search and a genetic algorithm described in Côme, Jouvin, Latouche, and Bouveyron (2021), but several other optimization algorithms are also available. Eventually, a whole hierarchy of solutions from K^⋆ to 1 cluster is extracted. This enables an ordering of the clusters, and the exploration of simpler clustering along the hierarchy. The package also provides some plotting functionality. You can install the development version of greed from GitHub with: Or use the CRAN version: Usage: the greed function The main entry point for using the package is simply thegreed function (see ?greed). The generative model will be chosen automatically to fit the type of the provided data, but you may specify another choice with the model argument. We illustrate its use on a graph clustering example with the classical Books network ?Books. More use cases and their specific plotting functionality are described in the vignettes. sol <- greed(Books$X) #> ── Fitting a guess DCSBM model ── #> ℹ Initializing a population of 20 solutions. #> ℹ Generation 1 : best solution with an ICL of -1347 and 3 clusters. #> ℹ Generation 2 : best solution with an ICL of -1346 and 4 clusters. #> ℹ Generation 3 : best solution with an ICL of -1346 and 4 clusters. #> ── Final clustering ── #> ── Clustering with a DCSBM model 3 clusters and an ICL of -1345 You may specify the model you want to use and set the priors parameters with the (model argument), the optimization algorithm (alg argument) and the initial number of cluster K. Here Books$X is a square sparse matrix and a graph clustering ?`DcSbm-class` model will be used by default. By default, the Hybrid genetic algorithm is used. The next example illustrates a usage without default values. A binary Sbm prior is used, along with a spectral clustering algorithm for graphs. sol <- greed(Books$X,model=Sbm(),alg=Seed(),K=10) #> ── Fitting a guess SBM model ── #> ── Final clustering ── #> ── Clustering with a SBM model 5 clusters and an ICL of -1255 Result analysis The results of greed() is an S4 class which depends on the model argument (here, an SBM) which comes with readily implemented methods: clustering() to access the estimated partitions, K() the estimated number of clusters, and coef() the (conditional) maximum a posteriori of the model parameters. c 0 1 6 36 6 l 8 30 5 0 0 n 0 2 8 3 0 #> [1] 5 #> $pi #> [1] 0.07619048 0.31428571 0.18095238 0.37142857 0.05714286 #> $thetakl #> [,1] [,2] [,3] [,4] [,5] #> [1,] 0.821428571 0.367424242 0.065789474 0.003205128 0.00000000 #> [2,] 0.367424242 0.106060606 0.006379585 0.003885004 0.00000000 #> [3,] 0.065789474 0.006379585 0.251461988 0.016194332 0.04385965 #> [4,] 0.003205128 0.003885004 0.016194332 0.099865047 0.42735043 #> [5,] 0.000000000 0.000000000 0.043859649 0.427350427 0.73333333 Inspecting the hierarchy An important aspect of the greed package is its hierarchical clustering algorithm which extract a set of nested partitions from K=K(sol) to K=1. This hierarchy may be visualized thanks to a dendogram representing the fusion order and the level of regularization −log(α) needed for each fusion. Moreover, similar to standard hierarchical algorithm such as hclust, the cut() method allows you to extract a partition at any stage of the hierarchy. Its results is still an S4 object, and the S4 methods introduced earlier may again be used to investigate the results. c 1 6 42 l 38 5 0 n 2 8 3 Finally, the greed package propose efficient and model-adapted visualization via the plot() methods. In this graph clustering example, the "blocks" and "nodelink" display the cluster-aggregated adjacency matrix and diagram of the graph respectively. Note that the ordering of the clusters is the same than the one computed for the dendrogram, greatly enhancing visualization of the hierarchical structure. Other models As explained above, the greed package implements many standard models and the list may be displayed with Many plotting functions are available and, depending of the specified model, different type argument may be specified. For further information we refer to the vignettes linked above for each use Using parallel computing For large datasets, it is possible to use parallelism to speed-up the computations thanks to the future package. You only need to specify the type of back-end you want to use, before calling the ? greed function:
{"url":"http://cran.freestatistics.org/web/packages/greed/readme/README.html","timestamp":"2024-11-09T04:40:18Z","content_type":"application/xhtml+xml","content_length":"23146","record_id":"<urn:uuid:c2c865e7-9dac-432c-a6ab-ab8bdf0c4d0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00162.warc.gz"}
Introduction to machine learning with R Fribourg Pérolles, Friday May 13, 2022, 9:00 – 17:00 (if legally possible in presence-mode, in case of lockdown in distance-mode) This training is designed for anyone who wants to understand the fondamentals of machine learning. It is given in English by Prof. Martin Huber. This lecture provides an introduction to machine learning based on the software “R”, however, only elementary basic working knowledge of R is required. Machine learning aims at predicting the value of an outcome of interest, e.g. sales or turnover, based on observing specific patterns of potentially relevant factors (or “predictors”) like price, quality, weather, advertisement campaigns etc. Importantly, such statistical methods allow learning from patterns among predictors in (past) data to forecast the value of the outcome in the future. This lecture first discusses the intuition and usefulness of machine learning for forecasting and taking actions (e.g. changing the price). It then introduces various statistical approaches such as regression and tree-based methods. Using the statistical software “R” and its interface “R Studio”, these methods are applied to various real-world data sets. • To understand the idea and goals of machine learning • To understand the intuition, advantages, and disadvantages of alternative statistical methods • To be able to apply machine learning methods to real-world data using the software “R” and its interface “R Studio” (see also our introductory course in R) • Introduction to the concept and purpose of machine learning • Linear and non-linear regression (OLS, logit regression) • Penalized regression for variable selection and shrinking (lasso and ridge regression) • Tree-based approaches (trees, bagging, random forests) • Model tuning (cross-validation) • Performance evaluation (out-of-sample testing) • Application of all methods using the training • Software “R” and its interface “R Studio” Course material: Lecture slides and R code for applications to data The lecture slides are based on “An Introduction to Statistical Learning with Applications in R” by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani (Springer, New York, 2013). The text book is available as pdf at http://www-bcf.usc.edu/~gareth/ISL/. Further information: Basic knowledge of R is expected. For that purpose, you can take advantage of our introductory course in R on Feb 12, 2021. Participants are requested to bring their own laptop. (Please contact us if this is not feasible.) The maximum number of participants is 18. The participation fee is 500 CHF / 400 CHF for Swiss Engineering-section Fribourg members.
{"url":"http://www.frisam.ch/de/introduction-to-machine-learning-with-r/","timestamp":"2024-11-09T19:01:47Z","content_type":"text/html","content_length":"31269","record_id":"<urn:uuid:486c73e8-4c43-4777-8525-9bbca5bf631f>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00366.warc.gz"}
MATH 184 Assignment 5, Due Monday Nov 5 Note: Many questions were stolen from my test 4 from last year. You will indeed have to work on them and hand them in before seeing the solutions! I will post the solutions to all of test 4 after this assignment is handed in. Please remembered the long list of problems given in the course syllabus for you to work on. You will soon be ready to work on problems from sections 4.1, 4.2, 4.3 and 4.5 1. Compute the derivative f 0 (x) for f (x) = (x2 + 4)1/3 2.Given x2 + y 2 = ln(xy) + 2, compute as a function of x and y. 3. The function f (x) = xe2x has one inflection point. Find the x coordinate of this point. 4. Find the maximum and minimum of g(x) = x3 − 3x on the interval [0, 2] 5. We are not given an expression for the function f (x) but we are given that f (1) = 2 and f 0 (x) = √ 3 x +3 Estimate the value of f (1.1) using this information. 6. The function f (x) = (ln(x))2 is defined for x &gt; 0. On what interval or intervals is the function f (x) concave up? 7. You are planning on building balloons and selling their surface area for advertising. Apparently there is sufficient air travel to sell even the top surface of your balloons. You estimate the net value of a square meter of balloon surface is worth $100 (you have computed expected revenue minus cost of balloon fabric). The cost of helium to fill your balloon is $5 per cubic meter. What is the optimal radius for your balloon? A complete answer will justify that the given radius yields the maximum profit. 8. Section 4.4 questions 5,9,21,29
{"url":"https://studylib.net/doc/11135527/math-184-assignment-5--due-monday-nov-5","timestamp":"2024-11-04T16:51:51Z","content_type":"text/html","content_length":"59136","record_id":"<urn:uuid:fcd22da0-2556-475d-b4f9-edae3208a7d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00678.warc.gz"}
EViews Help: @ifirst Index of the first non-missing value in the data object. Syntax: @ifirst(m) m: data object Return: scalar Returns a scalar containing the index of the first non-missing value of the data object.The series version uses the current workfile sample. Let V be a vector of length 4 whose elements are NA, NA, 4.2, and 1.7. Then = @ifirst(v) returns 3. See also , and
{"url":"https://help.eviews.com/content/functionref_i-@ifirst.html","timestamp":"2024-11-06T11:23:17Z","content_type":"application/xhtml+xml","content_length":"8955","record_id":"<urn:uuid:87947e1c-1f49-450a-8caf-1c26615cd2e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00218.warc.gz"}
The proportion of each additional dollar of household income that is used for consumption expenditures. The marginal propensity to consume (abbreviated MPC) is another term for the slope of the consumption line and is calculated as the change in consumption divided by the change in income. The MPC plays a central role in Keynesian economics. It quantifies the consumption-income relation and the fundamental psychological law. It is also a foundation for the slope of the aggregate expenditures line and is critical to the multiplier process. A related consumption measure is the average propensity to consume. The marginal propensity to consume (MPC) indicates what the household sector does with extra income. The MPC indicates the portion of additional income that is used for consumption expenditures. If, for example, the MPC is 0.75, then 75 percent of extra income goes for consumption. The marginal propensity to consume is critical to the macroeconomy and the study of Keynesian economics. First, the MPC captures induced consumption and the fundamental psychological law of consumer spending proposed by John Maynard Keynes as a key difference between his Keynesian theory and classical economics. Second, the MPC is the slope of the consumption line, which makes it the foundation for the slope of the aggregate expenditures line, as well. Third, the MPC affects the multiplier process and affects the magnitude of the expenditures and tax multipliers. The MPC Formula The standard formula for calculating marginal propensity to consume (MPC) is: MPC = change in consumptionchange in income This formula has a couple of interpretations. • First, it quantifies induced consumption, that is, how much of each extra dollar of income is used for consumption. If income changes by $1, then consumption changes by the value of the MPC. Income induces the change in consumption at a rate measured by the MPC. • Second, the MPC is actually a measure of the slope of the consumption line. The measurement of slope is generally given as the "rise" over the "run." For the consumption line, the rise is the change in consumption and the run is the change in income. A Schedule Of Numbers A consumption schedule, such as the one presented to the right, provides data that can be used to run through a few MPC calculations. The first column in this schedule presents household income, ranging from $0 to $10 trillion. The second column presents consumption expenditures, ranging from $1 to $8.5 trillion. The task at hand is to derive the marginal propensity to consume at each income The marginal propensity to consume is calculated by dividing the change in consumption in the second column by the change in income in the first column. Beginning at the top of the schedule, household income increases from $0 to $1 trillion. This $1 trillion change in income induces a change in consumption from $1 trillion to $1.75 trillion, a change of $0.75 trillion. Running the numbers through the MPC formula gives: MPC = change in consumptionchange in income = $0.75$1 = 0.75 Calculations for each change in income produce similar results. For example, the change in income from $4 trillion to $5 trillion results in a change in consumption from $4 trillion to $4.75 trillion. And the change in income from $8 trillion to $9 trillion results in a change in consumption from $7 trillion to $7.75 trillion. In both cases, the resulting marginal propensity to consume is 0.75. In fact, a quick run through the numbers for each change in consumption shows that the MPC is constant and equal to 0.75. To display all marginal propensity to consume values, click the [MPC] button. While the MPC is not necessarily constant at for all changes in income (in fact, the MPC tends to decline at higher income levels), most analysis of consumption generally works with a constant MPC. It tends to make subsequent calculations for things like the multiplier a lot easier. The Slope Of The Line The marginal propensity to consume is another term for the slope of the consumption line. This can be demonstrated and illustrated using the red consumption line, labeled C, in the exhibit to the right. Most notable, the consumption line is positively sloped, indicating that greater levels of income generate greater consumption expenditures by the household sector. This consumption line reflects a plot of the numbers in the consumption schedule as well as the following consumption function: Just for a little reference, a black 45-degree line is also presented in this exhibit. Because this 45-degree line, by its very nature, has a slope of one, it indicates the relative slope of the consumption line. The flatter consumption has a slope of less than one. In particular, as specified by the consumption function, the slope of this consumption line is equal to 0.75. This slope value indicates that each $1 change in income induces a $0.75 change in consumption. In general, slope is calculated as the "rise" over the "run," that is, the change in the variable on the vertical axis (consumption) divided by the change in the variable on the horizontal axis (income). The change in consumption divided by the change in income is the specification of the marginal propensity to consume. That is, the slope of the consumption line is the marginal propensity to consume. To highlight this point, click the [Slope] button in this exhibit. Moreover, because the consumption line is a straight line, the slope is constant over the entire range of income. This means that the marginal propensity to consume is also constant, a conclusion reached when working through the consumption schedule. While the marginal propensity to consume pops up throughout the study of macroeconomics, few if any topics are more important than the multiplier. The multiplier measures the magnified change in aggregate production (gross domestic product) resulting from a change in an autonomous variable (such as investment expenditures). The magnified change occurs because a change in production (such as what occurs when investment expenditures purchase capital goods) generates income, which then induces consumption. However, the resulting consumption is also an expenditure on production, which generates more income, which induces more consumption. This next round of consumption also triggers a change in production, which generates even more income, and which induces even more consumption. And on it goes, round after round. The end result is a magnified, multiplied change in aggregate production initially triggered by the change investment, but amplified by the change in consumption. The MPC enters into the process because it determines how much consumption is induced with each change in production and income. If the MPC is greater, then the multiplier process is also greater as more consumption is induced with each round of activity. This connection between the multiplier process and the marginal propensity to consume is illustrated in the standard formula for a basic expenditures multiplier: expenditures = 1(1 - marginal propensity to consume) An increase in the marginal propensity to consume reduces the value of the denominator on the right-hand side of the equation, which then increases the overall value of the fraction and thus the size of the multiplier. For example, a marginal propensity to consume of 0.75 results in a multiplier of 4. In contrast, a larger marginal propensity to consume of 0.8 results in a larger multiplier of 5. Other Marginals The marginal propensity to consume is perhaps the most important marginal that enters into the study of Keynesian economics. However, it is not the only important marginal. In fact, all induced variables have corresponding marginals that quantify the impact of income changes. Here a few of the more important marginals: • Marginal Propensity to Save: The flip side of consumption is saving. The fundamental psychological law indicates that an increase in income induces changes in both consumption and saving. The marginal propensity to save (MPS) quantifies the saving part of this relation. It indicates the change in saving resulting from a change in income. In fact, if the MPC and MPS are calculated based on after-tax disposable income, then the two marginals sum to one: MPC + MPS = 1. • Marginal Propensity to Invest: Consumption is not the only one of the aggregate expenditures induced by income and with a corresponding marginal. The marginal propensity to invest (MPI) is the change in investment induced by a change in income. The induced change in investment is not nearly as big as consumption, but it does affect the slope of the aggregate expenditures line and the size of the multiplier. • Marginal Propensity for Government Purchases: Government purchases, like investment, is also induced by income and has a corresponding marginal. The marginal propensity for government purchases (MPG) is the change in government purchases induced by a change in income. The induced change in government purchases is related to the induced change in tax collections, and while it also small compared to consumption, but it too affects the slope of the aggregate expenditures line and the size of the multiplier. • Marginal Propensity to Import: The last of the four aggregate expenditures, net exports (exports minus imports), also has a corresponding marginal. However, the marginal is not for net exports proper, but for the imports part. The marginal propensity to import (MPM) is the change in imports induced by a change in income. The induced change in imports is closely connected to the marginal propensity to consume. That is, a portion of consumption expenditures is actually used to purchase imports, which is reflected the marginal propensity to import. Average Propensity to Consume The marginal propensity to consume is one of two measures of the relation between consumption and income. The other is average propensity to consume (APC). Average propensity to consume is the proportion of household income used for consumption expenditures. It is found by dividing consumption by income. The formula for calculating average propensity to consume (APC) looks a lot like that for the MPC, but with important differences: Rather than the CHANGE in consumption divided by the CHANGE in income, the APC measures TOTAL consumption divided by TOTAL income. In particular, the APC indicates how the household sector divides up total income. If, for example, the APC is 0.9, then 90 of the income received by the household sector is used for consumption. Moreover, whereas the MPC is constant, the APC actually changes from one income level to the next. Recommended Citation: MARGINAL PROPENSITY TO CONSUME, AmosWEB Encyclonomic WEB*pedia, http://www.AmosWEB.com, AmosWEB LLC, 2000-2024. [Accessed: November 14, 2024]. Check Out These Related Terms... | | | | | | | | | Or For A Little Background... | | | | | | | | | | | | | And For Further Study... | | | | | | | | | | | | | | | | | | Search Again? Back to the WEB*pedia
{"url":"https://www.amosweb.com/cgi-bin/awb_nav.pl?s=wpd&c=dsp&k=marginal+propensity+to+consume","timestamp":"2024-11-14T10:59:32Z","content_type":"text/html","content_length":"49791","record_id":"<urn:uuid:756c57c9-0601-45c0-aa12-b2626e6a9931>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00606.warc.gz"}
Slit homogenizer introduced performance gain analysis based on the Sentinel-5/UVNS spectrometer Articles | Volume 14, issue 8 © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License. Slit homogenizer introduced performance gain analysis based on the Sentinel-5/UVNS spectrometer Spatially heterogeneous Earth radiance scenes affect the atmospheric composition measurements of high-resolution Earth observation spectrometer missions. The scene heterogeneity creates a pseudo-random deformation of the instrument spectral response function (ISRF). The ISRF is the direct link between the forward radiative transfer model, used to retrieve the atmospheric state, and the spectra measured by the instrument. Hence, distortions of the ISRF owing to radiometric inhomogeneity of the imaged Earth scene will degrade the precision of the Level-2 retrievals. Therefore, the spectral requirements of an instrument are often parameterized in the knowledge of the ISRF over non-uniform scenes in terms of shape, centroid position of the spectral channel and the full width at half maximum (FWHM). The Sentinel-5/UVNS instrument is the first push-broom spectrometer that makes use of a concept referred to as a slit homogenizer (SH) for the mitigation of spatially non-uniform scenes. This is done by employing a spectrometer slit formed by two parallel mirrors scrambling the scene in the along track direction (ALT) and hence averaging the scene contrast only in the spectral direction. The flat mirrors do not affect imaging in the across track direction (ACT) and thus preserve the spatial information in that direction. The multiple reflections inside the SH act as coherent virtual light sources and the resulting interference pattern at the SH exit plane can be described by simulations using scalar diffraction theory. By homogenizing the slit illumination, the SH strongly modifies the spectrograph pupil illumination as a function of the input scene. In this work we investigate the impact and strength of the variations of the spectrograph pupil illumination for different scene cases and quantify the impact on the ISRF stability for different types of aberration present in the spectrograph optics. Received: 20 Jan 2021 – Discussion started: 05 Feb 2021 – Revised: 08 Jul 2021 – Accepted: 08 Jul 2021 – Published: 10 Aug 2021 The Ozone Monitoring Instrument (OMI) was the first instrument identifying the issue arising from non-uniform Earth scenes on the shape and maximum position of the spectral response of the instrument (Voors et al., 2006). In grating-based imaging spectrometers, the Earth ground scene is imaged by the telescope onto the instrument entrance slit plane. The scanning over the ground area is achieved by either a scanning mirror or a push-broom configuration, where different areas of the surface are imaged as the satellite flies forward. In the subsequent spectrograph, the slit illumination gets spectrally resolved by a dispersive element and re-imaged on the focal plane array (FPA) by an imaging system. The limited spectral resolving power of the instrument arising from diffraction and aberration is described by a convolution of the slit image with the spectrometer and detector point spread function (PSF). In this study, we interpret the resulting intensity pattern on the FPA in the spectral direction as the instrument spectral response function (ISRF). In fact, there exist other definitions of the ISRF. The differentiation of the definitions become particularly important in the presence of spectrometer smile effects (Caron et al., 2017). As we neglect such effects, we will continue with the previously described definition of the ISRF. Depending on the observed scene heterogeneity, the entrance slit will be inhomogeneously illuminated. In the case of a classical slit, this will alter the shape of the ISRF (see Fig. 1). Moreover, a scene dependency in the PSF will also affect the ISRF, which will be particularly discussed in this paper. As the ISRF is the direct link between the radiative transfer model and the spectrum measured by the instrument, a scene-dependent shape of the ISRF will have an immediate impact on the accuracy of the Level-2 retrieval products. Figure 2 depicts a representative top-of-atmosphere spectrum (solar zenith angle 10^∘, albedo 0.05) for the Sentinel-5/UVNS (ultraviolet/visible/near-infrared/SWIR) SWIR-3 spectrometer, incident on the instrument's entrance aperture. The monochromatic spectrum will be smeared by means of a convolution with an exemplary ISRF, which depends on the imaging properties of the instrument for any given wavelength. In general, the ISRF is a wavelength- and field-of-view-dependent instrument characteristic and hence varies over the FPA position. It is experimentally determined prior to launch in on-ground characterization campaigns. Whenever the in-orbit ISRF shape deviates from the on-ground characterized shape due to, for example, heterogeneous scenes, it will affect the measured spectrum from which the Level-2 products are retrieved (e.g., CH[4] and CO in the SWIR-3 channel of Sentinel-5/UVNS). This effect is particularly prominent for instruments with a high spatial resolution. The along track motion of the satellite during the integration times results in a temporal averaging of the ISRF variation, which reduces the impact of scene heterogeneity. The impact of, e.g., albedo variations depends on the instantaneous field of view (IFOV) and the sampling distance in ALT (for Sentinel-5/ UVNS: IFOV=2.5km, ALT SSD=7km). Spectrometers with a large scan area like GOME (Burrows et al., 1999) or SCIAMACHY (Bovensmann et al., 1999; Burrows et al., 1995) are less vulnerable to contrast in the Earth scene due to the small ratio between the slit footprint and the smear distance. In contrast, recent high-resolution hyperspectral imaging spectrometers with an IFOV comparable to the sampling distance (or scan area) are more strongly affected and therefore demand a set of stringent requirements on the in-flight knowledge and stability of the ISRF. This is necessary, as distortions in the ISRF due to non-uniform scenes will introduce biases and pseudo-random noise in the Level-2 data and therefore in the precision of atmospheric composition products. For the Sentinel-5 Precursor (S5P) satellite (Veefkind et al., 2012), launched in 2017 with the Tropospheric Monitoring Instrument (TROPOMI) being the single payload, Hu et al. (2016) showed that the stability and knowledge of the ISRF is the main driver of all instrument calibration errors for the retrieval accuracy. Landgraf et al. (2016) estimate the error of the retrieved CO data product due to non-uniform slit illumination to be on the order of 2% with quasi-random characteristics. Noël et al. (2012) quantify the retrieval error for the upcoming Sentinel-4 UVN imaging spectrometer for tropospheric O[3], NO[2], SO[2] and HCHO. They identify a difference in the retrieval error depending on the trace gas under observation. The largest error occurs for NO[2] with a mean error of 5% and a maximum error of 50%. They propose a software correction algorithm based on a wavelength calibration scheme individually applied to all Earth radiance spectra. As discussed by Caron et al. ( 2019), this type of software correction can only be applied to dedicated bands (UV, VIS, NIR) but particularly fails in the SWIR bands due to the strong absorption lines of highly variable atmospheric components. Sentinel-5/UVNS (Irizar et al., 2019) is the first push-broom spectrometer that employs an onboard concept to mitigate the effect of non-uniform scenes in the along-track direction. A hardware solution called a slit homogenizer (SH) is implemented, which reduces the scene contrast of the Earth radiance in the along track direction (ALT) of the satellite flight motion by replacing the classical slit with a pair of two parallel extended mirrors (Fig. 3). The two parallel rectangular mirrors composing the entrance slit have a distance of b=248µm, side lengths of 65mm in ACT and a length of 9.91mm (SWIR-3) along the optical axis. Thereby, the light focused by the telescope optics onto the slit entrance plane is scrambled by multiple reflections in the ALT direction, whereas in ACT the light passes the SH without any reflection. Heterogeneous scenes in the ACT direction may also affect the ISRF stability in the presence of spectrometer smile. This effect will not be covered in this study and instead we refer the reader to Gerilowski et al. (2011) and Caron et al. (2017). For a realistic reference Earth scene of the Sentinel-5/UVNS mission provided by the ESA (Fig. 5), the ISRF shall meet the requirements of <2% ISRF shape knowledge error, <1% relative full width at half maximum (FWHM) knowledge error and 0.0125nm centroid error in the SWIR-3. Meister et al. (2017) and Caron et al. (2019) presented simulation results providing a first-order prediction of the performance of the SH principle, which are relevant to achieve the performance requirements above. However, so far several second-order effects haven't been quantitatively addressed in the prediction of the homogenizing performance. This paper extends the existing first-order models and provides a more elaborated and comprehensive description of the SH and its impact on performance and instrument layout. We present an end-to-end model of the Sentinel-5/UVNS SWIR-3 channel (2312nm). In particular, we determine the spectrograph pupil illumination, which is altered by the multiple reflections inside the SH. This effect changes the weighting of the aberration present in the spectrograph optics and consequently results in a scene dependency in the optical PSF. As the ISRF is not only a function of the slit illumination but also of the spectrograph PSF, a variation in the intensity distribution across the spectrograph pupil will ultimately put an uncertainty and error contribution to the ISRF. The severity of the spectrograph illumination distortion highly depends on the slit input illumination and the strength and type of aberration present in the spectrograph. In order to quantify the achievable ISRF stability, we simulate several input scenes and different types of aberration. The outline of this paper is as follows: Sect. 2 describes the model we deployed to propagate the light through the SH by Huygens–Fresnel diffraction formula. Applying Fourier optics, we formulate the propagation of the complex electric field from the SH exit plane up to the grating position, representing the reference plane for the evaluation of the spectrograph pupil intensity distribution. In Sect. 3 we quantify the spectrograph pupil intensity distribution for several Earth scene cases. The scene-dependent weighting of the aberration in the spectrograph and its impact on the ISRF properties is discussed and quantified in Sect. 4. Finally, we summarize our results in Sect. 5. This section describes the underlying models and the working principle of the SH. The first part briefly summarizes the model developed by Meister et al. (2017), which propagates the field through the Sentinel-5/UVNS instrument up to the SH exit plane by using a scalar-diffraction approach. In the second part a novel modeling technique of the spectrograph optics is introduced. We put a particular focus on the scene dependency of the spectrograph illumination while using an SH. 2.1Near field The light from objects on the Earth that are imaged at one spatial position (along slit) within the homogenizer entrance slit arrive at the Sentinel-5/UVNS telescope entrance pupil as plane waves, where the incidence angle θ is between $±\mathrm{0.1}{}^{\circ }$. The extent of the wavefront is limited by the size and shape of the telescope aperture. By neglecting geometrical optical aberration, the telescope would create a diffraction-limited point spread function in the telescope image where the SH entrance plane is positioned. Depending on the angle of incidence, the PSF centroid will be located at a dedicated position within the SH entrance plane. The electric field of the diffraction pattern in the SH entrance plane is given as the Fourier transform of the complex electric field over the telescope pupil. For a square entrance pupil, the diffraction pattern is calculated as (Goodman, 2005, p. 103) $\begin{array}{}\text{(1)}& \begin{array}{rl}{\stackrel{\mathrm{̃}}{U}}_{f,\mathit{\theta }}\left({u}_{a},{v}_{a}\right)& =\frac{A}{i\mathit{\lambda }f}{e}^{i\frac{k}{\mathrm{2}f}\left({u}_{a}^{\ mathrm{2}}+{v}_{a}^{\mathrm{2}}\right)}\underset{\mathrm{\Omega }}{\int }{e}^{ik{y}_{t}\mathrm{sin}\left(\mathit{\theta }\right)}\\ & \cdot {e}^{-i\frac{k}{f}\left({x}_{t}{u}_{a}+{y}_{t}{v}_{a}\ right)}\mathrm{d}{u}_{a}\mathrm{d}{v}_{a}\end{array}\text{(2)}& \begin{array}{rl}& =\frac{iA{D}^{\mathrm{2}}}{\mathit{\lambda }f}{e}^{i\frac{k}{\mathrm{2}f}\left({u}_{a}^{\mathrm{2}}+{v}_{a}^{\mathrm {2}}\right)}\mathrm{sin}c\left(\frac{Dk}{\mathrm{2}f}{u}_{a}\right)\\ & \cdot \mathrm{sin}c\left(\frac{Dk}{\mathrm{2}f}\left(f\mathrm{sin}\left(\mathit{\theta }\right)-{v}_{a}\right)\right),\end where (x[t],y[t]) are the coordinate positions in the telescope entrance pupil and (u[a],v[a]) are the respective coordinates in the SH entrance plane. Ω denotes the two-dimensional entrance pupil area, f is the focal length of the telescope, A the amplitude of the plane wavefront at the telescope entrance pupil, D the full side length of the quadratic telescope entrance pupil and $k=\frac{\ mathrm{2}\mathit{\pi }}{\mathit{\lambda }}$ is the wavenumber. Further, the relation ${\int }_{-a}^{a}{e}^{ixc}=\mathrm{2}a\mathrm{sin}c\left(ca\right)$ and a Fresnel approximation was applied in Eq. (2). The propagation of ${\stackrel{\mathrm{̃}}{U}}_{f}$ through the subsequent SH is described by the Huygens–Fresnel principle (Goodman, 2005, p. 66). The reflections at the two mirrors are accounted for by inverting the propagation component in ALT upon every reflection n as $\begin{array}{}\text{(3)}& \begin{array}{rl}& {U}_{f,\mathit{\theta }}\left({u}_{a},{v}_{a}\right)={R}^{|n|}{e}^{in\mathit{\pi }}{\stackrel{\mathrm{̃}}{U}}_{f,\mathit{\theta }}\left({u}_{a},\left(-\ mathrm{1}{\right)}^{n}\left({v}_{a}-nb\right)\right),\\ & \phantom{\rule{1em}{0ex}}\text{for }{v}_{a}\in \left[-\frac{b}{\mathrm{2}}+nb,\frac{b}{\mathrm{2}}+nb\right],\end{array}\end{array}$ where R is the reflectivity, b is the slit width and e^inπ describes a phase jump upon every reflection n. Inserting Eq. (2) into (3) and applying the Huygens–Fresnel diffraction principle yields the expression for the intensity distribution at the SH exit plane for a given incidence angle θ, SH length l and position $r\left({u}_{a},{v}_{a}\right)=\sqrt{{l}^{\mathrm{2}}+\left({u}_{b}-{u}_{a}{\ right)}^{\mathrm{2}}+\left({v}_{b}-{v}_{a}{\right)}^{\mathrm{2}}}$ as $\begin{array}{}\text{(4)}& \begin{array}{rl}{U}_{\mathit{\theta }}\left({u}_{b},{v}_{b}\right)& =\frac{lA{D}^{\mathrm{2}}}{{\mathit{\lambda }}^{\mathrm{2}}f}\underset{{u}_{a}\in \mathbb{R}}{\int }\ underset{{v}_{a}=-\frac{b}{\mathrm{2}}}{\overset{{v}_{a}=\frac{b}{\mathrm{2}}}{\int }}\sum _{n\in \mathbb{N}}{R}^{|n|}\\ & \cdot \frac{{e}^{i\frac{k}{\mathrm{2}f}\left({u}_{a}^{\mathrm{2}}+{\left(\ left(-\mathrm{1}{\right)}^{n}\left({v}_{a}-nb\right)\right)}^{\mathrm{2}}\right)+ikr\left({u}_{a},{v}_{a}+nb\right)+in\mathit{\pi }}}{{r}^{\mathrm{2}}\left({u}_{a},{v}_{a}+nb\right)}\\ & \cdot \ mathrm{sin}c\left(\frac{Dk}{\mathrm{2}f}{u}_{a}\right)\\ & \cdot \mathrm{sin}c\left(\frac{Dk}{\mathrm{2}f}\left(f\mathrm{sin}\left(\mathit{\theta }\right)-\left(-\mathrm{1}{\right)}^{n}{v}_{a}\right) where u[b],v[b] are the coordinates of the position at the SH exit plane. Evaluating Eq. (4) for every incidence angle of the Sentinel-5/UVNS field of view (FoV) results in the so-called SH transfer function (Fig. 3b), which maps any field point originating from Earth to an intensity distribution at the SH exit plane. In a purely geometric theory and a perfect SH configuration in terms of length, every point source would be distributed homogeneously in the ALT direction (Fig. 3a). However, as is quantified in Eq. (4), the field distribution at the SH output plane highly depends on interference effects due to path differences of the reflected light inside the SH, resulting in a non-uniform transfer function as shown in Fig. 3b. A full experimental validation of the propagation model through the SH is still missing. An initial approach to validate the model in a breadboard activity was conducted by ITO Stuttgart and published in Irizar et al. (2019). 2.2Far field In a space-based imaging spectrometer equipped with a classical slit acting as a field stop, a point source on the Earth surface enters the instrument as a plane wavefront with a uniform intensity over the telescope pupil. As this principle applies for every point source in a spatial sample on the Earth, the telescope pupil intensity homogeneity is independent of the radiance variation among the point sources in a spatial sample. Besides some diffraction edge effects in the slit plane, the telescope pupil intensity distribution gets retrieved in the spectrograph pupil. This is not the case when introducing a mirror-based SH. Existing SH models (Meister et al., 2017; Caron et al., 2019) implement the spectrometer as a simple scaling factor and the ISRF on the FPA is obtained via the convolution of the SH output intensity distribution, the pixel response implemented as a characteristic function and the spectrograph PSF. In this contribution we model the propagation through the spectrograph more accurately by including the spectrograph optics, such as the collimator, a dispersive element and the imaging optics. In particular, the inclusion of these optical parts becomes important because the SH not only homogenizes the scene contrast in the slit, but it also significantly modifies the spectrograph pupil illumination. A schematic diagram of the SH behavior and the instrument setup is shown in Fig. 4. A plane wavefront with incidence angle Θ is focused by a telescope on the SH entrance plane. In the ACT direction, the light is not affected by the SH. After a distance l, corresponding to the SH length, the diffraction limited PSF at the SH entrance plane is converted to the far-field pattern of the diffraction pattern. Independent of the applied scene in ACT, the telescope pupil intensity distribution in ACT is mostly retrieved again at the spectrograph pupil. The exact distribution of the spectrograph pupil illumination is affected by magnification factor and a truncation of the electric field at the SH entrance plane, which leads to a slight broadening and small intensity variations with a high frequency in angular space (Berlich and Harnisch , 2017). In ALT the diffraction pattern in the SH entrance plane undergoes multiple reflections on the mirrors, so that eventually the whole exit plane of the SH is illuminated. To preserve the full image information along the swath, the entrance plane of the SH must be imaged; to homogenize the scene in ALT the exit plane of the SH must be imaged. This is achieved by an astigmatism in the collimator optics. Moreover, the multiple reflections inside the SH lead to a modification of the system exit pupil illumination. In other words, the SH output plane (near field) and the spectrograph pupil intensity variation (far field) strongly depend on the initial position of the incoming plane wave and therefore on the Earth scene radiance in ALT direction. Following a first simple geometrical argument as discussed by (Caron et al., 2019), we consider a point source at the SH entrance. The rays inside the cone emerging from this source will undergo a number of reflections depending on the position of the point source and the angle of the specific ray inside the cone. The maximum angle is given by the telescope f-number. With this geometrical reasoning it becomes obvious that the number of reflections differs among the rays inside the cone. If the number of reflections is even, a ray keeps its nominal pupil position, whereas if the number is odd, its pupil coordinate will be inverted. From this argument we deduce that the spectrograph pupil illumination will be altered with respect to the telescope pupil illumination. Note that the reallocation of the angular distribution of the light has a different origin than the remaining inhomogeneity at the SH exit plane. The achieved near-field homogenization is dependent on the remaining interference fluctuations in the SH transfer function. In contrast, the variations in the spectrograph illumination are based on a geometrical reallocation of the angular distribution of the light exiting the SH in combination with interference effects in the spectrograph pupil plane. In the following we make the geometrical argument rigorous using diffraction theory. A general case for the connection between slit exit plane and spectrograph pupil plane is considered by Goodman ( 2005, p. 104). In the scenario discussed there, a collimated input field U[l](x[s],y[s]) propagates through a perfect thin lens at a distance d. The field in the focal plane of the lens is then given $\begin{array}{}\text{(5)}& \begin{array}{rl}{U}_{f}\left({u}_{b},{v}_{b}\right)& =\frac{\mathrm{1}}{i\mathit{\lambda }f}\mathrm{exp}\left(i\frac{k}{\mathrm{2}f}\left(\mathrm{1}-\frac{d}{f}\right)\ left({u}_{b}^{\mathrm{2}}+{v}_{b}^{\mathrm{2}}\right)\right)\\ & \cdot \underset{-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\int }}\underset{-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\int }} {U}_{l}\left({x}_{s},{y}_{s}\right)\\ & \cdot \mathrm{exp}\left(-i\frac{k}{f}\left({x}_{s}{u}_{b}+{y}_{s}{v}_{b}\right)\right)\mathrm{d}{x}_{s}\mathrm{d}{y}_{s}\end{array}\end{array}$ where x[s],y[s] are the position in the spectrometer pupil plane and u[b],v[b] are the coordinates in the image plane at the SH exit. Indeed, the field at the lens focal plane is proportional to the two-dimensional Fourier transform. In contrast, our situation is inverted as we are interested in U[l](x[s],y[s]), i.e., the collimated field distribution at the spectrometer pupil originating from the SH output plane. Further, we need to incorporate the astigmatism in the collimation optics and the diffraction grating. These steps are covered in the following two sections. 2.3Collimator astigmatism In order to keep the full image information in ACT while imaging the homogenized SH output image, the collimator needs an astigmatism. In our model, this is implemented via Zernike polynomial terms on the collimation lens. We follow the OSA/ANSI convention for the definitions of the Zernike polynomials and the indexing of the Zernike modes (Thibos et al., 2000). The focal length of the collimator in ALT images the SH exit plane, while in ACT the SH entrance plane is imaged. In the simulation this is realized with three terms: a focal length term where the focal length is that of the collimator in ALT, a defocus term to shift the object plane and an astigmatism term to separate the ALT (tangential) and ACT (sagittal) object planes. The Zernike polynomials are given by $\begin{array}{}\text{(6)}& \text{defocus:}\phantom{\rule{1em}{0ex}}{Z}_{\mathrm{2}}^{\mathrm{0}}\left(\mathit{\rho },\mathit{\theta }\right)={c}_{\mathrm{02}}\sqrt{\mathrm{3}}\left(\mathrm{2}{\ mathit{\rho }}^{\mathrm{2}}-\mathrm{1}\right)\end{array}$ $\begin{array}{}\text{(7)}& \text{astigmatism:}\phantom{\rule{1em}{0ex}}{Z}_{\mathrm{2}}^{\mathrm{2}}\left(\mathit{\rho },\mathit{\theta }\right)={c}_{\mathrm{22}}\sqrt{\mathrm{6}}{\mathit{\rho }}^{\ mathrm{2}}\mathrm{sin}\left(\mathrm{2}\mathit{\theta }\right),\end{array}$ where c[nm] are the Zernike coefficients defining the strength of the aberration and ${Z}_{n}^{m}$ are the Zernike polynomials. Due to the elegant and orthonormal definition of the Zernike polynomials, a perfect matching of defocus and astigmatism amplitude is straightforward, as the difference between the sagitta and tangential plane of the astigmatism is solely dependent on the radial term of the Zernike polynomial. Therefore, in order to match the corresponding difference given by the SH length, the weighting of the astigmatism has to be larger than the defocus term by a factor of $\sqrt{\mathrm{2}}$. Hence, the combined Zernike term will be $\begin{array}{}\text{(8)}& H\left(\mathit{\rho },\mathit{\theta }\right)=c{Z}_{\mathrm{2}}^{\mathrm{0}}\left(\mathit{\rho },\mathit{\theta }\right)+\sqrt{\mathrm{2}}c{Z}_{\mathrm{2}}^{\mathrm{2}}\ left(\mathit{\rho },\mathit{\theta }\right).\end{array}$ Including the astigmatism of the collimation optics, applying $d={f}_{\mathrm{col},\mathrm{ALT}}$ and solving Eq. (5) for U[l,θ] by using the coordinate transformation ${x}_{s}^{\prime }=\frac{k}{f} {x}_{s}$ and ${y}_{s}^{\prime }=\frac{k}{f}{y}_{s}$, we get the field distribution at the diffraction grating as $\begin{array}{}\text{(9)}& \begin{array}{rl}{U}_{l,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right)& =\frac{i}{\mathit{\lambda }f}{e}^{ik\left(c{Z}_{\mathrm{2}}^{\mathrm{0}}\left(\ mathit{\rho },\mathit{\theta }\right)+\sqrt{\mathrm{2}}c{Z}_{\mathrm{2}}^{\mathrm{2}}\left(\mathit{\rho },\mathit{\theta }\right)\right)}\\ & \cdot \underset{-\mathrm{\infty }}{\overset{\mathrm{\ infty }}{\int }}\underset{-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\int }}{U}_{f,\mathit{\theta }}\left({u}_{b},{v}_{b}\right)\\ & \cdot \mathrm{exp}\left(i\frac{k}{f}\left({x}_{s}^{\prime }{u}_ {b}+{y}_{s}^{\prime }{v}_{b}\right)\right)\mathrm{d}{u}_{b}\mathrm{d}{v}_{b}.\end{array}\end{array}$ Equation (9) yields the field distribution incident on the diffraction grating. The implementation of the diffraction grating, which is responsible for the wavelength dispersion, will be introduced in the next section. 2.4Diffraction grating The primary goal of the spectrometer is to distinguish the intensity of the light as a function of the wavelength and spatial position. In order to separate the wavelengths, a diffractive element is placed in the spectrograph pupil and disperses the light in the ALT direction. For our analysis, we place the diffraction grating at a distance $d={f}_{\mathrm{col},\mathrm{ALT}}$ after the collimator and on the optical axes. Further, we model the dispersive element as a one-dimensional binary phase diffraction grating. Such gratings induce a π phase variation by thickness changes of the grating medium. Three design parameters are used to describe the grating and are unique for every spectrometer channel: the period of the grating Λ, the phase difference Φ between the ridge (of width d) and the groove regions of the grating, and the fill factor $d/\mathrm{\Lambda }$. Physically, the phase difference itself is induced by two parameters: the height or thickness t of the ridge and the refractive index of the material of which the grating is made. In most cases, the refractive index of the used material is fixed and the thickness of the material is the primary parameter. The phase profile with a fill factor of 0.5, which provides the maximum efficiency in the ALT direction, is given by The complex electric field of the spectrograph pupil wavefront after the diffraction grating is then given by $\begin{array}{}\text{(11)}& {U}_{g,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right)={U}_{l,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right){e}^{i\mathrm{\Phi }\left The intensity distribution after the grating is given by inserting Eq. (9) in (11) and applying the absolute square: $\begin{array}{}\text{(12)}& {I}_{g,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right)=|{U}_{g,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right){|}^{\mathrm{2}}.\end The implementation of the diffraction grating is a simplified model, which is an approximation of the real, more complex case. In Sentinel-5/UVNS, the SWIR spectrograph is equipped with a silicon immersed grating. The simplified approach is also valid for this case, as the SH does not affect the general behavior of the grating. 3Spectrograph pupil intensity distribution The far-field intensity distribution is dependent on the contrast of the Earth scene in ALT and therefore on the SH entrance plane illumination. We characterize the amplitude of the variations of the spectrograph pupil illumination by introducing two types of heterogeneous scenes. The first is an applicable Earth scene as defined by the ESA for the Sentinel-5/UVNS mission, which aims at representing a realistic Earth scene case. The on-ground albedo variations of this scene can be parameterized as a linear interpolation between two spectra representing the same atmospheric state but obtained with either a dark or bright albedo (Caron et al., 2017). The spatial variation of the scene heterogeneity is described by introducing interpolation weights w[k]. The resulting spectrum for a given ALT subsample k is then calculated as $\begin{array}{}\text{(13)}& {L}_{k}\left(\mathit{\lambda }\right)=\left(\mathrm{1}-{w}_{k}\right){L}_{\mathrm{dark}}\left(\mathit{\lambda }\right)+{w}_{k}{L}_{\mathrm{bright}}\left(\mathit{\lambda } where the reference spectra correspond to a tropical bright scene (L[bright]: albedo=0.65) and a tropical dark scene (L[dark]: albedo=0.05). The weighting factors that were used for this study have been derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance products with 500m spatial resolution and total coverage of 25km for relevant conditions of Sentinel-5/UVNS (EOP PIO, 2011). The slit smearing due to platform movement is accounted for by convolving the on-ground scene with the motion boxcar of the spatial sampling distance (SSD). The platform movement acts like a low-pass filter and averages out short albedo variations with respect to the SSD and the instrument's FoV. However, without an SH, remaining inhomogeneities are present in the slit, which yield up to 20% slit illumination variations in ALT directions. Figure 5 depicts the on-ground albedo contrast given in terms of weighting factors w[k], the scene after smearing due to the motion of the platform and the location of the SH entrance plane. We assume the scene to be homogeneous in the ACT direction. In fact, heterogeneous scenes in the ACT direction may also affect the ISRF stability in the presence of spectrometer smile (see Gerilowski et al., 2011; Caron et al., 2017). The second scene considered represents an artificial calibration (CAL) scene where 50% of the slit is illuminated and 50% is dark. These kind of instantaneous transitions are impossible to be observed by a push-broom instrument with finite FoV and integration time. However, they are convenient to be applied in experimental measurements and will serve as reference to experimentally validate the SH performance models. Figure 6 depicts the simulation results for the pupil intensity distribution in the SWIR-3 (2312nm) for the applied test scenes as well as a homogeneous slit illumination. As expected, the uniformity of the input telescope pupil illumination is completely conserved in the ACT direction due to the absence of interaction, i.e., reflection, with the SH. Therefore the top-hat intensity distribution of the telescope is, besides the diffraction edge effects, completely preserved. On the contrary, the intensity distribution in ALT is dependent on the contrast of the applied scene. Even for a homogeneous scene the SH modifies the pupil intensity (Fig. 6a) and consists of symmetrical variations. The intensity pattern just varies slightly for the applicable Earth scene (Fig. 6b) due to the moderate gradient of the slit illumination variation. The CAL scenes (Fig. 6c, d) highlight the previously made geometrical argument for the non-uniform pupil illumination as parts of the pupil are left with only a fraction of the light. For illustration, we show a case where the upper 50% of the slit are illuminated and another case where the lower 50% of the slit are illuminated (representing the ALT illumination). In the next section we will investigate the impact of non-uniform pupil illumination in combination with spectrograph aberration on the ISRF stability. The main impact of the above described variations in the spectrometer pupil illumination is the scene-dependent weighting of the aberration inherent to the spectrograph optics. In the case of a classical slit, it is valid to calculate the ISRF of an imaging spectrometer as the convolution of the slit illumination, the pixel response on the FPA and the optical PSF of the spectrograph optics. When using an SH, a scene dependency of the spectrograph pupil illumination will weight the aberration of the system accordingly and thereby create a variation in the PSF, which will ultimately also change the ISRF properties. Therefore, it is necessary to keep the complex phase of the electric field during the propagation through the instrument. Instead of a convolution, we propagate the spectrograph pupil illumination through the imaging optics by diffraction integrals. For the description of the aberration present in the Sentinel-5/UVNS instrument we use again the formulation of Zernike theory. We know the expected PSF size on the FPA of the Sentinel-5/UVNS SWIR-3 channel, which in the case of a classical slit can be approximated by the standard deviation of a normal distribution. In order to assess the impact of aberration, we impinge different types of aberration on the spectrograph imaging optics and match the PSF size to the instrument prediction. As the shape of the PSF for an arbitrary aberration is not given by a normal distribution, we define the PSF size as the area where 80% of the encircled energy (EE) is contained. Then we tune the strength of the aberration coefficients in such a way that the size of the aberrated PSF matches that of the normal distributed PSF. For the transformation of the spectrograph pupil illumination to the FPA including aberration, we apply the thin lens formula and expand it by adding the phase term for the Zernike aberration (Goodman, 2005, p. 145). Our starting point for the propagation is the grating position where, for the case of Sentinel-5/UVNS, the distance d matches the focal length of the imaging optics. In that case, the formulation simplifies again and is given by a relation that has the form of a Fourier transform: $\begin{array}{}\text{(14)}& \begin{array}{rl}{U}_{\mathrm{FPA},\mathit{\theta }}\left(s,t\right)& =\frac{\mathrm{1}}{i\mathit{\lambda }{f}_{\mathrm{im}}}\underset{-\mathrm{\infty }}{\overset{\mathrm {\infty }}{\int }}\underset{-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\int }}{U}_{g,\mathit{\theta }}\left({x}_{s}^{\prime },{y}_{s}^{\prime }\right)\\ & \cdot \mathrm{exp}\left(-i\frac{k}{{f}_{\ mathrm{im}}}\left({x}_{s}s+{y}_{s}t\right)\right)\\ & \cdot \mathrm{exp}\left(\frac{ik}{\mathit{\pi }}H\left(r,\mathit{\varphi }\right)\right)\mathrm{d}{x}_{s}\mathrm{d}{y}_{s},\end{array}\end{array} where s,t are the coordinates at the FPA, f[im] is the focal length of the imager, U[g,θ] is the field distribution at the grating and H(r,ϕ), with $r=r\left({x}_{s},{y}_{s}\right)$ and $\mathit{\ varphi }=\mathit{\varphi }\left({x}_{s},{y}_{s}\right)$, is the respective Zernike aberration that we apply. Any spatially incoherent monochromatic input scene can be distributed in plane wavefronts with amplitude A(Θ). Each such wavefront leads to an intensity $I={I}_{\mathrm{\Theta }}\left(s,t\right)=|{U}_{\mathrm{FPA},\mathit{\theta }}{|}^{\mathrm{2}}$ on the FPA. As we have no SH impact in the ACT direction, we collapse this dimension and sum along it. This yields the one-dimensional ISRF intensity distribution on the FPA as a function of the incidence angle Θ as I[Θ](t). The respective scene will weight the intensities on the FPA depending of their strength and is therefore the linear operator: $\begin{array}{}\text{(15)}& {I}_{t}=\underset{\mathrm{\Theta }\in \mathbb{R}}{\int }A\left(\mathrm{\Theta }\right)I\left(\mathrm{\Theta },t\right)\mathrm{d}\mathrm{\Theta }=I\circ A\left(t\right).\ Note that for a homogeneous scene, A(Θ)=1 for every incidence angle. Finally, the normalized ISRF on the FPA is given by: $\begin{array}{}\text{(16)}& \stackrel{\mathrm{̃}}{\mathrm{ISRF}}\left(t\right)=\left({I}_{\mathrm{\Theta }}\circ A\right)\ast \mathit{\chi }\ast {N}_{\mathit{\sigma }}\left(t\right)\text{(17)}& \ mathrm{ISRF}\left(\mathit{\lambda }\right)=\frac{\stackrel{\mathrm{̃}}{\mathrm{ISRF}}\left(\frac{\mathit{\lambda }}{\mathit{\alpha }}\right)}{\mathit{\alpha }\int \stackrel{\mathrm{̃}}{\mathrm{ISRF}}\ where χ is the characteristic function, which is 1 inside a pixel area and 0 elsewhere, α is a scaling factor to give the ISRF in units of wavelength (λ) and N[σ] is the density function of a normal distribution with zero mean value and standard deviation σ. The latter factor accounts for the modulation transfer function (MTF) of the detector (not the MTF of the whole optical system). In order to assess the stability of the ISRF we define three merit functions: • shape error: the maximum difference of the ISRF calculated for a homogeneous and heterogeneous scene, respectively: $\begin{array}{}\text{(18)}& \text{Shape error}:=\underset{\mathit{\lambda }}{max}\frac{{\mathrm{ISRF}}_{\mathrm{hom}}\left(\mathit{\lambda }\right)-{\mathrm{ISRF}}_{\mathrm{het}}\left(\mathit{\ lambda }\right)}{{max}_{\stackrel{\mathrm{̃}}{\mathit{\lambda }}}{\mathrm{ISRF}}_{\mathrm{hom}}\left(\stackrel{\mathrm{̃}}{\mathit{\lambda }}\right)};\end{array}$ • centroid error: the shift of the position of the spectral channel centroid, where the centroid is defined as $\begin{array}{}\text{(19)}& \text{Centroid}:=\frac{\underset{\mathrm{FPA}}{\int }\mathrm{ISRF}\left(\mathit{\lambda }\right)\mathit{\lambda }\mathrm{d}\mathit{\lambda }}{\underset{\mathrm{FPA}} {\int }\mathrm{ISRF}\left(\mathit{\lambda }\right)\mathrm{d}\mathit{\lambda }};\end{array}$ • the spectral resolution of the ISRF given by the FWHM. We consider two cases for the assessment of the induced impact on the ISRF stability. In the first case, we neglect any variation of the spectrograph illumination and use the PSF as a convolution kernel of the ISRF given as a constant and scene-independent normal distribution defined as $\begin{array}{}\text{(20)}& g\left(t\right)=\frac{\mathrm{1}}{\mathit{\sigma }\sqrt{\mathrm{2}\mathit{\pi }}}\mathrm{exp}\left(-\frac{{t}^{\mathrm{2}}}{\mathrm{2}{\mathit{\sigma }}^{\mathrm{2}}}\ where σ is the standard deviation representing the size of the PSF. The spot size value for a representative field point in the SWIR-3 spectrometer of Sentinel-5/UVNS is about 6.85µm. When convolving with a gaussian PSF, we neglect the non-uniformity in the pupil and the spectrometer aberration, and the ISRF errors are only driven by the slit exit illumination (near field). For the second case, we impinge a certain amount of aberration on the imaging optics to get the same spot size for the PSF as in the first case. In this case, the ISRF errors are a combination of the remaining inhomogeneities at the SH exit plane (near field) as well as effects due to non-uniform spectrograph illumination (far field). The aberration present in the Sentinel-5/UVNS spectrograph are dependent on the position on the FPA in spectral and spatial direction. In the upcoming characterization and calibration campaign, the specific types of aberration of the final instrument will not be determined, but only the size of the spots. Therefore, although it is not a realistic case, we impinge pure aberration of a single type in order to determine critical Zernike terms for the ISRF stability. We also test two mixtures of different types of aberration, which represent more realistic field points of the Sentinel-5/UVNS. The ISRF for a homogeneous scene including aberration will be extensively characterized on ground. We want to investigate how the ISRF based on several Zernike terms behave under the condition of non-uniform scenes and how the ISRF deviation evolves with respect to each aberration-type-specific, homogeneous ISRF. Therefore, in the next paragraph, we calculate the relative change in the ISRF figures of merit functions. In the following, we present the ISRF figures of merit resulting from the simulation of several Zernike polynomials for the Sentinel-5/UVNS applicable heterogeneous Earth scene and a 50% stationary calibration scene. Further, we compare the results to the case of a classical slit without scene homogenization. Tables 1 and 2 summarize the results for the ISRF figures of merit. Note that the errors for the calibration scene are much larger than the errors for a realistic Earth scene. The calibration scene can be used in a laboratory to characterize the SH performance and compare it with the prediction. All Zernike polynomials increase the error in the ISRF knowledge compared to the case where the ISRF is calculated as the convolution with a constant gaussian PSF. The error magnitude variation ranges from only small increasing errors (defocus, vertical astigmatism) to a notable increase of the error (oblique quadrafoil, horizontal coma). The aberration changes both the maximum amplitude of the errors and the specific shape of the ISRF. Figure 7b depicts the ISRF assuming pure vertical coma, pure spherical aberration and pure oblique trefoil for a heterogeneous 50% calibration scene. The lower part of the plot shows the ISRF shape difference for each specific homogeneous reference scene. Note that the shape error is defined as the maximum amplitude of the difference plot. As none of the field points in the real Sentinel-5/UVNS instrument will contain a pure singular type of aberration, we tested two sets of aberration mixtures, which is more representative of a real field point in the Sentinel-5/UVNS instrument. Although our study does not provide a rigorous mathematical argument, the results indicate that the error of the combined Zernike polynomials lies within the errors of the individual contributors. This argument is supported by Fig. 8, where we plotted the ISRF shape error going from a pure oblique quadrafoil aberration to a pure defocus aberration. In each step we reduced the fraction of the quadrafoil aberration by 20% and tuned the defocus aberration coefficient in such a way that we ended up with the same PSF size of 6.85µm (80% EE). The ISRF errors always remain in the corridor between the case of pure oblique quadrafoil and pure defocus aberration. This behavior was tested for several other Zernike combinations and we conclude that the errors given in Tables 1 and 2 for the respective Zernike polynomials span the error space within which mixtures of aberration lie. Although the phenomena of the variations of the pupil illumination in combination with spectrometer aberration increases the errors, the SH still homogenizes the scene well and significantly improves the stability of the ISRF compared to a classical slit. In Fig. 7a we compare the ISRF shape difference for a 50% stationary calibration scene for a case with a classical slit and a case with SH. The SH improves the ISRF stability by almost an order of magnitude. Considering the applicable Earth scene and including the far-field variations, the SH still provides sufficiently stable ISRF stability with respect to the mission requirements for moderate heterogeneous scenes of Sentinel-5/UVNS. This would not be the case for an instrument employed with a classical slit. In certain scenarios, Sentinel-5/UVNS will fly over Earth scenes with higher contrasts than specified in the applicable Earth scene. This will be the case when flying over cloud fields, water bodies or city to vegetation transitions. However, these scenes are excluded from the mission requirements in terms of scene homogenization. Although sufficient for the purposes of Sentinel-5/UVNS, the capability of the SH to homogenize the scene is not perfect. This imperfection is particularly prominent when considering the calibration scenes. The imperfections originate from the remaining interference fluctuations in the SH transfer function and are dependent on the wavelength. Higher wavelengths show smaller frequencies and larger peak-to-valley amplitudes of the maxima in the SH transfer function, which leads to reduced homogenization efficiency. Therefore, the SWIR-3 wavelength channel is the most challenging in terms of scene homogenization. We observe that increasing the number of reflections inside the SH will increase the number of stripes in the spectrometer pupil illumination (see Fig. 6c, d) and reduce the peak to valley amplitude. This would lead to a more homogeneous pupil illumination. More reflection in the SH can be achieved by either increasing the length of the SH or adapting the telescope F[#]. However, it is advantageous to keep the SH length small to reduce the collimator astigmatism requirements. Note that a longer SH would not increase the near-field homogenization performance. In addition, more reflections in the SH lead to greater transmission losses at the mirrors. As the errors due to the pupil illumination are small compared to achieved near-field homogenization, it seems favorable to prioritize the first-order design rule given in Caron et al. (2019) and Meister et al. (2017). The SH shows the best near-field homogenization performance if ${F}_{\mathit{#}\mathrm{tel}}=l/\left(\ mathrm{2}bn\right)$, where F[#tel] is the telescope F-number, l is the SH length, b is the SH width and n is the number of reflections. For Sentinel-5/UVNS, the optimal parameters for SWIR-3 are a telescope F[#] of 9.95, a slit length of 9.91mm and a slit width of 248µm. The simulation results of this study still require experimental validation. An initial approach to validate the SH transfer functions was published in Irizar et al. (2019), where they showed good agreement between the simulation and the experimental result for a single SH incidence angle. The verification of the full transfer function including the full FoV range is pending. The SH far-field effects investigated in this study could be determined by measuring the pupil intensity distribution at the grating position by means of an appropriate test bench. The test bench would need to be capable of illuminating the SH entrance plane through a telescope with angles representing the Sentinel-5/UVNS FoV. Further, the astigmatism of the SH needs to be compensated for, which could be done by introducing a cylindrical lens in the collimator system. Apart from the mirror-based SH discussed in this study, future remote sensing instruments investigate the technology of another slit homogenizer technology, which is based on rectangular multimode fiber bundles. These devices are based on the same principle as the mirror-based SH but enable one to homogenize the scene in the ACT and ALT direction (Amann et al., 2019) and provide enhanced performance over extreme albedo variations. The presented study continues the investigation by Caron et al. (2019) and Meister et al. (2017) on mirror-based slit homogenizer technology. While the preceding studies considered the homogenization of the SH exit plane, here we extend the models by including the electric field propagation through the subsequent spectrograph. The slit homogenizer not only homogenizes the slit illumination but also modifies the spectrograph illumination dependent on the input scene. The variations in the spectrograph pupil illumination will lead to a scene-dependent weighting of the geometrical aberration in the optical system, which causes an additional distortion source of the ISRF. The phenomena is particularly prominent in the presence of extreme on-ground albedo contrasts. This will be the case when the instrument flies over clouds or water bodies. However, in the context of the Sentinel-5/UVNS instrument, these scenes are excluded from the mission requirements. We observe that the impact of spectrograph pupil illumination variations is small compared to the error due to non-uniform slit illumination, and the ISRF distortion is primarily driven by the remaining near-field variations after the SH. The inhomogeneity remnants arise from the fluctuations of the interference pattern at the SH exit plane. The strength of the variations increases with wavelength. Therefore, this study was conducted in the SWIR-3 channel in order to cover the worst case. We quantify the ISRF in terms of shape error, FWHM error and centroid error at 2312nm by an end-to-end propagation through the SH and the subsequent spectrograph optics. With regard to these figures of merit, our simulation results suggest an increase of the errors depending on the specific type of aberration impinged on the optics. ISRF errors of combined Zernike polynomials are always within the maximum errors of the individual Zernike constituent. Although the SH changes the spectrometer illumination, it still has significant performance advantages in stabilizing the ISRF compared to a classical slit. For an applicable heterogeneous Earth scene, the SH improves the ISRF shape stability by a factor of 5–10. The remaining residual errors are well below the Sentinel-5/UVNS system requirements, which are 2% shape error, <1% relative FWHM error and <0.0125nm (SWIR-3) centroid error. The datasets generated and/or analyzed for this work are available from the corresponding author on reasonable request, subject to confirmation of Airbus Defence and Space GmbH. TH developed, implemented, applied and evaluated the methods for the end-to-end modelling of the Sentinel-5/UVNS instrument with input from CM. TH performed the performance gain analysis which was supported by CM and CK and revised by JK and MW. TH prepared the paper, with contributions and critical revision from all co-authors. The authors declare that they have no conflict of interest. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. We thank Tobias Lamour, Jess Köhler and Markus Melf (all at Airbus Defence and Space) for the helpful comments on a previous version of the manuscript. This paper was edited by Ulrich Platt and reviewed by Bernd Sierk and Ruediger Lang. Amann, S., Duong-Ederer, Q., Haist, T., Sierk, B., Guldimann, B., and Osten, W.: Characterization of fiber-based slit homogenizer devices in the NIR and SWIR, in: International Conference on Space Optics – ICSO 2018, 7–10 October 2014, La Caleta, Tenerife, Canary Islands, edited by: Sodnik, Z., Karafolas, N., and Cugny, B., vol. 11180, 2276–2286, International Society for Optics and Photonics, SPIE, https://doi.org/10.1117/12.2536147, 2019.a Berlich, R. and Harnisch, B.: Radiometric assessment method for diffraction effects in hyperspectral imagers applied to the earth explorer 8 mission candidate flex, in: International Conference on Space Optics – ICSO 2014, 7–10 October 2014, La Caleta, Tenerife, Canary Islands, edited by: Sodnik, Z., Cugny, B., and Karafolas, N., vol. 10563, 1475–1483, International Society for Optics and Photonics, SPIE, https://doi.org/10.1117/12.2304079, 2017.a Bovensmann, H., Burrows, J. P., Buchwitz, M., Frerick, J., Noël, S., Rozanov, V. V., Chance, K. V., and Goede, A. P. H.: SCIAMACHY: Mission Objectives and Measurement Modes, J. Atmos. Sci., 56, 127–150, https://doi.org/10.1175/1520-0469(1999)056<0127:SMOAMM>2.0.CO;2, 1999.a Burrows, J., Hölzle, E., Goede, A., Visser, H., and Fricke, W.: SCIAMACHY–scanning imaging absorption spectrometer for atmospheric chartography, Acta Astronaut., 35, 445–451, https://doi.org/10.1016/ 0094-5765(94)00278-T, 1995.a Burrows, J. P., Weber, M., Buchwitz, M., Rozanov, V., Ladstätter-Weißenmayer, A., Richter, A., DeBeek, R., Hoogen, R., Bramstedt, K., Eichmann, K.-U., Eisinger, M., and Perner, D.: The Global Ozone Monitoring Experiment (GOME): Mission Concept and First Scientific Results, J. Atmos. Sci., 56, 151–175, https://doi.org/10.1175/1520-0469(1999)056<0151:TGOMEG>2.0.CO;2, 1999.a Caron, J., Sierk, B., Bezy, J.-L., Loescher, A., and Meijer, Y.: The CarbonSat candidate mission: radiometric and spectral performances over spatially heterogeneous scenes, in: International Conference on Space Optics – ICSO 2014, 7–10 October 2014, La Caleta, Tenerife, Canary Islands, edited by: Sodnik, Z., Cugny, B., and Karafolas, N., vol. 10563, 1019–1027, International Society for Optics and Photonics, SPIE, https://doi.org/10.1117/12.2304186, 2017.a, b, c, d Caron, J., Kruizinga, B., and Vink, R.: Slit homogenizers for Earth observation spectrometers: overview on performance, present and future designs, in: International Conference on Space Optics – ICSO 2018, 9–12 October 2018, Chania, Greece, edited by: Sodnik, Z., Karafolas, N., and Cugny, B., vol. 11180, 402–417, International Society for Optics and Photonics, SPIE, https://doi.org/10.1117/ 12.2535957, 2019.a, b, c, d, e, f EOP-PIO: IPD-RS-ESA-18, Issue 1, Revision 2, 10 February 2011, Sentinel-5-UVNS Instrument Phase A/B1 Reference Spectra, ESA, Noordwijk, the Netherlands, 2011. a Gerilowski, K., Tretner, A., Krings, T., Buchwitz, M., Bertagnolio, P. P., Belemezov, F., Erzinger, J., Burrows, J. P., and Bovensmann, H.: MAMAP – a new spectrometer system for column-averaged methane and carbon dioxide observations from aircraft: instrument description and performance analysis, Atmos. Meas. Tech., 4, 215–243, https://doi.org/10.5194/amt-4-215-2011, 2011.a, b Goodman, J. W.: Introduction to Fourier optics, vol. 1, McGraw-Hill, San Francisco, CA, USA, 2005.a, b, c, d Hu, H., Hasekamp, O., Butz, A., Galli, A., Landgraf, J., Aan de Brugh, J., Borsdorff, T., Scheepmaker, R., and Aben, I.: The operational methane retrieval algorithm for TROPOMI, Atmos. Meas. Tech., 9, 5423–5440, https://doi.org/10.5194/amt-9-5423-2016, 2016.a Irizar, J., Melf, M., Bartsch, P., Koehler, J., Weiss, S., Greinacher, R., Erdmann, M., Kirschner, V., Albinana, A. P., and Martin, D.: Sentinel-5/UVNS, in: International Conference on Space Optics – ICSO 2018, 9–12 October 2018, Chania, Greece, edited by: Sodnik, Z., Karafolas, N., and Cugny, B., vol. 11180, 41–58, International Society for Optics and Photonics, SPIE, https://doi.org/10.1117/ 12.2535923, 2019.a, b, c Landgraf, J., aan de Brugh, J., Scheepmaker, R., Borsdorff, T., Hu, H., Houweling, S., Butz, A., Aben, I., and Hasekamp, O.: Carbon monoxide total column retrievals from TROPOMI shortwave infrared measurements, Atmos. Meas. Tech., 9, 4955–4975, https://doi.org/10.5194/amt-9-4955-2016, 2016.a Meister, C., Bauer, M., Keim, C., and Irizar, J.: Sentinel-5/UVNS instrument: the principle ability of a slit homogenizer to reduce scene contrast for earth observation spectrometer, in: SPIE Proceedings, Sensors, Systems, and Next-Generation Satellites XXI, Vol. 10423, 104231E, https://doi.org/10.1117/12.2278619, 2017.a, b, c, d, e Noël, S., Bramstedt, K., Bovensmann, H., Gerilowski, K., Burrows, J. P., Standfuss, C., Dufour, E., and Veihelmann, B.: Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals, Atmos. Meas. Tech., 5, 1319–1331, https://doi.org/10.5194/amt-5-1319-2012, 2012.a Thibos, L. N., Applegate, R. A., Schwiegerling, J. T., and Webb, R.: Standards for Reporting the Optical Aberrations of Eyes, in: Vision Science and its Applications, Optical Society of America, SuC1, https://doi.org/10.1364/VSIA.2000.SuC1, 2000.a Veefkind, J., Aben, I., McMullan, K., Förster, H., de Vries, J., Otter, G., Claas, J., Eskes, H., de Haan, J., Kleipool, Q., van Weele, M., Hasekamp, O., Hoogeveen, R., Landgraf, J., Snel, R., Tol, P., Ingmann, P., Voors, R., Kruizinga, B., Vink, R., Visser, H., and Levelt, P.: TROPOMI on the ESA Sentinel-5 Precursor: A GMES mission for global observations of the atmospheric composition for climate, air quality and ozone layer applications, Remote Sens. Environ., 120, 70–83, https://doi.org/10.1016/j.rse.2011.09.027, 2012.a Voors, R., Dobber, M., Dirksen, R., and Levelt, P.: Method of calibration to correct for cloud-induced wavelength shifts in the Aura satellite's Ozone Monitoring Instrument, Appl. Opt., 45, 3652–3658, https://doi.org/10.1364/AO.45.003652, 2006.a
{"url":"https://amt.copernicus.org/articles/14/5459/2021/amt-14-5459-2021.html","timestamp":"2024-11-10T14:38:07Z","content_type":"text/html","content_length":"311152","record_id":"<urn:uuid:cfa8d8aa-67bd-4659-a4d2-88adff7e11a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00781.warc.gz"}
Deno.KvKeyPart - Deno - Deno Docs type alias Deno.KvKeyPart A single part of a Deno.KvKey. Parts are ordered lexicographically, first by their type, and within a given type by their value. The ordering of types is as follows: 1. Uint8Array 2. string 3. number 4. bigint 5. boolean Within a given type, the ordering is as follows: • Uint8Array is ordered by the byte ordering of the array • string is ordered by the byte ordering of the UTF-8 encoding of the string • number is ordered following this pattern: -NaN < -Infinity < -100.0 < -1.0 < -0.5 < -0.0 < 0.0 < 0.5 < 1.0 < 100.0 < Infinity < NaN • bigint is ordered by mathematical ordering, with the largest negative number being the least first value, and the largest positive number being the last value • boolean is ordered by false < true This means that the part 1.0 (a number) is ordered before the part 2.0 (also a number), but is greater than the part 0n (a bigint), because 1.0 is a number and 0n is a bigint, and type ordering has precedence over the ordering of values within a type. | string | number | bigint | boolean | symbol
{"url":"https://docs.deno.com/api/deno/~/Deno.KvKeyPart","timestamp":"2024-11-11T07:16:53Z","content_type":"text/html","content_length":"23284","record_id":"<urn:uuid:b21178d7-e0d2-4e31-8ca9-fcff6613a003>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00353.warc.gz"}
Multivariate Medians | R-bloggersMultivariate Medians [This article was first published on Econometrics Beat: Dave Giles' Blog , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. I’ll bet that in the very first “descriptive statistics” course you ever took, you learned about measures of “central tendency” for samples or populations, and these measures included the median. You no doubt learned that one useful feature of the median is that, unlike the (arithmetic, geometric, harmonic) mean, it is relatively “robust” to outliers in the data. (You probably told that J. M. Keynes provided the first treatment of the relationship between the median and the minimization of the sum of absolute deviations. See Keynes (1911) – this paper was based on his thesis work of 1907 and 1908. See this earlier post for more details.) At some later stage you would have encountered the arithmetic mean again, in the context of multivariate data. Think of the mean vector, for instance. However, unless you took a stats. course in Multivariate Analysis, most of you probably didn’t get to meet the median in a multivariate setting. Did you ever wonder why not? One reason may have been that while the concept of the mean generalizes very simply from the scalar case to the multivariate case, the same is not true for the humble median. Indeed, there isn’t even a single, universally accepted definition of the median for a set of multivariate data! Let’s take a closer look at this. The key point to note is that the univariate concept of the median is that it relies on our ability to order (or rank) univariate data. In the case of multivariate data, there is no natural ordering of the data points. In order to develop the concept of the median in this case, we first have to agree on some convention for defining “order”. This gives rise to a host of different multivariate medians, including: • The L[1] Median • The Geometric Median • The Vector of Marginal Medians (or coordinate-wise median) • The Spatial Median • The Oja Median • The Liu Median • The Tukey Median. For most of these measures a variety of different numerical algorithms are available. This complicates matters even further. You have to decide on a median definition, and then you have to find an efficient algorithm to compute it. To get idea of the issues involved, take a look at this interesting paper You can compute multivariate medians in R , using the “med” function. However, for the most part the associated algorithms are limited to -dimensional data. If this topic interests you, then a good starting point for further reading is the survey paper by Small (1990). Finally, it’s worth keeping in mind that the median is just one of the “order statistics” associated with a body of data. The issues associated with defining a median in the case of multivariate data apply equally to other order statistics, or functions of the order statistics (such as the “range” of the data). Keynes, J. M. , 1911. The principal averages and the laws of error which lead to them. Journal of the Royal Statistical Society , 74, 322–331. Small, C. G. , 1990. A survey of multidimensional medians. International Statistical Review , 58, 263–277. © 2014, David E. Giles
{"url":"https://www.r-bloggers.com/2014/12/multivariate-medians/","timestamp":"2024-11-09T09:23:42Z","content_type":"text/html","content_length":"111816","record_id":"<urn:uuid:99ed574e-f2c5-4a4e-8113-c51aeeb38b38>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00643.warc.gz"}
American Mathematical Society Stability of a quasi-local positive mass theorem for graphical hypersurfaces of Euclidean space HTML articles powered by AMS MathViewer Trans. Amer. Math. Soc. 374 (2021), 3535-3555 Request permission We present a quasi-local version of the stability of the positive mass theorem. We work with the Brown–York quasi-local mass as it possesses positivity and rigidity properties, and therefore the stability of this rigidity statement can be studied. Specifically, we ask if the Brown–York mass of the boundary of some compact manifold is close to zero, must the manifold be close to a Euclidean domain in some sense? Here we consider a class of compact $n$-manifolds with boundary that can be realized as graphs in $\mathbb {R}^{n+1}$, and establish the following. If the Brown–York mass of the boundary of such a compact manifold is small, then the manifold is close to a Euclidean hyperplane with respect to the Federer–Fleming flat distance. • Alaee, A., Lesourd, M., and Yau, S.-T., A localized spacetime Penrose inequality and horizon detection with quasi-local mass, preprint, arXiv:1912.01581. • Brian Allen, IMCF and the stability of the PMT and RPI under $L^2$ convergence, Ann. Henri Poincaré 19 (2018), no. 4, 1283–1306. MR 3775157, DOI 10.1007/s00023-017-0641-7 • B. Allen, Sobolev stability of the PMT and RPI using IMCF, preprint, arXiv:1808.07841. • Brian Allen, Stability of the PMT and RPI for asymptotically hyperbolic manifolds foliated by IMCF, J. Math. Phys. 59 (2018), no. 8, 082501, 18. MR 3844923, DOI 10.1063/1.5035275 • B. Allen and A. Burtscher, Properties of the Null Distance and Spacetime Convergence, preprint, arXiv:1909.04483. • Luigi Ambrosio and Bernd Kirchheim, Currents in metric spaces, Acta Math. 185 (2000), no. 1, 1–80. MR 1794185, DOI 10.1007/BF02392711 • R. Arnowitt, S. Deser, and C. W. Misner, Coordinate invariance and energy expressions in general relativity, Phys. Rev. (2) 122 (1961), 997–1006. MR 127946, DOI 10.1103/PhysRev.122.997 • Robert Bartnik, The mass of an asymptotically flat manifold, Comm. Pure Appl. Math. 39 (1986), no. 5, 661–693. MR 849427, DOI 10.1002/cpa.3160390505 • Hubert L. Bray, Proof of the Riemannian Penrose inequality using the positive mass theorem, J. Differential Geom. 59 (2001), no. 2, 177–267. MR 1908823 • Hubert Bray and Felix Finster, Curvature estimates and the positive mass theorem, Comm. Anal. Geom. 10 (2002), no. 2, 291–306. MR 1900753, DOI 10.4310/CAG.2002.v10.n2.a3 • Edward T. Bryden, Stability of the positive mass theorem for axisymmetric manifolds, Pacific J. Math. 305 (2020), no. 1, 89–152. MR 4077687, DOI 10.2140/pjm.2020.305.89 • E. Bryden, M. Khuri, and C. Sormani, Stability of the spacetime positive mass theorem in spherical symmetry, preprint, arXiv:1906.11352. • Armando J. Cabrera Pacheco, On the stability of the positive mass theorem for asymptotically hyperbolic graphs, Ann. Global Anal. Geom. 56 (2019), no. 3, 443–463. MR 4009763, DOI 10.1007/ • Armando J. Cabrera Pacheco, Christian Ketterer, and Raquel Perales, Stability of graphical tori with almost nonnegative scalar curvature, Calc. Var. Partial Differential Equations 59 (2020), no. 4, Paper No. 134, 27. MR 4127404, DOI 10.1007/s00526-020-01790-w • Stefan Cohn-Vossen, Unstarre geschlossene Flächen, Math. Ann. 102 (1930), no. 1, 10–29 (German). MR 1512567, DOI 10.1007/BF01782336 • Justin Corvino, A note on asymptotically flat metrics on ${\Bbb R}^3$ which are scalar-flat and admit minimal spheres, Proc. Amer. Math. Soc. 133 (2005), no. 12, 3669–3678. MR 2163606, DOI • Mattias Dahl, Romain Gicquaud, and Anna Sakovich, Asymptotically hyperbolic manifolds with small mass, Comm. Math. Phys. 325 (2014), no. 2, 757–801. MR 3148101, DOI 10.1007/s00220-013-1827-6 • M. Dajczer and L. Rodriguez, Infinitesimal rigidity of Euclidean submanifolds, Ann. Inst. Fourier (Grenoble) 40 (1990), no. 4, 939–949 (1991) (English, with French summary). MR 1096598, DOI • Michael Eichmair, Pengzi Miao, and Xiaodong Wang, Extension of a theorem of Shi and Tam, Calc. Var. Partial Differential Equations 43 (2012), no. 1-2, 45–56. MR 2860402, DOI 10.1007/ • Herbert Federer and Wendell H. Fleming, Normal and integral currents, Ann. of Math. (2) 72 (1960), 458–520. MR 123260, DOI 10.2307/1970227 • Felix Finster, A level set analysis of the Witten spinor with applications to curvature estimates, Math. Res. Lett. 16 (2009), no. 1, 41–55. MR 2480559, DOI 10.4310/MRL.2009.v16.n1.a5 • Felix Finster and Ines Kath, Curvature estimates in asymptotically flat manifolds of positive scalar curvature, Comm. Anal. Geom. 10 (2002), no. 5, 1017–1031. MR 1957661, DOI 10.4310/ • Lan-Hsuan Huang and Dan A. Lee, Stability of the positive mass theorem for graphical hypersurfaces of Euclidean space, Comm. Math. Phys. 337 (2015), no. 1, 151–169. MR 3324159, DOI 10.1007/ • Lan-Hsuan Huang, Dan A. Lee, and Christina Sormani, Intrinsic flat stability of the positive mass theorem for graphical hypersurfaces of Euclidean space, J. Reine Angew. Math. 727 (2017), 269–299. MR 3652253, DOI 10.1515/crelle-2015-0051 • Lan-Hsuan Huang and Damin Wu, Hypersurfaces with nonnegative scalar curvature, J. Differential Geom. 95 (2013), no. 2, 249–278. MR 3128984 • Lan-Hsuan Huang and Damin Wu, The equality case of the Penrose inequality for asymptotically flat graphs, Trans. Amer. Math. Soc. 367 (2015), no. 1, 31–47. MR 3271252, DOI 10.1090/ • G. Huisken, Inverse mean curvature flow and isoperimetric inequalities, video available at https://video.ias.edu/node/233 (2009) • Gerhard Huisken and Tom Ilmanen, The inverse mean curvature flow and the Riemannian Penrose inequality, J. Differential Geom. 59 (2001), no. 3, 353–437. MR 1916951 • Mau-Kwong George Lam, The Graph Cases of the Riemannian Positive Mass and Penrose Inequalities in All Dimensions, ProQuest LLC, Ann Arbor, MI, 2011. Thesis (Ph.D.)–Duke University. MR 2873434 • Dan A. Lee, On the near-equality case of the positive mass theorem, Duke Math. J. 148 (2009), no. 1, 63–80. MR 2515100, DOI 10.1215/00127094-2009-021 • Dan A. Lee and Christina Sormani, Stability of the positive mass theorem for rotationally symmetric Riemannian manifolds, J. Reine Angew. Math. 686 (2014), 187–220. MR 3176604, DOI 10.1515/ • Siyuan Lu and Pengzi Miao, Minimal hypersurfaces and boundary behavior of compact manifolds with nonnegative scalar curvature, J. Differential Geom. 113 (2019), no. 3, 519–566. MR 4031741, DOI • William Meeks III, Leon Simon, and Shing Tung Yau, Embedded minimal surfaces, exotic spheres, and manifolds with positive Ricci curvature, Ann. of Math. (2) 116 (1982), no. 3, 621–659. MR 678484, DOI 10.2307/2007026 • Stephen McCormick and Pengzi Miao, On a Penrose-like inequality in dimensions less than eight, Int. Math. Res. Not. IMRN 7 (2019), 2069–2084. MR 3938317, DOI 10.1093/imrn/rnx181 • Pengzi Miao, Positive mass theorem on manifolds admitting corners along a hypersurface, Adv. Theor. Math. Phys. 6 (2002), no. 6, 1163–1182 (2003). MR 1982695, DOI 10.4310/ATMP.2002.v6.n6.a4 • Pengzi Miao, On a localized Riemannian Penrose inequality, Comm. Math. Phys. 292 (2009), no. 1, 271–284. MR 2540078, DOI 10.1007/s00220-009-0834-0 • Louis Nirenberg, The Weyl and Minkowski problems in differential geometry in the large, Comm. Pure Appl. Math. 6 (1953), 337–394. MR 58265, DOI 10.1002/cpa.3160060303 • A. V. Pogorelov, Regularity of a convex surface with given Gaussian curvature, Mat. Sbornik N.S. 31(73) (1952), 88–103 (Russian). MR 0052807 • Anna Sakovich and Christina Sormani, Almost rigidity of the positive mass theorem for asymptotically hyperbolic manifolds with spherical symmetry, Gen. Relativity Gravitation 49 (2017), no. 9, Paper No. 125, 26. MR 3691759, DOI 10.1007/s10714-017-2291-y • Richard Schoen and Shing Tung Yau, On the proof of the positive mass conjecture in general relativity, Comm. Math. Phys. 65 (1979), no. 1, 45–76. MR 526976, DOI 10.1007/BF01940959 • R. Schoen and S.-T. Yau, Positive scalar curvature and minimal hypersurface singularities, preprint, arXiv:1704.05490. • Yuguang Shi and Luen-Fai Tam, Positive mass theorem and the boundary behaviors of compact manifolds with nonnegative scalar curvature, J. Differential Geom. 62 (2002), no. 1, 79–125. MR 1987378 • Yuguang Shi and Luen-Fai Tam, Quasi-local mass and the existence of horizons, Comm. Math. Phys. 274 (2007), no. 2, 277–295. MR 2322904, DOI 10.1007/s00220-007-0273-8 • Christina Sormani and Stefan Wenger, The intrinsic flat distance between Riemannian manifolds and other integral current spaces, J. Differential Geom. 87 (2011), no. 1, 117–199. MR 2786592 • L. Szabados, Quasi-local energy-momentum and angular momentum in general relativity, Living Rev. Relativ. 12(4) (2009). • Edward Witten, A new proof of the positive energy theorem, Comm. Math. Phys. 80 (1981), no. 3, 381–402. MR 626707, DOI 10.1007/BF01208277 Similar Articles • Retrieve articles in Transactions of the American Mathematical Society with MSC (2020): 53C20, 83C99 • Retrieve articles in all journals with MSC (2020): 53C20, 83C99 Additional Information • Aghil Alaee • Affiliation: Department of Mathematics and Computer Science, Clark University, Worcester, Massachusetts 01610; and Center of Mathematical Sciences and Applications, Harvard University, Cambridge, Massachusetts 02138 • MR Author ID: 1055444 • Email: aalaeekhangha@clarku.edu, aghil.alaee@cmsa.fas.harvard.edu • Armando J. Cabrera Pacheco • Affiliation: Department of Mathematics, Universität Tübingen, 72076 Tübingen, Germany • MR Author ID: 1148286 • ORCID: 0000-0002-1148-5125 • Email: cabrera@math.uni-tuebingen.de • Stephen McCormick • Affiliation: Matematiska institutionen, Uppsala universitet, 751 06 Uppsala, Sweden • MR Author ID: 1085293 • ORCID: 0000-0001-9536-9908 • Email: stephen.mccormick@math.uu.se • Received by editor(s): January 21, 2020 • Received by editor(s) in revised form: September 5, 2020 • Published electronically: February 23, 2021 • Additional Notes: The first author acknowledges the support of the Gordon and Betty Moore Foundation, the John Templeton Foundation, and the AMS-Simons travel grant. The second author is grateful to the Carl Zeiss Foundation for its generous support. • © Copyright 2021 American Mathematical Society • Journal: Trans. Amer. Math. Soc. 374 (2021), 3535-3555 • MSC (2020): Primary 53C20; Secondary 83C99 • DOI: https://doi.org/10.1090/tran/8297 • MathSciNet review: 4237955
{"url":"https://www.ams.org/journals/tran/2021-374-05/S0002-9947-2021-08297-5/home.html","timestamp":"2024-11-13T17:51:15Z","content_type":"text/html","content_length":"93816","record_id":"<urn:uuid:05721bfe-3e7f-4c00-b1d7-54d7fc64a35f>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00242.warc.gz"}
Get random integer matlab software Thus, each time mupad is started or reinitialized with the reset function, the random generators produce the same sequence of numbers. Simbiology software uses a pseudorandom number generator. These numbers are not strictly random and independent in the. See variablesizing restrictions for code generation of toolbox functions matlab coder. But, if i want exactly 2 random integers from the set 1 2 3, i only get 1 integer. Oct 26, 2017 learn more about how to generate random integer in the inclusive range from 1 to 10. This can lead to unexpected results when combined with the round function. To get a nonpredictable initial value, make it dependent on the current time. The simplest randi syntax returns doubleprecision integer values between 1 and a specified value, imax. The sequence of numbers generated is determined by the state of the generator, which can be specified by the integer randomstate. Mathworks is the leading developer of mathematical computing software for engineers and scientists. Generate a random number in a certain range in matlab stack. Random get a random number mymin and matlab documentation for how to generate a random integer that is either a 0 or a 1. How to generate a matrix of random nonintegers matlab. Generate random numbers that are repeatable matlab. Random number generators can be used to approximate a random integer from a uniform distribution. Random dim mymin as integer 1, mymax as integer 5, my1strandomnumber as integer, my2ndrandomnumber as integer create a random number generator dim generator as system. I am trying to obtain 9 random integer numbers and want them. Generating a random binary matrix matlab answers matlab. The random integer generator block generates uniformly distributed random integers in the range 0, m1, where m is specified by the set size parameter. Hello i want to generate random integers varying from 1 to n. For example, randsz,myclass does not invoke myclass. If randomstate is set to integer j, the random number generator is initialized to its j th state. Poisson integer generator mathworks makers of matlab and. Hi there, i need to generate a 5x7 matrix of random noninteger numbers, each within the range from 0 to 25. This example shows how to create an array of random floatingpoint numbers that are drawn from a uniform distribution in the open interval 50, 100. Random integer generators in source subsystems generate. I can only generate a random integer vector with a range, which is. For other classes, the static randn method is not invoked. This example shows how to create an array of random floatingpoint numbers that are drawn from a uniform distribution in a specific interval. Generate integers randomly distributed in specified range. Trial software how do i generate a random number between two numbers. Generate random numbers that are repeatable specify the seed. Random integer numbers using random function matlab answers. This example shows how to create an array of random integer values that are drawn from a discrete uniform distribution on the set of numbers 10, 9. Random numbers in matlab here youll learn about random command, random statements, rand, random command matlab, how to use randi command in matlab, how to use rand command in matlab, random. Statistically, random numbers exhibit no predictable pattern or regularity. Browse other questions tagged matlab random numbers integer or ask your own question. Learn more about random, random number generator, integer. These numbers are not strictly random and independent in the mathematical sense, but they pass various statistical tests of randomness and independence, and their calculation can be repeated for testing or diagnostic purposes. You can use the uniform random number block from source library. Im able to generate a random integer however it always returns a negative value and a. Generate random integer numbers mupad mathworks deutschland. Use the randi function instead of rand to generate 5 random integers from the uniform distribution between 10 and 50. You can use the poisson integer generator to generate noise in a binary transmission channel. This example shows how to create an array of random integer values that are drawn from a discrete uniform distribution on a specific set of numbers. Hi there, i need to generate a 5x7 matrix of random non integer numbers, each within the range from 0 to 25. I stumbled upon the two functions randint and randi. I used the existing randint function which seems to be producing identical numbers between. This example shows how to create an array of random integer values that are drawn from a discrete uniform distribution on a. How to generate unique random integers between 1 to n no. Use this block to generate random binaryvalued or integervalued data. This can be observed by using the program to randomly obtain values within a. The randomness comes from atmospheric noise, which for many purposes is better than the pseudorandom number algorithms typically used in computer programs. Generate integers randomly distributed in specified. Learn more about random number generator, matlab, randi. The data type is set using the output data type parameter the number of rows in the output data equals the value of the samples per frame parameter and corresponds to the number of samples in one frame. If a number has extra digits that cannot be displayed in the current format, then matlab automatically rounds the number for display purposes. Uniform random number mathworks makers of matlab and simulink. Matlab uses algorithms to generate pseudorandom and pseudoindependent numbers. For example, randnsz,myclass does not invoke myclass. This example shows how to avoid repeating the same random number arrays when matlab restarts. Uniform random number mathworks makers of matlab and. Random integer numbers using random function matlab. How to generate a random real number vector with a range. Each random number generator rng represents a parametric family of. Trial software how to generate random integer number in a fixed range in matlab, like between 1 to 10. For other classes, the static rand method is not invoked. To get an integer from a floating point value we can use functions such as round or ceil or floor. The rounding type determines whether round considers digits in relation to the decimal point or the overall number of significant digits. How to generate random numbers between two numbers without any specific number. Set random number generator matlab mathworks Sequences of statistically random numbers are used to simulate complex mathematical and physical systems. Aug 19, 2018 random numbers in matlab here youll learn about random command, random statements, rand, random command matlab, how to use randi command in matlab, how to use rand command in matlab, random. This program quickly outputs n random integers in the specified range from a to. Learn more about how to generate random integer in the inclusive range from 1 to 10. N must be a positive integer when you specify significant. Uniformly distributed random numbers matlab rand mathworks. All the random number functions, rand, randn, randi, and randperm, draw values from a shared random number generator. To generate normally distributed random numbers, use the random number block. The data type class must be a builtin matlab numeric type. I need to generate a single random number within my code. This example shows how to repeat arrays of random numbers by specifying the seed first. The uniform random number block generates uniformly distributed random numbers over an interval that you specify. Array of random integers matlab randi mathworks australia. Use rand, randi, randn, and randperm to create arrays of random. The bigger reason why and 20 come up half as often is because round rounds to the nearest integer. Random integer output, returned as a scalar, vector, or matrix. The format command controls how matlab displays numbers at the command line. How to generate a random real number vector with a. The randomness comes from atmospheric noise, which for many purposes is better than the pseudo random number algorithms typically used in computer programs. This matlab function creates a matrix with underlying class of double, with randi integer values in all elements. Oct 22, 2016 how to generate non integer random number in. How to generate unique random integers between 1 to n no two. Every time you initialize the generator using the same seed, you always get the same result. Generating random integer between negative and positive range. How to configure random integer generators in simulink. Random integer generator file exchange matlab central. It is not integer so you need to add a quantizer block from the discontinuities library.
{"url":"https://critalermeb.web.app/1382.html","timestamp":"2024-11-05T06:30:06Z","content_type":"text/html","content_length":"13477","record_id":"<urn:uuid:e33d8e8e-4374-4382-abdc-c0f334f417c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00795.warc.gz"}
Nine-point circle Eos3.7.2 (June 24,2023) running under Mathematica 13.2.1 for Mac OS X ARM (64-bit) (January 27, 2023) on Fri 23 Jun 2023 16:10:08. The nine-point circle is a circle that can be constructed by the nine concyclic points defined from the triangle. These nine points are : The midpoint of each side of the triangle. The foot of each altitude. The midpoint of the line segment from each vertex of the triangle to the orthocenter. EosSession["nine-point circle"]; nine-point circle/Origami: Step 1 nine-point circle/Origami: Step 1 nine-point circle/Origami: Step 3 nine-point circle/Origami: Step 5 nine-point circle/Origami: Step 7 nine-point circle/Origami: Step 7 nine-point circle/Origami: Step 9 nine-point circle/Origami: Step 11 nine-point circle/Origami: Step 13 nine-point circle/Origami: Step 15 nine-point circle/Origami: Step 17 nine-point circle/Origami: Step 19
{"url":"https://www.wolframcloud.com/obj/ida.tetsuo.ge/Published/nine-point-circle.nb","timestamp":"2024-11-02T14:42:12Z","content_type":"text/html","content_length":"275098","record_id":"<urn:uuid:f92f09ce-0449-47be-ab54-2849dc427ef7>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00056.warc.gz"}
University Physics Volume 1 Learning Objectives By the end of the section, you will be able to: • Define normal and tension forces • Distinguish between real and fictitious forces • Apply Newton’s laws of motion to solve problems involving a variety of forces Forces are given many names, such as push, pull, thrust, and weight. Traditionally, forces have been grouped into several categories and given names relating to their source, how they are transmitted, or their effects. Several of these categories are discussed in this section, together with some interesting applications. Further examples of forces are discussed later in this text. A Catalog of Forces: Normal, Tension, and Other Examples of Forces A catalog of forces will be useful for reference as we solve various problems involving force and motion. These forces include normal force, tension, friction, and spring force. Normal force Weight (also called the force of gravity) is a pervasive force that acts at all times and must be counteracted to keep an object from falling. You must support the weight of a heavy object by pushing up on it when you hold it stationary, as illustrated in (Figure)(a). But how do inanimate objects like a table support the weight of a mass placed on them, such as shown in (Figure)(b)? When the bag of dog food is placed on the table, the table sags slightly under the load. This would be noticeable if the load were placed on a card table, but even a sturdy oak table deforms when a force is applied to it. Unless an object is deformed beyond its limit, it will exert a restoring force much like a deformed spring (or a trampoline or diving board). The greater the deformation, the greater the restoring force. Thus, when the load is placed on the table, the table sags until the restoring force becomes as large as the weight of the load. At this point, the net external force on the load is zero. That is the situation when the load is stationary on the table. The table sags quickly and the sag is slight, so we do not notice it. But it is similar to the sagging of a trampoline when you climb onto it. We must conclude that whatever supports a load, be it animate or not, must supply an upward force equal to the weight of the load, as we assumed in a few of the previous examples. If the force supporting the weight of an object, or a load, is perpendicular to the surface of contact between the load and its support, this force is defined as a normal force and here is given by the symbol [latex] \overset{\to }{N}. [/latex] (This is not the newton unit for force, or N.) The word normal means perpendicular to a surface. This means that the normal force experienced by an object resting on a horizontal surface can be expressed in vector form as follows: [latex] \overset{\to }{N}=\text{−}m\overset{\to }{g}. [/latex] In scalar form, this becomes The normal force can be less than the object’s weight if the object is on an incline. Weight on an Incline Consider the skier on the slope in (Figure). Her mass including equipment is 60.0 kg. (a) What is her acceleration if friction is negligible? (b) What is her acceleration if friction is 45.0 N? This is a two-dimensional problem, since not all forces on the skier (the system of interest) are parallel. The approach we have used in two-dimensional kinematics also works well here. Choose a convenient coordinate system and project the vectors onto its axes, creating two one-dimensional problems to solve. The most convenient coordinate system for motion on an incline is one that has one coordinate parallel to the slope and one perpendicular to the slope. (Motions along mutually perpendicular axes are independent.) We use x and y for the parallel and perpendicular directions, respectively. This choice of axes simplifies this type of problem, because there is no motion perpendicular to the slope and the acceleration is downslope. Regarding the forces, friction is drawn in opposition to motion (friction always opposes forward motion) and is always parallel to the slope, [latex] {w}_{x} [/latex] is drawn parallel to the slope and downslope (it causes the motion of the skier down the slope), and [latex] {w}_{y} [/latex] is drawn as the component of weight perpendicular to the slope. Then, we can consider the separate problems of forces parallel to the slope and forces perpendicular to the slope. The magnitude of the component of weight parallel to the slope is [latex] {w}_{x}=w\,\text{sin}\,25\text{°}=mg\,\text{sin}\,25\text{°}, [/latex] and the magnitude of the component of the weight perpendicular to the slope is [latex] {w}_{y}=w\,\text{cos}\,25\text{°}=mg\,\text{cos}\,25\text{°}. [/latex] a. Neglect friction. Since the acceleration is parallel to the slope, we need only consider forces parallel to the slope. (Forces perpendicular to the slope add to zero, since there is no acceleration in that direction.) The forces parallel to the slope are the component of the skier’s weight parallel to slope [latex] {w}_{x} [/latex] and friction f. Using Newton’s second law, with subscripts to denote quantities parallel to the slope, [latex] {a}_{x}=\frac{{F}_{\text{net}\,x}}{m} [/latex] where [latex] {F}_{\text{net}\,x}={w}_{x}-mg\,\text{sin}\,25\text{°}, [/latex] assuming no friction for this part. Therefore, [latex] \begin{array}{cc} {a}_{x}=\frac{{F}_{\text{net}\,x}}{m}=\frac{mg\,\text{sin}\,25\text{°}}{m}=g\,\text{sin}\,25\text{°}\hfill \\ (9.80\,{\text{m/s}}^{2})(0.4226)=4.14\,{\text{m/s}}^{2}\hfill \ end{array} [/latex] is the acceleration. b. Include friction. We have a given value for friction, and we know its direction is parallel to the slope and it opposes motion between surfaces in contact. So the net external force is [latex] {F}_{\text{net}\,x}={w}_{x}-f. [/latex] Substituting this into Newton’s second law, [latex] {a}_{x}={F}_{\text{net}\,x}\text{/}m, [/latex] gives [latex] {a}_{x}=\frac{{F}_{\text{net}\,x}}{m}=\frac{{w}_{x}-f}{m}=\frac{mg\,\text{sin}\,25\text{°}-f}{m}. [/latex] We substitute known values to obtain [latex] {a}_{x}=\frac{(60.0\,\text{kg})(9.80\,{\text{m/s}}^{2})(0.4226)-45.0\,\text{N}}{60.0\,\text{kg}}. [/latex] This gives us [latex] {a}_{x}=3.39\,{\text{m/s}}^{2}, [/latex] which is the acceleration parallel to the incline when there is 45.0 N of opposing friction. Since friction always opposes motion between surfaces, the acceleration is smaller when there is friction than when there is none. It is a general result that if friction on an incline is negligible, then the acceleration down the incline is [latex] a=g\,\text{sin}\,\theta [/latex], regardless of mass. As discussed previously, all objects fall with the same acceleration in the absence of air resistance. Similarly, all objects, regardless of mass, slide down a frictionless incline with the same acceleration (if the angle is the same). When an object rests on an incline that makes an angle [latex] \theta [/latex] with the horizontal, the force of gravity acting on the object is divided into two components: a force acting perpendicular to the plane, [latex] {w}_{y} [/latex], and a force acting parallel to the plane, [latex] {w}_{x} [/latex] ((Figure)). The normal force [latex] \overset{\to }{N} [/latex] is typically equal in magnitude and opposite in direction to the perpendicular component of the weight [latex] {w}_{y} [/latex]. The force acting parallel to the plane, [latex] {w}_{x} [/latex], causes the object to accelerate down the incline. Be careful when resolving the weight of the object into components. If the incline is at an angle [latex] \theta [/latex] to the horizontal, then the magnitudes of the weight components are [latex] {w}_{x}=w\,\text{sin}\,\theta =mg\,\text{sin}\,\theta [/latex] [latex] {w}_{y}=w\,\text{cos}\,\theta =mg\,\text{cos}\,\theta . [/latex] We use the second equation to write the normal force experienced by an object resting on an inclined plane: [latex] N=mg\,\text{cos}\,\theta . [/latex] Instead of memorizing these equations, it is helpful to be able to determine them from reason. To do this, we draw the right angle formed by the three weight vectors. The angle [latex] \theta [/ latex] of the incline is the same as the angle formed between w and [latex] {w}_{y} [/latex]. Knowing this property, we can use trigonometry to determine the magnitude of the weight components: [latex] \begin{array}{cc} \text{cos}\,\theta =\frac{{w}_{y}}{w},\enspace{w}_{y}=w\,\text{cos}\,\theta =mg\,\text{sin}\,\theta \hfill \\ \text{sin}\,\theta =\frac{{w}_{x}}{w},\enspace{w}_{x}=w\,\text {sin}\,\theta =mg\,\text{sin}\,\theta .\hfill \end{array} [/latex] Check Your Understanding A force of 1150 N acts parallel to a ramp to push a 250-kg gun safe into a moving van. The ramp is frictionless and inclined at [latex] 17\text{°}. [/latex] (a) What is the acceleration of the safe up the ramp? (b) If we consider friction in this problem, with a friction force of 120 N, what is the acceleration of the safe? Show Answer a. [latex] 1.7\,{\text{m/s}}^{2}; [/latex] b. [latex] 1.3\,{\text{m/s}}^{2} [/latex] A tension is a force along the length of a medium; in particular, it is a pulling force that acts along a stretched flexible connector, such as a rope or cable. The word “tension” comes from a Latin word meaning “to stretch.” Not coincidentally, the flexible cords that carry muscle forces to other parts of the body are called tendons. Any flexible connector, such as a string, rope, chain, wire, or cable, can only exert a pull parallel to its length; thus, a force carried by a flexible connector is a tension with a direction parallel to the connector. Tension is a pull in a connector. Consider the phrase: “You can’t push a rope.” Instead, tension force pulls outward along the two ends of a rope. Consider a person holding a mass on a rope, as shown in (Figure). If the 5.00-kg mass in the figure is stationary, then its acceleration is zero and the net force is zero. The only external forces acting on the mass are its weight and the tension supplied by the rope. Thus, [latex] {F}_{\text{net}}=T-w=0, [/latex] where T and w are the magnitudes of the tension and weight, respectively, and their signs indicate direction, with up being positive. As we proved using Newton’s second law, the tension equals the weight of the supported mass: Thus, for a 5.00-kg mass (neglecting the mass of the rope), we see that [latex] T=mg=(5.00\,\text{kg})(9.80\,{\text{m/s}}^{2})=49.0\,\text{N}\text{.} [/latex] If we cut the rope and insert a spring, the spring would extend a length corresponding to a force of 49.0 N, providing a direct observation and measure of the tension force in the rope. Flexible connectors are often used to transmit forces around corners, such as in a hospital traction system, a tendon, or a bicycle brake cable. If there is no friction, the tension transmission is undiminished; only its direction changes, and it is always parallel to the flexible connector, as shown in (Figure). What Is the Tension in a Tightrope? Calculate the tension in the wire supporting the 70.0-kg tightrope walker shown in (Figure). As you can see in (Figure), the wire is bent under the person’s weight. Thus, the tension on either side of the person has an upward component that can support his weight. As usual, forces are vectors represented pictorially by arrows that have the same direction as the forces and lengths proportional to their magnitudes. The system is the tightrope walker, and the only external forces acting on him are his weight [latex] \overset{\to }{w} [/latex] and the two tensions [latex] {\overset{\to }{T}}_{\text{L}} [/latex] (left tension) and [latex] {\overset{\to }{T}}_{\text{R}} [/latex] (right tension). It is reasonable to neglect the weight of the wire. The net external force is zero, because the system is static. We can use trigonometry to find the tensions. One conclusion is possible at the outset—we can see from (Figure)(b) that the magnitudes of the tensions [latex] {T}_{\text{L}} [/latex] and [latex] {T}_{\text{R}} [/latex] must be equal. We know this because there is no horizontal acceleration in the rope and the only forces acting to the left and right are [latex] {T}_{\text{L}} [/latex] and [latex] {T}_{\text{R}} [/latex]. Thus, the magnitude of those horizontal components of the forces must be equal so that they cancel each other out. Whenever we have two-dimensional vector problems in which no two vectors are parallel, the easiest method of solution is to pick a convenient coordinate system and project the vectors onto its axes. In this case, the best coordinate system has one horizontal axis (x) and one vertical axis (y). First, we need to resolve the tension vectors into their horizontal and vertical components. It helps to look at a new free-body diagram showing all horizontal and vertical components of each force acting on the system ((Figure)). Consider the horizontal components of the forces (denoted with a subscript x): [latex] {F}_{\text{net}\,x}={T}_{\text{R}x}-{T}_{\text{L}x}. [/latex] The net external horizontal force [latex] {F}_{\text{net}\,x}=0, [/latex] since the person is stationary. Thus, [latex] \begin{array}{ccc}\hfill {F}_{\text{net}\,x}& =\hfill & 0={T}_{\text{R}x}-{T}_{\text{L}x}\hfill \\ \hfill {T}_{\text{L}x}& =\hfill & {T}_{\text{R}x}.\hfill \end{array} [/latex] Now observe (Figure). You can use trigonometry to determine the magnitude of [latex] {T}_{\text{L}} [/latex] and [latex] {T}_{\text{R}} [/latex]: [latex] \begin{array}{cc} \hfill \text{cos}\,5.0\text{°}& =\hfill & \frac{{T}_{\text{L}x}}{{T}_{\text{L}}},\enspace{T}_{\text{L}x}={T}_{\text{L}}\text{cos}\,5.0\text{°}\hfill \\ \hfill \text{cos} \,5.0\text{°}& =\hfill & \frac{{T}_{\text{R}x}}{{T}_{\text{R}}},\enspace{T}_{\text{R}x}={T}_{\text{R}}\text{cos}\,5.0\text{°}.\hfill \end{array} [/latex] Equating T[Lx] and T[Rx]: [latex] {T}_{\text{L}}\text{cos}\,5.0\text{°}={T}_{\text{R}}\text{cos}\,5.0\text{°}. [/latex] [latex] {T}_{\text{L}}={T}_{\text{R}}=T, [/latex] as predicted. Now, considering the vertical components (denoted by a subscript y), we can solve for T. Again, since the person is stationary, Newton’s second law implies that [latex] {F}_{\text{net} \,y}=0 [/latex]. Thus, as illustrated in the free-body diagram, [latex] {F}_{\text{net}\,y}={T}_{\text{L}y}+{T}_{\text{R}y}-w=0. [/latex] We can use trigonometry to determine the relationships among [latex] {T}_{\text{Ly}},{T}_{\text{Ry}}, [/latex] and T. As we determined from the analysis in the horizontal direction, [latex] {T}_{\ text{L}}={T}_{\text{R}}=T [/latex]: [latex] \begin{array}{}\\ \hfill \text{sin}\,5.0\text{°}& =\hfill & \frac{{T}_{\text{L}y}}{{T}_{\text{L}}},\enspace{T}_{\text{L}y}={T}_{\text{L}}\text{sin}\,5.0\text{°}=T\,\text{sin}\,5.0\text{°}\ hfill \\ \hfill \text{sin}\,5.0\text{°}& =\hfill & \frac{{T}_{\text{R}y}}{{T}_{\text{R}}},\enspace{T}_{\text{R}y}={T}_{\text{R}}\text{sin}\,5.0\text{°}=T\,\text{sin}\,5.0\text{°}.\hfill \end{array} Now we can substitute the vales for [latex] {T}_{\text{Ly}} [/latex] and [latex] {T}_{\text{Ry}} [/latex], into the net force equation in the vertical direction: [latex] \begin{array}{ccc}\hfill {F}_{\text{net}\,y}& =\hfill & {T}_{\text{L}y}+{T}_{\text{R}y}-w=0\hfill \\ \hfill {F}_{\text{net}\,y}& =\hfill & T\,\text{sin}\,5.0\text{°}+T\,\text{sin}\,5.0\text {°}-w=0\hfill \\ \hfill 2T\,\text{sin}\,5.0\text{°}-w& =\hfill & 0\hfill \\ \hfill 2T\,\text{sin}\,5.0\text{°}& =\hfill & w\hfill \end{array} [/latex] [latex] T=\frac{w}{2\,\text{sin}\,5.0\text{°}}=\frac{mg}{2\,\text{sin}\,5.0\text{°}}, [/latex] [latex] T=\frac{(70.0\,\text{kg})(9.80\,{\text{m/s}}^{2})}{2(0.0872)}, [/latex] and the tension is [latex] T=3930\,\text{N}\text{.} [/latex] The vertical tension in the wire acts as a force that supports the weight of the tightrope walker. The tension is almost six times the 686-N weight of the tightrope walker. Since the wire is nearly horizontal, the vertical component of its tension is only a fraction of the tension in the wire. The large horizontal components are in opposite directions and cancel, so most of the tension in the wire is not used to support the weight of the tightrope walker. If we wish to create a large tension, all we have to do is exert a force perpendicular to a taut flexible connector, as illustrated in (Figure). As we saw in (Figure), the weight of the tightrope walker acts as a force perpendicular to the rope. We saw that the tension in the rope is related to the weight of the tightrope walker in the following way: [latex] T=\frac{w}{2\,\text{sin}\,\theta }. [/latex] We can extend this expression to describe the tension T created when a perpendicular force [latex] ({F}_{\perp }) [/latex] is exerted at the middle of a flexible connector: [latex] T=\frac{{F}_{\perp }}{2\,\text{sin}\,\theta }. [/latex] The angle between the horizontal and the bent connector is represented by [latex] \theta [/latex]. In this case, T becomes large as [latex] \theta [/latex] approaches zero. Even the relatively small weight of any flexible connector will cause it to sag, since an infinite tension would result if it were horizontal (i.e., [latex] \theta =0 [/latex] and sin [latex] \theta =0 [/latex]). For example, (Figure) shows a situation where we wish to pull a car out of the mud when no tow truck is available. Each time the car moves forward, the chain is tightened to keep it as straight as possible. The tension in the chain is given by [latex] T=\frac{{F}_{\perp }}{2\,\text{sin}\,\theta }, [/latex] and since [latex] \theta [/latex] is small, T is large. This situation is analogous to the tightrope walker, except that the tensions shown here are those transmitted to the car and the tree rather than those acting at the point where [latex] {F}_{\perp } [/latex] is applied. Check Your Understanding One end of a 3.0-m rope is tied to a tree; the other end is tied to a car stuck in the mud. The motorist pulls sideways on the midpoint of the rope, displacing it a distance of 0.25 m. If he exerts a force of 200.0 N under these conditions, determine the force exerted on the car. In Applications of Newton’s Laws, we extend the discussion on tension in a cable to include cases in which the angles shown are not equal. Friction is a resistive force opposing motion or its tendency. Imagine an object at rest on a horizontal surface. The net force acting on the object must be zero, leading to equality of the weight and the normal force, which act in opposite directions. If the surface is tilted, the normal force balances the component of the weight perpendicular to the surface. If the object does not slide downward, the component of the weight parallel to the inclined plane is balanced by friction. Friction is discussed in greater detail in the next chapter. Spring force A spring is a special medium with a specific atomic structure that has the ability to restore its shape, if deformed. To restore its shape, a spring exerts a restoring force that is proportional to and in the opposite direction in which it is stretched or compressed. This is the statement of a law known as Hooke’s law, which has the mathematical form [latex] \overset{\to }{F}=\text{−}k\overset{\to }{x}. [/latex] The constant of proportionality k is a measure of the spring’s stiffness. The line of action of this force is parallel to the spring axis, and the sense of the force is in the opposite direction of the displacement vector ((Figure)). The displacement must be measured from the relaxed position; [latex] x=0 [/latex] when the spring is relaxed. Real Forces and Inertial Frames There is another distinction among forces: Some forces are real, whereas others are not. Real forces have some physical origin, such as a gravitational pull. In contrast, fictitious forces arise simply because an observer is in an accelerating or noninertial frame of reference, such as one that rotates (like a merry-go-round) or undergoes linear acceleration (like a car slowing down). For example, if a satellite is heading due north above Earth’s Northern Hemisphere, then to an observer on Earth, it will appear to experience a force to the west that has no physical origin. Instead, Earth is rotating toward the east and moves east under the satellite. In Earth’s frame, this looks like a westward force on the satellite, or it can be interpreted as a violation of Newton’s first law (the law of inertia). We can identify a fictitious force by asking the question, “What is the reaction force?” If we cannot name the reaction force, then the force we are considering is fictitious. In the example of the satellite, the reaction force would have to be an eastward force on Earth. Recall that an inertial frame of reference is one in which all forces are real and, equivalently, one in which Newton’s laws have the simple forms given in this chapter. Earth’s rotation is slow enough that Earth is nearly an inertial frame. You ordinarily must perform precise experiments to observe fictitious forces and the slight departures from Newton’s laws, such as the effect just described. On a large scale, such as for the rotation of weather systems and ocean currents, the effects can be easily observed ((Figure)). The crucial factor in determining whether a frame of reference is inertial is whether it accelerates or rotates relative to a known inertial frame. Unless stated otherwise, all phenomena discussed in this text are in inertial frames. The forces discussed in this section are real forces, but they are not the only real forces. Lift and thrust, for example, are more specialized real forces. In the long list of forces, are some more basic than others? Are some different manifestations of the same underlying force? The answer to both questions is yes, as you will see in the treatment of modern physics later in the text. Explore forces and motion in this interactive simulation as you push household objects up and down a ramp. Lower and raise the ramp to see how the angle of inclination affects the parallel forces. Graphs show forces, energy, and work. Stretch and compress springs in this activity to explore the relationships among force, spring constant, and displacement. Investigate what happens when two springs are connected in series and in • When an object rests on a surface, the surface applies a force to the object that supports the weight of the object. This supporting force acts perpendicular to and away from the surface. It is called a normal force. • When an object rests on a nonaccelerating horizontal surface, the magnitude of the normal force is equal to the weight of the object. • When an object rests on an inclined plane that makes an angle [latex] \theta [/latex] with the horizontal surface, the weight of the object can be resolved into components that act perpendicular and parallel to the surface of the plane. • The pulling force that acts along a stretched flexible connector, such as a rope or cable, is called tension. When a rope supports the weight of an object at rest, the tension in the rope is equal to the weight of the object. If the object is accelerating, tension is greater than weight, and if it is decelerating, tension is less than weight. • The force of friction is a force experienced by a moving object (or an object that has a tendency to move) parallel to the interface opposing the motion (or its tendency). • The force developed in a spring obeys Hooke’s law, according to which its magnitude is proportional to the displacement and has a sense in the opposite direction of the displacement. • Real forces have a physical origin, whereas fictitious forces occur because the observer is in an accelerating or noninertial frame of reference. Conceptual Questions A table is placed on a rug. Then a book is placed on the table. What does the floor exert a normal force on? A particle is moving to the right. (a) Can the force on it to be acting to the left? If yes, what would happen? (b) Can that force be acting downward? If yes, why? A leg is suspended in a traction system, as shown below. (a) Which pulley in the figure is used to calculate the force exerted on the foot? (b) What is the tension in the rope? Here [latex] \overset {\to }{T} [/latex] is the tension, [latex] {\overset{\to }{w}}_{\text{leg}} [/latex] is the weight of the leg, and [latex] \overset{\to }{w} [/latex] is the weight of the load that provides the Suppose the shinbone in the preceding image was a femur in a traction setup for a broken bone, with pulleys and rope available. How might we be able to increase the force along the femur using the same weight? Two teams of nine members each engage in tug-of-war. Each of the first team’s members has an average mass of 68 kg and exerts an average force of 1350 N horizontally. Each of the second team’s members has an average mass of 73 kg and exerts an average force of 1365 N horizontally. (a) What is magnitude of the acceleration of the two teams, and which team wins? (b) What is the tension in the section of rope between the teams? What force does a trampoline have to apply to Jennifer, a 45.0-kg gymnast, to accelerate her straight up at [latex] 7.50\,{\text{m/s}}^{2} [/latex]? The answer is independent of the velocity of the gymnast—she can be moving up or down or can be instantly stationary. (a) Calculate the tension in a vertical strand of spider web if a spider of mass [latex] 2.00\,×\,{10}^{-5}\,\text{kg} [/latex] hangs motionless on it. (b) Calculate the tension in a horizontal strand of spider web if the same spider sits motionless in the middle of it much like the tightrope walker in (Figure). The strand sags at an angle of [latex] 12\text{°} [/latex] below the horizontal. Compare this with the tension in the vertical strand (find their ratio). Suppose Kevin, a 60.0-kg gymnast, climbs a rope. (a) What is the tension in the rope if he climbs at a constant speed? (b) What is the tension in the rope if he accelerates upward at a rate of [latex] 1.50\,{\text{m/s}}^{2} [/latex]? Show that, as explained in the text, a force [latex] {F}_{\perp } [/latex] exerted on a flexible medium at its center and perpendicular to its length (such as on the tightrope wire in (Figure)) gives rise to a tension of magnitude [latex] T={F}_{\perp }\text{/}2\,\text{sin}(\theta ) [/latex]. Consider (Figure). The driver attempts to get the car out of the mud by exerting a perpendicular force of 610.0 N, and the distance she pushes in the middle of the rope is 1.00 m while she stands 6.00 m away from the car on the left and 6.00 m away from the tree on the right. What is the tension T in the rope, and how do you find the answer? A bird has a mass of 26 g and perches in the middle of a stretched telephone line. (a) Show that the tension in the line can be calculated using the equation [latex] T=\frac{mg}{2\,\text{sin}\,\theta } [/latex]. Determine the tension when (b) [latex] \theta =5\text{°} [/latex] and (c) [latex] \theta =0.5\text{°} [/latex]. Assume that each half of the line is straight. Show Solution One end of a 30-m rope is tied to a tree; the other end is tied to a car stuck in the mud. The motorist pulls sideways on the midpoint of the rope, displacing it a distance of 2 m. If he exerts a force of 80 N under these conditions, determine the force exerted on the car. Consider the baby being weighed in the following figure. (a) What is the mass of the infant and basket if a scale reading of 55 N is observed? (b) What is tension [latex] {T}_{1} [/latex] in the cord attaching the baby to the scale? (c) What is tension [latex] {T}_{2} [/latex] in the cord attaching the scale to the ceiling, if the scale has a mass of 0.500 kg? (d) Sketch the situation, indicating the system of interest used to solve each part. The masses of the cords are negligible. What force must be applied to a 100.0-kg crate on a frictionless plane inclined at [latex] 30\text{°} [/latex] to cause an acceleration of [latex] {2.0\,\text{m/s}}^{2} [/latex] up the plane? A 2.0-kg block is on a perfectly smooth ramp that makes an angle of [latex] 30\text{°} [/latex] with the horizontal. (a) What is the block’s acceleration down the ramp and the force of the ramp on the block? (b) What force applied upward along and parallel to the ramp would allow the block to move with constant velocity? Hooke’s law in a spring, a restoring force proportional to and in the opposite direction of the imposed displacement normal force force supporting the weight of an object, or a load, that is perpendicular to the surface of contact between the load and its support; the surface applies this force to an object to support the weight of the object pulling force that acts along a stretched flexible connector, such as a rope or cable
{"url":"https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/5-6-common-forces/","timestamp":"2024-11-02T05:01:35Z","content_type":"text/html","content_length":"104894","record_id":"<urn:uuid:6454a264-f0ad-45c3-b6b1-bcca5daaef10>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00645.warc.gz"}
Why is the atomic mass of the chlorine taken as 35.5 u and not a whole number like 35 u or 36 u? Explain. Hint: Chlorine has atomic no. 17 and it is present in the 17 groups of the halogens in period 3. There are two isotopes of the chlorine which have an atomic mass of 35 and 37 and they are found naturally on the earth with a percentage of 75% and 25% respectively. Complete step by step answer: - In nature, there are only two isotopes of Cl which exists i.e. Cl-35 and Cl-37. - The formation of the isotopes occurs due to the difference in neutrons in the atom because the difference in the neutrons causes a difference in the atomic mass and this is the reason for the formation of the two isotopes of chlorine. -The atomic mass of the isotopes is 37u and 35u but they have the same atomic number or number of electrons i.e. 17. - For calculating numerical and in many reactions, we use the atomic mass of chlorine is 35.5u because when we calculate the average atomic mass the value comes to be 35.5u. -We know that chlorine-35 is found abundantly more than chlorine-37 with a ratio of 3:1 or 75% and 25% respectively. -So, now we will calculate the average atomic mass of chlorine i.e. \[\text{35 }\times \text{ }\frac{3}{4}\text{ + }\text{37 }\times \text{ }\frac{1}{4}\text{ = 26}\text{.25 + 9}\text{.25 = 35}\text{.5u}\] - From the above calculation, it is proved that the atomic mass of chlorine is taken as 35.5u. Note: Isotopes are the species in which two or more compounds have the same atomic number or no. of electrons but they have different atomic mass. For example, in Cl-35, the atomic mass is 35 and the atomic number is 17 whereas in Cl-37, the atomic mass is 37 and the atomic number is 17.
{"url":"https://www.vedantu.com/question-answer/atomic-mass-of-the-chlorine-taken-as-355-class-11-chemistry-cbse-5f5fb01d68d6b37d1635d255","timestamp":"2024-11-07T02:22:23Z","content_type":"text/html","content_length":"161283","record_id":"<urn:uuid:5371442d-e731-4668-8167-69ec07d9ccfe>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00490.warc.gz"}
Tensor networks The non-equilibrium behavior of strongly interacting many-body quantum systems is a vibrant area of current theoretical and experimental research. While a wealth of novel non-equilibrium phenomena is continuously being discovered, the unbounded growth of entanglement makes the theoretical description of many-body quantum dynamics a difficult challenge. In this context, tensor network emerged as a powerful tool to obtain insights into interesting physical problems. In our group, we employ and develop tensor network methods to address topical open problems in non-equilibrium quantum physics. The below posts cover some of our recent results in this area. Paper on Dynamical Quantum Phase Transitions Published in PRL Ever since their discovery in 2013, Dynamical Quantum Phase Transitions (DQPTs) have attracted great theoretical and experimental attention as quantum many-phenomena occurring at short time-scales. DQPTs are non-analytic points (sharp changes) occurring in the time-evolution of the fidelity density \(f\), a quantity that indicates how far a state has evolved away from its initial condition and is zero if the system is exactly in its initial state. However, the mere observation of a DQPT, usually reliant on numerical calculations, does not shed light on the physical mechanisms that drive it. To address this open question, in our recent work Entanglement view of dynamical quantum phase transitions [Phys. Rev. Lett. 126, 040602 (2021)] we used a novel tensor-network based approach. Our work revealed the existence of two distinct physical mechanisms capable of driving DQPTs: if single-spin physics dominates the dynamics, one has precession-driven DQPTs (pDQPTs), whereas when spin-spin interactions dominate one has entanglement-driven DQPTs (eDQPTs). To uncover these mechanisms, we made use of the efficient encoding of the quantum state afforded by infinite Matrix Product States (iMPS). The iMPS can be seen as a linear superposition of all possible product states generated by an “automaton”, as shown below: Each circle in the automaton corresponds carries the local state \( |\Gamma_{ij}\rangle\) at a site. The arrows give the allowed choices for the states at the following site, with weights given by square roots of the entanglement spectrum \(\lambda_i\). The number of options available from each site, 2 in the above example, is known as bond dimension \(\chi\). In our work, we showed that suitable \(\chi=2\) Ansätze are able to capture the mechanisms of p- and eDQPTs. The fidelity density can be obtained by contracting the iMPS with the initial state, yielding the fidelity transfer matrix Tf. DQPTs are then caused by the second-largest eigenvalue of Tf, \(e_2\), overtaking \(e_1\) in magnitude, and this can happen for different reasons. When the entanglement gap is large, \(\lambda_1 >>\lambda_2\), the automaton prescribes that the state is approximately given by a product state \( |\Gamma_{11}\rangle\), with corrections \( |\Gamma_ {21,12}\rangle\) at each site being weighted by \(\sqrt{\lambda_2/\lambda_1}\). For example, for a system initialized in the all-down state, \(|\psi_1\rangle=|\downarrow\rangle\) and \(|\psi_2\rangle =|\uparrow\rangle\). The DQPT is then caused by an abrupt switch in the relative contribution of \(|\uparrow\rangle\) and \(|\downarrow\rangle\) as they precess respectively away from and towards the initial state. This is the physics of pDQPTs, and can be conveniently visualized on the Bloch sphere: In contrast, eDQPTs are driven by an avoided crossing in the entanglement spectrum. In this case, near the DQPT the quantum state undergoes a rearrangement whereby the initially off-diagonal component, which for \(\lambda_2 \ll \lambda_1\) provides a correction to the leading top-diagonal component, becomes the dominant contribution. The automaton representation then shows that excitations over the dominant product state can be created at decreasing cost as the gap is reduced. After the avoided crossing the formerly subleading component will become leading; this corresponds to the swap of \(e_1\), \(e_2\) and signals an eDQPT: In our work, we showed how the mechanism driving a given DQPT is also reflected in the behaviour of a number of physical observables, including the mutual information and one-point functions, opening the door to the experimental probing of the physics behind DQPTs.
{"url":"https://qdyn.ist.ac.at/research/tensor-networks/","timestamp":"2024-11-05T13:57:08Z","content_type":"text/html","content_length":"41605","record_id":"<urn:uuid:2068dc24-fde3-4429-a4cf-dd3bf79b3045>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00828.warc.gz"}
The Relationship Between Education and Welfare Dependency top of page The Relationship Between Education and Welfare Dependency Several studies have described the correlation between welfare dependency and factors such as welfare conditionality, gender, and high school or college graduation rates. Using Annual Social and Economics Supplement Data (ASEC) from 2009 through 2019, downloaded from sources such as IPUMS CPS, this paper crafts an OLS regression model to find the relationship that years of completed education have on welfare dependency status. This paper concludes that there is a negative correlation between higher education levels and lower participation in the welfare system, with the completion of one additional year of schooling suggesting a decrease in the probability of needing welfare by 0.1%. While this correlation is small, it is still statistically significant in the linear probability model due to a large sample size (n = 145,431). After adding other explanatory variables, such as measures for race, biological sex, and employment status to control for endogeneity, further regressions confirm that there is still a statistically significant negative relationship between education and welfare dependency. These results suggest that policymakers should focus on educational subsidies over welfare subsidies to increase social mobility. I. Introduction Education is often referred to as an essential mechanism in promoting social mobility (Haveman, 2006). However, the rising costs of education in America have forced many individuals to require more income to pay off student loans. As a result, families who are enrolled in welfare programs are spending a larger portion of their income on student debt, correlated with an increased reliance on such welfare programs and a positive feedback loop that makes it more difficult to climb out of welfare dependency (Johnson, 2019). In addition, most welfare programs have substantial requirements that, rather than helping recipients to get out of poverty, restrain recipients from escaping the welfare system (Rupp et al, 2020). This, and other societal pressures, have forced lots of students to put a pause on their education and work at low-skilled jobs with minimal pay, keeping them reliant on welfare programs (Johnson, 2019). This vicious cycle will only cause more people to remain trapped within welfare programs, preventing them from escaping poverty and improving their livelihoods. Previous studies have shown that education levels are correlated with the probability of a welfare recipient returning to welfare in the future (London, 2008). Other studies have also shown how changes in the welfare system have improved welfare recipients' education qualifications and subsequently their employment opportunities (Hernaes, 2017). London’s (2008) study focused on how attaining a higher educational degree allows welfare recipients to improve their employment opportunities, reduce their welfare dependency, and reduce their overall family poverty levels by 63%. Meanwhile, Hernaes et.al (2017) found that more conditionality in welfare programs helped Norwegian teenagers from welfare-recipient families reduce their reliance on welfare programs; and lower the country’s high school dropout rate by 21%. In addition, Pacheo & Maloney (2003) found that intergenerational welfare participation differs between genders due to family characteristics such as household size and parents’ welfare dependency. As a result, young females tend to have lower educational attainment and are nearly two times as probable of relying on welfare in the future when compared to their male counterparts (Pacheo et al, 2003). Based on the insights offered by the studies above, this paper aims to contribute to this field by investigating the hypothesis that years of schooling completed reducing the probability of receiving welfare in the future. Factors of endogeneity will also be analyzed through the implementation of explanatory variables such as race (Courtney, 1996), sex (Bakas, 2014), number of children (Arulampalam, 2000), marital status (Hoffman, 1997), hours worked (Bick et al, 2018) and employment status (Arranz, 2004) into the regression model. These variables were chosen due to past publications finding possible links between this psychographics and demographics to welfare benefits. Preliminary hypotheses predict that there will be a negative relationship between the education level attained and the probability that an individual will receive future welfare. Using simple and multi-linear OLS regression analysis and the IPUMS-CPS annual data from 2009-2019, it has been observed that individuals with more years of schooling completed are less likely to be on welfare in the future. Data was chosen from this period because the American economy was beginning to recover from the 2008 Financial Crisis during this time. This allows us to observe correlations between education levels attained by individuals and whether they ended up in welfare programs more clearly. This paper will be presented as follows: Section II will cover previous research on how welfare conditionality, gender, and education levels affect welfare dependency. Section III will present information on how this data was obtained and explain the data-cleaning process along with the types of variables used throughout this paper. This section ends with explanations of how the data is verified through the four OLS assumptions. Section IV will cover econometric methodology which includes alternate functional forms explored and additional X-Variables tested through multiple regression along with methodologies we’ve used to control the endogeneity of independent and explanatory variables to ensure the fairness of the regression model. Section V highlights the results, the sample regression line, and statistically significant information regarding the regression analysis. Section VI contains the paper's conclusions where the results are evaluated and put into context within the field. The paper concludes with section VII, the appendix, where all tables, figures, diagrams, and supporting calculations are represented for reference. II. Literature Review The literature works that are presented here serve as important foundations in the field and provide extensive insight into the relationship between welfare dependency and education levels, along with how other variables might affect this relationship. The study conducted by Hernaes et al. (2017) found that the strict welfare conditionality, linking welfare to certain characteristics or traits in Norwegian welfare programs, has reduced welfare dependency while increasing the high school graduation rate among Norwegian welfare recipients. In the process, they used a logarithmic regression model (LRM) and regressed a dependent dummy variable that identifies welfare recipients who are 21 years old onto an independent variable that consists of family characteristics such as parent’s education background and cumulative income, to control for endogeneity. The study resembles this approach because the dependent variable that they’ve used is also a dummy variable that indicates welfare recipients. In addition, the study used other explanatory variables, in particular the recipient’s parental background, to control for the endogeneity of those variables on the probability of returning to welfare. However, Hernaes et al. (2017) emphasizes how family background affects teens’ probability of returning to welfare in the future through explanatory variables that focus on family characteristics. Whereas this study focuses more on how other individual characteristics such as education level, labor condition, and family status of the welfare recipient have affected the welfare recipient’s probability of returning to welfare programs in the future. Notably, a previous study indicated that there is a correlation between welfare recipients who have obtained a higher education degree with reducing their reliance on welfare programs, but only if they receive additional financial aid to support their college expenses. London (2006) uses data such as college attendance, college graduation rate, and personal characteristics, such as extraversion and race demographics, to predict the welfare recipient’s three outcomes: employment, return to aid, and poverty status. By controlling influencing factors that change over time – such as the rate of college enrollment – and making sure all omitted variables, – such as familial culture and personal motivation – are factored into the result, the study employs instrumental variable econometric models to calculate predictions. The study found that “college attendance, more than graduation, is an important predictor of future employment. At the same time, college graduation better predicts the probability of returning to aid or being poor within five years of leaving welfare” (London, 2006, p. 491). Specifically, the study quoted “college graduation rather than enrollment without graduation has an effect on recidivism, and only in the five-year interval” (London, 2006, p. 489). Their findings support this hypothesis that the education level a welfare recipient attains is crucial to the probability of returning to welfare in the future. Despite the similarities in the use of variables to investigate the issue, predicting the probability of return to welfare using college graduation and attendance is only a part of this study’s objectives. The study also conducts an investigation into how college graduation and attendance affect employment opportunities and family poverty levels. Another earlier study showed that genders might have different levels of welfare participation and education attainment. Pacheco & Maloney (2003) learned that females “have an estimated intergenerational correlation coefficient that is more than double that for males.” (Pacheco & Maloney, 2003, p. 371). The study uses simple regression models and inputs such as the number of years in formal education completed by age 21, family background characteristics (parent’s education qualifications and the number of children in the household), and the proportion of years where parents obtain welfare benefits to produce their findings. In addition, Pacheco & Maloney (2003) found that female welfare recipients whose families have a history of welfare dependency tend to remain in welfare programs. The study uses the same regression model to offer insight into how familial and cultural forces affect male and female probabilities of returning to welfare in the future. Nevertheless, Pacheco & Maloney (2003) offered insight into how gender might have altered the relationship between education levels and probability in return to welfare. III. Descriptive Statistics All of the raw data was downloaded directly from the CPS portion of the IPUMS website, which is a reputable federal source for time series and cross-sectional data. Annual Social and Economic Supplement Data (ASEC) from 2009 to 2019 was downloaded. These years were selected to obtain the most up-to-date data while also analyzing enough observations to create the best regression analysis possible. Twenty-one variables were analyzed within these years, the most important of which were EDUC and INCWELFR, the two variables that were altered and then used for the regression analysis. These variables were raw and included nearly 150,000 observations over the 11 years. The data was meticulously cleaned before running any regressions to test the hypothesis. The first variable cleaned was EDUC. The raw EDUC variable could hold any coded value from 1 to 125. These coded values did not reflect the true years of schooling any individual had, so a new variable was created: EDUC_REV, to accurately reflect the true years of schooling each individual has completed. The values for this new variable were generated using the observations for the EDUC variable alongside the specific numeric code utilized by CPS. For example, an individual who has obtained a high school diploma through 12 years of completed education would receive a value of EDUC= 73 within the CPS data set. The data was cleaned so this specific value would now be EDUC_REV=12. This cleaning procedure was used for all possible levels of education within the data set. Individuals who were too young to receive any education at all were also removed from the data set (they were identified through EDUC=1 in the original data set). The focus then shifted toward the INCWELFR variable from CPS. This variable measures the dollar value of the income an individual receives from any source of government welfare benefits. In this study, the focus is on the effect that education has on the reception of welfare at all, not the amount of welfare that was received. This means the analysis is valid if an individual receives any form of welfare payments, and not focusing on the actual dollar value of said payments. So, for this reason, another new variable was created: WELFARE. This variable is a dummy variable that gets its values from the information in the INCWLFR variable. If the individual receives no form of welfare they will be assigned INCWLFR=0 in the data set. This same individual would be assigned a value of zero for the newly created variable (WELFARE=0 when INCWLFR=0). However, if an individual receives welfare in any form, regardless of the amount, they will be assigned a value of one for the new variable (WELFARE=1 when INCWLFR>0). Any individual who was not eligible to receive welfare in any form was denoted by INCWELFR=999999. These observations, many of which were individuals under 18, were removed from the data set to generate a less skewed, and more accurate, sample. Additional variables were also analyzed for the multiple regression analysis. These variables tested the effects of not only education, but also employment status, income, hours worked, marital status, gender, and number of children on the reception of welfare. These variables were used to try and control for endogeneity within the model and are further described in Table 1 of the appendix Before the new variables could be put through a proper regression analysis, the four assumptions of an Ordinary Least Squares Regression Line had to be tested. If all of these assumptions hold true then the estimators of b1 and b2 would be BLUE (Best Linear Unbiased Estimators) and all of the calculations done through STATA would be completely accurate. The first OLS assumption is that the expected error within a sample will be zero. This is noted as E[WELFARE_RES/EDUC_REV]=0 and this does hold true in this sample. The 95% confidence interval for WELFARE_RES does include zero so it is likely that the expected value of the error is zero and therefore the first OLS assumption is met. The second OLS assumption is that the data is homoscedastic. This is noted as Var(WELFARE_RES/EDUC_REV)=Sigma^2. However, since the dependent variable is a dummy variable, this regression takes the form of a linear probability model (LPM). By definition, every linear probability model has heteroscedastic data. Therefore, the second OLS assumption is not met. The third OLS assumption is that the data is free of clustering. This is noted as Cov(WELFARE_RES_i,WELFARE_RES_j)=0, meaning that the value of WELFARE for one value does not directly influence the value of any other observation within the data set. This influence usually occurs when two observations are within the same geographical unit. While there is no way to test if any observations are within the same geographical unit (such as the same household) due to confidentiality, the sample size is large enough and pulls from each region almost equally, so it would be extremely unlikely for any two observations to come from the same household. Therefore, for the sake of the regression, the third OLS assumption will be met. The fourth and final OLS assumption is that Y is normally distributed. This was tested by creating a histogram for WELFARE and seeing if it roughly resembled a bell curve. When this was done, it was obvious that the data was not normal. This is apparent through a multitude of factors but is most clearly shown by the high skewness, a value of over 16. Therefore, it was concluded that welfare was not normally distributed. However, since the sample size consists of 145,431 observations, the central limit theorem (CLT) is met. So, while the fourth OLS assumption failed to be met for this particular regression, it will not have a significant impact on the regression since the sampling distribution for WELFARE will still be normally distributed. In conclusion, the regression met two of the four OLS assumptions. Therefore, while the regression analysis will not be BLUE, it will still be significant since it is free of serious sampling errors. IV. Econometric Methodology While this paper mainly focuses on the linear probability model and the effect that education has on welfare dependency, other functional forms that could better fit the regression analysis were also considered to develop a more thorough analysis. This was done through the experimentation of the functional forms that the independent variable took. While the previous section discussed the linear form of EDUC_REV, exponential and logarithmic forms of this variable were also considered. The independent variable was only altered since the dependent variable is a dummy variable. Altering the value of the variable will not generate any different results since its domain is limited to {0,1}. Other explanatory variables, and the results they produced, are summarized in Table 5 of the While all of the functional forms tested would have produced statistically significant interpretations that support the hypothesis, although their interpretations would have been different, the original regression was still the most accurate for this particular data set. Other functional forms included EDUC_REV in quadratic, cubic, and log forms. These functional forms are used to emphasize the effects of EDUC_REV in order to match the data points. The original is the most accurate because it has the highest R-Squared value, a measure of how well the data points fit the linear regression line. These R-Squared values can be found in Table 5 but the linear model has the highest value of .0014. Since the linear regression between WELFARE and EDUC_REV has the most accurate regression line relative to the data set, this regression model was the basis from which all conclusions were drawn. Interaction terms were also analyzed by creating the term EDUC_UNEMP which was EDUC_REV multiplied by UNEMPLOYED. By using this interaction term, the possible effect of EDUCATION on WELFARE varying with UNEMPLOYED can be studied. The regression showed that when UNEMPLOYED is 0, the likelihood of WELFARE is constant plus b2. When UNEMPLOYED is 1 then the likelihood of being on WELFARE increases. This means that individuals who are unemployed are more likely to be receiving benefits from welfare. The motive that drives this is individuals who are unemployed do not receive any form of compensation or income outside of their welfare payments. Slope and Intercept dummy variables are additional variables added to this study. In this situation, the intercept dummy variable is UNEMPLOYED. The presence of UNEMPLOYED is represented with a 1 and causes an increase in the intercept, which translates to an increase in the probability of welfare. When describing this relationship on a graph there are two parallel lines and the difference between them is caused by the slope dummy variable. Both lines have the same slope and the probability gap of being on WELFARE remains the same at all levels. This is not the main difference between someone who is unemployed and someone who is employed. This supports the claims made through interaction term analysis in the previous paragraph. However, while the simple regression analysis supports the hypothesis, there could be other confounding variables that underlay such correlation seen between WELFARE and EDUC_REV. If these possible confounding variables are correlated with both WELFARE (controlling for EDUC_REV) and EDUC_REV, then it could make EDUC_REV an endogenous variable, indicating that EDUC_REV does not necessarily cause the decrease in the probability of an individual on welfare. To test this claim, a multiple regression analysis was run, including both EDUC_REV and a variety of other possibly confounding variables, for their possible effects on WELFARE. The results showed that the three variables with the largest effect on WELFARE were BLACK, MALE, and UNEMPLOYED. These are variables created within the data set describing an individual's race, gender, and employment status, respectively. All of these are strong contenders for possible confounding variables and the true reason the regression effect on welfare was observed, and therefore put EDUC_REV at risk of being an endogenous variable (Courtney, 1996; Bakas, 2014; Arranz, 2004). A full list of the additional X-Variables tested along with the multiple regression output can be found in Table 10. That being said, this is not enough evidence to conclude that education levels are definitely an endogenous variable when describing the probability of receiving welfare. These possible confounding variables could be further analyzed if a more in-depth regression analysis was performed in future studies. V. Results After the data had been completely cleaned and verified for OLS assumptions, the regression of EDUC_REV on WELFARE was run. This regression showed the noncausal effect that years of completed education have on the probability of receiving welfare. If the hypothesis holds true, the Least-Squares Regression Line should have a negative slope, denoting that the more years of education an individual completes, the less likely it is that the individual receives welfare. The output for the regression analysis, as well as the full, scatter plot showing the Least Squares Regression Line for EDUC_REV against WELFARE, can be seen in Table 8 and Table 9 of the appendix. However, these figures can be summarized by the equation for the sample regression: WELFARE_hat = b1 + b2 EDUC_REV t-statistic = -14.27 WELFARE_hat = .015 - .001 EDUC_REV n = 145,431 (SE) (8.08e-4) (5.68e-5) p-value = 0 *** The most important value within the sample regression line for the hypothesis is -.001, or the slope of the regression line denoted as b2. Since b2 is a negative value, there is a negative correlation between the number of years of completed schooling (EDUC_REV) and the reception of welfare (WELFARE). While this value seems too small to have any real effect, it is still statistically significant. This is because the 99% confidence interval for b2 does not include zero because the standard deviation is extremely close to zero based on the large sample size. A hypothesis test at the critical level of .01 was also run to see if the value generated for b2 could be equal to zero. This test gave a critical value for b2 of -14.27 and a probability of B2 being equal to zero of zero. These results lead to the conclusion that it is statistically significant that as EDUC_REV increases, WELFARE decreases within the regression. In conclusion, while increases in education could have a small effect on the probability of relying on welfare, it is still a statistically significant effect. However, this does not prove that increases in education will decrease the probability of relying on welfare since ceteris-paribus does not hold true for this collected data set and a causal relationship is not established. This regression analysis supports the hypothesis that as an individual's education increases, the probability that said individual will rely on welfare as a source of income decreases (since b2 is a statistically significant negative number). By applying these findings, it was determined that as an individual’s years of completed schooling (EDUC_REV) increases by 1 year, the probability that the individual will receive welfare (WELFARE as a dummy variable) decreases by .001 or 0.1%. This is because the slope of the linear regression model, with a dependent dummy variable, is -.001 and the functional form analyzed is a linear probability model. While this relation is not inherently strong, and years of completed schooling do not have a large impact on the probability of receiving welfare, it is still statistically significant. Within the regression, b1 is also statistically significant. The value of b1 in this sample regression line is .015, or an applied .15%. By applying this value to the context of the study, it was found that the probability of an individual receiving welfare given that they have completed zero years of schooling is .15%. This number is positive so it is technically feasible and within the domain of the study. However, it is extremely unlikely that an individual has received zero years of schooling and is also eligible to receive welfare (Stephens, 2014). For this reason, the value of b1 was not a focus within these results. VI. Conclusion As stated above, this study shows a minor, yet the statistically significant, effect of EDUC_REV on WELFARE. These results indicate that there is evidence to support a possible relationship between higher levels of completed education and lower chances of an individual receiving welfare in the future. The thought process behind this regression is that individuals with higher education are more likely to land better jobs and therefore make more money, thus decreasing their need for welfare. While focusing on the simple regression model for the majority of the paper, important results when controlling for endogeneity through a multiple regression model were also found. This multiple regression analysis was performed while controlling for multicollinearity. Since none of these variables share a strong correlation (r > .8) with each other, it is okay to run a regression model with all of these X-Variables. The full correlation results can be seen in Table 11 of the appendix. AIC, BIC/SC, and R_Squared were also analyzed and are summarized in Table 12. Since the multiple regression model has more X-Variables, it has a larger potential to explain any variation in Y and is likely to be a better fit for the data. Even with the introduction of these additional X-Variables, the initial variable tested in the multiple regression analysis, EDUC_REV, was still statistically significant, as seen in Table 10. Thus, even with controls for endogeneity, there is still a statistically significant negative correlation between the highest level of completed education and the probability of receiving welfare, only strengthening this paper’s claims. In relation to previous studies in part II, this study aligns with London’s (2006) conclusion that welfare recipients who have received a higher education degree have a lower probability of receiving welfare in the future, with the assumption that both genders fit into the conclusion. However, to what extent education attainment is beneficial to both genders and race remains questionable since the data lacked suitable information to investigate how omitted variables might have affected the relationship between the education level attained and the probability of receiving welfare. This paper has also failed to reproduce the findings that Pacheco & Maloney (2003) found. This paper did not control the age and time of welfare received by the recipient, whereas Pacheco & Maloney (2003) did. In addition, Pacheco & Maloney (2003) factors in the background of the welfare recipient’s parents, such as their income received from welfare, educational background, and race. This study, on the other hand, did not factor family characteristics into the regression model. This paper also failed to reproduce the results that Hernaes et. al (2017) produced because the nature of the data is different from Hernaes et. al (2017). First, Hernaes et. al’s (2017) dataset had the location of each welfare recipient’s municipality. The location variable allows Hernaes et. al (2017) to determine whether the welfare recipient was in a municipality that has stricter welfare policies or not. Second, Hernaes et. al (2017) was able to capture each municipality’s level of conditionality through survey responses collected in a report by a research institute. These are some of the features that the data, unfortunately, do not possess. This paper supports the theory that there is a correlation between the highest level of education completed and the probability of receiving welfare. Thus, more educated individuals are less likely to be dependent on welfare. In a broader context, policymakers could use this information to find more effective means for increasing social mobility, rather than investing heavily in welfare payments. Since there is possibly an inverse relationship between education and welfare, the federal government could create a new program to subsidize education rather than simply making payments to disadvantaged citizens. This would provide an economic incentive for individuals who were previously on welfare to attend school, making the entire nation more educated and more productively efficient as a result (Brown et al, 1991). However, while this paper could be used from a policy perspective, there are some drawbacks. The relationship between education and the probability of welfare is not proven to be causal after this analysis. This is because the ceteris-paribus condition does not hold true throughout the data and regression. In addition, this dataset has a limited scope regarding population characteristics. The dataset indicates the highest education level attained by the individual but does not indicate when they achieved that education. For example, some individuals might have dropped out of high school during their youth and returned to complete their high school degree after a long period of time. If that information is also provided in the dataset, that would open new frontiers on how education-level attainment influences the probability of receiving welfare. Before any change is enacted, especially on a governmental level, first proving a causal relationship would be recommended. This paper merely lays the framework for possible studies regarding welfare analysis in the future. This paper did support the hypothesis that as education levels rise, the probability that an individual becomes dependent on welfare decreases. Through the regression analysis, it was determined that there is a small, yet statistically significant, difference that education has on the probability of receiving welfare in the future. This trend could be utilized by policymakers to stimulate education as a means of reducing welfare dependency, creating a population that is not only less dependent on welfare payments, but more educated, and more productive as a result. Note: see "Full Editions," Volume IV Issue I for appendix. VIII. References Arranz, Jose Ma, and Muro, Juan. "Recurrent Unemployment, Welfare Benefits and Heterogeneity." International Review of Applied Economics. 18, no. 4 (2004): 423-41. Arulampalam, W. "Unemployment Persistence." Oxford Economic Papers. 52, no. 1 (2000): 24-50. Bakas, Dimitrios, and Papapetrou, Evangelia. "Unemployment by Gender: Evidence from EU Countries." International Advances in Economic Research. 20, no. 1 (2014): 103-11. Bick, Alexander, Fuchs-Schündeln, Nicola, and Lagakos, David. "How Do Hours Worked Vary with Income? Cross-Country Evidence and Implications." The American Economic Review. 108, no. 1 (2018): 170-99. Brown, Phillip, and Lauder, Hugh. "Education, Economy and Social Change." International Studies in Sociology of Education. 1, no. 1-2 (1991): 3-23. Cliff, Aiden, Rupp, Matthew, Lieng, Owen. “A Study on the Relationship Between Education and Probability to Receive Welfare Assistance.” Boston University (2020): 204 Courtney, ME. "Race and Child Welfare Services: Past Research and Future Directions." Child Welfare. 75 (1996): 99. Gooden, S. (2000). Race and Welfare. Journal Of Poverty, 4(3), 21-24. https://doi.org/10.1300/J134v04n03_02 Haveman, Robert, and Timothy Smeeding. "The Role of Higher Education in Social Mobility." The Future of Children 16, no. 2 (2006): 125-50. Accessed April 28, 2021. http://www.jstor.org/stable/3844794 Hernæs, Ø., Markussen, S., & Røed, K. (2017). Can welfare conditionality combat high school dropout. Labour Economics, 48, 144-156. https://doi.org/10.1016/j.labeco.2017. 08.003 Hoffman, Saul. "Marital Instability and the Economic Status of Women." Demography 14, no. 1 (1977): 67-76. Johnson, D. (2019). What Will It Take to Solve the Student Loan Crisis. Harvard Business Review. Retrieved 29 April 2020, from https://hbr.org/2019/09/ Kim, Hwanjoon. "Anti‐Poverty Effectiveness of Taxes and Income Transfers in Welfare States." International Social Security Review. 53, no. 4 (2000): 105-29. London, R. (2005). Welfare Recipients' College Attendance and Consequences for Time-Limited Aid. Social Science Quarterly, 86, 1104-1122. https://doi.org/10.1111/j.0038-4941.2005.00338. London, R. (2006). The Role of Postsecondary Education in Welfare Recipients' Paths to Self-Sufficiency. The Journal Of Higher Education, 77(3), 472-496. Retrieved 28 April 2020, from https:// Pacheco, G., & Maloney, T. (2003). Are the Determinants of Intergenerational Welfare Dependency Gender-specific. Australian Journal Of Labour Economics, 6(3), 371-382. Retrieved 28 April 2020, from https://www.researchgate.net/ publication/46557521_Are_the_Determinants_of_Intergeneration al_Welfare_Dependency_Gender-specific Stephens, Melvin, and Yang, Dou-Yan. "Compulsory Education and the Benefits of Schooling." The American Economic Review. 104, no. 6 (2014): 1777-792. bottom of page
{"url":"https://www.brownjppe.com/projects-2/the-relationship-between-education-and-welfare-dependency","timestamp":"2024-11-12T05:24:44Z","content_type":"text/html","content_length":"736077","record_id":"<urn:uuid:21a19c6a-6715-40fa-9d52-c88ac63c3026>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00855.warc.gz"}
8.033 Relativity, Fall 2003 Rappaport, S. A., 1942- Normally taken by physics majors in their sophomore year. Einstein's postulates; consequences for simultaneity, time dilation, length contraction, clock synchronization; Lorentz transformation; relativistic effects and paradoxes; Minkowski diagrams; invariants and four-vectors; momentum, energy and mass; particle collisions. Relativity and electricity; Coulomb's law; magnetic fields. Brief introduction to Newtonian cosmology. Introduction to some concepts of General Relativity; principle of equivalence. The Schwarzchild metric; gravitational red shift, particle and light trajectories, geodesics, Shapiro delay. Date issued Other identifiers local: 8.033 local: IMSCP-MD5-40c95128b02c8132b554fadca146c98f Einstein's postulates, consequences for simultaneity, time dilation, length contraction, clock synchronization, Lorentz transformation, Minkowski diagrams, particle collisions, Coulomb's law, magnetic fields, Newtonian cosmology, General Relativity, gravitational red shift, particle trajectories, light trajectories, invariants, four-vectors, momentum, energy, mass, relativistic effects, paradoxes, electricity, time dilation, length contraction, clock synchronization, Schwarzchild metric, geodesics, Shaprio delay, relativistic kinematics, relativistic dynamics, electromagnetism, hubble expansion, universe, equivalence principle, curved space time, Ether Theory, constants, speed of light, c, graph, pythagorem theorem, triangle, arrows, Relativity
{"url":"https://dspace.mit.edu/handle/1721.1/39131","timestamp":"2024-11-11T06:51:57Z","content_type":"text/html","content_length":"24358","record_id":"<urn:uuid:9166d5ce-1a18-4c18-a8e9-f79927922cba>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00318.warc.gz"}
Dispelling Tube Amp Myths Guys, I have tried to distill this information down to something the technically minded guitarist can understand. However, one may have to read through the post a couple of times. There is a lot myth surrounding tube amps. That is mainly because myth is easier for the layman to understand than science. One of the main myths is tube watts are louder than solid-state watts. That is complete nonsense, a watt is a watt. A watt is equivalent to one joule of energy per second. There are couple of reasons for this myth. A big reason is that solid-state amp designers were not always honest about power ratings, preferring to state power in peak power terms instead of RMS power, which led to inflated power ratings. That being said, the reason why tube amp often sounds louder than solid-state amps of the same RMS rating is because tube amps are poorly damped. Damping is the ratio between speaker impedance and amplifier output impedance. Damping factor determines how well the power stage of an amp can control speaker cone movement. Power tubes have high output impedances. In order to drive a speaker, they require an output transformer that steps the power tube output impedance down to the speaker nominal impedance. In the process, the output transformer converts a high voltage, low current signal to a lower voltage, higher current signal. It also increases that amp's damping factor somewhat. For example, A Fender Vibrolux has an output transformer with a primary impedance of 4,000 (4K) ohms with secondary impedance of 4 ohms. However, an output transformer knows nothing about impedances. It works using impedance ratios, that is, if we plug an 8-ohm speaker into the 4-ohm jack, the power tubes will see primary impedance of 8,000 ohms instead of 4,000 ohms because any change in the load attached to the output transformer's secondary winding (a.k.a. the wind that is attached to the speaker jack) is reflected back to the power tubes. This change is due the fact that output transformers actually have impedance ratios, not fixed impedances. The impedance ratio is the ratio between the primary and secondary impedances. In this case, the impedance ratio is 4000 / 4 = 1000. The impedance ratio is the square of the ratio between the number of turns of wire in the primary and the secondary windings, which is SQRT(1000) ~= 32, where SQRT is the square-root function. What that means in layman’s terms is that there is one turn of wire in the secondary winding for every thirty-two turns in the primary winding. Now, lets get to the meat of why tubes amp tend to sound lively and whereas solid-state designs often sound flat. That is because impedance is not a synonym for DC resistance. Impedance is an AC measurement that contains reactive components; namely, inductive reactance and capacitive reactance. The formula for computing impedance is: Z = SQRT(R^2 + (Xl – Xc)^2), where SQRT is the square-root formula, R = DC resistance, Xl = inductive reactance, and Xc = capacitive reactance. Xl = 2 * 3.14 * f * L, where f = frequency, L = inductance henries Xc = 1 / (2 * 3.14 * f * C), where f = frequency, C = capacitance in farads The important takeaway here is that both Xl and Xc change with respect to frequency, which means that impedances changes with respect to frequency. DC resistance does not change with respect to frequency, which is why impedance is not a synonym for DC resistance. Anyone who has ever measured the DC resistance of a speaker knows that is usually not equal to a speaker’s rated impedance. That is because a speaker’s impedance rating is based on the average of its lowest impedance values. An ohm meter uses DC current for measurement, which is zero hertz; therefore, we are measuring the resistance of speaker with the equivalent a zero hertz signal. Now, the we know that impedance changes with respect to frequency and an output transformer is merely a set of impedance ratios in which the primary impedance is based on the impedance of the load, we can move on to why tube amps tend to sound more lively and louder than most solid-state designs. That is because the output of a tube amp is a constant current source and voltage (E) equals current (I) times resistance (R). However, in our case, impedance (Z) is substituted for resistance. Now, we are in a position to understand why a tube amp often sounds more lively and louder than the average solid-state amp. The impedance of a speaker generally increases with respect to frequency, which causes the voltage applied to the speaker to increase with frequency because voltage equals current times impedance. This change causes an increase in power to be delivered to the load because the current remains constant and power in watts (w) is equal to voltage times current. This effect is unwanted in hi-fi because hi-fi seeks to have even frequency response. Poor damping is the reason why all but the most expensive tube audio power amps have less detail than the average solid-state power amp. In essence, poor damping is a technological foible that guitarists exploited to their advantage. Most of the second generation of solid-state music amp designs are highly damped because the amplifier’s output impedance is lower than the speaker impedance. High damping factor allows these amps to have much tighter control over speaker cone movement, which is why bass guitarists embraced solid-state amps. A bass is tuned an octave lower than a guitar and low notes require much larger cone movement than high notes, so a high damping factor is boon to bass guitar. Second generation solid-state amps also operate as constant voltage sources instead of constant current sources. That is why power delivered to the load increases when output impedance decreases. If we hold voltage steady and half the impedance of the load we double the amount of current that is flowing through the load because current (I) is equal to voltage (E) divided by impedance (Z). This way of reacting to speaker load is the exact opposite of what happens in a tube amp, meaning that the current delivered to the load decreases with respect to frequency because impedance increases with respect to frequency. If current delivered to the load decreases, power delivered to the load decreases, which results in higher notes not popping like they do with a tube amp. Now, there is a way to reduce damping factor and make a solid-state power stage behave much like a tube output stage. That is done by changing the topology such that the speaker forms part of the resistive divider that is the feedback loop. The resistance used to ground, Rg, of the resistive divider used for the feedback loop is less than the nominal impedance of the speaker, which functions as Rf. The feedback loop determines the gain of the output stage. The formula for gain equals Rf divided by Rg plus 1. If Rf goes up, the gain of the circuit goes up; hence, it causes power to increase with increases in load impedance much like a tube amp. Marshall first used this topology with their ValveState amps. The tube in the preamp is about as much for show as it is for sound. Last edited: Apr 8, 2020 Insteresting phenomenon, great explanation. For some reason, every modeling solid state amp I try sounds buzzy to me and fake. I never get clean tones like from a stereo or radio, whether adjusted like a clean amp or distorted. They don’t seem as articulate as a tube, and feel over compressed. I have not tried an expensive model like a kemper or fractal, maybe I need to. Apr 26, 2012 Guys, I have tried to distill this information down to something the technically minded guitarist can understand. However, one may have to read through the post a couple of times. There is a lot myth surrounding tube amps. That is mainly because myth is easier for the layman to understand than science. One of the main myths is tube watts are louder than solid-state watts. That is complete nonsense, a watt is a watt. A watt is equivalent to one joule of energy per second. There are couple of reasons for this myth. A big reason is that solid-state amp designers were not always honest about power ratings, preferring to state power in peak power terms instead of RMS power, which led to inflated power ratings. That being said, the reason why tube amp often sounds louder than solid-state amps of the same RMS rating is because tube amps are poorly damped. Damping is the ratio between speaker impedance and amplifier output impedance. Damping factor determines how well the power stage of an amp can control speaker cone movement. Power tubes have high output impedances. In order to drive a speaker, they require an output transformer that steps the power tube output impedance down to the speaker nominal impedance. In the process, the output transformer converts a high voltage, low current signal to a lower voltage, higher current signal. It also increases that amp's damping factor somewhat. For example, A Fender Vibrolux has an output transformer with a primary impedance of 4,000 (4K) ohms with secondary impedance of 4 ohms. However, an output transformer knows nothing about impedances. It works using impedance ratios, that is, if we plug an 8-ohm speaker into the 4-ohm jack, the power tubes will see primary impedance of 8,000 ohms instead of 4,000 ohms because any change in the load attached to the output transformer's secondary winding (a.k.a. the wind that is attached to the speaker jack) is reflected back to the power tubes. This change is due the fact that output transformers actually have impedance ratios, not fixed impedances. The impedance ratio is the ratio between the primary and secondary impedances. In this case, the impedance ratio is 4000 / 4 = 1000. The impedance ratio is the square of the ratio between the number of turns of wire in the primary and the secondary windings, which is SQRT(1000) ~= 32, where SQRT is the square-root function. What that means in layman’s terms is that there is one turn of wire in the secondary winding for every thirty-two turns in the primary winding. Now, lets get to the meat of why tubes amp tend to sound lively and whereas solid-state designs often sound flat. That is because impedance is not a synonym for DC resistance. Impedance is an AC measurement that contains reactive components; namely, inductive reactance and capacitive reactance. The formula for computing impedance is: Z = SQRT(R^2 + (Xl – Xc)^2), where SQRT is the square-root formula, R = DC resistance, Xl = inductive reactance, and Xc = capacitive reactance. Xl = 2 * 3.14 * f * L, where f = frequency, L = inductance henries Xc = 1 / (2 * 3.14 * f * C), where f = frequency, C = capacitance in farads The important takeaway here is that both Xl and Xc change with respect to frequency, which means that impedances changes with respect to frequency. DC resistance does not change with respect to frequency, which is why impedance is not a synonym for DC resistance. Anyone who has ever measured the DC resistance of a speaker knows that is usually not equal to a speaker’s rated impedance. That is because a speaker’s impedance rating is based on the average of its lowest impedance values. An ohm meter uses DC current for measurement, which is zero hertz; therefore, we are measuring the resistance of speaker with the equivalent a zero hertz signal. Now, the we know that impedance changes with respect to frequency and an output transformer is merely a set of impedance ratios in which the primary impedance is based on the impedance of the load, we can move on to why tube amps tend to sound more lively and louder than most solid-state designs. That is because the output of a tube amp is a constant current source and voltage (E) equals current (I) times resistance (R). However, in our case, impedance (Z) is substituted for resistance. Now, we are in a position to understand why a tube amp often sounds more lively and louder than the average solid-state amp. The impedance of a speaker generally increases with respect to frequency, which causes the voltage applied to the speaker to increase with frequency because voltage equals current times impedance. This change causes an increase in power to be delivered to the load because the current remains constant and power in watts (w) is equal to voltage times current. This effect is unwanted in hi-fi because hi-fi seeks to have even frequency response. Poor damping is the reason why all but the most expensive tube audio power amps have less detail than the average solid-state power amp. In essence, poor damping is a technological foible that guitarists exploited to their advantage. Most of the second generation of solid-state music amp designs are highly damped because the amplifier’s output impedance is lower than the speaker impedance. High damping factor allows these amps to have much tighter control over speaker cone movement, which is why bass guitarists embraced solid-state amps. A bass is tuned an octave lower than a guitar and low notes require much larger cone movement than high notes, so a high damping factor is boon to bass guitar. Second generation solid-state amps also operate as constant voltage sources instead of constant current sources. That is why power delivered to the load increases when output impedance decreases. If we hold voltage steady and half the impedance of the load we double the amount of current that is flowing through the load because current (I) is equal to voltage (E) divided by impedance (Z). This way of reacting to speaker load is the exact opposite of what happens in a tube amp, meaning that the current delivered to the load decreases with respect to frequency because impedance increases with respect to frequency. If current delivered to the load decreases, power delivered to the load decreases, which results in higher notes not popping like they do with a tube amp. Now, there is a way to reduce damping factor and make a solid-state power stage behave much like a tube output stage. That is done by changing the topology such that the speaker forms part of the resistive divider that is the feedback loop. The resistance used to ground, Rg, of the resistive divider used for the feedback loop is less than the nominal impedance of the speaker, which functions as Rf. The feedback loop determines the gain of the output stage. The formula for gain equals Rf divided by Rg plus 1. If Rf goes up, the gain of the circuit goes up; hence, it causes power to increase with increases in load impedance much like a tube amp. Marshall first used this topology with their ValveState amps. The tube in the preamp is about as much for show as it is for sound. I gave that a “like“ even though I have no idea what it means. It sure seemed impressive though. Deleted member 5962 There is a lot myth surrounding tube amps. That is mainly because myth is easier for the layman to understand than science. I have a question for you. Have you ever plugged any electric guitar into any pretty decent hi-fi stereo system? If you ever have, what did you think of the tone? Apr 26, 2012 I have a question for you. Have you ever plugged any electric guitar into any pretty decent hi-fi stereo system? If you ever have, what did you think of the tone? I'm interested in Em7's thoughts, but in the meantime: I played through a hi-fi stereo from a run of shows, feeding the same signal into both channels. I thought it sounded pretty good, squeaky clean, but that was right for what we were playing. Deleted member 5962 I'm interested in Em7's thoughts, but in the meantime: I played through a hi-fi stereo from a run of shows, feeding the same signal into both channels. I thought it sounded pretty good, squeaky clean, but that was right for what we were playing. So am I. My point is way to long to try to write out during work. I tried multiple times, with good stereo systems (Musical Concepts modified Hafler pre-amp and amp, modified Adcom pre-amp and amp), and it was extremely sterile. Without some verb or chorus or something, it didn't really sound good. TOO clean. (And very stiff). The point being, the "flaws" in many tube amp designs are there not because the designer didn't "know better" but because they sound better, feel better, or both. The things you design into an amp to make it better as a high fidelity reproducer of sound, are many time counter productive for use as a guitar amp. The undersized power sections of a Fender Champ for example, are a big part of why people who dig them, dig them. I dropped a 15 watt OT and bigger caps in one of my Valve Jr's and it completely changed it from the vibey, sagging, singing amp, to a much tighter, much clearer and cleaner, much punchier and louder (more headroom) amp. For the Marshall type tones I was going after, it worked well, but many guys who want the Champ vibe wouldn't have liked it at all. Totally changed it. That doesn't mean the champ is designed incorrectly, or just to be made cheaply. It's that way on purpose. I have a question for you. Have you ever plugged any electric guitar into any pretty decent hi-fi stereo system? If you ever have, what did you think of the tone? That is not because a hi-fi is solid-state. It is because a guitar amp has limited bandwidth. A guitar amp will not do remotely close to 20 hertz to 20K hertz like a good stereo. The bandwidth of a guitar amp limited heavily by guitar speakers, which have limited frequency response. With a tube amp, the output transformers also acts like a bandpass filter in that it limits high and low frequencies. That is mostly because the output transformers that have been used in tube amps have undersized cores. If one plugs a Hiwatt into a full range speaker and then plugs a guitar into it, one will see what I mean. The Partridge output transformers used in old Hiwatts are as close to hi-fi quality as one is going to find in guitar amp. Another important take away is the pass band of the average tube guitar amp output transformer narrows as the amp is pushed into clipping. That is because most guitar amp transformers are so undersized that bandwidth automatically constricts as one attempts to push more power through them. If one halves the bandwidth, one doubles an output transformer's power rating. Here is another technological foible that guitarists have exploited to their advantage. The narrowing of the pass band filters out higher-order harmonics that are produced during clipping. The top and bottom of a clipped signal are composed of the sum of the fundamental note(s) and the harmonics. In essence, a guitar output transformer acts like a variable high and low pass filter (a.k.a. a variable passband filter) that filters out subharmonics and harsh higher-order harmonics. For example, the magic in the Cinemag transformer is that it has horrible upper frequency response. The transformer basically attenuates, if not outright throws away a lot of higher frequency content. That is why the PRS amps that employ it have a warm, fat overdriven tone. Marshall really cheaped out on the transformers that were installed in the amps that were built when they built Duane Allman's 50W bass head (the amp that was used to blueprint the Cinemag output transformer). I guess that they figured that the amp would be used for bass, so the transformer did not need good upper frequency response. Finally, people should not conflate digital modeling with analog solid-state. They are completely different animals. Digital models are discrete approximations of continuous systems. Analog solid-state, by its nature, is continuous. What this difference means in layman's terms is that digital models have finite resolution. If one were to plot the the output of a digital model, it would look like a staircase that goes up and down. Noise is superimposed on the stair-stepped signal to smooth it out and make it appear continuous. With analog solid-state, the output is continuous with an infinite number of subdivisions. No smoothing noise is necessary. Another problem with modeling is that it is very difficult to model the entire system accurately because we are dealing nonlinear systems. Tube amps are operating the tubes in their nonlinear regions when they are driven. It is very difficult to model nonlinear systems. In simple terms, the change in output of a nonlinear system is not proportional to the input. I can assure everyone that none of the early guitar amp designers were expecting guitarists to operate their amps with tube operating in their nonlinear regions. They were just try to make a device that would take a small signal and make it proportionally bigger. Leo and company did everything in their power to prevent their tube amps from being pushed into the nonlinear region. They were limited by cost. The scoop in the blackface tonestack is a prime example of an attempt to prevent the amp from going nonlinear. That stack scoops out a lot of a guitar's voice. I do not know how many people here have backgrounds in computer science, but my undergraduate in concentration in computer science is known as computer engineering today (a.k.a. it was focused on digital logic design, computer organization, computer architecture, microcode, firmware, and operating system design as well as data communications in addition to standard engineering and math courses). What most people do not realize is that computers cannot represent every number due in large part to limited precision, but more importantly because some decimal numbers cannot be represented as binary fractions. For example, the decimal fraction 1/10th cannot be represented as binary fraction because it cannot be represented in a fixed number of bits. A lot of magic occurs in floating point units and floating point math packages to handle these kinds of numerical errors, which means that a discrete approximation like a digital model will never reproduce a continuous system with 100% accuracy. Deleted member 5962 That is not because a hi-fi is solid-state. Didn't mean that it was. Tried this through Conrad-Johnson system as well. It still the same thing. The passband on a hi-fi system, tube or solid-state, displays a guitar's true voice with no filtering. You may want to try this experiment the next time you build a guitar. Instead of using a guitar output transformer, use a hi-fi output transformer that is rated for hi-fi usage at the guitar amp rated wattage. The transformer should be at least double the size if not greater than double the size of the guitar output transformer. It will also be made with a lot better steel. You can also try the experiment with one of you existing builds if you do not mind drilling new holes. A lot of guys during the early days of the roll your own tube guitar amp amp were using hi-fi-rated Hammond output transformers and blaming the harsh sound on the fact the transformer had ultra-linear taps. The reality is that the output transformers used in tube amps in their day where made to a price point; therefore, they are pretty lousy in terms of fidelity, but that is what makes them good guitar amp transformers. It is also why they often blew up when the amp was dimed. Paramedics > Fire Fighters Dec 21, 2012 Z = SQRT(R^2 + (Xl – Xc)^2), where SQRT is the square-root formula, R = DC resistance, Xl = inductive reactance, and Xc = capacitive reactance. Xl = 2 * 3.14 * f * L, where f = frequency, L = inductance henries Xc = 1 / (2 * 3.14 * f * C), where f = frequency, C = capacitance in fafarads Here's my math: Me + tube*amp = we-go-way-back Deleted member 5962 It still the same thing. The passband on a hi-fi system, tube or solid-state, displays a guitar's true voice with no filtering. Yes, I know all this. That's what I was eventually going to get too in this thread. A little at a time, because as you know, there is A LOT you can discuss on this subject. Some of the things you've mentioned in the past about tube amps and a couple things that you mentioned as design "weaknesses" or however you choose to say it, are actually on purpose decisions made for what they want out of the amp. I know you know this of course, but for example, the other day you mentioned an amp that had way too small of a OT and power section to put out the amount of watts it was rated at, and it's an amp famous for saturating early and the "sag" that many like. I'm simply mentioning that it's not a design flaw, it's a design choice. But the analogy of playing a guitar through a high fi system, is referencing more some of the things you're mentioning now, plus a few you haven't gotten too yet. Limited bandwidth, power supplies that limit and change frequency response, power supplies that let the speaker affect the output frequencies, and all the distortions and limiting (compression) and other things that would be a big no-no on a hi fi amp are what makes the guitar amp world go round. So, to try not to write a book, all those things matter in tube guitar amps. And, many implement each phase of the amp based on the end goals. Last edited by a moderator: Aug 19, 2012 Z = SQRT(R^2 + (Xl – Xc)^2), where SQRT is the square-root formula, R = DC resistance, Xl = inductive reactance, and Xc = capacitive reactance. Xl = 2 * 3.14 * f * L, where f = frequency, L = inductance henries Xc = 1 / (2 * 3.14 * f * C), where f = frequency, C = capacitance in farads This was a really excellent post, thank you! And I'm gonna tip my nerd hand a little bit, because I was amused to see both the pythagorean theorem (or something that looks identical to it) and pi in here. My takeaway: the ancient Greeks knew (because they invented) all that math. Though they could never have guessed it would be applied this way. Feb 5, 2022 This was a really excellent post, thank you! Yes! Thank you! You had me at “ the effects of radiation on the polarized goldfish” You sir, are a genius. Yes, I know all this. That's what I was eventually going to get too in this thread. A little at a time, because as you know, there is A LOT you can discuss on this subject. Some of the things you've mentioned in the past about tube amps and a couple things that you mentioned as design "weaknesses" or however you choose to say it, are actually on purpose decisions made for what they want out of the amp. I know you know this of course, but for example, the other day you mentioned an amp that had way too small of a OT and power section to put out the amount of watts it was rated at, and it's an amp famous for saturating early and the "sag" that many like. I'm simply mentioning that it's not a design flaw, it's a design choice. That is where you are wrong. You are looking at tube amps from a twenty-first century lens. When the originals were manufactured, the designers were trying to manufacture amps that people could afford, so they cut corners. They used parts that they could acquire cheaply that were good enough. The highest note on a 21-fret guitar has a frequency of 1109 hertz with the highest note on a 22-fret guitar having a frequency of 1175 hertz (no one was thinking about harmonics). Why pay more for a transformer that has more bandwidth? Quality transformers were expensive and so were components. That is a large part of why the older amps have so little power supply capacitance. Sure, tube rectifiers are limited in how much capacitance they can handle in a capacitor input configuration, but many of the old Fender designs do not come close to what the rectifiers employed can handle. That was done because the amps were built to price points just like solid-state amps are today. Leo Fender was designing amps for Western Swing guitar, which is very clean and very bright. That goal was evident from the circuit changes that were made during the progression from tweed to brownface/blonde to blackface amps (clean was taken to an extreme with the silverface ultra-linear circuits). He never imagined that people would want to dime his amps. That is why there is so little clean headroom on amps like the Tweed Champ and Deluxe. Back then, distortion was undesired for the music styles of the day and so was sag. Once again, you are looking at these circuits from a twenty-first century lens, which gives a distorted view of why decisions were made. Luckily, I had my father to turn to when I started to get seriously involved in tube amp design. He was the ultimate tube amp myth buster. He was a twenty-something electronics professional in the 50s. He would laugh so hard that I thought he was going to pass out when I told about some of the claims that people were making such as the superiority of carbon composition resistors. The only thing that carbon composition resistors have over other resistor types is that they are non-inductive; however, that is only a problem at RF frequencies. In every other way, they are inferior to carbon film resistors, which are inferior to metal film resistors, but the myth sells well to guitarists. Leo was not alone in his frugalness. Radio manufacturers of the time did the same kind of thing with the All American Five superheterodyne radio receiver circuit. Those things are death traps because they do not have a power transformer. The 35Z5 (octal)/35W4 (7-pin) rectifier is connected directly to the mains with the filaments on all of the tubes daisy chained off of the mains. I would never use one of these radios without plugging it into an isolation transformer, which is what one had to do to work on them. Let's look at the frequency response for the Celestion G12M (a.k.a. Greenback). Now, compare and contrast that with the frequency response of the Celestion F12-X200, which is a full-range speaker. It is easy to see that the speaker employed in a guitar circuit has pretty lousy response after 5K hertz. When guitar amps were originally created, 5K hertz was more than enough bandwidth for guitar because the highest note on 22-fret guitar is 1175 hertz. What guitar speakers really are are AM radio range speakers. AM radio limits the upper modulating frequency to 5K hertz (I hold an Amateur Extra Class FCC license). Tube guitar amps were little more than adaptations of amplifiers first found in radio receivers (the same thing can be said for early public address systems). Fender circuits were adapted from circuits found the RCA Receiving Tube Manual (Fender was a radio repair shop before it was musical instrument company). No attempt was made to make them special. Leo and other designers were looking for a way to make a small signal much bigger in a linear fashion. He sure as heck was not thinking about distortion or sag being good things. That is a modern embellishment of the flaws in the technology. Back then, these artifacts were unwanted. Last edited: Deleted member 5962 That is where you are wrong. You are looking at tube amps from a twenty-first century lens. Sorry, I’m not “wrong” but you’re to applying what I said to one set of amps made 70 years ago, and I’m making the statement about amp design in general… so yes amps made then AND since that time. I even referenced my own mods and taking a “champ type” amp and adding a bigger OT and caps and losing that “champ vibe” but gaining punch and power. Just don’t get lost in semantics. I didn’t say “Leo designed them on purpose,” I said that choices were made based on what we knew about those particular parts values, and that is true of pretty much anyone building an amp today. I’m a hack, and I can take two champs and with a few parts changes completely change the whole sound and vibe of the amp. So if I was building and selling them, even I know enough to make a 3-4 completely different sounding and feeling versions of the same basic circuit. I’ve done it. That’s what I was referring too. Carry on. Interesting thread. Last edited by a moderator: While amp designers are taking advantage of poor engineering practices today, it is still poor engineering. Today, we have the ability to design amps that Leo would have loved to be able to design. In fact, Leo was making the transition with Music Man amps. Music Man amps are pretty much solid-state amps except for the output tubes, which are configured to operate in class B to achieve maximum clean headroom. Peavey took pretty much the same approach with their Classic, Deuce, and Mace amps in the seventies. The reality is that Leo's original target audience has mostly moved away from tube-type equipment. The only area where tube amps still rule is white boy blues and blues-rock, and even there, people are starting to see the light. A tube amp today is as much a status symbol as it is a piece of music gear. Everything that can be done with tubes can be done more reliably with analog solid-state. It just takes more engineering skill and a market that is open to spending real money on a solid-state amp, good luck with overcoming that bias. Established 1960, Still Not Dead Dec 10, 2019 What, exactly, are digital modelers modeling? Apr 26, 2012 If nobody saw it, it didn't happen. Aug 29, 2017 What, exactly, are digital modelers modeling? I'm no Em7 but, I think they are modeling models of other models. I'm from Alabama so that's all I've got. If nobody saw it, it didn't happen. Aug 29, 2017
{"url":"https://forums.prsguitars.com/threads/dispelling-tube-amp-myths.51284/","timestamp":"2024-11-04T17:21:44Z","content_type":"text/html","content_length":"208501","record_id":"<urn:uuid:33ed9520-c689-4b1e-9ad7-73e5ca4f9afb>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00174.warc.gz"}
Test 2Quizwiz - Ace Your Homework & Exams, Now With ChatGPT AI Test 2 Ace your homework & exams now with inferential statistics use sample data to draw general conclusions about populations Samples of size n = 4 are selected from a population with μ = 80 with σ = 8. What is the expectedvalue for the distribution of sample means? a. 8 b. 80 c. 40 d. 20 The standard deviation of the distribution of sample means is called _____. a. the Expected Value of M b. the Standard Error of M c. the sample mean d. the central limit mean A random sample of n = 36 scores is selected from a normal population. Which of the following distributions definitely will be normal? a. The scores in the sample will form a normal distribution. b. The scores in the population will form a normal distribution c. The distribution of sample means will form a normal distribution. d. Neither the sample, the population, nor the distribution of sample means will definitely be normal. A sample of n = 25 scores is selected from a population with μ = 100 with σ = 20. On average, how much error would be expected between the sample mean and the population mean? a. 25 points b. 20 points c. 4 points d. 0.8 points The distribution of sample means (for a specific sample size) consists of _____. a. all the scores contained in the sample b. all the scores contained in the population c. the sample means for all the possible samples (for the specific sample size) d. the mean computed for the specific sample of scores A random sample of n = 4 scores is selected from a population. Which of the following distributions definitely will be normal? a. The scores in the sample will form a normal distribution. b. The scores in the population will form a normal distribution c. The distribution of sample means will form a normal distribution. d. Neither the sample, the population, nor the distribution of sample means will definitely be normal. The Standard Error of M provides a measure of _____. a. the maximum possible discrepancy between M and µ b. the minimum possible discrepancy between M and µ c. the exact amount of discrepancy between each specific M and µ d. None of the other 3 choices is correct. a. is always normal b. is normal only if the population distribution is normal c. is normal only if the sample size is greater than 30 d. None of the other 3 choices is correct. Both b and c have to be true for the distribution of sample means to be approximately normal primary use of the distribution of sample means is to find the probability associated with any specific sample magnitude of standard error law of large numbers and population variance sample means mean= M std. dev.= osymbol subscript m mean= m std. dev= s mean= u symbol std. dev= o symbol variability of scores measured by the standard deviation sampling error natural discrepancy, or the amount of error, between a sample statistic and its corresponding population parameter distribution of sample means o subscript m= o/squareroot of n standard error the variability of sample error as the sample size increases... there is less error between the sample and the mean sampling error there will be discrepancy between a sample mean and the true population mean as sample size increases, the value of the standard error decreases. Central limit theorem u symbol subscript m= mean of the sample means is always equal to u. M is an unbiased statistic central limit theorem: shape of the distribution of normal if... 1. the population is normal or 2. sample size is 30 or more The mean of the distribution of sample means is called _____. a. the Expected Value of M b. the Standard Error of M c. the sample mean d. the central limit mean The symbol that corresponds to the Standard Error of M is _____. a. σM b. µ c. σ When a random sample is selected from a population, the sample mean is not expected to be exactly equal to the population mean. On average, the size of the difference between a sample mean and the population mean is predicted by _____. a. the Standard Error b. the expected value c. the mean of the population d. the standard deviation of the population distribution of sample means a collection of sample means for all the possible random samples of a particular size (n) that can be obtained from a population. a difference that is due to chance numerical value of z-score indicates... distance between M and u measured in terms of Standard Error. the distribution of sample means is always normal shaped. a distribution of statistics is obtained by selecting all the possible samples of a specific size (n) from a population population variance the greater the variance in the population, the less probable it is that the sample mean will be close to the population mean. law of large numbers the larger the sample size, the more probable it is that the sample mean will be close to the population mean. z-score represents the location of a score in a sample or in a population distribution of sample means and z-scores z=M-u/o subscript m Related study sets
{"url":"https://quizwizapp.com/study/test-2-LydJs","timestamp":"2024-11-09T05:43:18Z","content_type":"text/html","content_length":"81581","record_id":"<urn:uuid:79c36591-36e3-4dde-90bb-371f063809af>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00434.warc.gz"}
Guiding, Diffraction, and Confinement of Optical Radiation • 1st Edition - November 12, 2012 • Author: Salvatore Solimeno • Paperback ISBN: 9 7 8 - 0 - 1 2 - 4 1 2 5 0 4 - 9 • eBook ISBN: 9 7 8 - 0 - 3 2 3 - 1 4 4 1 9 - 3 Guiding, Diffraction, and Confinement of Optical Radiation presents a wide array of research studies on optics and electromagnetism. This book is organized into eight chapters that… Read more Save 50% on book bundles Immediately download your ebook while waiting for your print delivery. No promo code needed. Guiding, Diffraction, and Confinement of Optical Radiation presents a wide array of research studies on optics and electromagnetism. This book is organized into eight chapters that cover the problems related to optical radiation propagation and confinement. Chapter I examines the general features of electromagnetic propagation and introduces the basic concepts pertaining to the description of the electromagnetic field and its interaction with matter. Chapter II is devoted to asymptotic methods of solution of the wave equation, with particular emphasis on the asymptotic representation of the field in the form of the Luneburg-Kline series. This chapter also looks into a number of optical systems characterized by different refractive index distributions relying on the eikonal equation. Chapter III deals with stratified media, such as the multilayered thin films, metallic and dielectric reflectors, and interference filters. Chapters IV and V discuss the problem of propagation and diffraction integrals. Chapter VI describes the scattering from obstacles and the metallic and dielectric gratings. Chapters VII considers the passive and active resonators employed in connection with laser sources for producing a confinement near the axis of an optical cavity and Fabry-Perot interferometers and mainly relies on the use of diffraction theory. Chapter VIII presents the analytic approach to the study of transverse confinement near the axis of a dielectric waveguide hinges on the introduction of modal solutions of the wave equation. This book will be of value to quantum electronics engineers, physicists, researchers, and optics and electromagnetism graduate students. Preface Chapter I General Features of Electromagnetic Propagation 1 Maxwell's Equations 2 Propagation in Time-Dispersive Media 3 State of Polarization of the Electromagnetic Field 4 Propagation in Anisotropie Media 5 Propagation in Spatially Dispersive Media 6 Energy Relations 7 Propagation in Moving Media 8 Coherence Properties of the Electromagnetic Field Problems References BibliographyChapter II Ray Optics 1 Approximate Representation of the Electromagnetic Field 2 Asymptotic Solution of the Scalar Wave Equation 3 The Eikonal Equation 4 The Ray Equation 5 Field-Transport Equation for Ao 6 Field-Transport Equations for the Higher-Order Terms Am 7 Evanescent Waves and Complex Eikonals 8 Ray Optics of Maxwell Vector Fields 9 Differential Properties of Wave Fronts 10 Caustics and Wave Fronts 11 Reflection and Refraction of a Wave Front at the Curved Interface of Two Media 12 Solution of the Eikonal Equation by the Method of Separation of Variables 13 Ray Paths Obtained by the Method of Separation of Variables 14 Scalar Ray Equations in Curvilinear Coordinates: the Principle of Fermat 15 Elements of Hamiltonian Optics Problems References Bibliography Chapter III Plane-Stratified Media 1 Introduction 2 Ray Optics for Stratified Media 3 Matched Asymptotic Expansion: Langer's Method 4 Reflection and Transmission for Arbitrarily Inhomogeneous Media 5 Exact Solution for the Linearly Increasing Transition Profile 6 Stratified Media with Piecewise-Constant Refractive Index Profiles 7 Electric Network Formalism 8 Fresnel Formulas 9 Characteristic Matrix Formalism 10 Bloch Waves 11 Passbands and Stopbands of Quarter-Wave Stacks 12 Reflection Coefficient of a Multilayer 13 Metallic and Dielectric Reflectors 14 Antireflection (AR) Coatings 15 Interference Filters 16 Anisotropie Stratified Media 17 Propagation through Periodic Media 18 Analytical Properties of the Reflection Coefficient 19 Propagation of Surface and Leaky Waves through a Thin Film 20 Illumination at an Angle Exceeding the Critical One 21 Reflection and Refraction at a Dielectric-Lossy Medium Interface 22 Surface Waves at the Interface between Two Media 23 Impedance Boundary Conditions Problems References BibliographyChapter IV Fundamentals of Diffraction Theory 1 Introduction 2 Green's Function Formalism 3 Kirchhoff-Kottier Formulation of the Huygens Principle 4 Sommerfeld Radiation Condition 5 Rayleigh's Form of Diffraction Integrals for Plane Screens 6 Babinet's Principle 7 Diffraction Integrals for Two-Dimensional Fields 8 Plane-Wave Representation of the Field 9 Angular Spectrum Representation 10 Fresnel and Fraunhofer Diffraction Formulas 11 Field Expansion in Cylindrical Waves 12 Cylindrical Waves of Complex Order and Watson Transformation 13 Field Patterns in the Neighborhood of a Focus 14 Reduction of Diffraction Integrals to Line Integrals 15 Coherent and Incoherent Imagery Problems References Bibliography Chapter V Asymptotic Evaluation of Diffraction Integrals 1 Introduction 2 Stationary-Phase Method 3 Shadow Boundaries: Stationary Point near End Point 4 Caustics of Cylindrical Fields: Two Adjacent Stationary Points 5 Field in Proximity to a Two-Dimensional Cusp: A Model for the Impulse Response in the Presence of Defocusing and Third-Order Aberration 6 Steepest-Descent Method 7 Diffraction Effects at a Plane Interface between Two Dielectrics 8 Asymptotic Evaluation of the Diffraction Integrals in Cylindrical Coordinates 9 Asymptotic Series Derived from Comparison Integrals: Chester-Friedman-Ursell (CFU) Method 10 Asymptotic Evaluation of the Field Diffracted from an Aperture 11 Asymptotic Approximations to Plane-Wave Representation of the Field 12 Willis Formulas Problems References Bibliography Chapter VI Aperture Diffraction and Scattering from Metallic and Dielectric Obstacles 1 Introduction 2 Diffraction from a Wedge 3 Diffraction from a Slit 4 Diffraction from a Dielectric Cylinder 5 S-Matrix and Watson-Regge Representation 6 Surface Diffraction Waves 7 Generalized Fermat Principle and Geometric Theory of Diffraction 8 Scattering from a Dielectric Body 9 Physical Optics Approximation for a Perfect Conductor 10 Electromagnetic Theory of Diffraction from Perfectly Conducting and Dielectric Gratings 11 Scattering from Finite Bodies 12 Spherical Harmonics Representation of the Scattered Field 13 Scattering from Spherical Particles Problems References Bibliography Chapter VII Optical Resonators and Fabry-Perot Interferometers 1 Generalities on Electromagnetic Resonators 2 Generalities on Optical Resonators 3 Frequency Response of a Resonator 4 Ray Theory of a Closed Elliptic Resonator 5 Linear Resonators 6 Characterization of Resonators by Means of Lens Sequences and g-Parameters 7 Fields Associated with Sources Located at Complex Points 8 Hermite - Gauss and Laguerre - Gauss Beams 9 Ray-Transfer Matrix Formalism for a Lens Waveguide Equivalent to a Resonator 10 Modal Representation of the Field Inside a Stable Resonator Free of Diffraction Losses 11 Focus on Stable Resonators 12 Focus on Unstable Resonators 13 Wave Theory of Empty Resonators 14 Fox-Li Integral Equations 15 Overview of Mode Calculations 16 Stable Cavities with Rectangular Geometry 17 Rotationally Symmetric Cavities 18 Diffraction Theory of Unstable Resonators 19 Active Resonators 20 Frequency Control 21 Fabry-Perot Interferometers Problems References Bibliography Chapter VIII Propagation in Optical Fibers 1 Geometric Optics 2 Step-Index Fibers 3 Graded-Index Fibers 4 Mode Theory 5 Mode Theory for Step-Index Fibers 6 Weakly Guiding Step-Index Fibers 7 Parabolic-Index Fibers 8 Nonguided Modes 9 Single-Mode Fibers 10 The Electromagnetic Field Inside the Fiber 11 Attenuation 12 Modal Dispersion 13 Chromatic Dispersion 14 Modal Noise 15 Coupled-Mode Theory 16 Statistical Theory of Propagation in an Ensemble of Fibers 17 Polarization-Maintaining Optical Fibers 18 Nonlinear Effects in Optical Fibers 19 Self-Induced Nonlinear Effects Problems References Bibliography Appendix Index • Published: November 12, 2012 • Paperback ISBN: 9780124125049 • eBook ISBN: 9780323144193
{"url":"https://shop.elsevier.com/books/guiding-diffraction-and-confinement-of-optical-radiation/solimeno/978-0-12-654340-7","timestamp":"2024-11-12T00:36:52Z","content_type":"text/html","content_length":"188811","record_id":"<urn:uuid:8a8f9115-bdf9-4337-b22b-e41e7f95c170>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00557.warc.gz"}
Physical properties like shape, volume and size affect the dynamics of biological systems. Along these lines, we focus on the topological properties of biological fluids and their biochemical and physiological outcomes. We take as a paradigmatic example the salivary fluid and describe how its topological features may affect the physiopathology of the oral cavity. Topological approaches assess the general properties of saliva, ignoring small-scale physical details such as density, flow rate, stiffness, viscosity. Specifically, the mucin aggregates scattered in the salivary fluid can be tackled in terms of topological holes, i.e., vortical clusters that modify the direction, flow, impulse, local rate-of-change and velocity of saliva. While the current methodological approaches are inclined to remove the effects of impurities assessing systems as homogeneous structures, we argue that the occurrence of mucins breaks up the salivary fluid’s homogeneity, leading to unexpected biophysical modifications. We suggest that every collected salivary sample is not reliable for accurate clinical and experimental investigation, since it displays highly local as well as variable chemical, physical and biological features, not reflecting the current physiological state of the oral cavity. Therefore, the assessment of a single salivary sample is not fully reproducible and cannot provide information about the biophysical, enzymatic and microbiological content of the whole saliva. In sum, the very topological features of the saliva - such as volume, shape, antipodal cells, vortex area, whirling fluid mass, segmentation, discretization, triangulation, node numbering - produce unnoticed biological consequences and network connectivity features with intriguing operational implications.
{"url":"https://www.preprints.org/search?search1=Borsuk-Ulam%20theorem&field1=keywords","timestamp":"2024-11-08T12:06:07Z","content_type":"text/html","content_length":"149307","record_id":"<urn:uuid:a8bbaf5a-87cb-427d-bf83-9b491186d6c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00671.warc.gz"}
Who can guide me through advanced statistical techniques for spatial econometrics in stats projects? | Hire Some To Take My Statistics Exam Who can guide me through advanced statistical techniques for spatial econometrics in stats projects? I expect that would be a great guide; however, I didn’t finish applying it in the official documentation. 🙂 Thanks! This is part of the design and implementation of my 3D book being released today. It is still very much in the planning stages, very poorly built, but still very usable. It is now in prototype stage and is very easy to apply it to the specific feature ia of my project. After the preliminary step, it is in very ready supply I will show you on the project website in the next post. Let’s get off to the end to take a look at the design and the requirements for a statistical data analysis. It will be a lot to hard because of the huge amount required to capture the total number of rows in a dataset with 12 rows, so we will need to go into dimensionality – in dimensions 24 or 32. It’s really easy but not easy; the number of columns or bars is much higher due to the size of the database and the database is huge. There is a 5% chance that the dataset will have several thousand rows then we need to do some complex dimensionality analysis to get as much information to fit our requirements. The goal is to have: 5-10 cells, 2-4 cells, 3-4 cells and 3-6 cells. Once that’s done, we will have 15-20 cells. In order to do size scaling we have to go into k3-4 and in order to handle multiples, we have to take as many rows as we want. The issue I’ll address is how to do this in the design of each dataset based on the data we will collect in the upcoming research. I hope you will have a better understanding of how can we simplify the calculation and the number of rows so we can convert the dataset to a logical or matlab form without having to parse over any number of parameters to determineWho can guide me through advanced statistical techniques for spatial econometrics in stats projects? If there is such a thing as ArcMap, we (mostly sysadmins) probably know where they place the data. Even with ArcSpatial econometrics and R, perhaps you are not equipped with all the sophisticated tools needed to find what will make maps so interesting. As a colleague of mine put this code in, having picked the right way to do so. I would probably want to try moving my project to a newer version of R than RStudio and ArcDevTools (the ArcMap command-line tool seems to work with RStudio as well). I haven’t attempted anything like this. Not enough time has really been revealed to me by taking the idea of ArcSpatial and how one uses the software. You need more than a few hours to proofread my code. College Course Helper This will be so helpful but the following codes are not useful for some reason. 1. On a project I have created, and I need more information about each cluster and each member group, I define a simple vector of the latest cluster. 2. I have a class that looks just like this: I want to output this vector for each cluster, and I want to remove all references see this site the clusters, so I use some utility like this: import pytest as mtest import pandas as pd import kurbiajunk.data import pandas as pd from kurbiajunk.util import raster from kurbiajunk.data.collections import Matrix from kurbiajunk.data.data import data from kurbiajunk.data.headers import header import kurbiajunk.datasource import kurbiajunk.streams import kurbiajunk.utils.util from skypy.datasets as skf import shuffle from skypyWho can guide me through advanced statistical techniques for spatial econometrics in stats projects? I could apply general statistical analysis methods from the perspective of linear regression, but I can’t include the linear regressions for this nonlinear econometries for comparison. This is very Discover More to achieve for a large number of people with limited capacity of their daily life. This project is to investigate for the first time an extension of the well-known Linear Regression (LR) approach by James-Perrit et al. Pay Someone To Do My English Homework in their classic paper on Bayes regression in the three-dimensional setting. Their paper shows that a (linear) regression approach can give a much higher performance than the regular one when dealing with real-world problems. At low cost, this approach is only applied to problem with more than 50 regressors and may not be suitable for large scale real-world applications with limited number of people. The results at the end of the paper also confirm the general feasibility of this approach for non-linear regression. The number of regression and regression-induced econometries is very small and it may be difficult to generalize them for other forms of nonlinear regression within the framework of linear regression. Before proceeding I’d like to just comment upon the application of lincast as that is something I see among fellow modern mathematically inclined mathematicians, and also of many modern scientific physicists (other approaches come to mind though and would be relevant to my subject). I’ll address my own question as to its simplicity. The linear regression approach in these papers starts from the principle of linear regression as that of generating a vector by solving a series of linear equations and returning an appropriate value. The most prominent example of this approach is that for which the series of linear regressions is a vector, it takes as the input data a set of (simultaneous) regression lines of the law of the linear regression model. If the linear regression model is fixed a long series of regression lines is automatically generated by solving linear regression problems. The problem is not necessarily the same, but it takes
{"url":"https://hireforstatisticsexam.com/who-can-guide-me-through-advanced-statistical-techniques-for-spatial-econometrics-in-stats-projects","timestamp":"2024-11-13T03:08:59Z","content_type":"text/html","content_length":"168750","record_id":"<urn:uuid:7046ace9-56d5-4520-b6d2-140cf88d14c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00353.warc.gz"}
Distribution Overview Plot A reliability engineer studies the failure rates of engine windings of turbine assemblies to determine the times at which the windings fail. At high temperatures, the windings might decompose too The engineer records failure times for the engine windings at 80° C and 100° C. However, some of the units must be removed from the test before they fail. Therefore, the data are right censored. The engineer uses Distribution Overview Plot (Right Censoring) to fit a lognormal distribution to the data and to visually assess the survival and failure rates over time. 1. Open the sample data, EngineWindingReliability.MTW. 2. Choose . 3. In Variables, enter Temp80 Temp100. 4. Select Parametric analysis. From Distribution, select Lognormal. 5. Click Censor. Under Use censoring columns, enter Cens80 Cens100. 6. In Censoring value, type 0. 7. Click OK in each dialog box. Interpret the results The probability plot shows that the points for the failure times fall approximately on the straight line for both variables. Therefore, the lognormal distribution is a good fit for the data for both Use the hazard function plot to compare the failure rate for different variables. For example, at 100° C, the rate of failure is initially greater than at 80° C, reaching a peak of nearly 0.03 at approximately 40 hours. At 80° C, the rate of failure increases more slowly, reaching a peak of over 0.032 at approximately 50 hours. Use the survival function plot to compare the survival rate for different variables. For example, at times of less than approximately 150 hours, the percentage of windings that survive is considerably greater at 80° C than at 100° C. After 150 hours, the percentage of windings that survive becomes almost the same at the two temperatures.
{"url":"https://support.minitab.com/en-us/minitab/help-and-how-to/statistical-modeling/reliability/how-to/distribution-overview-plot-right-censoring/before-you-start/example/","timestamp":"2024-11-10T12:50:55Z","content_type":"text/html","content_length":"13400","record_id":"<urn:uuid:3fd49fa0-aa56-44a6-b140-40ec3f3109c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00643.warc.gz"}
Ordering Numbers: Mixed Place Value Ordering This section provides ordering numbers worksheets with sets of positive whole numbers with different numbers place values. Put the Numbers in: Greatest to Least Order More Complex Numbers in: Greatest to Least Order Even More Complex Numbers in: Greatest to Least Order Put the Numbers in: Least to Greatest Order More Complex Numbers in: Least to Greatest Order Even More Complex Numbers in: Least to Greatest Order Worksheets to Practice Ordering Sets of Numbers with Different Place Values Students struggle frequently when numbers become larger than what they can easily visualize. When learning basic arithmetic, we can use finger counting and progress to larger sets of manipulatives, but the symbolic nature of numerals is often a stumbling block for students especially as we deal with three digit, four digit or larger place value numbers. A student should have a well developed sense of place value by the time they reach 5th grade, and they should be capable of ordering not just sets of numbers with the same number of digits, but a collection of numbers with different significant digits. As an example, students should clearly understand that 9,999 is smaller than 10,111 even though all those 9's in the first number might make it seem to be a larger number. The mixed place value ordering worksheets in this section can help with these types of problems.
{"url":"https://www.dadsworksheets.com/worksheets/ordering-numbers-mixed-place-value-ordering.html","timestamp":"2024-11-08T20:29:54Z","content_type":"text/html","content_length":"108749","record_id":"<urn:uuid:3a2a5b2a-47fb-41d5-9689-35f1087fefbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00327.warc.gz"}
What Time Was It 15 Minutes Ago? - Inch Calculator What Time Was It 15 Minutes Ago? Are you trying to figure out what the time and date was fifteen minutes ago? The time 15 minutes ago was . This calculation is made using the current time, which is . You can validate this result using our minutes from now calculator. How to Calculate the Time 15 Minutes Ago You can figure out the time fifteen minutes ago by subtracting 15 from the minutes in the current time (). Subtract 15 minutes from the current time. If the minutes in the result is less than 0, then add 60 to the resulting minutes and subtract 1 from the hours. If the current time is after noon and the hours in the resulting time are less than 1, then add 12 to the resulting hours and add an AM to the time since the result is in the morning. If the current time is before noon and the hours in the resulting time are less than 1, then add 12 to the resulting hours and add a PM to the time since the result is in the afternoon or evening of the previous day. You can also use this method to find the time 15 minutes from now. How Much Time Is Fifteen Minutes Ago? Fifteen minutes ago is the same amount of time as: More Time Calculations
{"url":"https://www.inchcalculator.com/minutes-from/15-minutes-ago/","timestamp":"2024-11-07T14:06:39Z","content_type":"text/html","content_length":"52613","record_id":"<urn:uuid:2f10eaf4-71e4-48b2-b280-a571833ae14b>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00655.warc.gz"}
surface integral calculator We now have a parameterization of \(S_2\): \(\vecs r(\phi, \theta) = \langle 2 \, \cos \theta \, \sin \phi, \, 2 \, \sin \theta \, \sin \phi, \, 2 \, \cos \phi \rangle, \, 0 \leq \theta \leq 2\pi, \, 0 \leq \phi \leq \pi / 3.\), The tangent vectors are \(\vecs t_{\phi} = \langle 2 \, \cos \theta \, \cos \phi, \, 2 \, \sin \theta \,\cos \phi, \, -2 \, \sin \phi \rangle\) and \(\vecs t_{\theta} = \ langle - 2 \sin \theta \sin \phi, \, u\cos \theta \sin \phi, \, 0 \rangle\), and thus, \[\begin{align*} \vecs t_{\phi} \times \vecs t_{\theta} &= \begin{vmatrix} \mathbf{\hat i} & \mathbf{\hat j} & \ mathbf{\hat k} \nonumber \\ 2 \cos \theta \cos \phi & 2 \sin \theta \cos \phi & -2\sin \phi \\ -2\sin \theta\sin\phi & 2\cos \theta \sin\phi & 0 \end{vmatrix} \\[4 pt] To place this definition in a real-world setting, let \(S\) be an oriented surface with unit normal vector \(\vecs{N}\). To approximate the mass flux across \(S\), form the sum, \[\sum_{i=1}m \sum_{j=1}^n (\rho \vecs{v} \cdot \ vecs{N}) \Delta S_{ij}. To get an idea of the shape of the surface, we first plot some points. Surface integral of vector field calculator For a vector function over a surface, the surface integral is given by Phi = int_SFda (3) = int_S(Fn^^)da (4) = int_Sf_xdydz+f_ydzdx+f_zdxdy Solve Now. Therefore, \(\vecs t_x + \vecs t_y = \langle -1,-2,1 \rangle\) and \(||\vecs t_x \times \vecs t_y|| = \ sqrt{6}\). The Divergence Theorem can be also written in coordinate form as. These grid lines correspond to a set of grid curves on surface \(S\) that is parameterized by \(\vecs r(u,v)\). This surface is a disk in plane \(z = 1\) centered at \((0,0,1)\). So, we want to find the center of mass of the region below. So, for our example we will have. &= 32 \pi \int_0^{\pi/6} \cos^2\phi \sqrt{\ sin^4\phi + \cos^2\phi \, \sin^2 \phi} \, d\phi \\ In the case of the y-axis, it is c. Against the block titled to, the upper limit of the given function is entered. The little S S under the double integral sign represents the surface itself, and the term d\Sigma d represents a tiny bit of area piece of this surface. Our calculator allows you to check your solutions to calculus exercises. Step #3: Fill in the upper bound value. So I figure that in order to find the net mass outflow I compute the surface integral of the mass flow normal to each plane and add them all up. then, Weisstein, Eric W. "Surface Integral." How To Use a Surface Area Calculator in Calculus? In this video we come up formulas for surface integrals, which are when we accumulate the values of a scalar function over a surface. The tangent vectors are \(\vecs t_u = \langle 1,-1,1\rangle\) and \(\vecs t_v = \langle 0,2v,1\rangle\). Again, notice the similarities between this definition and the definition of a scalar line integral. Thus, a surface integral is similar to a line integral but in one higher dimension. Thank you! Then, \(\vecs t_x = \langle 1,0,f_x \rangle\) and \(\vecs t_y = \langle 0,1,f_y \ rangle \), and therefore the cross product \(\vecs t_x \times \vecs t_y\) (which is normal to the surface at any point on the surface) is \(\langle -f_x, \, -f_y, \, 1 \rangle \)Since the \(z\) -component of this vector is one, the corresponding unit normal vector points upward, and the upward side of the surface is chosen to be the positive side. The domain of integration of a scalar line integral is a parameterized curve (a one-dimensional object); the domain of integration of a scalar surface integral is a parameterized surface (a two-dimensional object). Here it is. To visualize \ (S\), we visualize two families of curves that lie on \(S\). By Equation, the heat flow across \(S_1\) is, \[ \begin{align*}\iint_{S_2} -k \vecs \nabla T \cdot dS &= - 55 \int_0^{2\pi} \int_0^1 \vecs \nabla T(u,v) \cdot\, (\vecs t_u \times \vecs t_v) \, dv\, du \\[4pt] A surface integral of a vector field is defined in a similar way to a flux line integral across a curve, except the domain of integration is a surface (a two-dimensional object) rather than a curve (a one-dimensional object). \nonumber \]. Surface Integral with Monte Carlo. We can now get the value of the integral that we are after. Stokes' theorem is the 3D version of Green's theorem. We can extend the concept of a line integral to a surface integral to allow us to perform this integration. \end{align*}\], To calculate this integral, we need a parameterization of \(S_2\). Learning Objectives. If it can be shown that the difference simplifies to zero, the task is solved. Let the lower limit in the case of revolution around the x-axis be a. ), If you understand double integrals, and you understand how to compute the surface area of a parametric surface, you basically already understand surface integrals. At the center point of the long dimension, it appears that the area below the line is about twice that above. Therefore, we have the following characterization of the flow rate of a fluid with velocity \(\vecs v\) across a surface \(S\): \[\text{Flow rate of fluid across S} = \iint_S \vecs v \cdot dS. Not what you mean? Introduction. is given explicitly by, If the surface is surface parameterized using This helps me sooo much, im in seventh grade and this helps A LOT, i was able to pass and ixl in 3 minutes, and it was a word problems one. By double integration, we can find the area of the rectangular region. To be precise, the heat flow is defined as vector field \(F = - k \nabla T\), where the constant k is the thermal conductivity of the substance from which the object is made (this constant is determined experimentally). If we choose the unit normal vector that points above the surface at each point, then the unit normal vectors vary continuously over the surface. This can also be written compactly in vector form as (2) If the region is on the left when traveling around , then area of can be computed using the elegant formula (3) Hold \(u\) constant and see what kind of curves result. It helps you practice by showing you the full working (step by step integration). In doing this, the Integral Calculator has to respect the order of operations. &= -55 \ int_0^{2\pi} du \\[4pt] MathJax takes care of displaying it in the browser. It transforms it into a form that is better understandable by a computer, namely a tree (see figure below). This is easy enough to do. In order to do this integral well need to note that just like the standard double integral, if the surface is split up into pieces we can also split up the surface integral. Surface integrals are a generalization of line integrals. To define a surface integral of a scalar-valued function, we let the areas of the pieces of \(S\) shrink to zero by taking a limit. What about surface integrals over a vector field? For example, consider curve parameterization \(\vecs r(t) = \langle 1,2\rangle, \, 0 \leq t \leq 5\). Suppose that \(u\) is a constant \(K\). \nonumber \]. First, lets look at the surface integral of a scalar-valued function. 192. y = x 3 y = x 3 from x = 0 x = 0 to x = 1 x = 1. Loading please wait!This will take a few seconds. Since the parameter domain is all of \(\mathbb{R}^2\), we can choose any value for u and v and plot the corresponding point. Computing surface integrals can often be tedious, especially when the formula for the outward unit normal vector at each point of \(\) changes. Let S be a smooth surface. Were going to need to do three integrals here. However, if we wish to integrate over a surface (a two-dimensional object) rather than a path (a one-dimensional object) in space, then we need a new kind of integral that can handle integration over objects in higher dimensions. Find step by step results, graphs & plot using multiple integrals, Step 1: Enter the function and the limits in the input field Step 2: Now click the button Calculate to get the value Step 3: Finally, the, For a scalar function f over a surface parameterized by u and v, the surface integral is given by Phi = int_Sfda (1) = int_Sf(u,v)|T_uxT_v|dudv. I unders, Posted 2 years ago. Parametric Equations and Polar Coordinates, 9.5 Surface Area with Parametric Equations, 9.11 Arc Length and Surface Area Revisited, 10.7 Comparison Test/Limit Comparison Test, 12.8 Tangent, Normal and Binormal Vectors, 13.3 Interpretations of Partial Derivatives, 14.1 Tangent Planes and Linear Approximations, 14.2 Gradient Vector, Tangent Planes and Normal Lines, 15.3 Double Integrals over General Regions, 15.4 Double Integrals in Polar Coordinates, 15.6 Triple Integrals in Cylindrical Coordinates, 15.7 Triple Integrals in Spherical Coordinates, 16.5 Fundamental Theorem for Line Integrals, 3.8 Nonhomogeneous Differential Equations, 4.5 Solving IVP's with Laplace Transforms, 7.2 Linear Homogeneous Differential Equations, 8. Let the upper limit in the case of revolution around the x-axis be b. button to get the required surface area value. Use the standard parameterization of a cylinder and follow the previous example. The Surface Area calculator displays these values in the surface area formula and presents them in the form of a numerical value for the surface area bounded inside the rotation of the arc. To log in and use all the features of Khan Academy, please enable JavaScript in your browser. We arrived at the equation of the hypotenuse by setting \(x\) equal to zero in the equation of the plane and solving for \(z\). Lets start off with a sketch of the surface \(S\) since the notation can get a little confusing once we get into it. Here is that work. The following theorem provides an easier way in the case when \(\) is a closed surface, that is, when \(\) encloses a bounded solid in \(\mathbb{R}^ 3\). You might want to verify this for the practice of computing these cross products. To approximate the mass of fluid per unit time flowing across \(S_{ij}\) (and not just locally at point \(P\)), we need to multiply \((\rho \vecs v \cdot \vecs N) (P)\) by the area of \(S_{ij}\). If it is possible to choose a unit normal vector \(\vecs N\) at every point \((x,y,z)\) on \(S\) so that \(\vecs N \) varies continuously over \(S\), then \(S\) is orientable. Such a choice of unit normal vector at each point gives the orientation of a surface \(S\). integral is given by, where We parameterized up a cylinder in the previous section. The intuition for this is that the magnitude of the cross product of the vectors is the area of a parallelogram. There are two moments, denoted by M x M x and M y M y. If , \label{surfaceI} \]. The result is displayed in the form of the variables entered into the formula used to calculate the. Let \(\vecs v(x,y,z) = \langle 2x, \, 2y, \, z\rangle\) represent a velocity field (with units of meters per second) of a fluid with constant density 80 kg/m3. Calculate surface integral \[\iint_S \vecs F \cdot \vecs N \, dS, \nonumber \] where \(\vecs F = \langle 0, -z, y \rangle\) and \(S\) is the portion of the unit sphere in the first octant with outward orientation. The upper limit for the \(z\)s is the plane so we can just plug that in. Why do you add a function to the integral of surface integrals? The surface in Figure \(\PageIndex{8a}\) can be parameterized by, \[\vecs r(u,v) = \langle (2 + \cos v) \cos u, \, (2 + \cos v) \sin u, \, \sin v \ rangle, \, 0 \leq u < 2\pi, \, 0 \leq v < 2\pi \nonumber \], (we can use technology to verify). Similarly, points \(\vecs r(\pi, 2) = (-1,0,2)\) and \(\vecs r \left(\dfrac{\pi}{2}, 4\right) = (0,1,4) \) are on \(S\). Did this calculator prove helpful to you? This is in contrast to vector line integrals, which can be defined on any piecewise smooth curve. The result is displayed in the form of the variables entered into the formula used to calculate the Surface Area of a revolution. On the other hand, when we defined vector line integrals, the curve of integration needed an orientation. When the integrand matches a known form, it applies fixed rules to solve the integral (e.g. partial fraction decomposition for rational functions, trigonometric substitution for integrands involving the square roots of a quadratic polynomial or integration by parts for products of certain functions). Suppose that the temperature at point \((x,y,z)\) in an object is \(T(x,y,z)\). Brian Alexander Prince Height British Airways Economy Standard Seat Selection Articles S
{"url":"http://ruffeodrive.com/psVKO/surface-integral-calculator","timestamp":"2024-11-07T19:58:39Z","content_type":"text/html","content_length":"19173","record_id":"<urn:uuid:855a5e18-dac0-4fb3-b978-c1663eba146f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00779.warc.gz"}
Unscramble ABLAZE How Many Words are in ABLAZE Unscramble? By unscrambling letters ablaze, our Word Unscrambler aka Scrabble Word Finder easily found 30 playable words in virtually every word scramble game! Letter / Tile Values for ABLAZE Below are the values for each of the letters/tiles in Scrabble. The letters in ablaze combine for a total of 17 points (not including bonus squares) What do the Letters ablaze Unscrambled Mean? The unscrambled words with the most letters from ABLAZE word or letters are below along with the definitions. • ablaze (adv. & a.) - On fire; in a blaze, gleaming.
{"url":"https://www.scrabblewordfind.com/unscramble-ablaze","timestamp":"2024-11-12T15:37:12Z","content_type":"text/html","content_length":"43580","record_id":"<urn:uuid:de11a968-755c-46cc-a493-8d628c23a271>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00114.warc.gz"}
ECE 64100 Model-Based Image and Signal Processing | Zhankun Luo ECE 64100 Model-Based Image and Signal Processing Course Description: ECE 64100 Model-Based Image and Signal Processing Textbook: Model Based Imaging Notes: course notes Labs: instruction • maximum a posteriori (MAP) image restoration The lab explores the use of maximum a posteriori (MAP) estimation of images from noisy and blurred data using both Gaussian Markov random field (GGMRF) and non-Gaussian Markov random field (MRF) prior models (Generalized Gaussian MRF (GGMRF) and q-Generalized GMRF(QGGMRF)). We solved the high dimensional optimization problem of MAP estimation with iterative coordinate descent (ICD) techniques. • expectation-maximization (EM) algorithm, API docs for source code This lab explores the use of the expectation-maximization (EM) algorithm for the estimate of parameters. In particular, we will use the EM algorithm to estimate the parameters of Gaussian mixture distribution, and implement the method for automaticaly determining the number of clusters in a Gaussian mixture model by minimizing the minimum description length (MDL) estimator.
{"url":"https://zhankunluo.com/course/ece_image_proc_ii/","timestamp":"2024-11-02T21:43:10Z","content_type":"text/html","content_length":"16194","record_id":"<urn:uuid:7775745b-69d3-41bc-997f-0cf8b5125305>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00595.warc.gz"}
Symbols in Physics Across the mathematical sciences, the ability to proficiently use symbols to represent concepts and processes is important. The fluidity of either a single symbol being used to represent multiple concepts, or multiple symbols being used to represent the same concept requires a flexibility of thought from practitioners to contextually identify the correct meaning of a symbol each time it is encountered. In addition, font and style applied to similar notation may vary according to the medium of communication. Bardini & Pierce (2015) drew attention to the challenge for students learning mathematics as they transition from secondary school to studying university mathematics, where new symbols are introduced, and familiar symbols may be used with changed or extended meanings. This paper explores the difficulties experienced by physics students navigating this symbolically dense and potentially confusing language. We investigated whether first-year undergraduate students studying physics perceive any difficulties or differences related to the use of symbols during their transition to university. Do they experience difficulties with the diversity and/or duplication of symbols used within their physics studies or between mathematics and physics? How do these students manage these difficulties? First, we review key related literature. Mathematics derives much of its power from the use of symbols (Arcavi, 2005), but research over a long period of time has shown that their conciseness and abstraction can be a barrier to learning ( MacGregor & Stacey, 1997; Pierce et al., 2010). Symbols also have a significant role in physics, with increasing usage and importance at the tertiary level. In a study involving first-year university physics students (Torigoe & Gladding, 2007), it was found that students’ performance is highly correlated to their understanding of symbols. Our focus in this article is on students’ understanding and interpretation of symbolic expressions that are used in first-year undergraduate physics, including those that are unique to physics, as well as symbolic expressions that overlap with the mathematics that students have previously studied or are studying concurrently. We are not focusing on students’ skills in calculation or manipulation of symbolic expressions. Previous published research identifies four key issues in this regard: students’ difficulty dealing with the number and variety of symbols; the need to be cognizant of unwritten conventions; the benefit of recognizing meaning in common symbol templates; and the need to be aware of the epistemological difference in working with and interpreting symbols in mathematics and in physics. Firstly, as Bailey (1999) points out, physics uses many symbols. Roman and Greek letters are both used with lower- and upper-case letters being employed with different meanings. Not only must students be alert to this detail, but the same letter may be used with different meanings, for example as a pronumeral to indicate a quantity and as the abbreviation for a unit of measurement. A typical example is the use of m for the quantity of mass and m for meters. Here, a hint of the different meaning is given by using italics for the variable quantity. Bailey (1999) suggests that students may be helped by providing them with a list of symbols with their respective meanings and explaining to students that they will need to work at memorizing symbols as they are introduced during their course. Torigoe and Gladding (2006; 2007) report on their investigation of physics students’ numeric computation versus symbolic representation. In the initial phase of their study, in which they gave students parallel numeric and symbolic versions of examination questions, they found the mean score on numeric questions was 50% higher than the matching symbolic questions. Based on their first study, Torigoe and Gladding hypothesized that the discrepancy was due to students’ misunderstanding of variables rather than skills in calculations. Their extended study, using further parallel questions, yielded additional insights: working with numbers means each step may condense to a single value while symbolic expressions must be carried forward from step to step. Torigoe and Gladding (2007) then hypothesized that working with symbols leads to greater cognitive load especially in multiple-step problems. Giammarino (2000), a student writing on behalf of a group of students, gives voice to the anxiety they feel when faced with what they perceive to be different symbolic notations representing the same variables. He notes variety from teacher to teacher and textbook to textbook. Students worry about which symbol to use and which ones they will be faced with in examinations. While other authors stress the need for students to be flexible, these students make a plea for more consistency. Certain unwritten conventions need to be observed when constructing and writing symbolic equations. These conventions make it easier to recognize the patterns in expressions and hence aid interpretation. Bailey (1999), De Lozano and Cardenas (2002), and Moelter and Jackson (2012) each draw attention to these subtleties. As mentioned above, italics are typically used for variables and Roman letters for units. Constants, parameters, and variables are written in that order when they are multiplied together in an expression; where there are fractions, this order is followed separately in the numerator and in the denominator. A step further in considering the setting out of symbolic expressions was introduced by Sherin (2001) and taken up by Redish and Kuo (2015). Sherin highlights the importance of what he terms “symbolic forms” and their interpretation or conversely, in modelling, identifying that a physical phenomenon may be represented abstractly by a particular symbolic form. For example, if a “whole” is made up of parts, then symbolically it may be represented by an expression that will take the form □+□+□… or if one variable is proportional to another then an appropriate form will be prop- […/ (….x….)] (Sherin, 2001, p. 590). Choosing appropriate symbolic forms requires the student to consider not only variables and constants but also the mathematical processes and their meaning in a specific context. Students need to think, for example, about the real-world impact of adding rather than multiplying, adding rather than subtracting etc. They need to make an informed, thoughtful choice, not just hope to have memorized a formula correctly. De Lozano and Cardenas (2002) take up the theme of an epistemological lens with which symbols in mathematics and physics must be viewed. Mathematics, they contend is theoretical, built in steps from axioms, with a focus on being logically consistent. Physics on the other hand is always to be viewed and interpreted in the light of real-world constraints, which may or may not be made explicit. They suggest that while in mathematics “=” indicates a relationship that has properties of symmetry, reflexivity, and transitivity, in physics, it may be used to more loosely indicate “corresponds Students studying first-year physics, mathematics, or statistics were recruited from two Australian universities with which the researchers had existing collaborations. Participants we focused on in this report were lecturers of first-year physics (who were also the subject coordinators)and 187 first-year physics students recruited via advertisements posted on their subject’s online learning system noticeboards. Forty three (24%) of these students were enrolled in engineering studies with physics studies forming an element of the course. The remaining 134 (76%) were enrolled in a variety of undergraduate single and double-degree courses, with a decision regarding whether physics would become a final-year undergraduate major in their course still to be made. Of these students, 19 agreed to meet with us individually for a 30–40 minute semi-structured, audio-recorded interview, in addition to responding to an online survey. Survey responses were categorized by common response patterns and interview transcripts analyzed to identify repeated themes in students’ comments. Findings and discussion Do first-year undergraduate students studying physics perceive any difficulties or differences related to the use of symbols during their transition to university? Symbolic convention changes. Symbols that are used to represent a physics or mathematics concept at secondary school can, for a student new to university physics (illustrated by S13 below), appear to change unexpectedly and without explanation, with the new symbolic representation incorporating unfamiliar syntax such as subscripts and superscripts. S13: I just found it a really kind of very off-putting at the start…Like, s we used in Specialist [Year 12 mathematics]—it is now delta x. Instead of u and v for initial and final velocities, we have [ ]v i [v[i]] and v f [v[f]]. I mean like, the physics notation makes more sense and is more explicit, like it is better, but why are they different at all? Students mentioned a range of changes in notation (as opposed to new terminology) compared with prior studies after attending seven weeks of classes in first-year undergraduate physics. These have been summarized in Table 1. It is interesting to note that the difficulties which students highlighted regarding changed notation for unit vectors from î, ĵ, and k ̂to x̂, ŷ, and ẑ was observed in the reverse by Lecturer 1, where they use î, ĵ, and k ̂in first-year physics, but noticed students with prior experience using x̂, ŷ, and ẑ. Lecturer 2 noted that “you’ve got the i, j, k people and I guess one of the things…probably between academics, is there’s not going to be a lot of consistency, because they’re from a lot of different backgrounds.” This illustrates the difficulties both students and lecturers may have in written communication with each other. One contributing factor to students’ confusion, exemplified by S21 below, is that changes in notation from secondary studies were not always explicitly highlighted by lecturers or tutors, leaving students to decipher the equivalence of the symbols being used. Students interpreted this silence to mean that their instructors were unaware of the notational changes, and for example, commented: Student S21: I don’t think they realized, because I was just sitting in one of the workshops and the girl next to me had s and I was like, I’ve never seen that equation before I don’t recognize it as something that I’ve seen before, and then she goes ‘s means displacement.’ Oh, okay then. Within the subject of physics, a broad variety of what we term as “symbolic synonyms” (i.e., multiple symbols being used to represent a single concept) are used. International standards for notation in physics, based on the Système International (International System of Units) have been published since 1961 (see Cohen, 1987), however, they are not always used or adhered to even in commonly prescribed textbooks (for example, Walker et al., 2011). The physics lecturers we interviewed were aware of the inconsistency of notation in physics and the impact it can have on students’ understanding. They each aimed for consistency within their own teaching materials. For example: Lecturer 2: I’ve deliberately changed all the symbols over and I make sure when we’ve changed textbooks or we’ve changed references, I use the same symbols as in the textbook...[because] when they’re at the beginning, it becomes a barrier to entry to the topic in the first place, if you’re confusing them with a mixture of symbols. However, this was undertaken in isolation, with policy still to be developed to ensure consistency of notation within the physics departments (particularly given students can be taught by multiple physics lecturers throughout the course of each year) or with the Australian school curriculum. Do students experience difficulties with the diversity and/or duplication of symbols used within their physics studies or between mathematics and physics? Physics uses a very broad variety of symbols to represent constants, parameters, and variables in the equations, with the number and variety of symbols used increasing during undergraduate studies. This was noted multiple times through interview: S02: Like it’s...especially cos they’re almost like running out of symbols, so they’re just chucking Greek letters at us… S06: There’s a lot more variety of symbols and things this year. In high school, they were strict that this is this, and I think also with physics it’s just because there’s more equations to use, there’s more symbols and variables. Students (e.g., see S20 below) noted that some symbols are used to represent multiple concepts (which we describe as “symbolic homonyms”) both from newly introduced concepts at university and also for concepts met at school. S20: Oh well even just in physics, omega is about three different things. It’s angular velocity… [and] it’s angular frequency as well… So they’re the same for a circle, I think that’s why it might have the same symbol but it’s different for everything else, which is annoying. The large number of terms requiring a symbolic representation in physics may make this inevitable, but the varied meanings for a single symbol added a layer of deciphering, which students sometimes found difficult. Cohen (1987) indicates that in standard conventions the symbol that S20 highlighted, lower-case omega (ω) is used to represent each of a solid angle, angular frequency, and angular velocity, all within the area of mechanics, with five additional uses of lower-case omega listed using subscripts throughout various sections of physics. It is not difficult to see how students might be confused by this. Conversely, within undergraduate physics, students also encounter situations where multiple symbols were used to represent a single concept. S05: So, it turns out we have three different systems and it’s like really chaotic sometimes. Because I converted to the version that the textbook uses but my friends all use the ones in high school, and the lecturer uses a different one. Physics lecturers acknowledged the diversity and complexity of the symbolic aspect of the subject, highlighting even that “notation convention actually changes based on situation or context as well” (lecturer 2) with bespoke notation being used for emphasis at times. Students also encounter duplication or changes in conventions between physics and mathematics, for example: S22: Physics often uses a k and math often uses a c, although I have seen a bit you know, one borrowing the other” [in relation to general constants]. S13: Physics will use delta t in the constant acceleration formula, which is better than d [in mathematics], because that kind’ve indicates a change over a time interval. Other topics where students highlighted that symbolic synonyms are pervasive between subjects include vectors (i, j, and k versus x, y, and z, and tilde versus arrows or bolding), motion (s - displacement, u - initial velocity, v - final velocity, a - acceleration, t - time: mechanics notation colloquially known as suvat formulae versus other notation), forces (N versus R for normal reaction forces) and different conventions with SI units. Physics lecturers also commented on the differences in notation between their subject and mathematics, for example, Lecturer 1 stated that “I would say that the use of the same symbol for different things is the biggest one [symbol difficulty]. It’s the difference in how certain mathematical concepts are presented. For example, differentials are presented in a number of different ways.” Aligning physics and mathematics curricula Added to this is a complexity in scheduling the timing of physics topics to ensure required university mathematics knowledge has already been studied by their students as exemplified by the following S02: Ah Physics, yeah… One of the other things is like... vectors came into it [physics] very early and I’m only doing [introductory first-year calculus subject], and we’ve only just finished vectors. So like now it all makes sense, but back then it was... what’s going on? How do first-year physics students manage difficulties with symbols? When asked whether the inconsistencies and differences caused difficulties, students felt they could accommodate the changes, but the associated additional load was noted as making the tasks harder. S02: …But, just like... working through it has confused me a bit. I’m getting a bit more on top of it going over it all a few times, but yeah, it’s a lot harder to pick up the first time than it was for me in school. Some students were also anxious as to whether the notation they had learned in their previous studies would be understood by examiners. This was a driver for them to change nomenclature (such was the disconnect in notational use from secondary to tertiary physics studies). For example: S05: Actually, I was quite concerned about my use of symbols as well. I wasn’t sure if my—the teaching associate who marks my assignments will understand my... the symbols I used in high school. That’s why I converted to the new one, and I did struggle in the start. Some students commented, as we might expect, that they worked out the meaning of the symbol using the context of the situation. However, other students had more difficulty; they chose to perform calculations using the symbols familiar from school then “translated” to the symbols used in their university physics class. S13: Mainly [it] just slowed me down. … I’m debating... should I use his notation or my notation? … I find it weird when the lecturer’s lecturing in a particular notation, to use different notation when I’m writing it… I’m actually trying to write in my own notation. But then your brain will be trying to listen to him at the same time and then you end up just writing what he writes and it’s like half in one half in the other …. This error-prone method adds complexity and cognitive load for these students. Nevertheless, some students felt more confident with this approach rather than working with the new notation. Discussion and implications for teaching Our findings, and the earlier research literature discussed above, suggest that students may experience difficulties coping with the increased variety, duplication, and reliance on symbolic notation in undergraduate physics. They are trying to make sense of newly encountered symbols or contexts, in part by using prior conceptions that spring to mind when they see a familiar symbol and in part struggling to match new notation symbolizing a familiar context. The difficulties reported by students who encountered unexpected and at times, unexplained notational changes during their transition from secondary to undergraduate physics studies, were not unforeseen by the experienced physics lecturers. However, these senior staff were glad of research evidence to share with their colleagues. Given the variance in symbolic standards within the subject, a disconnect in symbol usage would seem almost inevitable for some students, as they transition to university physics studies. We would advocate for lecturers and tutors to maintain familiarity with the content and conventions in the secondary national curriculum, but perhaps a useful approach is for tutors to ask students to read symbolic statements aloud and then explain what the notation means. This will both determine knowledge and preconceptions. Differences or changes in notation need to be explicitly addressed. In addition to being alert to and explicitly highlighting the many symbolic synonyms and homonyms that students encounter after transition to university physics and mathematics studies, additional measures can be taken to scaffold a student’s competence in using physics’ symbolic language. For example: • Taking note of and specifically highlighting a symbol’s form, noting if it is italicized, capital or lowercase, which alphabet it is from, are there subscripts or superscripts and what they each mean. For example, the transition from s to x[i] in the topic of mechanics is symbolically more complicated as a result of the subscript, but x[i] conveys additional important information. • Explicitly teaching conventions such as ordering the symbols in equations with constants first, parameters next, and then variables as suggested by Moelter and Jackson (2012) may also be helpful. • Setting tasks to increase students’ familiarity with the varied symbols they encounter independent of context. For example, Greek letters are ubiquitous in this field, but many students have little or unrelated prior experience with them. Introducing students to the entire Greek alphabet including case, phonetics, and how to write them can help to make these symbols less of an obstacle for students, when they encounter them in formulae. • Tutors querying students to identify new, changed, or conflicting use of symbols within and between the subjects they study. Regardless of internationally published standards, the notational inconsistencies in the physics domain, though not new (Bailey, 1999), remain an obstacle impeding some students’ learning. This complexity cannot—and arguably should not—be avoided with students, if they are to navigate their pursuits in physics successfully. It is necessary for students to develop a flexible mindset regarding physics notation. However, lecturers and tutors need to be alert to likely areas of confusion. Physics and mathematics educators should be aware of the extensive interdisciplinary use of the same symbols, and the potential for confusion that the different uses of the symbols both within and between the subjects can cause for students, particularly as the format and information conveyed by each symbol increases in complexity as studies progress. Contemplating a novice’s perspective can influence pedagogical techniques, with a view to helping to improve a student’s experience and success in the subject. The authors wish to thank Dr. Caroline Bardini for her development of this research project, and to the physics lecturers who generously gave us their time and insights on the challenges faced when teaching undergraduate physics. The authors also wish to thank the students, lecturers, and tutors across multiple universities who gave their time, knowledge, and course materials for the benefit of this research. This research has been funded by the Australian Research Council: DP150103315.
{"url":"https://www.nsta.org/journal-college-science-teaching/journal-college-science-teaching-mayjune-2021/symbols-physics","timestamp":"2024-11-08T04:55:58Z","content_type":"text/html","content_length":"119783","record_id":"<urn:uuid:6c785d6d-e4ba-4290-9676-926b6518aeaa>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00803.warc.gz"}
240 BCE: Geodesy - The book of science Eratosthenes geodesy cartography Eratosthenes realized that the earth was a sphere. He knew that at noon on the summer solstice in Aswan a man looking down blocked the sun’s reflection on the water at the bottom of a deep well, whereas at the same time in Alexandria the shadow of a gnomon made an angle of one-fiftieth of a circle, which meant that the distance from Alexandria to Aswan was one-fiftieth of the earth’s circumference. Given the distance between Aswan and Alexandria, Eratosthenes calculated the circumference of the earth. Eratosthenes invented geography. On his map of the known world he named and located four hundred cities, he divided the world into five climate zones, and he placed parallels and meridians so that distances could be estimated easily. Judging the accuracy of Eratosthenes’ calculation is confused by uncertainty over his unit of distance. A stadion is six hundred feet, but a foot could be different lengths in different countries. The first international standard meter, one ten-millionth part of the quarter of a meridian, which is the distance from the north pole to the equator, quickly proved to be impractical. Extending the calculation of pi to more and more digits will always fall far short of infinity, and the number of irrational numbers is an even bigger infinity. Geodesy is the study of the size and shape of the earth. In addition to estimating the circumference of the earth, there are reports that Eratosthenes estimated the length of the year, distances to the moon and sun, and the diameter of the sun. See also in The book of science: Readings in wikipedia:
{"url":"https://sharpgiving.com/thebookofscience/items/bc0240.html?f=geodesy","timestamp":"2024-11-11T15:53:42Z","content_type":"text/html","content_length":"17058","record_id":"<urn:uuid:5d9357d4-cb7c-4a30-86ff-5330129e2357>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00865.warc.gz"}
571 ml to usgallon - How much is 571 milliliters in US gallons? [CONVERT] ✔ 571 milliliters in US gallons Conversion formula How to convert 571 milliliters to US gallons? We know (by definition) that: $1&InvisibleTimes;ml ≈ 0.000264172052358148&InvisibleTimes;usgallon$ We can set up a proportion to solve for the number of US gallons. $1 &InvisibleTimes; ml 571 &InvisibleTimes; ml ≈ 0.000264172052358148 &InvisibleTimes; usgallon x &InvisibleTimes; usgallon$ Now, we cross multiply to solve for our unknown $x$: $x &InvisibleTimes; usgallon ≈ 571 &InvisibleTimes; ml 1 &InvisibleTimes; ml * 0.000264172052358148 &InvisibleTimes; usgallon → x &InvisibleTimes; usgallon ≈ 0.1508422418965025 &InvisibleTimes; Conclusion: $571 &InvisibleTimes; ml ≈ 0.1508422418965025 &InvisibleTimes; usgallon$ Conversion in the opposite direction The inverse of the conversion factor is that 1 US gallon is equal to 6.62944270402802 times 571 milliliters. It can also be expressed as: 571 milliliters is equal to $1 6.62944270402802$ US gallons. An approximate numerical result would be: five hundred and seventy-one milliliters is about zero point one five US gallons, or alternatively, a US gallon is about six point six two times five hundred and seventy-one milliliters. [1] The precision is 15 significant digits (fourteen digits to the right of the decimal point). Results may contain small errors due to the use of floating point arithmetic.
{"url":"https://converter.ninja/volume/milliliters-to-us-gallons/571-ml-to-usgallon/","timestamp":"2024-11-08T23:40:08Z","content_type":"text/html","content_length":"20200","record_id":"<urn:uuid:9e7c4679-a483-43d6-8066-e6cc44e03818>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00817.warc.gz"}
Mathematics Colloquium: Dan Edidin, University of Missouri Representation theory and its connection to structural biology 4 – 5 p.m., Nov. 14, 2024 Math Building Room 501 Title: Representation theory and its connection to structural biology Abstract: The goal of this talk is to explain how basic representation theory of compact Lie groups can be very useful for giving insights into mathematical problems at the foundations of cryo-electron microscopy and X-ray crystallography - two important techniques in structural biology. Although they involve quite different experimental setups, mathematically they can be viewed as group orbit recovery problems where the experimental data determines invariant tensors (moments) of the unknown molecular structure. For computational reasons there is particular interest in determining prior conditions on the structure which ensure that the structure can be resolved from moments of low degree. We will discuss recent results in this direction. (Refreshments will be served in the Math Commons Room at 3:30pm)
{"url":"https://www.math.arizona.edu/events/mathematics-colloquium-dan-edidin-university-missouri","timestamp":"2024-11-07T10:51:38Z","content_type":"text/html","content_length":"40287","record_id":"<urn:uuid:542c1606-7804-4e28-964e-b8029c6ce239>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00306.warc.gz"}
Lesson start ideas Give a creative idea how to begin a lesson Which subject Mathematics What age group Year or Grade 3 What topic Angles Quantity 1 Any other preferences Welcome to our Angle Adventure! Today, we will learn about angles and discover how we can use them to make our everyday life more interesting. Are you ready for some fun? Understanding Angles First, let us define what an angle is. An angle is the space between two lines or rays that meet at a point. We can measure angles in degrees, which is determined by the size of the angle. Exploring Angles Now, let us explore some angles together! Look around the room and see how many angles you can find. Can you spot any right angles? Can you find any angles that are less than 90 degrees? How about angles that are greater than 90 degrees? Angle Hunt Let's take our angle exploration outside the classroom! Let's go on an Angle Hunt, and look for angles around the school. Let's see if we can find any acute, obtuse, and right angles. Angle Art Finally, let's get creative and make some Angle Art! Design a picture using angles, and see which types of angles you can use. Can you create a picture using only right angles? How about using only acute or obtuse angles? Are you excited to start the Angle Adventure? Let's go and explore the fascinating world of angles together!
{"url":"https://aidemia.co/view.php?id=1168","timestamp":"2024-11-05T21:43:38Z","content_type":"text/html","content_length":"7011","record_id":"<urn:uuid:55ec5409-f104-4ee6-8bbd-5a5628117097>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00053.warc.gz"}
utors in Johannesburg Pure Maths tutors in Johannesburg Personalized Tutoring Near You Pure Maths lessons for online or at home learning in Johannesburg Pure Maths tutors in Johannesburg near you Namalambo M Randburg, Johannesburg Having a degree in Actuarial Science and also a career in quantitative analytics, I have extensive mathematical knowledge. I have tutoring experience and I have the desirable soft skills; patience, understanding and structure to successfully tutor this subject. Teaches: English as a foreign Language, Statistics, Trigonometry, Mathematics Literacy, Linear Algebra, Math, Algebra, Calculus, Mathematics, Pure Maths, Vocabulary, Writing, Reading, Grammar, English skills, IELTS Available for Pure Maths lessons in Johannesburg Miriam M Bezuidenhout Valley Nhlanhla Lucky N Subjects related to Pure Maths in Johannesburg Find Pure Maths tutors near Johannesburg
{"url":"https://turtlejar.co.za/tutors/johannesburg-gt/pure-maths","timestamp":"2024-11-11T01:09:52Z","content_type":"text/html","content_length":"141142","record_id":"<urn:uuid:c8d7f853-02bc-4207-81f5-807e3dac5e6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00840.warc.gz"}
Crypto Jargon Comments Off on Crypto Jargon Crypto Jargon Hash Functions A hash function is a completely public algorithm (no key in that) which mashes bit together in a way which is truly infeasible to untangle: anybody can run the hash function on any data, but finding the data back from the hash output appears to be much beyond our wit. The hash output has a fixed size, typically 256 bits (with SHA-256) or 512 bits (with SHA-512). The SHA-* function which outputs 160 bits is called SHA-1, not SHA-160, because cryptographers left to their own devices can remain reasonable for only that long, and certainly not beyond the fifth pint. Signature Algorithms A signature algorithm uses a pair of keys, which are mathematically linked together, the private key and the public key (recomputing the private key from the public key is theoretically feasible but too hard to do in practice, even with Really Big Computers, which is why the public key can be made public while the private key remains private). Using the mathematical structure of the keys, the signature algorithm allows one: • to generate a signature on some input data, using the private key (the signature is a mathematical object which is reasonably compact, e.g. a few hundred bytes for a typical RSA signature); • to verify a signature on some input data, using the public key. Verification takes as parameters the signature, the input data, and the public key, and returns either “perfect, man !” or “dude, these don’t match”. For a secure signature algorithm, it is supposedly unfeasible to produce a signature value and input data such that the verification algorithm with a given public key says “good”, unless you know the corresponding private key, in which case it is easy and efficient. Note the fine print: without the private key, you cannot conjure some data and a signature value which work with the public key, even if you can choose the data and the signature as you wish. “Supposedly unfeasible” means that all the smart cryptographers in the world worked on it for several years and yet did not find a way to do it, even after the fifth pint. Most (actually, all) signature algorithms begin by processing the input data with a hash function, and then work on the hash value alone. This is because the signature algorithm needs mathematical objects in some given sets which are limited in size, so they need to work on values which are “not too big”, such as the output of a hash function. Due to the nature of the hash function, things work out just well (signing the hash output is as good as signing the hash input). Key Exchange and Asymmetric Encryption A key exchange protocol is a protocol in which both parties throw mathematical objects at each other, each object being possibly linked with some secret values that they keep for them, in a way much similar to public/private key pairs. At the end of the key exchange, both parties can compute a common “value” (yet another mathematical object) which totally evades the grasp of whoever observed the bits which were exchanged on the wire. One common type of key exchange algorithm is asymmetric encryption. Asymmetric encryption uses a public/private key pair (not necessarily the same kind than for a signature algorithm): With the public key you can encrypt a piece of data. That data is usually constrained in size (e.g. no more than 117 bytes for RSA with a 1024-bit public key). Encryption result is, guess what, a mathematical object which can be encoded into a sequence of bytes. With the private key, you can decrypt, i.e. do the reverse operation and recover the initial input data. It is assumed that without the private key, tough luck. Then the key exchange protocol runs thus: one party chooses a random value (a sequence of random bytes), encrypts that with the peer’s public key, and sends him that. The peer uses his private key to decrypt, and recovers the random value, which is the shared secret. A historical explanation of signatures is: “encryption with the private key, decryption with the public key”. Forget that explanation. It is wrong. It may be true only for a specific algorithm (RSA), and, then again, only for a bastardized-down version of RSA which actually fails to have any decent security. So no, digital signatures are not asymmetric encryption “in reverse”. Symmetric Cryptography Once two parties have a shared secret value, they can use symmetric cryptography to exchange further data in a confidential way. It is called symmetric because both parties have the same key, i.e. the same knowledge, i.e. the same power. No more private/public dichotomy. Two primitives are used: Symmetric encryption: how to mangle data and unmangle it later on. Message Authentication Codes: a “keyed checksum”: only people knowing the secret key can compute the MAC on some data (it is like a signature algorithm in which the private and the public key are identical — so the “public” key had better be not public !). HMAC is a kind of MAC which is built over hash functions in a smart way, because there are many non-smart ways to do it, and which fail due to subtle details on what a hash function provides and does NOT provide. A certificate is a container for a public key. With the tools explained above, one can begin to envision that the server will have a public key, which the client will use to make a key exchange with the server. But how does the client make sure that he is using the right server’s public key, and not that of a devious attacker, a villain who cunningly impersonates the server ? That’s where certificates come into play. A certificate is signed by someone who is specialized in verifying physical identities; it is called a Certificate Authority. The CA meets the server “in real life” (e.g. in a bar), verifies the server identity, gets the server public key from the server himself, and signs the whole lot (server identity and public key). This results in a nifty bundle which is called a certificate. The certificate represents the guarantee by the CA that the name and public key match each other (hopefully, the CA is not too gullible, so the guarantee is reliable — preferably, the CA does not sign certificates after its fifth pint). The client, upon seeing the certificate, can verify the signature on the certificate relatively to the CA public key, and thus gain confidence in that the server public key really belongs to the intended server. But, you would tell me, what have we gained ? We must still know a public key, namely the CA public key. How do we verify that one ? Well, we can use another CA. This just moves the issue around, but it can end up with the problem of knowing a priori a unique or a handful of public keys from über-CAs which are not signed by anybody else. Thoughtfully, Microsoft embedded about a hundred of such “root public keys” (also called “trust anchors”) deep within Internet Explorer itself. This is where trust originates (precisely, you forfeited the basis of your trust to the Redmond firm — now you understand how Bill Gates became the richest guy in the world ?). Now let’s put it all together, in the SSL protocol, which is now known as TLS (“SSL” was the protocol name when it was a property of Netscape Corporation). The client wishes to talk to the server. It sends a message (“ClientHello”) which contains a bunch of administrative data, such as the list of encryption algorithms that the client supports. The server responds (“ServerHello”) by telling which algorithms will be used; then the server sends his certificate (“Certificate”), possibly with a few CA certificates in case the client may need them (not root certificates, but intermediate, underling-CA certificates). The client verifies the server certificate and extracts the server public key from it. The client generates a random value (“pre-master secret”), encrypts it with the server public key, and sends that to the server (“ClientKeyExchange”). The server decrypts the message, obtains the pre-master, and derive from it secret keys for symmetric encryption and MAC. The client performs the same computation. Client sends a verification message (“Finished”) which is encrypted and MACed with the derived keys. The server verifies that the Finished message is proper, and sends its own “Finished” message in response. At that point, both client and server have all the symmetric keys they need, and know that the “handshake” has succeeded. Application data (e.g. an HTTP request) is then exchanged, using the symmetric encryption and MAC. There is no public key or certificate involved in the process beyond the handshake. Just symmetric encryption (e.g. 3DES, AES or RC4) and MAC (normally HMAC with SHA-1 or SHA-256). Thanks to Tom Leek for this.
{"url":"https://silentadmin.gsans.com/everything-else/crypto-jargon/","timestamp":"2024-11-11T01:59:09Z","content_type":"text/html","content_length":"66486","record_id":"<urn:uuid:942666dd-2713-4caf-a03e-bf4822843be8>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00054.warc.gz"}
Compound interest is a financial concept that describes the effect of earning interest on an already existing sum of money, which is then added to the original sum - Daily Buzz Worthy Compound interest is a financial concept that describes the effect of earning interest on an already existing sum of money, which is then added to the original sum What Is Compound Interest? – Forbes Advisor Australia An Introduction to Compound Interest The financial world can seem intimidating at times with a seemingly complex procession of terms and concepts. Compound interest, however, is an integral component of the monetary sphere that anyone dipping a toe into the pool of finance must understand. Whether you’re planning to invest in a high-yield savings account or considering taking out a loan, understanding compound interest and its implications could be pivotal in making the most astute financial choices. Compound interest refers to the concept of interest earning interest. It is the amount earned on both the initial money deposited or borrowed – known as the principal sum – and the interest that amount has already garnered. Consequently, compound interest is what makes an investment or a debt grow at a faster pace compared to simple interest, which only earns or costs money on the original Unlike regular interest, where your deposit or debt produces constant returns over time, compound interest results in exponential growth. Each compounding period — which can occur monthly, yearly, or at other intervals, chosen by you or your lender/investment provider— adds more to your initial deposit or debt. So, let’s dive in further. Consider an investment scenario where you have $5000 in a savings account with a 5 percent annual interest rate compounded yearly. In the first year, you earn 5 percent on $5000, which amounts to $250. In the second year, you earn 5 percent on $5250 (your original amount plus the interest earned), adding $262.5 to your account. And so the process continues, snowballing each passing year. • Your principal for the first year: $5000 • Interest earned in the first year: $250 (5% of $5000) • Total balance at the end of the first year: $5250 ($5000 + $250) • Your principal for the second year: $5250 (total from last year) • Interest earned in the second year: $262.5 (5% of $5250) • Total balance at the end of the second year: $5512.5 ($5250 + $262.5) The Mathematical Formula Behind Compound Interest The underlying engine of compound interest is a mathematical formula that governs its work. This formula, often referred to as the compound interest formula, provides a systematic method to calculate the future value of an investment or the total cost of a loan given specific variables. In its simplest form, the compound interest formula appears like this: A = P (1 + r/n)^(nt) – A represents the future value of the investment/loan, including interest. – P stands for the principal sum (initially invested/borrowed amount). – r is the annual interest rate in decimal form (so 5 percent becomes 0.05). – n indicates the number of times compounding occurs per year. – t signifies the time the money is to be invested or borrowed for in years. So, if we ventured back into the previous example but this time using the formula to determine your account balance after two years: • Initial principal amount (P): $5000 • Annual interest rate in decimal (r): 0.05 (5% / 100) • Number of compounding periods annually (n): 1 (compounded annually) • Total number of time in years money will be compounded (t): 2 • Apply these into our formula, it’ll result to (A): $5000 (1 + 0.05/1)^(1*2) = $5506.25 • So, utilizing the compound interest formula indicates that your balance, after two years at 5% compounded annually, would total $5506.25. Brief History of Compound Interest The concept of compound interest may sound like a modern invention, but in reality, it traces back thousands of years into history. The ancient civilisations understood the power of compound interest, although not exactly in the way we do today. Records from Babylon and Egypt hint at establishing a very basic form of compound interest, where a grain outstanding would breed more grain, leading to an ever-increasing debt obligation as time passed. Fast forward to the middle ages; Europeans used rudimentary compounding principles in their financial transactions, which led to the birth of the banking industry as we know it today. For instance, during the reign of Roman Empire, financiers or ‘money lenders’ often offered loans with yearly interest. A borrower taking out a loan of 1000 denarii at 10 percent per annum would owe 1100 denarii after one year. If the loan were to be unpaid, another 10 percent interest would be tacked onto the new total, making it 1210 denarii following the second year. • Initial debt (P): 1000 denarii • Annual interest rate (r): 10% • Debt at end of first year: 1100 denarii (1000 + 10% of 1000) • Interest added for the second year: 110 denarii (10% of 1100) • Debt at end of second year: 1210 denarii (1100 + 10% of 1100) • This shows us that the principle of compound interest was in existence and use even back in ancient times. Significance of Compound Interest for Investors Compound interest can be an investor’s best friend. It comes into play when investing in assets or financial instruments like stocks, bonds, mutual funds, and retirement accounts. Because it can significantly increase the value of investments over time, understanding how compound interest operates is foundational knowledge for any investor. Albert Einstein reputedly referred to compound interest as the “8th wonder of the world,” further stating, “He who understands it, earns it; he who doesn’t, pays it.” The importance of compound interest in investments is well-illustrated using the concept of ‘time value of money.’ Essentially, this principle asserts that a dollar today is worth more than a dollar tomorrow because the dollar today can earn interest. The farther out the investment horizon, the more potent compound interest becomes. Suppose you invested $10000 in a bond with a 6% annual yield, compounded annually for 30 years. Look how your initial investment grows: • Principal amount (P): $10000 • Annual interest rate (r): 6% • Number of years (t): 30 years • Total earning after 30 years: $10000 * (1 + 0.06/1)^(1*30) = $57,434.98 • The power of compound interest in this case means your $10000 investment would balloon to a whopping $57,434.98 over 30 years! • This illustrates the strong incentive for early investment wherein investors can put compound interest to work and watch their wealth grow exponentially. Impact of Compound Interest on Loans However, while compound interest can be fantastic for investments, the flip side is also true. When borrowing money, compound interest has the exact opposite effect on your finances. It makes your debt grow faster and, over time, could add a significant burden unless you keep up with payments. Whether it’s for a mortgage, car loan, student loan, or credit card debt, the impact of compound interest can make a considerable difference to the total amount you end up paying back. The longer your loan term and the higher your interest rate, the more interest you will pay over the life of the loan. Let’s consider an example of $10000 borrowed at an annual interest rate of 6% compounded annually for 10 years: • Principal debt amount (P): $10000 • Annual interest rate (r): 6% • Number of years (t): 10 years • Total debt balance after 10 years: $10000 * (1 + 0.06/1)^(1*10) = $17908.49 • This indicates that without any additional repayments over 10 years, a $10000 loan will expand to over $18000! • It’s no wonder wise financial advice always dictates paying off your debts as quickly as possible to limit the adverse effects of compound interest. Frequency of Compounding The frequency with which interest compounds crucially affects the total amount of compound interest earned or charged. This could range from annually, quarterly, monthly, daily, or even continuously. Generally speaking, the more frequently compounding occurs, the greater the overall amount of compound interest. For instance, if you were investing $10000 in a 5 year term deposit at a bank offering a fixed annual interest rate of 2%, but one bank offers yearly compounding while another offers daily compounding. Although the difference might not seem large, over time, compounded daily will result in slightly more interest. When comparing the two: • Principal investment amount (P): $10000 • Annual interest rate (r): 2% • Number of years (t): 5 years • Total value with yearly compounding: $10000 * (1 + 0.02/1)^(1*5 ) = $11040.81 • Total value with daily compounding: $10000 * (1 + 0.02/365)^(365*5 ) = $11051.27 • The difference may not be massive, but it illustrates how compounding frequency can affect your returns or loan repayments subtly. Strategies to Maximise Compound Interest As we’ve discussed above, compound interest can build wealth or unintentionally create debt; it’s all about how you use it. For investors and savers, a few strategies could help magnify the benefits of compound interest. Firstly, start saving and investing early. The earlier you invest, the longer period your money has to compound and grow. Secondly, consider reinvesting your returns. By plowing your gains back into your investment, you increase the base amount that compounds over time. Finally, choose accounts with higher interest rates and more frequent compounding periods. This increases the overall amount of compound interest you earn. Suppose you decide to start investing early in a retirement fund such as a superannuation account with a 7% annual return when you’re 21 instead of 31: • Initial investment at age 21: $5000 • Annual contributions: $2000 • Rate of return: 7% • Value of investment when retired at 65, if started at 21: $1,119,000 • Value of investment when retired at 65, if started at 31: $489,383 • This demonstrates the significant advantage in starting your investments early! Conclusion: Power of Compound Interest All in all, compound interest is an incredibly powerful tool and understanding it holds the key to smart financial decisions. For investors, it can help multiply your gains over time setting you onto a path of financial abundance when wielded correctly. However, borrowers need to be extremely cautious, as this same magic tool can merrily turn around causing debts to inflate quickly. Therefore, familiarise yourself with how compounding works, frequently review your finances to ensure your investments are maximizing your potential returns, and your debt repayments are restricting the expanding influence of compound interest. Past the jargons and equations, educating on the power of compound interest brings a little more order to your financial universe. In a scenario where you have a $5000 initial investment and a fixed annual interest rate of 5% for 20 years: • If compounded annually, total gains will amount to $13266.64 • If compounded semi-annually, total gains will amount to $13340.50 • If compounded quarterly, total gains will amount to $13401.17 • If compounded monthly, total gains will amount to $13449.56 • If compounded daily, total gains will amount to $13467.29 Annual Compound $13266.64 Semi-Annual Compound $13340.50 Quarterly Compound $13401.17 Monthly Compound $13449.56 Daily Compound $13467.29 It’s a perfect demonstration of the magic that is compound interest, reinforcing the mantra: The more often it compounds, the better! So go ahead, let this magic work in your favour and watch your money grow faster than you ever thought possible.
{"url":"https://dailybuzzworthy.com/compound-interest-is-a-financial-concept-that-describes-the-effect-of-earning-interest-on-an-already-existing-sum-of-money-which-is-then-added-to-the-original-sum/","timestamp":"2024-11-14T02:22:32Z","content_type":"text/html","content_length":"115563","record_id":"<urn:uuid:d01c41d0-bb9a-4355-bbc0-bb2d780ec424>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00587.warc.gz"}
CBSE Solutions For Class 3 Maths Chapter 14 The Surajkund Fair - CBSE School Notes CBSE Solutions For Class 3 Maths Chapter 14 The Surajkund Fair CBSE Notes For Class 3 Maths Mela Chapter 14 The Surajkund Fair Soni and Avi are going to see a fair with their grandparents. They are going to Surajkund in Faridabad district of Haryana. Let us join them and have fun. Let us Discuss • What do you see in the picture? • Spot things in the picture that look the same from the left and right sides. Read and Learn More CBSE Solutions For Class 3 Maths Make Malas Soni and Avi reach a stall where a man and a woman are making malas with beads. Let us Do 1. Colour the beads in the strings using two colours (Qs a) to show the malas that you have made. Question 2. On the previous page, tick the symmetrical males. 3. How many such malas can be made? Discuss. 1. Tick (✓) the malas that are symmetrical and cross (✗) the one(s) that are not symmetrical. 2. Now, use 6 beads of one colour and 2 beads of another colour to make symmetrical males. Vanakkamj Rangolis All Around ! Soni and Avi arrive at the stall of Tamil Nadu. Amma was maldng kolam in front of the hut. Follow the steps: Let us Think 1. Observe the rangolis given below. Are all rangolis symmetrical? 2. Trace these rangolis on a paper. Fold the tracing paper in such a way that one half of the rangoli lies exactly in the other half. 3. Draw lines in the given rangolis that divide them into two identical halves. 4. Look for other symmetrical things around you. Discuss. Let us Do Enjoy making rangolis 1. Draw and complete the symmetrical rangolis given below. 2. Draw some more rangolis in your notebook that are symmetrical. Make Masks! Tit for Tat Soni gets her Picture made by a painter. Let us Think 1. What is the trick the painter is playing? Find things for the painter to draw so that he can no longer play the trick. Draw three such things here. The Mirror Game Soni and Avi started playing this game. Let us play with them. Has Avi placed the counters in the right places? Check it by placing the mirror on the line drawn. Let us Explore 1. Pick the odd one out and give a reason. 2. Fill 4 boxes with red and 3 with blue in such a way that one side is the mirror image of the other. In how many ways can you fill it? Think, think! Question 3. Make Micy’s side the same as that of Catty’s side. You can rearrange only three balls on Micy’s side. 4. Which shape cutouts would fit in the given shape without overlapping and without gaps? Let us Do 1. Use rangometry shapes to fill the shapes with no gaps and overlaps. Making Tiles, Creating Paths Soni and Avi have started making their own tiles by joining different shapes. Let us Do 1. Use two or more rangometry shapes to create your tiles. Now trace the tiles to create different paths. 2. Try making paths. Giant Wheel Read the conversation between Soni and Avi and mark the place they are talking about. Let us Play Imagine yourself sitting with Soni and AvL You think of a place or a stall and challenge your friend to find out which stall you have in your mind. You can help them guess by answering yes or no. Search for Dada and Dadi Soni and Avi’s Dada and Dadi were missing. They hear their announcement. Let us Do 1. Help Soni and Avi read the map and find the following: • Which place does the • Circle the picture on the map that shows the play area. • Which place does the • How many exit routes are there in the fair? 2. Follow the path that Avi and Soni are following. • Walk on the blue lane. • Turn right on the green lane. • You will see a restaurant on your right. Don’t sit there. • Take a left towards the red lane. • Take the first left turn towards the golden lane. Stalls will be seen on the way. • Pass the stalls to find the Chaupal and meet Dada Dadi. 3. An uncle asks Dada Ji the way to the ATM. Tell him the way to the ATM from the chapel. Let us Do 1. There are two ways to go out of the Surajkund fair. One seems to be a maze and the other goes straight there. Follow the maze with Soni and Avi to exit the fair. Question 2. Share the way you went through the maze. Write the things you found on the way. Leave a Comment
{"url":"https://cbseschoolnotes.com/cbse-solutions-for-class-3-maths-chapter-14/","timestamp":"2024-11-06T11:27:23Z","content_type":"text/html","content_length":"157722","record_id":"<urn:uuid:26a7b238-5c4e-48e1-9507-43021a1e5517>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00844.warc.gz"}
Precise mathematical model for the ratchet tooth root bending stress Articles | Volume 12, issue 2 © Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License. Precise mathematical model for the ratchet tooth root bending stress A ratchet is an essential component of the ratchet pawl mechanism. But the traditional ratchet strength check method has certain limitations in the design process. In this paper, the stress analysis of the ratchet is discussed and a precision mathematical model for the ratchet tooth root bending stress is proposed for the first time. This model was established by the folded section and defined by the incision effect theory. To test the prediction ability of the proposed mathematical model, the maximum stress of three standard ratchets and one non-standard ratchet were analyzed by the FEA (finite element analysis) method. The non-standard ratchet was adapted in the ratchet experiment to analyze its maximum stress. The analysis results presented in this paper show that the proposed mathematical model has a good predictability, regardless of whether it is a standard or non-standard ratchet. It is recommended that this model can be used to predict the ratchet tooth root bending stress in the ratchet design process. Received: 20 Oct 2021 – Accepted: 14 Nov 2021 – Published: 20 Dec 2021 A ratchet is an essential component of the ratchet pawl mechanism, which has the advantages of a simple structure, convenient manufacture, and the time ratio of moving or stopping can be controlled by the selecting drive mechanism. It is widely used in machine tools, safety nets, for lifting permanent magnets and other structures that require a unidirectional intermittent motion or anti-reverse function. However, there are few references or design methods in the literature dealing specifically with the analysis of ratchet bending stress. Given its extended applications, the stress research of the ratchet has great significance from the perspective of theoretical and engineering applications. It should be noted that the traditional ratchet strength check method is based on the module published elsewhere (Daxian, 2008; Bangchun, 2010; Datong and Lingyun, 2011). That is, the designed module needs to be larger than the checked one. This method for checking can ensure the security for the condition that the position of the ratchet and pawl is fixed. The contact between both is mostly line contact. But when the pawl is driven and moved with other components, this method will no longer work, and this method for checking cannot reflect the ratchet stress. Take the example of lifting a permanent magnet (Ning et al., 2011, 2019a, b); its pawl is a drive, mounted on the rotating arm, and is rotated. In this condition, the contact type between the ratchet and pawl is a surface contact. Its stress state is much better than that of line contact. So far, there are few studies on the ratchet tooth root bending stress (Da and Chongxian, 1998; Mingjun et al., 2015; Yukun et al., 2017). In view of the similarity between the ratchet and the gear, the bending stress studies of the gear have a certain reference for the ratchet stress studies (Hongbin et al., 1999; Litvin-Faydor et al., 2005; Gonzalez-Perez et al., 2011; Zhongming et al., 2016; Cheng et al., 2017; Fajia et al., 2017; Lisle-Timothy et al., 2017; Gonzalez-Perez and Fuentes-Aznar, 2018; Yonghu et al., 2018; Min et al., 2019; Nan et al., 2019). In the mechanics of materials, almost the whole stress calculation is based on the flat section hypothesis. The bending, stretching, compressing and torsion of beams are resolved on the basis of this hypothesis, and the gear is no exception. The gear tooth is generally assumed to be a cantilever beam when calculating the gear tooth root bending stress. However, the stress based on the flat section is imprecise due to the tooth profile being involute. Thus, the stress correction factor is introduced to correct the results. This method is widely used in practical engineering applications for its simplicity. With regard to the ratchet, its tooth can be considered as being a variable section beam. The ratchet tooth root bending stress can also be calculated by the flat section hypothesis. But figuring out the stress correction factor requires thousands of experiments. This is a lengthy and complex process. Besides the flat section hypothesis, there is also the folded section hypothesis and the circular section hypothesis. These hypotheses are collectively known as the non-flat section hypotheses and were proposed by the scientist A. B. Verkhvsky, of the former Soviet Union, in 1967. Due to the non-flat section hypothesis being very close to the actual fractured shape of the teeth, its calculation is sufficiently precise, and the stress correction factor is no longer needed. The objective of this study is to introduce a precise mathematical model for the ratchet tooth root bending stress. This model can acquire the actual value of the ratchet tooth bending stress and define the basic rules for the ratchet strength check. 2Ratchet tooth root bending stress precision mathematical model 2.1Incision effect theory Before analyzing the ratchet tooth root bending stress, it is necessary to introduce the incision effects theory (Verkhvsky et al., 1967). Take the example of a steel plate with an incision, which is shown in Fig. 1. First, we assume that the incision is fairly shallow. Thus, the depth of the incision crack under a collapsing force will not affect the entire width of the steel plate, as in the following: $\begin{array}{}\text{(1)}& \mathrm{2}\sqrt{t\mathit{\rho }}<a,\end{array}$ where ρ is the curvature of the incision. t is the depth of the incision. a is the width of the steel plate. For this kind of plate, the effects of a shallow incision would be limited to a certain depth, as follows: $\begin{array}{}\text{(2)}& {a}_{\mathrm{0}}=\mathrm{2}\sqrt{t\mathit{\rho }},\end{array}$ where a[0] is the maximum depth of the incision crack under a collapsing force. 2.2Analysis of the ratchet tooth root bending stress According to the analysis of the gear, the maximum stress should be generated at the tooth root fillet when the ratchet is being loaded. To ensure that the designed ratchet meets the strength requirement, the folded section hypothesis is introduced to analyze this maximum stress. In the folded section hypothesis, the equivalent critical cross section includes two parts, where the first one can be determined by the theory of the incision depth effects. For most ratchets, it satisfies the requirement that is $\mathrm{2}\sqrt{t\mathit{\rho }}<a$, so the length of the first one is as follows: $\begin{array}{}\text{(3)}& {l}_{\mathit{AB}}=\mathrm{2}\sqrt{{t}_{\mathrm{r}}\mathit{\rho }},\end{array}$ where t[r] is the distance between the addendum circle and dedendum circle of the ratchet. Along the length l[AB], we make the first polyline. Then, we draw a radial from the tooth root fillet center and go through point A, which is the mid-point of this arc. We take point A as the center and the length l[AB] as the radius to draw another arc; this arc intersects the radial at a point which is named point B. The position of points A and B is shown in Fig. 2. The second part of the equivalent critical cross section is determined by its geometric structure. We connect points B and O so that the line BO intersects the axle hole at point C. The polyline ABC is the projection of the equivalent critical cross section on the front (it is also named the component hazard section). As can be seen from Fig. 2, ratchet teeth are asymmetrical structures. If the ratchet is cut along the radial direction and the circular contour is straightened, then a steel plate with asymmetrical incision is shown. For the asymmetrical structure, its neutral layer can be determined by the area of the stress diagram as follows (Verkhvsky et al., 1967) where a[0] is the projection of l[AB] in the radial direction, y[a] is the distance between the dedendum circle and neutral layer, and y[b] is the distance between the neutral layer and the axle According to the neutral plane, when the ratchet is applied at force F, the torque M is as follows: $\begin{array}{}\text{(5)}& M=F×L,\end{array}$ where L is the length of the arm of the force. For the bending deformation, tension and compression occur simultaneously. The ratchet is no exception. When the ratchet tooth is loaded, the polyline AB and BD will be tense. To analyze its deformation, we draw another polyline A[1]B[1]C[1] in the same way as the polyline ABC, but below it, take a microelement KF between AB and A[1]B[1] and take another microelement K[1]F[1] between BD and B[1]D[1]. When the ratchet tooth deforms, the line BD is pivoted by an angle Δγ around the point D to B[2]D and the line AB around to A[2]B[2]. This is shown in Fig. 3. The microelement KN is stretched to KF, and K[1]N[1] is stretched to K[1]F[1]. Now assume that there is no lateral pressure between microelement. According the Hook's law, the stress of microelement KN and K[1]N[1] is as $\begin{array}{}\text{(6)}& {\mathit{\sigma }}_{\mathrm{1}}={\mathit{\epsilon }}_{\mathrm{1}}E=\frac{\mathit{FN}}{\mathit{KF}}E=\frac{\left[\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\ alpha }+{u}_{\mathrm{1}}\right]\mathrm{\Delta }\mathit{\gamma }E}{\left(\mathit{\rho }+{l}_{\mathit{AB}}-{u}_{\mathrm{1}}\right)\mathrm{\Delta }\mathit{\alpha }}\text{(7)}& {\mathit{\sigma }}_{\ mathrm{2}}={\mathit{\epsilon }}_{\mathrm{2}}E=\frac{{F}_{\mathrm{1}}{N}_{\mathrm{1}}}{{K}_{\mathrm{1}}{F}_{\mathrm{1}}}E=\frac{{u}_{\mathrm{2}}\mathrm{\Delta }\mathit{\gamma }E}{\left(\mathit{\rho }+ {l}_{\mathit{AB}}\right)\mathrm{\Delta }\mathit{\alpha }\cdot \mathrm{cos}\mathit{\alpha }}.\end{array}$ In Eq. (6), KN is as follows: $\begin{array}{}\text{(8)}& \mathit{KN}=\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{\Delta }\mathit{\gamma }\cdot \mathrm{cos}\mathit{\alpha }+{u}_{\mathrm{1}}\mathrm{\Delta }\mathit{\gamma }\ where ε[1] and ε[2] are the relative elongation of the microelement, E is the elasticity modulus, u[1] is the distance between the point B and the microelement KF, and α is the angle of the line AB and the line O[1]O. The compression occurs in the BD of the second polyline BC. It is also pivoted by an angle Δγ around the point D to DC[2]. Now take a microelement K[2]N[2] between the line DC and the line D[1]C[1]. When DC is pivoted, then the microelement K[2]N[2] is compressed into K[2]F[2]. Then, assume that there is no lateral pressure between microelement. According the Hook's law, the stress of microelement K[2]N[2] is as follows: $\begin{array}{}\text{(9)}& {\mathit{\sigma }}_{\mathrm{3}}={\mathit{\epsilon }}_{\mathrm{3}}E=\frac{{F}_{\mathrm{2}}{N}_{\mathrm{2}}}{{K}_{\mathrm{2}}{F}_{\mathrm{2}}}E\frac{{u}_{\mathrm{3}}\mathrm {\Delta }\mathit{\gamma }E}{\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{\Delta }\mathit{\alpha }\cdot \mathrm{cos}\mathit{\alpha }},\end{array}$ where u[3] is the distance between the point D and the microelement K[2]F[2]. In the following, according to the moment equilibrium condition: $\begin{array}{}\text{(10)}& \begin{array}{rl}M& =\underset{\mathrm{0}}{\overset{{l}_{\mathit{AB}}}{\int }}{\mathit{\sigma }}_{\mathrm{1}}b\left[\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\ mathit{\alpha }+{u}_{\mathrm{1}}\right]\mathrm{d}{u}_{\mathrm{1}}+\underset{\mathrm{0}}{\overset{{y}_{a}-{a}_{\mathrm{0}}}{\int }}{\mathit{\sigma }}_{\mathrm{2}}b{u}_{\mathrm{2}}\mathrm{d}{u}_{\ mathrm{2}}\\ & +\underset{-{y}_{b}}{\overset{\mathrm{0}}{\int }}{\mathit{\sigma }}_{\mathrm{3}}b{u}_{\mathrm{3}}\mathrm{d}{u}_{\mathrm{3}},\end{array}\end{array}$ where b is the width of the ratchet. Substitute Eqs. (6), (7) and (9) into Eq. (10) and take the terms that do not change with u[1] and u[2] and u[3] outside the integral symbol as follows: $\begin{array}{}\text{(11)}& \frac{\mathrm{\Delta }\mathit{\gamma }E}{\mathrm{\Delta }\mathit{\alpha }}=\frac{M}{\begin{array}{c}b\left[{\int }_{\mathrm{0}}^{{l}_{\mathit{AB}}}\frac{\left[\left({y}_ {a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\alpha }+{u}_{\mathrm{1}}{\right]}^{\mathrm{2}}\mathrm{d}{u}_{\mathrm{1}}}{\mathit{\rho }+{l}_{\mathit{AB}}-{u}_{\mathrm{1}}}\\ +{\int }_{\mathrm{0}}^ {{y}_{a}-{a}_{\mathrm{0}}}\frac{{u}_{\mathrm{2}}^{\mathrm{2}}\mathrm{d}{u}_{\mathrm{3}}}{\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}+{\int }_{-{y}_{b}}^{\mathrm{0}}\ frac{{u}_{\mathrm{3}}^{\mathrm{2}}\mathrm{d}{u}_{\mathrm{4}}}{\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}\right]\end{array}}.\end{array}$ For convenience, let the integral in the square bracket equal N and solve it as follows: $\begin{array}{}\text{(12)}& \begin{array}{rl}N& =\underset{\mathrm{0}}{\overset{{l}_{\mathit{AB}}}{\int }}\frac{{\left[\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\alpha }+{u}_{\mathrm {1}}\right]}^{\mathrm{2}}\mathrm{d}{u}_{\mathrm{1}}}{\mathit{\rho }+{l}_{\mathit{AB}}-{u}_{\mathrm{1}}}\\ & +\underset{\mathrm{0}}{\overset{{y}_{a}-{a}_{\mathrm{0}}}{\int }}\frac{{u}_{\mathrm{2}}^{\ mathrm{2}}\mathrm{d}{u}_{\mathrm{3}}}{\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}+\underset{-{y}_{b}}{\overset{\mathrm{0}}{\int }}\frac{{u}_{\mathrm{3}}^{\mathrm{2}}\ mathrm{d}{u}_{\mathrm{4}}}{\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}\\ & =\left({y}_{a}-{a}_{\mathrm{0}}{\right)}^{\mathrm{2}}{\mathrm{cos}}^{\mathrm{2}}\mathit{\ alpha }\mathrm{ln}\left(\frac{\mathit{\rho }+{l}_{\mathit{AB}}}{\mathit{\rho }}\right)\\ & +\mathrm{2}\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\alpha }\left[\left(\mathit{\rho }+{l}_ {\mathit{AB}}\right)\mathrm{ln}\left(\frac{\mathit{\rho }+{l}_{\mathit{AB}}}{\mathit{\rho }}\right)-{l}_{\mathit{AB}}\right]\\ & -\left(\mathit{\rho }+{l}_{\mathit{AB}}\right){l}_{\mathit{AB}}-\frac {{l}_{\mathit{AB}}^{\mathrm{2}}}{\mathrm{2}}+\left(\mathit{\rho }+{l}_{\mathit{AB}}{\right)}^{\mathrm{2}}\mathrm{ln}\left(\frac{\mathit{\rho }+{l}_{\mathit{AB}}}{\mathit{\rho }}\right)\\ & +\frac{\ left({y}_{a}-{a}_{\mathrm{0}}{\right)}^{\mathrm{3}}}{\mathrm{3}\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}+\frac{{y}_{b}^{\mathrm{3}}}{\mathrm{3}\left(\mathit{\rho }+ {l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}.\end{array}\end{array}$ Here, N is the section factor which represents the geometric properties of the ratchet. Substitute Eqs. (11) and (12) into Eqs. (6), (7) and (9). The stress at any point on the polyline ABC can be obtained by the following equation: $\begin{array}{}\text{(13)}& {\mathit{\sigma }}_{\mathrm{1}}=\frac{M\left[\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\alpha }+{u}_{\mathrm{1}}\right]}{bN\left(\mathit{\rho }+{l}_{\ mathit{AB}}-{u}_{\mathrm{1}}\right)}\text{(14)}& {\mathit{\sigma }}_{\mathrm{2}}=\frac{M{u}_{\mathrm{2}}}{bN\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}\text{(15)}& {\ mathit{\sigma }}_{\mathrm{2}}=\frac{M{u}_{\mathrm{3}}}{bN\left(\mathit{\rho }+{l}_{\mathit{AB}}\right)\mathrm{cos}\mathit{\alpha }}.\end{array}$ To obtain the maximum stress of the critical cross section of the equivalent at the point A, let u[1]=l[AB]. $\begin{array}{}\text{(16)}& {\mathit{\sigma }}_{\mathrm{1}}=\frac{M\left[\left({y}_{a}-{a}_{\mathrm{0}}\right)\mathrm{cos}\mathit{\alpha }+{l}_{\mathit{AB}}\right]}{bN\mathit{\rho }},\end{array}$ where σ[1] is the maximum ratchet tooth root bending stress. Compare σ[1] with the bending fatigue limit stress σ[Flim]. If ${\mathit{\sigma }}_{\mathrm{1}}<{\mathit{\sigma }}_{Flim}$, then the design ratchet meets the strength requirement. 3Finite element analysis of ratchet To verify the above theory, the FEA (finite element analysis) method was utilized to analyze the ratchet tooth root bending stress. In this analysis, three standard and one non-standard ratchets were adapted (the non-standard ratchet was designed to fit the hollow shaft. When limited by the structure, it has a smaller contact area than the standard one and a larger shaft hole). Their structure data are listed in Table 1. The analysis 3D models are shown in Fig. 4. To save analysis time, these models ignore the keyway and some chamfering features which will not affect the analysis results. In the mesh process, the meshing method is swept, which will give the preference to the hexahedral element, and the mesh destiny of the tooth root fillet was refined to reduce the effect of the stress concentrate. The mesh result of ratchet 2 is shown in Fig. 5. After the mesh operation, the material C45E was sent to the model. Its main parameters are shown in Table 2. According to the actual work demand, the fixed support was added to the shaft hole of ratchet, and the concentrated force was applied to the working surface of the ratchet, as shown in Fig. 6. In the solution information, the equivalent stress and the equivalent strain were added and run to resolve the above setting finish. The analysis results will be shown in the form of colored stress patterns. For the example of ratchet 2, its colored stress pattern under 20000N is shown in Fig. 7. As can be seen from Fig. 7, the maximum stress generated at the root fillet of the ratchet, when the loading force reaches 20000N, is 329.13MPa. The mathematical model's stress value is 356.09MPa. Its deviation is about 7.57%. Analysis results of other ratchets are shown in Fig. 8 in the form of a line chart. The results of the mathematical model are also shown in the chart to make a contrast. Its deviations are shown in Fig. 9. As can be seen from Figs. 8 and 9, the simulation data fit well with the model analysis data, and the model analysis data are slightly larger, which can ensure the design safety to a certain extent. It also should be noted that the non-standard ratchet has a larger deviation compared to the other standard ratchets. The deviation of ratchet 1 is about 3.75%, the deviation of ratchet 2 is about 7.57%, the deviation of ratchet 3 is about 4.08%, and the deviation of ratchet 4 is about 4.88%. The reason for this is that the non-standard ratchet, which has a big shaft hole, has a better torsion resistance. A big shaft hole increases the distance between the ratchet center and the material of shaft hole, which will increase its torsion resistance. When the ratchet was loaded with a constant force, the torsion borne by the material of the tooth root is big, and its torsion resistance is big. Conversely, the torsion borne by the material of the center is small, and its torsion resistance is small. So, the mean torsion resistance is small. But if the material outer is moved, the torsion resistance of the shaft hole material will increase, and the mean torsion resistance will also increase. This leads to a larger deviation with the standard ratchets. For the standard ratchet, its size variation has a certain regularity. For example, the ratchet tooth height increases by 1.5 for every increase of 2 modules (Bangchun, 2010). That means that its stress variation also has a corresponding regularity. But for the non-standard ratchet, it does not have this regularity because its size is often determined by the working conditions. This will make its stress variation have a different trend compared with the standard ratchet, and this has been proved by the FEA method. If the proposed mathematical model can make a good prediction for the non-standard ratchet, then it can also do the same thing for the standard ratchet. According the above analysis, the non-standard ratchet was adapted as the experimental ratchet. Its parameters are shown in Table 1. The experimental ratchet is shown in Fig. 10. The type of apparatus and devices for ratchet experiment are shown in Table 3. 4.1Experiment apparatus and its principle To acquire the maximum stress, the strain gauges were pasted on the tooth root fillet, which is shown in Fig. 11. The structure of the ratchet test bed is shown in Fig. 12. It consists of a baseboard, shaft, pawl, lock nut, ratchet and support plate. The baseboard was bolted on the working table of an electronic universal testing machine. The pawl was fixed on the loading end of electronic universal testing machine. The shaft support is the L-shaped plate, which was bolted to the test bed bottom. The shaft and the ratchet were connected by the buttress thread, which only transmits power in one direction. The lock nut was soldered to the shaft to prevent the rotation of the ratchet under load. The experiment was carried out as follows: we operated the electronic universal testing machine to lower the pawl to the level of the working surface of the ratchet. Then, we loosened the shaft support bolts and pushed the test bed to the position where the pawl coincides with the ratchet-tooth-loaded surface. Thereafter, we adjusted the ratchet and loading pawl until full contact was Then, we applied force to the working surface of ratchet and recorded the readings on strain gauge when the force reached the set value. 4.2Experiment results and discussion In the experiment process, each ratchet tooth was pasted into two sets of strain. We averaged the two sets of data and multiplied the elastic modulus E to obtain its stress. The jaw vise was used to assemble and disassemble the ratchet, and two teeth were deformed in this process. So, only four teeth could be tested. Finally, the stress data of three teeth were collected successfully. These experimental data are shown in Fig. 13, and the mathematical model results are also shown in the figures to make a contrast. Its deviations under different forces are shown in the right-hand side of Fig. 13. As can be seen from Fig. 13, the experimental data fit well with the model analysis data and are also larger. The average deviation between the mathematical model and the experiment of tooth 1 is approximately 6.9%, tooth 2 is approximately 6.8%, and tooth 3 is approximately 9%. It also should be noted that the average deviation between the mathematical model and experiment is more than 6%, and the experimental data are closer to the simulation data. This further verifies that the non-standard ratchet, which has a big shaft hole, has a better torsion resistance. Apart from that, the deviation between the experimental data and the simulation data and the model analysis data is not constant and shows a decreasing trend with the loading force. The reason for that has two aspects. First, the low machining precision of the ratchet test bed causes a tiny gap between the lock nut and the ratchet, and it could not be eliminated completely by preloading. So, the ratchet will rotate on a tiny angle, with the loading force and results in a variable deviation. Second, the ratchet tooth will have a tiny deformation as the loading force increases in the loading process. This deformation changes the contact condition and results in a variable deviation. This study offers a precision mathematical model for the ratchet tooth root bending stress, which was established by the folded section. The FEA was utilized to analyze the maximum stress of three standard ratchets and one non-standard ratchet. The analysis results show that the mathematical model prediction values are consistent with the FEA. The deviation of standard ratchet does not exceed 5%, and the deviation of non-standard ratchet does not exceed 8%. Afterwards, the ratchet experiment was designed to analyze the actual stress of the non-standard ratchet. The experiment results show that the mathematical model also has a good prediction for the non-standard ratchet. The average deviation does not exceed 9%. By analyzing the FEA method and the ratchet experiment, the proposed mathematical model has a good predictive ability for the standard ratchet and the non-standard ratchet. Apart from that, the predictive stress value is bigger than the real value. This would improve the safety factor of ratchet. All data included in this study are available upon request from the corresponding author. CL and ND were in charge of the whole analysis and modeling and experiment. CL wrote the paper. JD and LZ were in charge of the experimental scheme. SC, SJ and AL were in charge of the acquisition of the experimental data. The contact author has declared that neither they nor their co-authors have any competing interests. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. The authors express their sincere thanks to the Materials Mechanics Laboratory of Changchun University. The research project has been supported by the Jilin Industrial Technology and Development Program, China (grant no. 2018C043-2), and the Jilin Scientific and Technological Development Program, China (grant no. 20200401111GX). This paper was edited by Daniel Condurache and reviewed by Li Guofa and one anonymous referee. Bangchun, W.: Handbook of Mechanical Design, China Machine Press, Beijing, ISBN 978-7-111-29225-8, 2010. Cheng, L., Wenku, S., Zhiyong, C., Wei, H., Rusong, R., and Huailan, S.: Experiment on tooth root bending stress of driving axle hypoid gear of automobile, Journal of Jilin University (Engineering and Technology Edition), 47, 344–352, https://doi.org/10.13229/j.cnki.jdxbgxb201702002, 2017. Da, X. and Chongxian, J.: Structure and Design of Special Vehicle, Beijing Institute of Technology Press, Beijing, ISBN 7-81045-492-7, 1998. Datong, Q. and Lingyun, X.: Handbook of Mechanical Design, Chemical Industry Press, Beijing, ISBN 978-7-122-08712-6, 2011. Daxian, C.: Handbook of Mechanical Design, Chemical Industry Press, Beijing, ISBN 978-7-122-01408-5, 2008. Fajia, L., Rupeng, Z., Miaomiao, L., and HeYun, B., and Guanghu, J.: Calculation method of external meshed gear tooth root bending stress of high contact ratio gear, Journal of Aerospace Power, 32, 138–147, https://doi.org/10.13224/j.cnki.jasp.2017.01.019, 2017. Gonzalez-Perez, I. and Fuentes-Aznar, A.: Implementation of a Finite Element Model for Gear Stress Analysis Based on Tie-Surface Constraints and Its Validation Through the Hertz's Theory, ASME J. Mech. Des., 140, 023301, https://doi.org/10.1115/1.4038301, 2018. Gonzalez-Perez, I., Iserte-Jose, L., and Fuentes, A.: Implementation of Hertz theory and validation of a finite element model for stress analysis of gear drives with localized bearing contact, Mech. Mach. Theory, 46, 765–783, https://doi.org/10.1016/j.mechmachtheory.2011.01.014, 2011. Hongbin, X., Guanghui, Z., and Kato, M.: Research on Bending Strength of Double Involute Gear with Ladder Shape Teeth, Chin. J. Mech. Eng., 36, 39–42, 1999. Litvin, F. L., Gonzalez-Perez, I., Fuentes, A., Vecchiato, D., Hansen, B. D., and Binney, D.: Design, generation and stress analysis of face-gear drive with helical pinion, Comput. Method. Appl. M., 194, 3870–3901, https://doi.org/10.1016/j.cma.2004.09.006, 2005. Lisle-Timothy, J., Shaw-Brian, A., and Frazer-Robert, C.: External spur gear root bending stress: A comparison of ISO 6336:2006, AGMA 2101-D04, ANSYS finite element analysis and strain gauge techniques, Mech. Mach. Theory, 111, 1–9, https://doi.org/10.1016/j.mechmachtheory.2017.01.006, 2017. Min, J., Md Rasedul, I., Liu, L., and Mohammad Habibur, R.: Contact stress and bending stress calculation model of spur face gear drive based on orthogonal test, Microsyst. Technol., 26, 1055–1065, https://doi.org/10.1007/s00542-019-04630-w, 2019. Mingjun, N., Zhichao, S., Maile, Z., Huixuan, Z., Qi, W., and Yun, Z.: Design and Experiment on Longitudinal Seedling Feeding Mechanism for Rice Pot Seedling Transplanting with Ratchet Gear, Transactions of the Chinese Society for Agricultural Machinery, 46, 43–48, https://doi.org/10.6041/j.issn.1000-1298.2015.11.007, 2015. Nan, F., Jingcai, Z., Jinfeng, L., and Man, C.: Research of Test Method of Single Tooth Bending Fatigue Loading of Involute Helical Gear, Journal of Mechanical Transmission, 43, 156–160, https:// doi.org/10.16578/j.issn.1004.2539.2019.07.028, 2019. Ning, D., Dingtong, Z., and Yizheng, P.: Novel and Saving Energy Lifting Permanent Magnet Design, Adv. Mater. Res., 201–203, 2846, https://doi.org/10.4028/www.scientific.net/AMR.201-203.2846, 2011. Ning, D., Chao, L., Jingsong, D., Shuna, J., and Yueqian, H.: Energy Efficient Rare Earth Lifting Permanent Magnet, IOP C. Ser. Earth Env., 267, 022016, https://doi.org/10.1088/1755-1315/267/2/022016 , 2019a. Ning, D., Chao, L., Jingsong, D., and Shuna, J.: Design of Double-drive Mechanism for Energy Saving Lifting Permanent Magnet, E3S Web Conf., 118, 020704, https://doi.org/10.1051/e3sconf/201911802074, Verkhvsky, A. B., Andronov, B. P., Ionov, B. A., Lubanova, O. K., and Cherginov, B. I.: The stress determination of equivalent critical cross section of complex shape components, China Industry Press, Beijing, 133–162, 1967 (in Russian). Yonghu, Y., Jingning, T., and Hong, H.: Dynamic Meshing Contact Analysis for Plastic Gears Based on Finite Element Method, Machine Design & Research, 34, 87–90, https://doi.org/10.13952/ j.cnki.jofmdr.2018.0020, 2018. Yukun, H., Junmin, L., Yunzhen, Z., and Guobin, L.: Safety Calculation and Testing of Falling Protector based on Passive Technology, Journal of Mechanical Transmission, 41, 74–77, https://doi.org/ 10.16578/j.issn.1004.2539.2017.08.015, 2017. Zhongming, L., Yupeng, Y., Weizhong, X., and Haijun, Z.: Method of Calculation and Experiment of Bending Stress for Rough Module Racks, J. Mech. Eng., 52, 152, https://doi.org/10.3901/JME.2016.23.152 , 2016.
{"url":"https://ms.copernicus.org/articles/12/1105/2021/","timestamp":"2024-11-13T09:43:46Z","content_type":"text/html","content_length":"236235","record_id":"<urn:uuid:991129ca-275a-4440-9731-2b776a449a10>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00832.warc.gz"}
Project Euler #68: Magic N-gon ring | HackerRank [This problem is a programming version of Problem 68 from projecteuler.net] Consider the following "magic" ring, filled with the numbers to , and each line adding to nine. Working clockwise, and starting from the group of three with the numerically lowest external node (4,3,2 in this example), each solution can be described uniquely. For example, the above solution can be described by the set: 4,3,2; 6,2,1; 5,1,3. It is possible to complete the ring with four different totals: 9, 10, 11, and 12. There are eight solutions in total. By concatenating each group it is possible to form 9-digit strings; the strings for a ring where total is are and . Given , which represents the and the total print all concatenated solution strings in alphabetical sorted order. Note: It is guaranteed that solution will exist for testcases. You are given and separated by a space. Print the required strings each on a new line.
{"url":"https://www.hackerrank.com/contests/projecteuler/challenges/euler068/problem","timestamp":"2024-11-04T04:38:31Z","content_type":"text/html","content_length":"948760","record_id":"<urn:uuid:82e455a4-0abd-4655-b4c2-66ad29391267>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00258.warc.gz"}
Philosophical Issues in Quantum Theory First published Mon Jul 25, 2016; substantive revision Wed Mar 23, 2022 This article is an overview of the philosophical issues raised by quantum theory, intended as a pointer to the more in-depth treatments of other entries in the Stanford Encyclopedia of Philosophy. 1. Introduction Despite its status as a core part of contemporary physics, there is no consensus among physicists or philosophers of physics on the question of what, if anything, the empirical success of quantum theory is telling us about the physical world. This gives rise to the collection of philosophical issues known as “the interpretation of quantum mechanics”. One should not be misled by this terminology into thinking that what we have is an uninterpreted mathematical formalism with no connection to the physical world. Rather, there is a common operational core that consists of recipes for calculating probabilities of outcomes of experiments performed on systems subjected to certain state preparation procedures. What are often referred to as different “interpretations” of quantum mechanics differ on what, if anything, is added to the common core. Two of the major approaches, hidden-variables theories and collapse theories, involve formulation of physical theories distinct from standard quantum mechanics; this renders the terminology of “interpretation” even more inappropriate. Much of the philosophical literature connected with quantum theory centers on the problem of whether we should construe the theory, or a suitable extension or revision of it, in realist terms, and, if so, how this should be done. Various approaches to what is called the “Measurement Problem” propose differing answers to these questions. There are, however, other questions of philosophical interest. These include the bearing of quantum nonlocality on our understanding of spacetime structure and causality, the question of the ontological character of quantum states, the implications of quantum mechanics for information theory, and the task of situating quantum theory with respect to other theories, both actual and hypothetical. In what follows, we will touch on each of these topics, with the main goal being to provide an entry into the relevant literature, including the Stanford Encyclopedia entries on these topics. Contemporary perspectives on many of the issues touched on in this entry can be found in The Routledge Companion to Philosophy of Physics (Knox and Wilson, eds., 2021); The Oxford Handbook of the History of Quantum Interpretations (Freire, et al. eds., 2022) contains essays on the history of discussions of these issues. 2. Quantum Theory In this section we present a brief introduction to quantum theory; see the entry on quantum mechanics for a more detailed introduction. 2.1 Quantum states and classical states In classical physics, with any physical system is associated a state space, which represents the totality of possible ways of assigning values to the dynamical variables that characterize the state of the system. For systems of a great many degrees of freedom, a complete specification of the state of the system may be unavailable or unwieldy; classical statistical mechanics deals with such a situation by invoking a probability distribution over the state space of the system. A probability distribution that assigns any probability other than one or zero to some physical quantities is regarded as an incomplete specification of the state of the system. In quantum mechanics, things are different. There are no quantum states that assign definite values to all physical quantities, and probabilities are built into the standard formulation of the theory. In formulating a quantum theory of some system, one usually begins with the Hamiltonian or Lagrangian formulation of the classical mechanical theory of that system. In the Hamiltonian formulation of classical mechanics, the configuration of a system is represented by a set of coordinates. These could be, for example, the positions of each of a set of point particles, but one can also consider more general cases, such as angular coordinates that specify the orientation of a rigid body. For every coordinate there is an associated conjugate momentum. If the coordinate indicates the position of some object, the momentum conjugate to that coordinate may be what we usually call “momentum,” that is, the velocity of the body multiplied by its mass. If the coordinate is an angle, the momentum conjugate to it is an angular momentum. Construction of a quantum theory of a physical system proceeds by first associating the dynamical degrees of freedom with operators. These are mathematical objects on which operations of multiplication and addition are defined, as well as multiplication by real and complex numbers. Another way of saying this is that the set of operators forms an algebra. Typically, it is said that an operator represents an observable, and the result of an experiment on a system is said to yield a value for some observable. Two or more observables are said to be compatible if there is some possible experiment that simultaneously yields values for all of them. Others require mutually exclusive experiments; these are said to be incompatible. Of course, in a classical theory, the dynamical quantities that define a state also form an algebra also, as they can be multiplied and added, and multiplied by real or complex numbers. Quantum mechanics differs from classical mechanics in that the order of multiplication of operators can make a difference. That is, for some operators \(A\),\(B\), the product \(AB\) is not equal to the product \(BA.\) If \(AB = BA,\) the operators are said to commute. The recipe for constructing a quantum theory of a given physical systems prescribes algebraic relations between the operators representing the dynamical variables of the system. Compatible observables are associated with operators that commute with each other. Operators representing conjugate variables are required to satisfy what are called the canonical commutation relations. If \(q \) is some coordinate, and \(p\) its conjugate momentum, the operators \(Q\) and \(P\) representing them are required to not commute. Instead, the difference between \(PQ\) and \(QP\) is required to be a multiple of the identity operator (that is, the operator \(I\) that satisfies, for all operators \(A\), \(IA = AI).\) A quantum state is a specification, for every experiment that can be performed on the system, of probabilities for the possible outcomes of that experiment. These can be summed up as an assignment of an expectation value to each observable. These states are required to be linear. This means that, if an operator \(C\), corresponding to some observable, is the sum of operators \(A\) and \(B\), corresponding to other observables, then the expectation value that a quantum state assigns to \(C\) must be the sum of the expectation values assigned to \(A\) and \(B\). This is a nontrivial constraint, as it is required to hold whether or not the observables represented are compatible. A quantum state, therefore, relates expectation values for quantities yielded by incompatible Incompatible observables, represented by noncommuting operators, give rise to uncertainty relations; see the entry on the uncertainty principle. These relations entail that there are no quantum states that assign definite values to the observables that satisfy them, and place bounds on how close they can come to be simultaneously well-defined in any quantum state. For any two distinct quantum states, \(\rho\), \(\omega\), and any real number between 0 and 1, there is a corresponding mixed state. The probability assigned to any experimental outcome by this mixed state is \(p\) times the probability it is assigned by \(\rho\) plus \(1-p\) times the probability assigned to it by \(\omega\). One way to physically realize the preparation of a mixed state is to employ a randomizing device, for example, a coin with probability \(p\) of landing heads and probability \(1-p\) of landing tails, and to use it to choose between preparing state \(\rho\) and preparing state \(\omega\). We will see another way to prepare a mixed state after we have discussed entanglement, in section 3. A state that is not a mixture of any two distinct states is called a pure state. It is both useful and customary, though not strictly necessary, to employ a Hilbert space representation of a quantum theory. In such a representation, the operators corresponding to observables are represented as acting on elements of an appropriately constructed Hilbert space (see the entry on quantum mechanics for details). Usually, the Hilbert space representation is constructed in such a way that vectors in the space represent pure states; such a representation is called an irreducible representation. Irreducible representations, in which mixed states are also represented by vectors, are also possible. A Hilbert space is a vector space. This means that, for any two vectors \(|\psi\rangle\), \(|\phi\rangle\) , in the space, representing pure states, and any complex numbers \(a\), \(b\), there is another vector, \(a |\psi\rangle + b |\phi\rangle\), that also represents a pure state. This is called a superposition of the states represented by \(|\psi\rangle\) and \(|\phi\rangle\) . Any vector in a Hilbert space can be written as a superposition of other vectors in infinitely many ways. Sometimes, in discussing the foundations of quantum mechanics, authors fall into talking as if some state are superpositions and others are not. This is simply an error. Usually what is meant is that some states yield definite values for macroscopic observables, and others cannot be written in any way that is not a superposition of macroscopically distinct states. The noncontroversial operational core of quantum theory consists of rules for identifying, for any given system, appropriate operators representing its dynamical quantities. In addition, there are prescriptions for evolving the state of system when it is acted upon by specified external fields or subjected to various manipulations (see section 1.3). Application of quantum theory typically involves a distinction between the system under study, which is treated quantum mechanically, and experimental apparatus, which is not. This division is sometimes known as the Heisenberg cut. Whether or not we can expect to be able to go beyond the noncontroversial operational core of quantum theory, and take it to be more than a means for calculating probabilities of outcomes of experiments, remains a topic of contemporary philosophical discussion. 2.2 Quantum mechanics and quantum field theory Quantum mechanics is usually taken to refer to the quantized version of a theory of classical mechanics, involving systems with a fixed, finite number of degrees of freedom. Classically, a field, such as, for example, an electromagnetic field, is a system endowed with infinitely many degrees of freedom. Quantization of a field theory gives rise to a quantum field theory. The chief philosophical issues raised by quantum mechanics remain when the transition is made to a quantum field theory; in addition, new interpretational issues arise. There are interesting differences, both technical and interpretational, between quantum mechanical theories and quantum field theories; for an overview, see the entries on quantum field theory and quantum theory: von Neumann vs. Dirac. The standard model of quantum field theory, successful as it is, does not yet incorporate gravitation. The attempt to develop a theory that does justice both the quantum phenomena and to gravitational phenomena gives rise to serious conceptual issues (see the entry on quantum gravity). 2.3 Quantum state evolution 2.3.1 Schrödinger and Heisenberg pictures When constructing a Hilbert space representation of a quantum theory of a system that evolves over time, there are some choices to be made. One needs to have, for each time t, a Hilbert space representation of the system, which involves assigning operators to observables pertaining to time t. An element of convention comes in when deciding how the operators representing observables at different times are to be related. For concreteness, suppose that have a system whose observables include a position, \(x\), and momentum, \(p\), with respect to some frame of reference. There is a sense in which, for two distinct times, \(t\) and \(t'\), position at time \(t\) and position at time \(t'\) are distinct observables, and also a sense in which they are values, at different times, of the same observable. Once we have settled on operators \(\hat{X}\) and \(\hat{P}\) to represent position and momentum at time \(t\), we still have a choice of which operators represent the corresponding quantities at time \(t.\) On the Schrödinger picture, the same operators \(\hat{X}\) and \(\hat{P}\) are used to represent position and momentum, whatever time is considered. As the probabilities for results of experiments involving these quantities may be changing with time, different vectors must be used to represent the state at different times. The equation of motion obeyed by a quantum state vector is the Schrödinger equation. It is constructed by first forming the operator \(\hat{H}\)corresponding to the Hamiltonian of the system, which represents the total energy of the system. The rate of change of a state vector is proportional to the result of operating on the vector with the Hamiltonian operator \(\hat{H}\). \[ i \hbar {\,\D}/{\D t}\, \ket{\psi (t)} = \hat{H} \ket{\psi (t)}. \] There is an operator that takes a state at time 0 into a state at time \(t\); it is given by \[ U(t) = \exp\left(\frac{{-}i H t}{\hbar}\right). \] This operator is a linear operator that implements a one-one mapping of the Hilbert space to itself that preserves the inner product of any two vectors; operators with these properties are called unitary operators, and, for this reason, evolution according to the Schrödinger equation is called unitary evolution. For our purposes, the most important features of this equation is that it is deterministic and linear. The state vector at any time, together with the equation, uniquely determines the state vector at any other time. Linearity means that, if two vectors \(\ket{\psi_1(0)}\) and \(\ket{\psi_2(0)}\) evolve into vectors \(\ket{\psi_1(t) }\) and \(\ket{\psi_2(t)}\), respectively, then, if the state at time 0 is a linear combination of these two, the state at any time \(t\) will be the corresponding linear combination of \(\ket{\psi_1(t)}\) and \(\ket{\psi_2(t)}\). \[ a\ket{\psi_{1}(0)} + b\ket{\psi_{2}(0)} \rightarrow a\ket{\psi_{1}(t)} + b\ket{\psi_{2}(t)} . \] The Heisenberg picture, on the other hand, employs different operators \(\hat{X}(t)\), \(\hat{X}(t')\) for position, depending on the time considered (and similarly for momentum and other observables). If \(\hat{A}(t)\)is a family of Heisenberg picture operators representing some observable at different times, the members of the family satisfy the Heisenberg equation of motion, \[ i \hbar d/dt \; \hat{A}(t) = \hat{A}(t) \hat{H} - \hat{H} \hat{A}(t). \] One sometimes hears it said that, on the Heisenberg picture, the state of the system is unchanging. This is incorrect. It is true that there are not different state vectors corresponding to different times, but that is because a single state vector serves for computing probabilities for all observables pertaining to all times. These probabilities do change with time. 2.3.2. The collapse postulate As mentioned, standard applications of quantum theory involve a division of the world into a system that is treated within quantum theory, and the remainder, typically including the experimental apparatus, that is not treated within the theory. Associated with this division is a postulate about how to assign a state vector after an experiment that yields a value for an observable, according to which, after an experiment, one replaces the quantum state with an eigenstate corresponding to the value obtained. Unlike the unitary evolution applied otherwise, this is a discontinuous change of the quantum state, sometimes referred to as collapse of the state vector, or state vector reduction. There are two interpretations of the postulate about collapse, corresponding to two different conceptions of quantum states. If a quantum state represents nothing more than knowledge about the system, then the collapse of the state to one corresponding to an observed result can be thought of as mere updating of knowledge. If, however, quantum states represent physical reality, in such a way that distinct pure states always represent distinct physical states of affairs, then the collapse postulate entails an abrupt, perhaps discontinuous, change of the physical state of the system. Considerable confusion can arise if the two interpretations are conflated. The collapse postulate occurs already in the general discussion at the fifth Solvay Conference in 1927 (see Bacciagaluppi and Valentini, 2009, 437–450). It is also found in Heisenberg’s The Physical Principles of the Quantum Theory, based on lectures presented in 1929 (Heisenberg, 1930a, 27; 1930b, 36). Von Neumann, in his reformulation of quantum theory a few years later, distinguished between two types of processes: Process 1:, which occurs upon performance of an experiment, and Process 2:, the unitary evolution that takes place as long as no measurement is made (von Neumann, 1932; 1955, §V.I). He does not take this distinction to be a difference between two physically distinct processes. Rather, the invocation of one process or the other depends on a somewhat arbitrary division of the world into an observing part and an observed part (see von Neumann,1932, 224; 1955, 420). The collapse postulate does not appear in the first edition (1930) of Dirac’s Principles of Quantum Mechanics; it is introduced in the second edition (1935). Dirac formulates it as follows. When we measure a real dynamical variable \(\xi\), the disturbance involved in the act of measurement causes a jump in the state of the dynamical system. From physical continuity, if we make a second measurement of the same dynamical variable \(\xi\) immediately after the first, the result of the second measurement must be the same as that of the first. Thus after the first measurement has been made, there is no indeterminacy in the result of the second. Hence, after the first measurement has been made, the system is in an eigenstate of the dynamical variable \(\xi\), the eigenvalue it belongs to being equal to the result of the first measurement. This conclusion must still hold if the second measurement is not actually made. In this way we see that a measurement always causes the system to jump into an eigenstate of the dynamical variable that is being measured, the eigenvalue this eigenstate belongs to being equal to the result of the measurement (Dirac 1935: 36). Unlike von Neumann and Heisenberg, Dirac is treating the “jump” as a physical process. Neither von Neumann nor Dirac take awareness of the result by a conscious observer to be a necessary condition for collapse. For von Neumann, the location of the cut between the “observed” system and the “observer”is somewhat arbitrary. It may be placed between the system under study and the experimental apparatus. On the other hand, we could include the experimental apparatus in the quantum description, and place the cut at the moment when light indicating the result hits the observer’s retina. We could also go even further, and include the retina and relevant parts of the observer’s nervous system in the quantum system. That the cut may be pushed arbitrarily far into the perceptual apparatus of the observer is required, according to von Neumann, by the principle of psycho-physical parallelism. A formulation of a version of the collapse postulate according to which a measurement is not completed until the result is observed is found in London and Bauer (1939). For them, as for Heisenberg, this is a matter of an increase of knowledge on the part of the observer. Wigner (1961) combined elements of the two interpretations. Like those who take the collapse to be a matter of updating of belief in light of information newly acquired by an observer, he takes collapse to take place when a conscious observer becomes aware of an experimental result. However, like Dirac, he takes it to be a real physical process. His conclusion is that consciousness has an influence on the physical world not captured by the laws of quantum mechanics. This involves a rejection of von Neumann’s principle of psycho-physical parallelism, according to which it must be possible to treat the process of subjective perception as if it were a physical process like any other. There is a persistent misconception that, for von Neumann, collapse is to be invoked only when a conscious observer becomes aware of the result. As noted, this is the opposite of his view, as the cut may be placed between the observed system and the experimental apparatus, and it is for him an important point that the location of the cut be somewhat arbitrary. In spite of this, von Neumann’s position is sometimes conflated with Wigner’s speculative proposal, and Wigner’s proposal is sometimes erroneously referred to as the von Neumann-Wigner interpretation. None of the standard formulations are precise about when the collapse postulate is to be applied; there is some lee-way as to what is to count as an experiment, or (for versions that require reference to an observer) what is to count as an observer. Some, including von Neumann and Heisenberg, have taken it to be a matter of principle that there be some arbitrariness in where to apply the postulate. It is common wisdom that, in practice, this arbitrariness is innocuous. The rule of thumb that seems to be applied, in practice, in setting the split between the parts of the world treated quantum-mechanically and things treated as classical objects has been formulated by J. S. Bell as, “[w]hen in doubt enlarge the quantum system,” to the point at which including more in the quantum system makes negligible difference to practical predictions (Bell 1986, 362; Bell 2004, 189). If anything is to be counted as “standard” quantum mechanics, it is the operational core we have discussed, supplemented by a heuristic rule of application of this sort. Standard quantum mechanics works very well. If, however, one seeks a theory that is capable of describing all systems, including macroscopic ones, and can yield an account of the process by which macroscopic events, including experimental outcomes, come about, this gives rise to the so-called “measurement problem”, which we will discuss after we have introduced the notion of entanglement (see section 3). 2.3.3. Wave functions Among the Hilbert-space representations of a quantum theory are wave-function representations. Associated with any observable is its spectrum, the range of possible values that the observable can take on. Given any physical system and any observable for that system, one can always form a Hilbert-space representation for the quantum theory of that system by considering complex-valued functions on the spectrum of that observable. The set of such functions form a vector space. Given a measure on the spectrum of the observable, we can form a Hilbert space out of the set of complex-valued square-integrable functions on the spectrum by treating functions that differ only on a set of zero measure as equivalent (that is, the elements of our Hilbert space are really equivalence classes of functions), and by using the measure to define an inner product (see entry on Quantum Mechanics if this terminology is unfamiliar). If the spectrum of the chosen observable is a continuum (as it is, for example, for position or momentum), a Hilbert-space representation of this sort is called a wave function representation, and the functions that represent quantum states, wave functions (also “wave-functions,” or “wavefunctions”). The most familiar representations of this form are position-space wave functions, which are functions on the set of possible configurations of the system, and momentum-space wave functions, which are functions of the momenta of the systems involved. 3. Entanglement, nonlocality, and nonseparability Given two disjoint physical systems, \(A\) and \(B\), with which we associate Hilbert spaces \(H_{A}\) and \(H_{B}\), the Hilbert space associated with the composite system is the tensor product space, denoted \(H_{A} \otimes H_{B}\). When the two systems are independently prepared in pure states \(\ket{\psi}\) and \(\ket{\phi}\), the state of the composite system is the product state \(\ket{\psi} \otimes \ket{\phi}\) (sometimes written with the cross, \(\otimes\), omitted). In addition to the product states, the tensor product space contains linear combinations of product states, that is, state vectors of the form \[ a\ket{\psi_{1}} \otimes \ket{\phi_{1}} + b\ket{\psi_{2}} \otimes \ket{\phi_{2}} \] The tensor product space can be defined as the smallest Hilbert space containing all of the product states. Any pure state represented by a state vector that is not a product vector is an entangled The state of the composite system assigns probabilities to outcomes of all experiments that can be performed on the composite system. We can also consider a restriction to experiments performed on system \(A\), or a restriction to experiments performed to \(B\). Such restrictions yields states of \(A\) and \(B\), respectively, called the reduced states of the systems. When the state of the composite system \(AB\) is an entangled state, then the reduced states of \(A\) and \(B\) are mixed states. To see this, suppose that in the above state the vectors \(\ket{\phi_{1}}\) and \(\ket{\ phi_{2}}\) represent distinguishable states. If one confines one’s attention to experiments performed on \(A\), it makes no difference whether an experiment is also performed on \(B\). An experiment performed on \(B\) that distinguishes \(\ket{\phi_{1}}\) and \(\ket{\phi_{2}}\) projects the state of \(A\) into either \(\ket{\psi_{1}}\) or \(\ket{\psi_{2}}\), with probabilities \(\abs{a}^{2}\) and \(\abs{b}^{2}\), respectively, and probabilities for outcomes of experiments performed on \(A\) are the corresponding averages of probabilities for states \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}} \). These probabilities, as mentioned, are the same as those for the situation in which no experiment is performed on \(B\). Thus, even if no experiment is performed on \(B\), the probabilities of outcomes of experiments on \(A\) are exactly as if system \(A\) is either in the state represented by \(\ket{\psi_{1}}\) or the state represented by \(\ket{\psi_{2}}\), with probabilities \(\abs{a}^ {2}\) and \(\abs{b}^{2}\), respectively. In general, any state, pure or mixed, that is neither a product state nor a mixture of product states, is called an entangled state. The existence of pure entangled states means that, if we consider a composite system consisting of spatially separated parts, then, even when the state of the system is a pure state, the state is not determined by the reduced states of its component parts. Thus, quantum states exhibit a form of nonseparability. See the entry on holism and nonseparability in physics for more information. Quantum entanglement results in a form of nonlocality that is alien to classical physics. Even if we assume that the reduced states of \(A\) and \(B\) do not completely characterize their physical states, but must be supplemented by some further variables, there are quantum correlations that cannot be reduced to correlations between states of \(A\) and \(B\); see the entries on Bell’s Theorem and action at a distance in quantum mechanics. 4. The measurement problem 4.1 The measurement problem formulated If quantum theory is meant to be (in principle) a universal theory, it should be applicable, in principle, to all physical systems, including systems as large and complicated as our experimental apparatus. It is easy to show that linear evolution of quantum states, when applied to macroscopic objects, will routinely lead to superpositions of macroscopically distinct states. Among the circumstances in which this will happen are experimental set-ups, and much of the early discussions focussed on how to construe the process of measurement in quantum-mechanical terms. For this reason, the interpretational issues have come to be referred to as the measurement problem. In the first decades of discussion of the foundations of quantum mechanics, it was commonly referred to as the problem of observation. Consider a schematized experiment. Suppose we have a quantum system that can be prepared in at least two distinguishable states, \(\ket{0} _{S}\) and \(\ket{1} _{S}\). Let \(\ket{R} _{A}\) be a ready state of the apparatus, that is, a state in which the apparatus is ready to make a measurement. If the apparatus is working properly, and if the measurement is a minimally disturbing one, the coupling of the system \(S\) with the apparatus \(A\) should result in an evolution that predictably yields results of the form \[ \ket{0} _{S} \ket{R} _{A} \Rightarrow \ket{0} _{S}\ket{“0” } _{A} \] \[ \ket{1} _{S} \ket{R} _{A} \Rightarrow \ket{1} _{S}\ket{“1”} _{A} \] where \(\ket{“0” } _{A}\) and \(\ket{“1”} _{A}\) are apparatus states indicating results 0 and 1, respectively. Now suppose that the system \(S\) is prepared in a superposition of the states \(\ket{0} _{S}\) and \(\ket{1}_{S}\). \[ \ket{\psi(0)} _{S} = a\ket{0} _{S} + b\ket{1} _{S}, \] where \(a\) and \(b\) are both nonzero. If the evolution that leads from the pre-experimental state to the post-experimental state is linear Schrödinger evolution, then we will have \[ \ket{\psi(0)} _{S} \ket{R} _{A} \rightarrow a\ket{0} _{S} \ket{“0” } _{A} + b\ket{1} _{S}\ket{“1” } _{A}. \] This is not an eigenstate of the instrument reading variable, but is, rather, a state in which the reading variable and the system variable are entangled with each other. The eigenstate-eigenvalue link, applied to a state like this, does not yield a definite result for the instrument reading. The problem of what to make of this is called the “measurement problem” which is discussed in more detail below. 4.2 Approaches to the measurement problem If quantum state evolution proceeds via the Schrödinger equation or some other linear equation, then, as we have seen in the previous section, typical experiments will lead to quantum states that are superpositions of terms corresponding to distinct experimental outcomes. It is sometimes said that this conflicts with our experience, according to which experimental outcome variables, such as pointer readings, always have definite values. This is a misleading way of putting the issue, as it is not immediately clear how to interpret states of this sort as physical states of a system that includes experimental apparatus, and, if we can’t say what it would be like to observe the apparatus to be in such a state, it makes no sense to say that we never observe it to be in a state like Nonetheless, we are faced with an interpretational problem. If we take the quantum state to be a complete description of the system, then the state is, contrary to what would antecedently expect, not a state corresponding to a unique, definite outcome. This is what led J.S. Bell to remark, “Either the wavefunction, as given by the Schrödinger equation, is not everything, or it is not right” (Bell 1987: 41, 2004: 201). This gives us a (prima facie) tidy way of classifying approaches to the measurement problem: 1. There are approaches that involve a denial that a quantum wave function (or any other way of representing a quantum state) yields a complete description of a physical system. 2. There are approaches that involve modification of the dynamics to produce a collapse of the quantum state in appropriate circumstances. 3. There are approaches that reject both horns of Bell’s dilemma, and hold that quantum states undergo unitary evolution at all times and that a quantum state-description is, in principle, complete. We include in the first category approaches that deny that a quantum state should be thought of as representing anything in reality at all. These include variants of the Copenhagen interpretation, as well as pragmatic and other anti-realist approaches. Also in the first category are approaches that seek a completion of the quantum state description. These include hidden-variables approaches and modal interpretations. The second category of interpretation motivates a research programme of finding suitable indeterministic modifications of the quantum dynamics. Approaches that reject both horns of Bell’s dilemma are typified by Everettian, or “many-worlds” interpretations. 4.2.1 The “Copenhagen interpretation” Since the mid-1950’s, the term “Copenhagen interpretation” has been commonly used for whatever it is that the person employing the term takes to be the ‘orthodox’ viewpoint regarding the philosophical issues raised by quantum mechanics. According to Howard (2004), the phrase was first used by Heisenberg (1955, 1958), and is intended to suggest a commonality of views among Bohr and his associates, included Born and Heisenberg himself. Recent historiography has emphasized diversity of viewpoints among the figures associated with the Copenhagen interpretation; see the entry on Copenhagen interpretation of quantum mechanics, and references therein. Readers should be aware that the term is not univocal, and that different authors might mean different things when speaking of the“Copenhagen interpretation.” 4.2.2 Non-realist and pragmatist approaches to quantum mechanics From the early days of quantum mechanics, there has been a strain of thought that holds that the proper attitude to take towards quantum mechanics is an instrumentalist or pragmatic one. On such a view, quantum mechanics is a tool for coordinating our experience and for forming expectations about the outcomes of experiments. Variants of this view include some versions of the Copenhagen interpretation. More recently, views of this sort have been advocated by physicists, including QBists, who hold that quantum states represent subjective or epistemic probabilities (see Fuchs et al., 2014). The philosopher Richard Healey defends a related view on which quantum states, though objective, are not to be taken as representational (see Healey 2012, 2017a, 2020). For more on these approaches, see entry on Quantum-Bayesian and pragmatist views of quantum theory. 4.2.2 Hidden-variables and modal interpretations Theories whose structure include the quantum state but include additional structure, with an aim of circumventing the measurement problem, have traditionally been called “hidden-variables theories”. That a quantum state description cannot be regarded as a complete description of physical reality was argued for in a famous paper by Einstein, Podolsky and Rosen (EPR) and by Einstein in subsequent publications (Einstein 1936, 1948, 1949). See the entry on the Einstein-Podolsky-Rosen argument in quantum theory. There are a number of theorems that circumscribe the scope of possible hidden-variables theories. The most natural thought would be to seek a theory that assigns to all quantum observables definite values that are merely revealed upon measurement, in such a way that any experimental procedure that, in conventional quantum mechanics, would count as a “measurement” of an observable yields the definite value assigned to the observable. Theories of this sort are called noncontextual hidden-variables theory. It was shown by Bell (1966) and Kochen and Specker (1967) that there are no such theories for any system whose Hilbert space dimension is greater than three (see the entry on the Kochen-Specker theorem). The Bell-Kochen-Specker Theorem does not rule out hidden-variables theories tout court. The simplest way to circumvent it is to pick as always-definite some observable or compatible set of observables that suffices to guarantee determinate outcomes of experiments; other observables are not assigned definite values and experiments thought of as “measurements” of these observables do not reveal pre-existing values. The most thoroughly worked-out theory of this type is the pilot wave theory developed by de Broglie and presented by him at the Fifth Solvay Conference held in Brussels in 1927, revived by David Bohm in 1952, and currently an active area of research by a small group of physicists and philosophers. According to this theory, there are particles with definite trajectories, that are guided by the quantum wave function. For the history of the de Broglie theory, see the introductory chapters of Bacciagaluppi and Valentini (2009). For an overview of the de Broglie-Bohm theory and philosophical issues associated with it see the entry on Bohmian mechanics. There have been other proposals for supplementing the quantum state with additional structure; these have come to be called modal interpretations; see the entry on modal interpretations of quantum 4.2.3 Dynamical Collapse Theories As already mentioned, Dirac wrote as if the collapse of the quantum state vector precipitated by an experimental intervention on the system is a genuine physical change, distinct from the usual unitary evolution. If collapse is to be taken as a genuine physical process, then something more needs to be said about the circumstances under which it occurs than merely that it happens when an experiment is performed. This gives rise to a research programme of formulating a precisely defined dynamics for the quantum state that approximates the linear, unitary Schrödinger evolution in situations for which this is well-confirmed, and produces collapse to an eigenstate of the outcome variable in typical experimental set-ups, or, failing that, a close approximation to an eigenstate. The only promising collapse theories are stochastic in nature; indeed, it can be shown that a deterministic collapse theory would permit superluminal signalling. See the entry on collapse theories for an overview, and Gao, ed. (2018) for a snapshot of contemporary discussions. Prima facie, a dynamical collapse theory of this type can be a quantum state monist theory, one on which, in Bell’s words, “the wave function is everything”. In recent years, this has been disputed; it has been argued that collapse theories require “primitive ontology” in addition to the quantum state. See Allori et al. (2008), Allori (2013), and also the entry on collapse theories, and references therein. Reservations about this approach have been expressed by Egg (2017, 2021), Myrvold (2018), and Wallace (2020). 4.2.4 Everettian, or “many worlds” theories In his doctoral dissertation of 1957 (reprinted in Everett 2012), Hugh Everett III proposed that quantum mechanics be taken as it is, without a collapse postulate and without any “hidden variables”. The resulting interpretation he called the relative state interpretation. The basic idea is this. After an experiment, the quantum state of the system plus apparatus is typically a superposition of terms corresponding to distinct outcomes. As the apparatus interacts with its environment, which may include observers, these systems become entangled with the apparatus and quantum system, the net result of which is a quantum state involving, for each of the possible experimental outcomes, a term in which the apparatus reading corresponds to that outcome, there are records of that outcome in the environment, observers observe that outcome, etc.. Everett proposed that each of these terms be taken to be equally real. From a God’s-eye-view, there is no unique experimental outcome, but one can also focus on a particular determinate state of one subsystem, say, the experimental apparatus, and attribute to the other systems participating in the entangled state a relative state, relative to that state of the apparatus. That is, relative to the apparatus reading ‘+’ is a state of the environment recording that result and states of observers observing that result (see the entry on Everett’s relative-state formulation of quantum mechanics, for more detail on Everett’s views). Everett’s work has inspired a family of views that go by the name of “Many Worlds” interpretations; the idea is that each of the terms of the superposition corresponds to a coherent world, and all of these worlds are equally real. As time goes on, there is a proliferation of these worlds, as situations arise that give rise to a further multiplicity of outcomes (see the entry many-worlds interpretation of quantum mechanics, and Saunders 2007, for overviews of recent discussions; Wallace 2012 is an extended defense of an Everettian interpretation of quantum mechanics). There is a family of distinct, but related views, that go by the name of “Relational Quantum Mechanics”. These views agree with Everett in attributing to a system definite values of dynamical variables only relative to the states of other systems; they differ in that, unlike Everett, they do not take the quantum state as their basic ontology (see the entry on relational quantum mechanics for more detail). 4.3 Extended Wigner’s friend scenarios as a source of no-go theorems As mentioned, quantum theory, as standardly formulated, employs a division of the world into a part that is treated with the theory, and a part that is not. Both von Neumann and Heisenberg emphasized an element of arbitrariness in the location of the division. In some formulations, the division was thought of as a distinction between observer and observed, and it became common to say that quantum mechanics requires reference to an observer for its formulation. The founders of quantum mechanics tended to assume implicitly that, though the “cut” is somewhat moveable, in any given analysis a division would be settled on, and one would not attempt to combine distinct choices of the cut in one analysis of an experiment. If, however, one thinks of the cut as marking the distinction between observer and observed, one is led to ask about situations involving multiple observers. Is each observer permitted to treat the other as a quantum system? The consideration of such scenarios was initiated by Wigner (1961). Wigner considered a hypothetical scenario in which a friend conducts an observation, and he himself treats the joint system, consisting of the friend and the system experimented upon, as a quantum system. For this reason, scenarios of this sort have come to be known as “Wigner’s friend” scenarios. Wigner was led by consideration of such scenarious to hypothesize that conscious observers cannot be in a superposition of states corresponding to distinct perceptions; the introduction of conscious observers initiates a physical collapse of the quantum state; this involves, according to Wigner, “a violation of physical laws where consciousness plays a role” (Wigner 1961, 294 ;167, 181). Frauchiger and Renner (2018) initiated the discussion of scenarios of this sort involving more than two observers, which have come to be called “extended Wigner’s friend” scenarios. Further results along these lines include Brukner (2018), Bong et al. (2020), and Guérin et al. (2021). The strategy of these investigations is to present some set of plausible-seeming assumptions (a different set, for each of the works cited), and to show, via consideration of a hypothetical situation involving multiple observers, the inconsistency of that set of assumptions. The theorems are, therefore, no-go theorems for approaches to the measurement problem that would seek to satisfy all of the members of the set of assumptions that has been shown to be inconsistent. An assumption common to all of these investigations is that it is always permissible for one observer to treat systems containing other observers within quantum mechanics and to employ unitary evolution for those systems. This means that collapse is not regarded as a physical process. It is also assumed that each observer always perceives a unique outcome for any experiment performed by that observer; this excludes Everettian interpretations. Where the works cited vary is in the other assumptions made. It should be noted that each of the major avenues of approach to the measurement problem is capable of giving an account of goings-on in any physical scenario, including the ones considered in these works. Each of them, therefore, must violate some member of the set of assumptions shown to be inconsistent. These results do not pose problems for existing approaches to the measurement problem; rather, they are no-go theorems for approaches that might seek to satisfy all of the set of assumptions shown to be inconsistent. As the assumptions considered include both unitary evolution and unique outcomes of experiments, and the scenarios considered involved situations involving superpositions of distinct experimental outcomes, these results concern theories on which the quantum state, as given by the Schrödinger equation, is not a complete description of reality, as it fails to determine the unique outcomes perceived by the observers. These preceptions could be thought of as supervening on brain states, in which case there is physical structure not included in the quantum state, or as attributes of immaterial minds. On either interpretation, the sorts of theories ruled out fall under the first horn of Bell’s dilemma, mentioned in section 4.2, and these no-go results in part reproduce, and in part extend, no-go results for certain sorts of modal interpretations (see entry on modal interpretations of quantum mechanics). These results involving extended Wigner’s friend scenarios have engendered considerable philosophical discussion; see Sudbery (2017, 2019), Healey (2018, 2020), Dieks (2019), Losada et al. (2019), Dascal (2020), Evans (2020), Fortin and Lombardi (2020), Kastner (2020), Muciño & Okon (2020), Bub (2020, 2021), Cavalcanti (2021), Cavalcanti and Wiseman (2021), and Żukowski and Markiewicz (2021). 4.4 The role of decoherence A quantum state that is a superposition of two distinct terms, such as \[ \ket{\psi} = a \ket{\psi_{1}} + b \ket{\psi_{2}} , \] where \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\) are distinguishable states, is not the same state as a mixture of \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\), which would be appropriate for a situation in which the state prepared was either \(\ket{\psi_{1}}\) or \(\ket{\psi_{2}}\), but we don’t know which. The difference between a coherent superposition of two terms and a mixture has empirical consequences. To see this, consider the double-slit experiment, in which a beam of particles (such as electrons, neutrons, or photons) passes through two narrow slits and then impinges on a screen, where the particles are detected. Take \(\ket{\psi_{1}}\) to be a state in which a particle passes through the top slit, and \(\ket{\psi_{2}}\), a state in which it passes through the bottom slit. The fact that the state is a superposition of these two alternatives is exhibited in interference fringes at the screen, alternating bands of high and low rates of absorption. This is often expressed in terms of a difference between classical and quantum probabilities. If the particles were classical particles, the probability of detection at some point \(p\) of the screen would simply be a weighted average of two conditional probabilities: the probability of detection at \(p\), given that the particle passed through the top slit, and the probability of detection at \ (p\), given that the particle passed through the bottom slit. The appearance of interference is an index of nonclassicality. Suppose, now, that the electrons interact with something else (call it the environment) on the way to the screen, that could serve as a “which-way” detector; that is, the state of this auxiliary system becomes entangled with the state of the electron in such a way that its state is correlated with \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\). Then the state of the quantum system, \(s\), and its environment, \(e\), is \[ \ket{\psi} _{se} = a \ket{ \psi_{1}} _{s} \ket{ \phi_{1}} _{e} + b \ket{ \psi_{2}}_{s} \ket{ \phi_{2}} _{e} \] If the environment states \(\ket{\phi_{1}} _{e}\) are \(\ket{\phi_{2}}_{e}\) are distinguishable states, then this completely destroys the interference fringes: the particles interact with the screen as if they determinately went through one slit or the other, and the pattern that emerges is the result of overlaying the two single-slit patterns. That is, we can treat the particles as if they followed (approximately) definite trajectories, and apply probabilities in a classical manner. Now, macroscopic objects are typically in interaction with a large and complex environment—they are constantly being bombarded with air molecules, photons, and the like. As a result, the reduced state of such a system quickly becomes a mixture of quasi-classical states, a phenomenon known as decoherence. A generalization of decoherence lies at the heart of an approach to the interpretation of quantum mechanics that goes by the name of decoherent histories approach (see the entry on the consistent histories approach to quantum mechanics for an overview). Decoherence plays important roles in the other approaches to quantum mechanics, though the role it plays varies with approach; see the entry on the role of decoherence in quantum mechanics for information on this. 4.5 Comparison of approaches to the measurement problem Most of the above approaches take it that the goal is to provide an account of events in the world that recovers, at least in some approximation, something like our familiar world of ordinary objects behaving classically. None of the mainstream approaches accord any special physical role to conscious observers. There have, however, been proposals in that direction (see the entry on quantum approaches to consciousness for discussion). All of the above-mentioned approaches are consistent with observation. Mere consistency, however, is not enough; the rules for connecting quantum theory with experimental results typically involve nontrivial (that is, not equal to zero or one) probabilities assigned to experimental outcomes. These calculated probabilities are confronted with empirical evidence in the form of statistical data from repeated experiments. Extant hidden-variables theories reproduce the quantum probabilities, and collapse theories have the intriguing feature of reproducing very close approximations to quantum probabilities for all experiments that have been performed so far but departing from the quantum probabilities for other conceivable experiments. This permits, in principle, an empirical discrimination between such theories and no-collapse theories. A criticism that has been raised against Everettian theories is that it is not clear whether they can even make sense of statistical testing of this kind, as it does not, in any straightforward way, make sense to talk of the probability of obtaining, say, a ‘+” outcome of a given experiment when it is certain that all possible outcomes will occur on some branch of the wavefunction. This has been called the “Everettian evidential problem”. It has been the subject of much recent work on Everettian theories; see Saunders (2007) for an introduction and overview. If one accepts that Everettians have a solution to the evidential problem, then, among the major lines of approach, none is favored in a straightforward way by the empirical evidence. There will not be space here to give an in-depth overview of these ongoing discussions, but a few considerations can be mentioned, to give the reader a flavor of the discussions; see entries on particular approaches for more detail. Everettians take, as a virtue of the approach, the fact that it does not involve an extension or modification of the quantum formalism. Bohmians claim, in favor of the Bohmian approach, that a theory on these lines provides the most straightforward picture of events; ontological issues are less clear-cut when it comes to Everettian theories or collapse theories. Another consideration is compatibility with relativistic causal structure. See Myrvold (2021) for an overview of relavistic constraints on approaches to the measurement problem.The de Broglie-Bohm theory requires a distinguished relation of distant simultaneity for its formulation, and, it can be argued, this is an ineliminable feature of any hidden-variables theory of this sort, that selects some observable to always have definite values (see Berndl et al. 1996; Myrvold 2002, 2021). On the other hand, there are collapse models that are fully relativistic. On such models, collapses are localized events. Though probabilities of collapses at spacelike separation from each other are not independent, this probabilistic dependence does not require us to single one out as earlier and the other later. Thus, such theories do not require a distinguished relation of distant simultaneity. There remains, however, some discussion of how to equip such theories with beables (or “elements of reality”). See the entry on collapse theories and references therein; see also, for some recent contributions to the discussion, Fleming (2016), Maudlin (2016), and Myrvold (2016). In the case of Everettian theories, one must first think about how to formulate the question of relativistic locality. Several authors have approached this issue in somewhat different ways, with a common conclusion that Everettian quantum mechanics is, indeed, local. (See Vaidman 1994; Bacciagaluppi 2002; Chapter 8 of Wallace 2012; Tipler 2014; Vaidman 2016; and Brown and Timpson 2016.) 5. Ontological Issues As mentioned, a central question of interpretation of quantum mechanics concerns whether quantum states should be regarded as representing anything in physical reality. If this is answered in the affirmative, this gives rise to new questions, namely, what sort of physical reality is represented by the quantum state, and whether a quantum state could in principle give an exhaustive account of physical reality. 5.1 The question of quantum state realism. Harrigan and Spekkens (2010) have introduced a framework for discussing these issues. In their terminology, a complete specification of the physical properties is given by the ontic state of a system. An ontological model posits a space of ontic states and associates, with any preparation procedure, a probability distribution over ontic states. A model is said to be \(\psi\)-ontic if the ontic state uniquely determines the quantum state; that is, if there is a function from ontic states to quantum states (this includes both cases in which the quantum state also completely determines the physical state, and cases, such as hidden-variables theories, in which the quantum state does not completely determine the physical state). In their terminology, models that are not \(\psi\) -ontic are called \(\psi\)-epistemic. If a model is not \(\psi\)-ontic, this means that it is possible for some ontic states to be the result of two or more preparations that lead to different assignments of pure quantum states; that is, the same ontic state may be compatible with distinct quantum states. This gives a nice way of posing the question of quantum state realism: are there preparations corresponding to distinct pure quantum states that can give rise to the same ontic state, or, conversely, are there ontic states compatible with distinct quantum states? Pusey, Barrett, and Rudolph (2012) showed that, if one adopts a seemingly natural independence assumption about state preparations—namely, the assumption that it is possible to prepare a pair of systems in such a way that the probabilities for ontic states of the two systems are effectively independent—then the answer is negative; any ontological model that reproduces quantum predictions and satisfies this Preparation Independence assumption must be a \(\psi\)-ontic model. The Pusey, Barrett and Rudolph (PBR) theorem does not close off all options for anti-realism about quantum states; an anti-realist about quantum states could reject the Preparation Independence assumption, or reject the framework within which the theorem is set; see discussion in Spekkens (2015): 92–93. See Leifer (2014) for a careful and thorough overview of theorems relevant to quantum state realism, and Myrvold (2020) for a presentation of a case for quantum state realism based on theorems of this sort. 5.2 Ontological category of quantum states The major realist approaches to the measurement problem are all, in some sense, realist about quantum states. Merely saying this is insufficient to give an account of the ontology of a given interpretation. Among the questions to be addressed are: if quantum states represent something physically real, what sort of thing is it? This is the question of the ontological construal of quantum states. Another question is the EPR question, whether a description in terms of quantum states can be taken as, in principle, complete, or whether it must be supplemented by different ontology. De Broglie’s original conception of the “pilot wave” was that it would be a field, analogous to an electromagnetic field. The original conception was that each particle would have its own guiding wave. However, in quantum mechanics as it was developed at the hands of Schrödinger, for a system of two or more particles there are not individual wave functions for each particle, but, rather, a single wave function that is defined on \(n\)-tuples of points in space, where \(n\) is the number of particles. This was taken, by de Broglie, Schrödinger and others, to militate against the conception of quantum wave functions as fields. If quantum states represent something in physical reality, they are unlike anything familiar in classical physics. One response that has been taken is to insist that quantum wave functions are fields nonetheless, albeit fields on a space of enormously high dimension, namely, \(3n\), where \(n\) is the number of elementary particles in the universe. On this view, this high-dimensional space is thought of as more fundamental than the familiar three-dimensional space (or four-dimensional spacetime) that is usually taken to be the arena of physical events. See Albert (1996, 2013), for the classic statement of the view; other proponents include Loewer (1996), Lewis (2004), Ney (2012, 2013a,b, 2021), and North (2013). Most of the discussion of this proposal has taken place within the context of nonrelativistic quantum mechanics, which is not a fundamental theory. It has been argued that considerations of how the wave functions of nonrelativistic quantum mechanics arise from a quantum field theory undermines the idea that wave functions are relevantly like fields on configuration space, and also the idea that configuration spaces can be thought of as more fundamental than ordinary spacetime (Myrvold 2015). A view that takes a wave function as a field on a high-dimensional space must be distinguished from a view that takes it to be what Belot (2012) has called a multi-field, which assigns properties to \(n\)-tuples of points of ordinary three-dimensional space. These are distinct views; proponents of the \(3n\)-dimensional conception make much of the fact that it restores Separability: on this view, a complete specification of the way the world is, at some time, is given by specification of local states of affairs at each address in the fundamental (\(3n\)-dimensional) space. Taking a wave function to be a multi-field, on the other hand, involves accepting nonseparability. Another difference between taking wave-functions as multi-fields on ordinary space and taking them to be fields on a high-dimensional space is that, on the multi-field view, there is no question about the relation of ordinary three-dimensional space to some more fundamental space.­ Hubert and Romano (2018) argue that wave-functions are naturally and straightforwardly construed as multi-fields. It has been argued that, on the de Broglie-Bohm pilot wave theory and related pilot wave theories, the quantum state plays a role more similar to that of a law in classical mechanics; its role is to provide dynamics for the Bohmian corpuscles, which, according to the theory, compose ordinary objects. See Dürr, Goldstein, and Zanghì (1997), Allori et al. (2008), Allori (2021). Dürr, Goldstein, and Zanghì (1992) introduced the term “primitive ontology” for what, according to a physical theory, makes up ordinary physical objects; on the de Broglie-Bohm theory, this is the Bohmian corpuscles. The conception is extended to interpretations of collapse theories by Allori et al. (2008). Primitive ontology is to be distinguished from other ontology, such as the quantum state, that is introduced into the theory to account for the behavior of the primitive ontology. The distinction is meant to be a guide as to how to conceive of the nonprimitive ontology of the 6. Quantum computing and quantum information theory Quantum mechanics has not only given rise to interpretational conundrums; it has given rise to new concepts in computing and in information theory. Quantum information theory is the study of the possibilities for information processing and transmission opened up by quantum theory. This has given rise to a different perspective on quantum theory, one on which, as Bub (2000, 597) put it, “the puzzling features of quantum mechanics are seen as a resource to be developed rather than a problem to be solved” (see the entries on quantum computing and quantum entanglement and information). 7. Reconstructions of quantum mechanics and beyond Another area of active research in the foundations of quantum mechanics is the attempt to gain deeper insight into the structure of the theory, and the ways in which it differs from both classical physics and other theories that one might construct, by characterizing the structure of the theory in terms of very general principles, often with an information-theoretic flavour. This project has its roots in early work of Mackey (1957, 1963), Ludwig (1964), and Piron (1964) aiming to characterize quantum mechanics in operational terms. This has led to the development of a framework of generalized probabilistic model. It also has connections with the investigations into quantum logic initiated by Birkhoff and von Neumann (1936) (see the entry quantum logic and probability theory for an overview). Interest in the project of deriving quantum theory from axioms with clear operational content was revived by the work of Hardy (2001 [2008], Other Internet Resources). Significant results along these lines include the axiomatizations of Masanes and Müller (2011) and Chiribella, D’Ariano, and Perinotti (2011). See Chiribella and Spekkens (2015) for an overview of this burgeoning research area. • Albert, David Z., 1996, “Elementary quantum metaphysics”, in J.T. Cushing, A. Fine, & S. Goldstein (eds.), Bohmian Mechanics and Quantum Mechanics: An appraisal, Dordrecht: Kluwer, 277–284. • –––, 2013, “Wave function realism”, in Ney and Albert (eds.) 2013: 52–57. • Allori, Valia, 2013, “Primitive Ontology and the Structure of Fundamental Physical Theories”, in Ney and Albert (eds.) 2013: 58–90. • Allori, Valia, 2021, “Wave-functionalism”, Synthese, 199: 12271–12293. • Allori, Valia, Sheldon Goldstein, Roderich Tumulka, and Nino Zanghì, 2008, “On the Common Structure of Bohmian Mechanics and the Ghirardi–Rimini–Weber Theory”, The British Journal for the Philosophy of Science, 59(3): 353–389. doi:10.1093/bjps/axn012 • Bacciagaluppi, Guido, 2002, “Remarks on Space-time and Locality in Everett’s Interpretation”, in T. Placzek and J. Butterfield (eds.), Non-locality and Modality, Berlin: Springer, 105–124. • Bacciagaluppi, Guido, and Antony Valentini, 2009, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge: Cambridge University Press. • Bell, J.S., 1966, “On the Problem of Hidden Variables in Quantum Mechanics”, Reviews of Modern Physics, 38: 447–52; reprinted in Bell 2004: 1–13. • –––, 1986, “Six Possible Worlds of Quantum Mechanics”, in S. Allén (ed.), Possible Worlds in Humanities, Arts and Sciences, Berlin, Walter de Gruyter, 359–373; reprinted in Bell 2004, 181–195. • –––, 1987, “Are There Quantum Jumps?” in C.W. Kilmister (ed), Schrödinger: Centenary celebration of a polymath, Cambridge: Cambridge University Press, 41–52; reprinted in Bell 2004: 201–212. • –––, 1990, “Against ‘Measurement’”, Physics World, 3: 33–40; reprinted in Bell 2004: 213–231. • –––, 2004, Speakable and Unspeakable in Quantum Mechanics, 2^nd edition, Cambridge: Cambridge University Press. • Bell, Mary and Shan Gao (eds.), 2016, Quantum Nonlocality and Reality: 50 Years of Bell’s Theorem, Cambridge: Cambridge University Press. • Belot, Gordon, 2012, “Quantum States for primitive ontologists: a case study”, European Journal for the Philosophy of Science, 2: 67–83. • Berndl, Karin, Detlef Dürr, Sheldon Goldstein, and Nino Zanghì, 1996, “Nonlocality, Lorentz invariance, and Bohmian quantum theory“, Physical Review A, 53: 2062–2073. • Birkhoff, Garrett, and John von Neumann, 1936, “The Logic of Quantum Mechanics”, Annals of Mathematics (second series), 37: 823–43. • Bong, Kok-Wei, Aníbal Utreras-Alarcón, Farzad Ghafari, Yeong-Cherng Liang, Nora Tischler, Eric G. Cavalcanti, Geoff J. Pryde & Howard M. Wiseman, 2020, “A strong no-go theorem on the Wigner’s friend paradox”, Nature Physics, 16: 1199–1205. • Brown, Harvey R. and Christopher G. Timpson, 2016,“Bell on Bell’s Theorem: The Changing Face of Nonlocality”, in Bell and Gao (eds.) 2016: 91–123. • Brukner, Časlav, 2018, “A no-go theorem for observer-independent facts”, Entropy, 20(5): 350. • Bub, Jeffrey, 2000, “Indeterminacy and entanglement: the challenge of quantum mechanics”, The British Journal for the Philosophy of Science, 51: 597–615. • –––, 2020, “‘Two Dogmas’ Redux”, in M. Hemmo and O. Shenker (eds.), Quantum, Probability, Logic: The Work and Influence of Itamar Pitowsky, Berlin: Springer, 199–216. • –––, 2021, “Understanding the Frauchiger–Renner Argument”, Foundations of Physics, 51: 36. • Cavalcanti, Eric, 2021, “The View from a Wigner Bubble”, Foundations of Physics, 51: 39. • Cavalcanti, Eric, and Howard M. Wiseman, 2021, “Implications of Local Friendliness for Violation for Quantum Causality”, Entropy, 23: 925. • Chiribella, Giulio, Giacomo Mauro D’Ariano, and Paolo Perinotti, 2011, “Informational derivation of quantum theory”, Physical Review A, 84: 012311. doi:10.1103/PhysRevA.84.012311 • Chiribella, Giulio and Robert W. Spekkens (eds.), 2015, Quantum Theory: Informational Foundations and Foils, Berlin: Springer. • Dascal, Michael, 2020, “What’s left for the neo-Copenhagen theorist”, Studies in History and Philosophy of Modern Physics, 72: 310–321. • Deutsch, David and Patrick Hayden, 2000, “Information flow in entangled quantum systems”, Proceedings of the Royal Society of London A, 456: 1759–74. • Dieks, Dennis, 2019, “Quantum Mechanics and Perspectivalism”, in O. Lombardi, S. Fortin, C. Lopez, and F. Holik (eds.), Quantum Worlds: Perspectives on the Ontology of Quantum Mechanics, Cambridge: Cambridge University Press, 51–70. • Dirac, P.A.M., 1935, Principles of Quantum Mechanics, 2^nd edition, Oxford: Oxford University Press. • Dürr, Detlef, Sheldon Goldstein, and Nino Zanghì, “Quantum Equilibrium and the Origin of Absolute Uncertainty”, Journal of Statistical Physics, 67: 843–907. • –––, 1997, “Bohmian Mechanics and the Meaning of the Wave Function”, in R.S. Cohen, M. Horne and J. Stachel (eds.), Experimental Metaphysics: Quantum Mechanical Studies for Abner Shimony (Volume 1), Boston: Kluwer Academic Publishers. • Einstein, Albert, Boris Podolsky, and Nathan Rosen, 1935, “Can Quantum-Mechanical Description of Reality Be Considered Complete?” Physical Review, 47: 777–780. • Einstein, Albert, 1936, “Physik und Realität”, Journal of the Franklin Institute, 221: 349–382. English translation in Einstein 1954. • –––, 1948, “Quanten-Mechanik und Wirklichkeit”, Dialectica, 2: 320–324. • –––, 1949, “Autobiographical notes”, in P.A. Schilpp (ed.), Albert Einstein: Philosopher-Scientist, Chicago: Open Court. • –––, 1954, “Physics and reality”, in Ideas and Opinions, New York: Crown Publishers, Inc., 290–323 (translation of Einstein 1936). • Egg, Matthias, 2017, “The physical salience of non-fundamental local beables”, Studies in History and Philosophy of Modern Physics, 57: 104–110. • –––, 2021, “Quantum Ontology without Speculation”. European Journal for Philosophy of Science, 11: 32. • Evans, Peter W., 2020, “Perspectival objectivity, Or: how I learned to stop worrying and love observer-dependent reality”, European Journal for Philosophy of Science, 10: 19. • Everett III, Hugh, 2012, The Everett Interpretation of Quantum Mechanics: Collected Works 1955–1980 With Commentary, Jeffrey A. Barrett and Peter Byrne (eds.), Princeton: Princeton University • Fleming, Gordon N., 2016, “Bell Nonlocality, Hardy’s Paradox and Hyperplane Dependence”, in Bell and Gao (eds.) 2016: 261–281. • Fortin, Sebastian, and Olimpia Lombardi, 2020, “The Frauchiger-Renner argument: A new no-go result?” Studies in History and Philosophy of Modern Physics, 70: 1–7. • Frauchiger, Daniela, & Renato Renner, 2018, “Quantum theory cannot consistently describe the use of itself”, Nature Communications, 9: 3711. • Freire Jr., Olival , Guido Bacciagaluppi, Olivier Darrigol, Thiago Hartz, Christian Joas, Alexei Kojevnikov, and Osvaldo Pessoa Jr. (eds.), 2022, The Oxford Handbook of the History of Quantum Interpretations, Oxford: Oxford University Press. • French, Steven, and Juha Saatsi (eds.), 2020, Scientific Realism and the Quantum, Oxford: Oxford University Press. • Fuchs, Christopher A., N. David Mermin, and Rüdiger Schack, 2014, “An introduction to QBism with an application to the locality of quantum mechanics”, American Journal of Physics, 82: 749–752. • Gao, Shan (ed.), 2018, Collapse of the Wave Function: Models, Ontology, Origin, and Implications, Cambridge: Cambridge University Press. • Guérin, Philippe Allard, Veronika Baumann, Flavio Del Santo, and Časlav Brukner, 2021, “A no-go theorem for the persistent reality of Wigner’s friend’s perception”, Communications Physics, 4: 93. • Harrigan, Nicholas and Robert W. Spekkens, 2010, “Einstein, Incompleteness, and the Epistemic View of Quantum States”, Foundations of Physics, 40: 125–157. • Healey, Richard, 2012, “Quantum Theory: A Pragmatist Approach”, The British Journal for the Philosophy of Science, 63: 729–771. • –––, 2017a, “Quantum States as Objective Informational Bridges”, Foundations of Physics, 47: 161–173. • –––, 2017b, The Quantum Revolution in Philosophy, Oxford: Oxford University Press. • –––, 2018, “Quantum theory and the limits of objectivity”, Foundations of Physics, 48: 1568–1589. • —, 2020, “Pragmatist Quantum Realism”, in French and Saatsi (eds.), 2021: 123–146. • Heisenberg, Werner, 1930a, Die Physicalische Prinzipien der Quantentheorie, Leipzig: Verlag von S. Hirzel. • –––, 1930b, The Physical Principles of the Quantum Theory, Carl Eckert and F.C. Hoyt (trans.), Chicago: University of Chicago Press. • Howard, Don, 2004, “Who Invented the ‘Copenhagen Interpretation’? A Study in Mythology”, Philosophy of Science, 71: 669–682. • Hubert, Mario, and Davide Romano, 2018, “The wave-function as a multi-field”, European Journal for the Philosophy of Science, 8: 521–537. • Kastner, Ruth, 2020, “Unitary‑Only Quantum Theory Cannot Consistently Describe the Use of Itself: On the Frauchiger–Renner Paradox”, Foundations of Physics, 50: 441–456. • Knox, Eleanor, and Alastair Wilson (eds.), 2021, The Routledge Companion to Philosophy of Physics, London: Routledge. • Kochen, Simon and Ernst Specker, 1967, “The Problem of Hidden Variables in Quantum Mechanics”, Journal of Mathematics and Mechanics, 17: 59–87. • Lazarovici, Dustin, and Mario Hubert, 2019, “How Quantum Mechanics can consistently describe the use of itself”, Scientific Reports, 9: 470. • Leifer, Matthew Saul, 2014, “Is the Quantum State Real? An Extended Review of \(\psi\)-ontology Theorems”, Quanta, 3: 67–155. • Lewis, Peter J., 2004, “Life in configuration space”, The British Journal for the Philosophy of Science, 55: 713–729. doi:10.1093/bjps/55.4.713 • Loewer, B., 1996, “Humean supervenience”, Philosophical Topics, 24: 101–127. • London, Fritz and Edmond Bauer, 1939, La théorie de l’observation en mécanique quantique, Paris: Hermann. English translation, “The theory of observation in quantum mechanics”, in Quantum Theory and Measurement, J.A. Wheeler and W.H. Zurek (eds.), Princeton: Princeton University Press, 1983, 217–259. • Losada, Marcelo, Roberto Laura, and Olimpia Lombardi, 2019, “Frauchiger-Renner argument and quantum histories”, Physical Review A, 100: 052114. • Ludwig, G., 1964, “Versuch einer axiomatischen Grundlegung der Quantenmechanik und allgemeinerer physikalischer Theorien”, Zeitschrift für Physik, 181: 233–260. • Mackey, George W. 1957, “Quantum Mechanics and Hilbert Space”, American Mathematical Monthly, 64: 45–57. • –––, 1963, The Mathematical Foundations of Quantum Mechanics: A lecture-note volume, New York: W.A. Benjamin. • Masanes, Lluís and Markus P. Müller, 2011, “A derivation of quantum theory from physical Requirements”, New Journal of Physics, 13: 063001. • Maudlin, Tim, 2016, “Local Beables and the Foundations of Physics”, in Bell and Gao (eds.) 2016: 317–330. • Muciño, R., and E. Okon, 2020, “Wigner’s convoluted friends”, Studies in History and Philosophy of Modern Physics, 72: 87–90. • Myrvold, Wayne C., 2002, “Modal Interpretations and Relativity”, Foundations of Physics, 32: 1773–1784. • –––, 2015, “What is a Wavefunction?” Synthese, 192: 3247–3274. • –––, 2016, “Lessons of Bell”s Theorem: Nonlocality, Yes; Action at a Distance, Not Necessarily”, in Bell and Gao (eds.) 2016: 237–260. • –––, 2018, “Ontology for Collapse Theories,” in Gao (ed.) 2018: 97–123. • –––, 2020, “On the Status of Quantum State Realism,” in French and Saatsi (eds.), 2020: 229–251. • –––, 2021, “Relativistic Constraints on Interpretations of Quantum Mechanics”, in Knox and Wilson (eds.) 2021: 99–121. • Ney, Alyssa, 2012, “The status of our ordinary three dimensions in a quantum universe”, Noûs, 46: 525–560. • –––, 2013a, “Introduction”, in Ney and Albert (eds.) 2013: 1–51. • –––, 2013b, “Ontological reduction and the wave function ontology”, in Ney and Albert (eds.) 2013: 168– 183. • –––, 2021, The World in the Wave Function: A Metaphysics for Quantum Physics, Oxford, Oxford University Press. • Ney, Alyssa and David Z. Albert (eds.), 2013, The Wave Function: Essays on the Metaphysics of Quantum Mechanics, Oxford: Oxford University Press. • North, Jill, 2013, “The structure of a quantum world”, in Ney and Albert (eds.) 2013: 184–202. • Piron, Constantin, 1964, “Axiomatique quantique”, Helvetica Physica Acta, 37: 439–468. • Pusey, Matthew F., Jonathan Barrett, and Terry Rudolph, 2012, “On the Reality of the Quantum State”, Nature Physics, 8: 475–478. • Saunders, Simon, 2007. “Many Worlds? An Introduction”, in S. Saunders, J. Barrett, A. Kent, and D. Wallace (eds.), Many Worlds? Everett, Quantum Theory, and Reality, Oxford: Oxford University Press, 1–50. • Spekkens, Robert W., 2007, “Evidence for the Epistemic view of Quantum States: A Toy Theory”, Physical Review A, 75: 032110. • –––, 2015, “Quasi-Quantization: Classical Statistical Theories with an Epistemic Restriction”, in Chiribella and Spekkens 2015: 83–135. • Sudbery, Anthony, 2017, “Single-world theory of the extended Wigner’s friend experiment”, Foundations of Physics, 47: 658–669. • –––, 2019, “The Hidden Assumptions of Frauchiger and Renner”, International Journal of Quantum Foundations, 5: 98-109. • Tipler, Frank J., 2014, “Quantum nonlocality does not exist”, Proceedings of the National Academy of Sciences, 111: 11281–6. • Vaidman, Lev, 1994, “On the paradoxical aspects of new quantum experiments”, in D. Hull, M. Forbes and R.M. Burian (eds.), PSA 1994 (Volume 1), Philosophy of Science Association, 211–17. • –––, 2016, “The Bell Inequality and the Many-Worlds Interpretation”, in Bell and Gao (eds.) 2016: 195–203. • von Neumann, John, 1932, Mathematische Grundlagen der Quantenmechanik, Berlin, Springer Verlag. • –––, 1955, Mathematical Foundations of Quantum Mechanics, Robert T. Beyer (trans.), Princeton: Princeton University Press. • Wallace, David, 2012, The Emergent Multiverse: Quantum Theory according to the Everett interpretation, Oxford: Oxford University Press. • –––, 2020, “On the Plurality of Quantum Theories: Quantum Theory as a Framework, and its Implications for the Quantum Measurement Problem”, in French and Saatsi (eds.) 2020: 78–102. • Wigner, Eugene P., 1962, “Remarks on the Mind-Body Problem”, in I.J. Good (ed.), The Scientist Speculates: An Anthology of Partly-Baked Ideas, London: William Heinemann, 284–320; reprinted in Wigner (1967), 171–184. • –––, 1967, Symmetries and Reflections: Scientific Essays of Eugene P. Wigner, Bloomington, Indiana University Press. • Żukowski, Marek, and Marcin Markiewicz, 2021, “Physics and Metaphysics of Wigner’s Friends: Even Performed Pre-measurements Have No Results”, Physical Review Letters, 126: 130402. Academic Tools How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database.
{"url":"https://seop.illc.uva.nl/entries/qt-issues/index.html","timestamp":"2024-11-14T21:08:59Z","content_type":"text/html","content_length":"104545","record_id":"<urn:uuid:9f6d5077-dd7b-479d-8089-6c647ca0c2b8>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00079.warc.gz"}
27. A pair of dice is thrown. If the two... | Filo Question asked by Filo student 27. A pair of dice is thrown. If the two numbers appearing on them are different, find the probability that the sum of the numbers appearing is 6. (2016) Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 7 mins Uploaded on: 2/3/2023 Was this solution helpful? Found 4 tutors discussing this question Discuss this question LIVE for FREE 11 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text 27. A pair of dice is thrown. If the two numbers appearing on them are different, find the probability that the sum of the numbers appearing is 6. (2016) Updated On Feb 3, 2023 Topic Algebra Subject Mathematics Class Class 12 Answer Type Video solution: 1 Upvotes 121 Avg. Video Duration 7 min
{"url":"https://askfilo.com/user-question-answers-mathematics/frac-4-20-frac-2-20-0-frac-6-20-frac-3-10-27-a-pair-of-dice-34303834373432","timestamp":"2024-11-08T21:09:24Z","content_type":"text/html","content_length":"415427","record_id":"<urn:uuid:816956b8-3afd-4b37-93ea-7289b9d6d43c>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00193.warc.gz"}
Ancient PI (π) Understanding that a circle’s circumference could be approximated by a polygon perimeter with an infinite number of sides, Viète consequentially deduced the following formula for calculating π in This formula allowed Viète to accurately approximate the constant π to nine digits, as 3.141592654. Long before the Greek mathematician Archimedes used polygons to approximate π, it is apparent that Egyptian and Babylonian builders recorded more practical approximations of π. For example, a Babylonian clay tablet (~1900-1600 BC) implies a value of 25 / 8, which is equal to 3.125. Also, the Rhind Papyrus of Egypt treats π as 19 / 6, which is equal to 3.1605. Nearly a thousand years later in India, Sanskrit texts recorded π as 9785 / 5568, which is equal to 3.088. Indian mathematicians also approximated π as √10, or 3.1622. After 100 AD, Ptolemy expressed π to four digits as 3.1416 - close enough to build almost anything in the ancient world. The Pyramids of Giza in Cairo, Egypt Perhaps of greater relevance to the history of π is Egypt’s Great Pyramid. Giza's wonder of the ancient world was built with a perimeter of about 1760 cubits and a height of about 280 cubits. Some have noted that the ratio was equal to 44 / 7, which is two times the familiar 22 / 7 ratio, which is still used today to conveniently calculate π to two decimal places, or 3.14. This Giza Pyramid geometry is summarized in the illustration below: The PI Constant as Conveyed in the Great Pyramid While some Egyptologists try to suggest this recorded “ratio in stone” is but a matter of mere coincidence, it would be absurd to propose that people building on a scale such as Giza would not be able to arrive at 3.14 by means of empirical measurement – provided that their engineers and mathematicians were remedial. After all, the Pyramid builders aligned sides of the square pyramid base to true north within 4 minutes of arc (equal to a total of 0.067 degree), which implies that the accuracy with which they measured or surveyed was twice that of their convenient π approximation! If they had the technology to place over 2 million blocks while controlling the base dimensions of the structure to greater than 0.1% accuracy, surely they could have measured the ratio between a circle and its diameter with equal or greater precision. Ikonos satellite image of the Great Pyramid Although several scholars have debated Israel's involvement in creating the Giza Pyramid, few would suggest that the Pyramids were not standing at the time of Israel's Exodus. Moses was said to have been "Learned in the wisdom of the Egyptians". While skeptics can easily dismiss the relationships between the structure's and the 22/7 ratio for π, it's hard to believe that Moses would have left Egypt at 40 years old not knowing how to calculate the circumference of a circle. 1. http://commons.wikimedia.org/wiki/File:Archimedes_pi.png 2. http://www.theglobaleducationproject.org/egypt/studyguide/gpmath.php
{"url":"http://project314.org/34-explore/pages/133-ancient-pi","timestamp":"2024-11-09T23:37:18Z","content_type":"text/html","content_length":"47184","record_id":"<urn:uuid:0663c41d-f185-4fa8-a181-fd09193d1c32>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00194.warc.gz"}
Excel Formula for Highlighting Columns in Python In this tutorial, we will learn how to write an Excel formula in Python that highlights two columns in a row based on whether one of the cells in those columns is blank. This can be achieved using conditional formatting in Excel, which allows us to apply formatting based on certain conditions. By using the IF, OR, and ISBLANK functions in Excel, we can create a formula that checks if either of the cells in the two columns is blank and applies the desired formatting if the condition is met. To implement this formula in Python, we can use the openpyxl library, which provides functionality for working with Excel files. We will use the openpyxl library to load the Excel file, iterate over the rows, and apply the conditional formatting based on the formula. Let's dive into the step-by-step explanation of the formula and see how it can be implemented in Python using the openpyxl library. Step-by-step Explanation 1. The formula uses the ISBLANK function to check if cell A1 is blank. If it is blank, the ISBLANK function returns TRUE; otherwise, it returns FALSE. 2. The ISBLANK function is also used to check if cell B1 is blank. If it is blank, the ISBLANK function returns TRUE; otherwise, it returns FALSE. 3. The OR function is used to check if either of the two conditions (cell A1 is blank or cell B1 is blank) is TRUE. If at least one of the conditions is TRUE, the OR function returns TRUE; otherwise, it returns FALSE. 4. The IF function is used to determine the result of the formula. If the OR function returns TRUE (indicating that one of the cells is blank), the IF function returns the text 'Highlight'; otherwise, it returns an empty string (''). 5. The formula can be applied to other rows by adjusting the cell references accordingly (e.g., A2, B2 for the second row, A3, B3 for the third row, and so on). For example, if we have the following data in columns A and B: The formula =IF(OR(ISBLANK(A1), ISBLANK(B1)), 'Highlight', '') would return 'Highlight' for the first row because cell A1 is blank. For the second row, the formula would return 'Highlight' because cell B2 is blank. For the third row, the formula would return 'Highlight' because both cell A3 and B3 are blank. For the fourth row, the formula would return an empty string ('') because neither cell A4 nor B4 is blank. Now that we understand the formula and its implementation in Python, let's see how we can apply it to an Excel file using the openpyxl library. An Excel formula =IF(OR(ISBLANK(A1), ISBLANK(B1)), "Highlight", "") Formula Explanation This formula uses the IF function in combination with the OR and ISBLANK functions to determine whether to highlight two columns in a row based on whether one of the cells in those columns is blank. Step-by-step explanation 1. The ISBLANK function is used to check if cell A1 is blank. If it is blank, the ISBLANK function returns TRUE; otherwise, it returns FALSE. 2. The ISBLANK function is also used to check if cell B1 is blank. If it is blank, the ISBLANK function returns TRUE; otherwise, it returns FALSE. 3. The OR function is used to check if either of the two conditions (cell A1 is blank or cell B1 is blank) is TRUE. If at least one of the conditions is TRUE, the OR function returns TRUE; otherwise, it returns FALSE. 4. The IF function is used to determine the result of the formula. If the OR function returns TRUE (indicating that one of the cells is blank), the IF function returns the text "Highlight"; otherwise, it returns an empty string (""). 5. The formula can be applied to other rows by adjusting the cell references accordingly (e.g., A2, B2 for the second row, A3, B3 for the third row, and so on). For example, if we have the following data in columns A and B: | A | B | | | | | 123 | | | | ABC | | XYZ | DEF | The formula =IF(OR(ISBLANK(A1), ISBLANK(B1)), "Highlight", "") would return "Highlight" for the first row because cell A1 is blank. For the second row, the formula would return "Highlight" because cell B2 is blank. For the third row, the formula would return "Highlight" because both cell A3 and B3 are blank. For the fourth row, the formula would return an empty string ("") because neither cell A4 nor B4 is blank.
{"url":"https://codepal.ai/excel-formula-generator/query/bD3VpzlV/excel-formula-highlight-columns","timestamp":"2024-11-03T17:15:34Z","content_type":"text/html","content_length":"100365","record_id":"<urn:uuid:1afcc016-a97d-4465-bc50-7aca22fce11c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00304.warc.gz"}
How am I supposed to do this We define a bow-tie quadrilateral as a quadrilateral where two sides cross each other. An example of a bow-tie quadrilatera is shown below. distinct points are chosen on a circle. We draw all chords that connect two of these points. Four of these chords are selected at random. What is the probability that these four chosen chords form a bow-tie quadrilateral? Guest Jan 23, 2023
{"url":"https://web2.0calc.com/questions/probability_73115","timestamp":"2024-11-11T21:28:54Z","content_type":"text/html","content_length":"18979","record_id":"<urn:uuid:05959ad5-0868-4780-8baa-c7bad9b2147d>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00482.warc.gz"}
Clever Math Puns That Will Multiply Your Fun115+ Clever Math Puns That Will Multiply Your Fun Clever Math Puns That Will Multiply Your Fun Who said numbers can’t be funny? Brace yourself for some side-splitting math puns that will add up to a great time! We’ve calculated the perfect formula for laughs. These puns are as infinite as pi. By the end, you’ll be counting the times you chuckled! Sum Fun with One-Liner Math Puns! 1. Pi rates of the Caribbean love their 3.14 treasure. 2. A circle’s favorite Netflix show is Law and Order: Sines. 3. Decimals have a point, don’t they? 4. Algebra needs a little bit of solve-esteem. 5. The parallelogram was always so right-angled. 6. Calculus is integral to my happiness. 7. Without geometry, life is pointless. 8. Graphs can be very plotting, don’t you think? 9. Numbers always count on each other. 10. Triangles are acutely aware of their angles. 11. A right angle is always 90 degrees cool. 12. Algebra: where you try to find your X and wonder Y. 13. Statistics show that probabilities are quite mean. 14. Geometry teachers have all the right angles. 15. Pi never goes on a diet, it’s irrational. 16. Fractions keep everything in equal parts. 17. Math teachers are rulers of their own world. 18. When you subtract, you make a real difference. 19. The math book was so sad because it had too many problems. 20. A negative number’s favorite dance is the slide. Math Puns for the Numerically Inclined 1. Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else. 2. What did the zero say to the eight? Nice belt! 3. Why can’t a nose be 12 inches long? Because then it would be a foot. 4. Why did the student wear glasses in math class? To improve di-vision. 5. Why was the math book sad? It had too many problems. 6. Did you hear about the mathematician who’s afraid of negative numbers? He’ll stop at nothing to avoid them. 7. Why do plants hate math? It gives them square roots. 8. Parallel lines have so much in common, it’s a shame they’ll never meet. 9. Why was the obtuse angle always so stressed out? Because it was never right. 10. Do you know why six was really afraid of seven? Because seven eight nine. 11. Why was the fraction skeptical? It had its doubts, couldn’t be whole-hearted. 12. The triangle said to the circle, You’re pointless. 13. If two’s company and three’s a crowd, what are four and five? Nine. 14. Why did the two fours skip lunch? They already eight. 15. How do you stay warm in a cold room? You go to the corner, it’s always 90 degrees. Mathematical Wordplay with Double Meanings 1. The fraction felt divided, like it couldn’t even itself out. 2. Algebra’s dating life is one big X finding a value. 3. Geometry taught the circle how to get its angle on. 4. The triangle went on a diet and lost some of its acute-ness. 5. The decimal point feels it’s always overlooked in big calculations. 6. Parallel lines have so much in common but never meet! 7. The math teacher’s jokes are all derivative, but at least they have limits. 8. The radius said, “Let’s just keep our distance!” 9. Calculus is integral to math, just don’t let it drive a wedge function between us. 10. The tangent line is always missing the point. 11. The mathematician’s favorite plant is a square root. 12. Numbers were always odd, but letters are irrational. 13. The equation was too complex; it didn’t have enough ‘solve’! 14. The subtraction problem felt it was going into negative territory. 15. Zero felt it was nothing without a one to give it value. Geomet-ree These Hilarious Math Puns 1. The fraction said, “I have my problems halved, but I still feel divided.” 2. Why was the equal sign so humble? Because it knew it wasn’t less or greater than anyone else. 3. The math book looks sad because it has too many problems to solve. 4. Why did the obtuse angle go to school? Because it wasn’t right. 5. Calculus jokes aren’t very fair—they’re all about limits! 6. Algebra’s favorite clothing store? Old Navy, because of all the variables. 7. When the geometry teacher tried to impress with a joke, it just didn’t measure up. 8. Pi and cake have something in common—they both go on forever. 9. Why is six afraid of seven? Because seven, eight (ate), nine! 10. A math teacher’s favorite vacation spot is Times Square. 11. Tangents and triangles went on a date; it was a bit off the curve but they had their angles. 12. Why was the number six so strong? It knew how to multiply. 13. Parallel lines have so much in common, it’s a shame they’ll never meet. 14. The triangle was great at poker; it always had the best angle. 15. The number zero said to eight, “Nice belt!” When Math Puns Integrate With Math Jokes 1. Why did the mathematician bring a ladder to the bar? She heard the drinks had high risks and likely hoods. 2. Geometry was feeling depressed, so it went to a shrink… to get its angles straightened out. 3. Calculus might seem hard, but it’s just a derivative consideration of life. 4. I told my algebra teacher a joke about an exponential curve—she found it steadily increasing in humor. 5. Do you know why prime numbers hate children? They can’t even! 6. When polygons throw a great party, it’s always an angle-tastic event. 7. A statistician’s dog barks with extreme frequency and means accurately. 8. The fraction proposed to its significant other, but there were some mixed feelings and improper answers. 9. The mathematician felt his relationship wasn’t adding up, so he solved for X and left the equation. 10. The number line threw a bash; it was full of points and absolutely positive vibes. 11. Why couldn’t the angle get a loan? Because its interest was completely irrational. 12. The equal sign couldn’t remain neutral—it wanted to equate itself with something more meaningful. 13. Complex numbers have such imaginary friends, it’s hard to keep up with their diverse circles. 14. Logarithms are the life of the party—they really know how to change the base and spice things up. 15. The geometry teacher wasn’t sure about canceling class, but the students said they were all on the same plane. Adding a Twist to Idioms: Math Pun Edition 1. A rolling stone gathers no cosine. 2. Don’t count your derivatives before they’re integrated. 3. A fraction saved is a fraction earned. 4. Pie in the sky. 5. Taking the path of least resistance is parallel to laziness. 6. Two’s a company, three’s a perfect triangle. 7. All roads lead to the square root of Rome. 8. Practice makes rational. 9. You can’t judge a book by its logarithm. 10. The early bird catches the polynomial. 11. Time and tangent wait for no man. 12. A penny for your thoughts, a dollar for your equations. 13. Absence makes the hypotenuse grow longer. 14. The sine of the times. 15. Actions speak louder than vectors. 16. A journey of a thousand miles begins with a single step function. 17. He who laughs last laughs in radians. 18. Too many cooks spoil the quadratic equation. 19. You can’t teach an old dog new algorithms. 20. Every cloud has a silver tangent. Math Punny Business 1. Alge-bra: The only support you’ll need in solving equations. 2. Geo-me-tree: Where shapes come to life and grow leaves. 3. Trigonome-tree: The branch of math that really gets to the root of angles. 4. Multi-ply: The only place where repetitive times are a good thing. 5. Divi-dend: The gift that keeps on giving in the world of division. 6. Fractional: When you’re only partially excited about math class. 7. Subtrac-tion: The action of taking away your free time with homework. 8. Calculus-trophes: The disasters that occur when limits go wrong. 9. Numera-tor: The superhero of fractions who always stays on top. 10. Denomina-tor: The underdog in every fraction that keeps things grounded. 11. Quadratic-tic: The nervous twitch you get when solving equations. 12. Asympto-tote: The bag that approaches you but never quite gets there. 13. Parabo-lick: When your graphing skills make everyone laugh out loud. 14. Coeffi-she-ent: The feminine touch in your polynomial equations. 15. Exponent-ially: How your math skills grow when you study hard. 16. Hypotenu-sis: The long and sometimes complicated relationship in a right triangle. 17. Variable-ious: The numerous mysterious elements in algebra. 18. Integra-tion: The act of combining math concepts unequally. 19. Sine-cere: The most genuine angle you’ll ever meet. 20. Tangent-al: When your conversation about math goes off on a wild angle. Ending the Collection of Math Puns 1. Why was the math book sad? It had too many problems. 2. How do you stay warm in a cold room? You go to the corner—it’s always 90 degrees. 3. Why is math so optimistic? It always looks at the positive side. 4. What’s a math teacher’s favorite place? Times Square. 5. Why did the student wear glasses during math class? To improve his “di-vision.” 6. Why are obtuse angles always so frustrated? Because they’re never right. 7. Why was the equal sign so humble? It knew it wasn’t less than or greater than anyone else. 8. What do you call friends who love math? Algebros. 9. How does a math poet express love? With “rhyme” and reason. 10. Why do plants hate math? It gives them square roots. 11. Why did the mathematician work at home? Because she could do sum of her work from there. 12. What did the zero say to the eight? Nice belt. 13. Why was the fraction always nervous? It couldn’t keep its numerator down. 14. How do you make seven an even number? Take away the “s.” 15. Why did the math student look sad? Because she didn’t know how to “cope-r” with her “tan-gent” of issues. 16. How do mathematicians plow fields? With pro-tractors. 17. What does a mathematician do about constipation? He works it out with a pencil. 18. What’s a bird’s favorite type of math? Owl-gebra. 19. Why don’t mathematicians argue? Because they always sum things up. 20. Why was the angle freezing? It was less than 32 degrees. Math puns can add a fun twist to an otherwise serious subject. They make learning and discussing math more enjoyable and engaging. So, the next time you come across a tricky equation, remember to lighten the mood with a good math pun. Max Louis I'm Max, and "Punfinity" is a little glimpse of my humor. I've always found joy in bringing a smile to people's faces, and what better way than through the universal language of laughter? I believe that a day without laughter is like a sky without stars. So, here I am, using my love for puns to paint a starry night in your everyday life. Leave a Comment
{"url":"https://punfinity.com/math-puns/","timestamp":"2024-11-06T19:02:18Z","content_type":"text/html","content_length":"124962","record_id":"<urn:uuid:ae91303b-fee8-4714-9df8-39ee1a989409>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00400.warc.gz"}
Oscillons from string moduli | PARTICLES AND COSMOLOGY - ANTUSCH GROUP | University of Basel Oscillons from string moduli (movies) Oscillons from the Kähler modulus in the KKLT scenario The movies below show the non-linear evolution of the Kähler modulus (and its energy density) in the KKLT scenario. The movies were created from three dimensional lattice simulations with 128 lattice points per dimension. Results from simulations with 256 and 512 points per dimension are presented and described in more detail in our paper "Oscillons from string moduli" arXiv:1708.08922 [hep-th]. Field and energy density Left: evolution of the Kähler modulus Φ(x,t) as the field oscillates around the minimum of the potential at Φ=Φmin . The red areas correspond to field values Φ(x,t)-Φmin < 0 while blue areas denote Φ (x,t)-Φmin > 0. The minimum shown amplitude, denoted by the faintest colored areas, corresponds to |Φ(x,t)-Φmin|≥0.008 MPl while the largest highlighted amplitudes are |Φ(x,t)-Φmin|≥0.04 MPl (brightest areas). Right: evolution of the energy density ρ(x,t) in units of <ρ>. The yellow surfaces correspond to ρ(x,t)/<ρ> = 6 while the blue ones denote ρ(x,t)/<ρ> = 12. The movie was created from a three dimensional lattice simulation with 128 points per spatial dimension. Oscillons from blow-up moduli in the Large Volume Scenario (LVS) The movies below were created from the results of two and three dimensional lattice simulations of the evolution of a blow-up Kähler modulus in the LVS in an expanding universe. More details on the models can be found in our paper "Oscillons from string moduli" arXiv:1708.08922 [hep-th]. Field and energy density (2D) The movie shows the time evolution of the blow-up Kähler modulus Φ(x,t) (left) and of its energy density ρ(x,t) (right) in two spatial dimensions. The energy density is shown in units of the average energy density <ρ>. The movie was created from a lattice simulation with 1024 points per spatial dimension. Energy density in 3D Evolution of the energy density of the blow-up Kähler modulus. The yellow surfaces correspond to ρ(x,t)/<ρ>=6 and the blue surfaces to ρ(x,t)/<ρ>=12, where <ρ> is the average energy density. The movie was created from a three dimensional lattice simulation with 128 points per spatial dimension.
{"url":"https://particlesandcosmology.physik.unibas.ch/en/downloads/oscillons-from-string-moduli/","timestamp":"2024-11-05T07:28:18Z","content_type":"text/html","content_length":"24018","record_id":"<urn:uuid:8e438296-581c-4467-8211-b6dffdf1da7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00565.warc.gz"}
Mathematics Lists - CopyLists.com Mathematics Lists Maths (or Math) is an important subject in the field of mathematics, and it is used in many other fields. It is also a part of our daily lives. Below are some links to lists about numbers and all fields of mathematics that you might find helpful. List of Numbers and Maths Numbers are Everywhere Numbers are present in our day-to-day lives. They seem to be everywhere and we often take them for granted. But, numbers and math can be key to success in many different areas of life. From your bank account to your weight, numbers can be used as a guide for difficult decision-making or as a representation of something important such as the number of hours you slept last night. The study of mathematics goes back thousands of years and is filled with many discoveries and inventions. From arithmetic to algebra, geometry, and calculus, math has found ways to be applied to every field imaginable. With technological innovations such as smartphones and social media, the way we learn math may never be the same. Mathematics has areas of study that include many different subjects, including algebra, geometry, trigonometry, calculus, and statistics. It is an essential part of many fields, including natural sciences, engineering, medicine, finance, computer science, and social sciences. Practical vs Applied Mathematics Applied mathematics is the use of math for practical purposes. The subject area can be divided into two broad categories: applied mathematics and pure mathematics. Applied mathematics includes areas such as statistics, game theory, and operations research. It is often the case that a mathematician will work on a problem for years without applying any of the knowledge to any real-world situation. Even though they might not know it at the time, many mathematical discoveries have later been applied to other fields for solving practical problems in those areas. Despite the majority of English words now being on a 10 base system, there are some 12 based systems. Babylonian numerals for example, which is the oldest recorded numeral system, was 12 based. Universal Langauage of Numbers Worldwide, mathematics is the most commonly shared subject with consistent standards. This means that regardless of your language, math skills are universally useful. For example, a high school student from Ghana could be working on the same problem as a high school student from Austria. However, while numbers are a universal language, they can be used differently in different parts of the world. For example, Arabic speakers write their numbers (and words) from right to left instead of from left to right.
{"url":"https://copylists.com/mathematics-lists/","timestamp":"2024-11-06T23:54:05Z","content_type":"text/html","content_length":"151849","record_id":"<urn:uuid:09dbb5c9-beb6-450d-9324-fa8792818a24>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00429.warc.gz"}
Analysis on manifolds About this course Manifolds are the main objects of differential geometry. They give a precise meaning to the more intuitive notion of “space”, when “smoothness” is important (in comparison, when interested only in “continuity”, one looks at topological spaces, and one follows the course “Inleiding Topologie”). The simplest examples are the usual embedded surfaces in R^3; in general, the underlying idea is similar to how cartographers describe the earth: there is a map, i.e., a plane representation, for every part of Earth and if two maps represent the same location or have an overlap, there is a unique (smooth) way to identify the overlapping points on both maps. Similarly, a manifold should look locally like R^n, i.e. there are maps which identify parts of the manifold with the flat space R ^n and if two maps describe overlapping regions, there is a unique smooth way to identify the overlapping points. Most of the notions from calculus on R^n are local in nature and hence can be transported to manifolds. Further, some nonlocal constructions, such as integration, can be performed on manifolds using patching arguments. One interesting aspect of this pasasage from R^n to general manifolds is that various aspects of Analysis become much more geometric/intuitive- in some sense, they get a new life (in this way, a set of functions on R^n, depending on how they were used, may remain a function, or may become a vector-field, or a 1-form, etc). This course is optional for mathematics students. The course is recommended to students interested in pure mathematics, such as differential geometry, topology, algebraic geometry, pure analysis. Please find more information about the study advisory paths in the bachelor at the student website. This course will cover the following concepts: • definition and examples of manifolds, • smooth maps, immersions, submersions, diffeomorphisms • special submanifolds, • Lie groups, quotients • tangent and cotangent spaces/bundles, • vector fields, Lie derivatives and flows, • differential forms, exterior derivative and de Rham cohomology, • integration and Stoke’s theorem. The course will also cover the following important results relating the concepts above: • implicit and inverse function theorems, • Cartan identities and Cartan calculus, • Stoke’s theorem The students should learn the contents of the course, namely • the definition of a manifold as well as ways to obtain several examples e.g., • by finding parametrizations, • as regular level sets of functions, • as quotients of other manifolds by group actions. • the various equivalent description of tangent vectors. • the relationship between vector fields and curves (flows). • Differential forms and the various interpretations/properties of the exterior (DeRham) derivatice. • Orintations, volume forms and integration of differential forms. • Stokes’ theorem and the very basics of DeRham cohomology. At the end of the course, the successful student will have demonstrated their abilities to: • Be fluent in using the regular value theorem in order to obtain (sub)manifolds and compute their tangent spaces. • Be able to compute flows of vector fields. • Be able to manipulate with differential forms both locally (in coordinate charts) as well as more globally (e.g. using global formulas for Lie derivarives, DeRham differential, etc). In particular, make use of the Cartan calculus. • Be able to integrate differential forms and derive consequences of Stoke’s theorem. • Compute DeRham cohomology of some simple spaces. Two times per week two hours of lectures and two times per week two hours of tutorials. There will be mandatory weekly homework exercises. These lead to a grade H with 1 decimal of accuracy. At the end of the course there will be a 3 hour written exam, leading to a grade E with 1 decimal of accuracy. The final grade F is be determined by F = max {(7E + 3H)/10, (17E + 3 H)/20} rounded off to an integral number up to 6, and to a half integer above 6 (to the closest one). The requirements for passing the exam are: H and E have to be at least 5 (before rounding!) and F has to be at least 6. Herkansing en inspanningsverplichting: The same rules apply for the retake. Taal van het vak: The language of instruction is English. body { font-size: 9pt; Prior knowledge Linear algebra (WISB107 and WISB108), Calculus of several variables (WISB212), Introduction to topology (WISB 243) and Introduction to Groups and Rings (WISB124). See the courseplanner (cursusplanner.uu.nl) for the contents of those courses: select Faculty of Science and then the programme of the bachelor Mathematics of the most recent year. • Book Lee - Introduction to Smooth Manifolds, gratis verkrijgbaar via de website van Springer. (This is for the students that want a book that does more things for them, e.g. more details. But please be aware that, as good/attractive as that sounds, having (too) many things done for you is not necessarily positive ...). • Dictation There are lecture notes for the course, which will be made available on the webpage of the course: https://webspace.science.uu.nl/~crain101/manifolds-2021/ These will be a revised version of the lecture notes from the previous year- see https://webspace.science.uu.nl/~crain101/manifolds-2020/ • Book Guillemin, Pollack - Differential Topology - (This is for the students that want to get some more geometric, intuitive insight, with some nicer stories, not all details worked out but fun to read/consult e.g. during a train ride). Additional information • Credits ECTS 7.5 • Level • Start date 2 September 2024 □ Ends 8 November 2024 □ Term * Period 1 □ Location □ Instruction language Course is currently running For guests registration, this course is handled by Utrecht University
{"url":"https://eduxchange.nl/guest/catalog-uu/courses/analysis-on-manifolds_3f894463-ba1a-4258-b7e5-8134ecdafe6c","timestamp":"2024-11-05T14:05:47Z","content_type":"text/html","content_length":"42852","record_id":"<urn:uuid:a778aa5d-2e49-4fc2-96f5-f7e4bf740410>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00298.warc.gz"}