From ARIMA models to Recurrent Neural Networks | elastacloud-channels
Recently I have been applying a range of different time series models to rich transport industry data. From anticipating the development of faulty parts to understanding the flow of traffic and how
to better strategize timetabling adjustments, time series modelling is extremely powerful at improving the efficiency of organisations and cutting costs. So how do we go from ARIMA models to
recurrent neural networks? The answer lies in the complexity of the data and hypotheses formulated.
The typical workflow of time series modelling is highlighted below:
1) Start simple with basic ARIMA models. Once you understand your data, the question you want to ask and what you want to predict, and have shaped your data and engineered features accordingly, a time series model can be applied.
The figure below highlights the steps involved in modelling time series data. There are two main approaches for fitting simple time series models using R. Firstly, an auto.arima function can be
applied which automates steps 3 – 5, to fit a model with appropriate parameters selected. Alternatively, each step can be carried out separately, and the output (AIC) of several models compared.
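As a rough illustration of step 1, here is a minimal Python sketch of the analogous automated workflow. The article describes R's auto.arima; pmdarima's auto_arima is assumed here as the Python counterpart, and y_train stands for whatever univariate series you have prepared:

```python
import pmdarima as pm

# Automated order selection, roughly equivalent to R's auto.arima (steps 3-5 above)
model = pm.auto_arima(y_train, seasonal=True, m=12,
                      stepwise=True, suppress_warnings=True)
print(model.summary())                  # includes the AIC used to compare candidate models
forecast = model.predict(n_periods=12)  # forecast the next 12 periods
```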
2) Dynamic regression. Whilst basic time series models may give adequate predictions for one parameter, they are likely to be improved with the consideration of additional information. For example,
when predicting how late a bus will be at its next stop, a good prediction could be made by modelling the lateness throughout its journey so far, but a better prediction may be made by enriching the
input data to also take into account the weather at different time points, the distance left to travel and so forth. Dynamic regressions model time series data and make predictions based on this additional information.
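A hedged sketch of what such a dynamic regression might look like in Python with statsmodels; the variable names (lateness, rainfall_mm, distance_left_km, next_stop_features) are illustrative assumptions, not taken from the project data:

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# ARIMA errors plus exogenous regressors ("dynamic regression")
model = SARIMAX(endog=lateness,                                   # lateness so far on the route
                exog=features[["rainfall_mm", "distance_left_km"]],
                order=(1, 0, 1))
result = model.fit(disp=False)
prediction = result.forecast(steps=1, exog=next_stop_features)    # enriched one-step prediction
```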
3) Recurrent Neural Networks. The decision to use RNNs must be carefully evaluated given the drawbacks associated with explaining and visualising model outputs or predictions. Justifications for using RNNs include trying to obtain better predictions of time series data that shows subtle, complex patterns, or where you have a data set of multiple time series and multiple influential features whose interactions you would like to capture in order to make predictions. An example is predicting the lateness of multiple buses on a network while taking into consideration the lateness over the journeys of multiple buses, the journey paths of different buses on the network, the distance of bus journeys, the age of buses, passenger capacity, weather, and so on. This type of RNN would be a many-to-many RNN, but other architectures of RNN exist as shown below.
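For orientation only, a minimal many-to-many recurrent model in Keras might be sketched as follows; the layer sizes and the feature count are assumptions rather than anything prescribed by the article:

```python
from tensorflow import keras

n_features = 8   # e.g. lateness so far, distance left, weather, bus age, capacity, ...
model = keras.Sequential([
    keras.layers.Input(shape=(None, n_features)),          # variable-length journeys
    keras.layers.LSTM(32, return_sequences=True),          # one hidden state per time step
    keras.layers.TimeDistributed(keras.layers.Dense(1)),   # predicted lateness at every stop
])
model.compile(optimizer="adam", loss="mse")
```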
Below is a list of useful blogs and resources to get started in understanding and developing recurrent neural networks:
Setting up the Python environment for deep learning
The effectiveness of recurrent neural networks
Recurrent neural network algorithms for deep learning
An introduction to backpropagation through time
Implementing the backpropagation algorithm from scratch in python
Visualising RNN training and validation accuracy and loss
Grid search of deep learning hyperparameters
The dropout regularisation technique
Adjusting the classification threshold
Model evaluation error metrics
How many kPa are in 2150 mmHg? | Socratic
How many kPa are in 2150 mmHg?
1 Answer
We can simply use the conversion factor $760\ \text{mmHg} = 101.325\ \text{kPa}$ and the given value of $2150\ \text{mmHg}$:
$2150\ \text{mmHg} \times \left(\frac{101.325\ \text{kPa}}{760\ \text{mmHg}}\right) = 286.6\ \text{kPa}$
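The same conversion, written as a small Python check:

```python
mmhg = 2150
kpa = mmhg * 101.325 / 760   # 760 mmHg = 101.325 kPa
print(round(kpa, 1))         # 286.6
```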
What Is A Dilation And Which Graph Shows A Dilation
When it comes to math, there are many concepts that can be difficult to understand. One of these concepts is dilation. Dilation is when a shape or an image is enlarged or reduced in size. This
process can be confusing for many students, but it is actually quite simple. By understanding the basics of dilation, you can easily identify which graph shows a dilation.
What is Dilation?
Dilation is a mathematical term used to describe the enlargement or reduction of an object or image. It involves scaling the object or image by a certain amount. For example, if you were to double
the size of a shape, you would be performing a dilation with a scale factor of two. The scale factor is the number that you multiply the object or image by in order to perform the dilation.
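For example, here is a small Python sketch dilating a triangle about the origin with a scale factor of 2 (the coordinates are made up for illustration):

```python
triangle = [(1, 1), (3, 1), (2, 4)]
scale_factor = 2
dilated = [(scale_factor * x, scale_factor * y) for (x, y) in triangle]
print(dilated)  # [(2, 2), (6, 2), (4, 8)] - every point moves twice as far from the origin
```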
How is Dilation Used?
Dilation can be used in a variety of ways. It is often used in the field of geometry to describe the enlargement or reduction of shapes. It can also be used in art, photography, and other areas. The
process of dilation can also be used to change the size of an image or to change the resolution of an image.
Types of Dilation
There are two main types of dilation: enlargement and reduction. Enlargement is when an object or image is increased in size. Reduction is when an object or image is decreased in size. Both of these
processes involve multiplying the object or image by a certain number, known as the scale factor.
Graphs That Show a Dilation
There are several types of graphs that can be used to show a dilation. The most common type of graph used to show a dilation is a scatterplot. A scatterplot is a graph that plots the points of a data
set on a two-dimensional plane. The points on the graph can then be connected to form a line, which will show the dilation.
Other Graphs That Show a Dilation
Other graphs that can show a dilation include bar graphs and line graphs. Bar graphs are graphs that show the quantity of a certain item or the number of items within a certain range. Line graphs are
graphs that show the relationship between two variables. Both of these types of graphs can show a dilation if the data points are scaled appropriately.
Dilation is an important concept in mathematics that can be used to enlarge or reduce an image or shape. By understanding the basics of dilation, it is possible to identify which graph shows a
dilation. Scatterplots, bar graphs, and line graphs can all show a dilation if the data points are scaled appropriately.
The "advanced monthly bill" output file shows a relational data frame containing each energy and demand charge component for each month of each optimization year. The 'billing period' column allows
you to match the charge back to the input tariff file and see exactly what component of the tariff incurs what charges at what time. This table breaks out "original" charges, which are what the
charge would be without any DER, from the charges themselves, which are with all DER.
This file contains lifetime present value costs and benefits for each benefit and cost component of the analysis, along with a row called "Lifetime Present Value", which is the total present value
for both the cost and benefit column.
This is a log file, useful for diagnosing problems.
This file shows a rectangular table of the price of energy, whether from a time series input or a retail tariff. This is useful for making a heatmap of energy prices, showing hour of the day along
the y-axis, day of the year along the x-axis, and the price of energy in color.
This file contains a column for each DER with rows showing the DER's beginning of life (start of construction), beginning of operation, end of life, and its expected lifetime.
This file contains information on how energy storage systems are cycled. This is the results of a rainflow counting algorithm, which counts the number of half-cycles at what depth of discharge the
storage system executes in every optimization window. Each row represents a new cycle (count=1) or half-cycle (count=.5).
• "rng" - effective energy capacity (after degradation) of the energy storage system in this optimization window.
• "mean" - Average SOE of the storage system in the half-cycle.
• "count" - is the cycle represented in this row a full (both charge and discharge) cycle or a half cycle (only charge or only discharge)?
• "i_start" - the time step in the optimization window when the (half) cycle starts
• "i_end" - the time step in the optimization window when the (half) cycle ends
• "Opt window" - The optimization window number. (1 is the first opt window)
• "Input_cycle_DoD_mapping" - Depth of discharge of the (half) cycle in kWh
• "Cycle Life Value" - This comes from the input cycle life file. It is the number of cycles at the depth of discharge of the cycle until replacement (if starting SOH = 100%)
This file processes the cycle counting data (above) into degradation outcomes.
• "Optimization Start" - The optimization window number
• "degradation" - The change in SOH of the storage system over the optimization window.
• "soh" - The remaining state of health at the end of the optimization window
• "effective energy capacity" - The remaining useful energy capacity at the end of the optimization window.
This is a rectangular data file used to make heatmaps of energy storage system charge and discharge profiles.
This shows the change in SOH of a storage system in each year of the analysis.
Similar to the cost_benefit.csv file, this expresses the present value of each cost/benefit category across the analysis window, but does not break out cost and benefits separately. Instead, costs
are negative numbers and benefits are positive numbers.
This provides a detailed look under the hood of the optimization. This expresses every term of the objective function and the values they take in every optimization window.
This file expresses some financial metrics of the project - Payback Period, Discounted Payback Period, Lifetime Net Present Value, Internal Rate of Return, and Benefit-Cost Ratio.
Where applicable, this file finds the day with the peak site_load and simply expresses the site load vs time for that day. This is just used to quickly identify when the peak load occurs, what the
peak load is, and make a plot accordingly.
This is a full nominal cash flows pro forma document that expresses the nominal cost (-) or benefit (+) of each cost/benefit category in every year of the analysis window. This document shows how
cash flows are escalated/interpolated, when DER are constructed, become operational, and fail, and all other detailed benefit-cost analysis mechanisms.
Similar to the advanced monthly bill file, this expresses the energy and demand charges associated with a retail tariff. Unlike the advanced file, this aggregates each months charges into demand
charges, energy charges, original (read: without DER) demand charges, and original energy charges.
This returns the size of each DER, whether it was input by the user or determined optimally in DER-VET.
This file simply contains a list of all DERs present and their unique name.
This is the heaviest output of DER-VET. It shows detailed information on all timeseries inputs and how DERs are operated in every time step of every optimization year.
On the Development of a Numerical Model for the Simulation of Air Flow in the Human Airways
Lancmanová, A.; Bodnar, T.; Sequeira, Adélia
Conference on Topical Problems of Fluid Mechanics, (2023),
This contribution reports on an ongoing study focusing on reduced order models for incompressible viscous fluid flow in two dimensional channels. A finite difference solver was developed using a
simple implementation of the immersed boundary method to represent the channel geometry. The solver was validated for unsteady flow by comparing the obtained two-dimensional numerical solutions with
analytical profiles computed from the Womersley solution. Finally, the 2D model was coupled to a simple 1D extension simulating the flow in an axisymmetric elastic vessel (tube). Some of the coupling principles and implementation issues are discussed in detail.
Good practices
Nitrite Water Analysis: a practical approach to dye synthesis
Working Group: Science through digital learning, Low Achievers in Science. Country: Germany. Languages: English, German. Age of students (target group/s): 15-18
The project described here is embedded in the frame of a regular school lesson conducted in senior classes in high school. To address issues of the chemical field of dye synthesis and especially the
azo dye substances, this lessons setup starts with a problem definition. This is done by introducing the problem using a question posed in an internet forum expressing the problems of fish dying in a
fish tank. By reading and discussing this question, a definition of the problem is formulated and thus, the first step of inquiry based learning is completed. The teacher then provides a chemical
analysis of the water in the fish tank in which all fish have died. This focuses the students’ attention on the most important facts of the problem. Planning and introducing the issue with this
method results in a problem oriented perspective of the class and leads the students in the right direction to address the problem with a suitable approach. After this introduction, the class is
divided into three groups with each group dealing with a different aspect of the problem (Method: Group puzzle). To fully understand the problem and to underline the results, the students plan and
conduct a water analysis focusing on nitrite. The results of each group are finally combined to generate a solution to the problem with respect to a) the chemical mechanisms (azo dye synthesis), b)
the experiments conducted (evaluating the results and possible sources of errors) and c) the biological aspects (nitrogen cycle). The solution developed at the end of the lesson is given to the
“owner of the fish tank” by replying in the internet forum. The reply provided in the internet forum not only gives a suitable and traceable solution to the problem, but additionally a specific focus
is given on precisely formulating the problem statement, the approach for the solution of the problem and the results with special emphasis on the use of the correct terminology. Thus, by using a
format (internet forum) and problem which is meaningful, intuitive and close to the students’ communication habits, the basics of scientific writing skills are developed.
Strong points and opportunities: Dye synthesis is a very complex topic in senior classes. Almost every azo dye is toxic. Thus teachers can usually not use them in class. The nitrite water analysis is one of very few
opportunities to experiment with an azo dye in schools, thus connecting theory to practical application and experiment.
The focus of the approach is on the students’ perspective in a) determining the problem, b) developing relevant questions and c) establishing a method and schedule to solve the problem. This approach
is perceived by the students as being very satisfying. It gives them great confidence in what they have learned so far and how to use their knowledge. They learn how to approach a problem and how to
work as a group to solve it. This student oriented method enables the students to understand the mechanisms of dye synthesis more easily than using a teacher oriented approach.
Another advantage of this approach is that all students get a result at the end of the lesson. This lesson is clearly structured with a motivational introduction posing a real world problem, the
development of an approach to solve the problem and a clear conclusion with regards to the problem solution and documentation. Thus the format of this approach leads to a stronger engagement of the
students, since they are more integrated into the topic of the lesson. They play a role in the development of the solution and in the documentation of the results.
Limitations: There is a possibility
of failure, since the topic of this lesson is complex and relates to more advanced levels in high school. Depending on the prior knowledge of the students the teacher may have to alter the schedule
of this lesson with respect to the theory of the dye synthesis, possibly enhancing the practical parts.
Added value with regards to the 3 topics of the MASS project: The nitrite water analysis lesson
has a strong focus on inquiry based learning. The students develop a definition of a problem and then a plan how to solve it. By doing so, they combine chemical theory and concepts with practical
aspects such as the water analysis. Despite the inherent complexity of the topic, the combination of theory and experiments is suitable for low achievers in the class, since every student can
contribute to the success of the project and to the solution of the problem. With regards to the technical equipment used in this lesson, all students have to use the internet and a laptop to extract
the main problem and to document the solution. This blended learning approach integrates inquiry based learning using experiments, group learning and the use of internet technology for motivation and
documentation of the problem.
Class AklToussaintHeuristic
public final class AklToussaintHeuristic extends Object
A simple heuristic to improve the performance of convex hull algorithms.
The heuristic is based on the idea of a convex quadrilateral, which is formed by four points with the lowest and highest x / y coordinates. Any point that lies inside this quadrilateral can not be
part of the convex hull and can thus be safely discarded before generating the convex hull itself.
The complexity of the operation is O(n), and may greatly improve the time it takes to construct the convex hull afterwards, depending on the point distribution.
See Also:
• Method Summary
Returns a point set that is reduced by all points for which it is safe to assume that they are not part of the convex hull.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Method Details
□ reducePoints
Returns a point set that is reduced by all points for which it is safe to assume that they are not part of the convex hull.
Parameters: points - the original point set
Returns: a reduced point set, useful as input for convex hull algorithms
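The idea behind the heuristic can be sketched in a few lines of Python (an illustrative re-implementation, not the Hipparchus Java code, and it ignores degenerate cases such as collinear or duplicate extreme points):

```python
def akl_toussaint_reduce(points):
    """Drop every point strictly inside the quadrilateral spanned by the
    extreme-x and extreme-y points; the survivors still contain the convex hull."""
    if len(points) < 5:
        return list(points)
    quad = [min(points, key=lambda p: p[0]),   # leftmost
            min(points, key=lambda p: p[1]),   # lowest
            max(points, key=lambda p: p[0]),   # rightmost
            max(points, key=lambda p: p[1])]   # highest
    def strictly_inside(p):
        # p lies strictly inside the convex quadrilateral if every edge cross
        # product has the same (non-zero) sign
        cross = [(b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
                 for a, b in zip(quad, quad[1:] + quad[:1])]
        return all(c > 0 for c in cross) or all(c < 0 for c in cross)
    return [p for p in points if not strictly_inside(p)]
```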
(-0.3) + 0.9 find the sum — QuizWhiz Homework Help
The absolute value of -0.3 is 0.3. So, (-0.3) + 0.9 can be seen as 0.9 - 0.3, which equals 0.6. The algebraic sum is the result of adding or subtracting the numbers according to their signs; here the positive number has the larger absolute value, so the sum is positive. In algebra, when adding a positive number and a negative number, we can think of it as subtracting the absolute value of the negative number from the positive number.
Step 1: Write down the numbers: (-0.3) + 0.9
Step 2: Add the numbers: (-0.3) + 0.9 = 0.6
Therefore, the algebraic sum of (-0.3) and 0.9 is 0.6.
BDD & PD: TemperatureIncrease
This diagram shows the Block TemperatureIncrease with a supporting ConstraintBlock TemperatureIncreaseConstraint:
Time for some basic thermo and some dimensional analysis:
Remember this loophole?
Let's see whether we can make the numbers fit anyway:
The startup value 4180.0 for specificHeat works quite well across the expected temperature ranges if it is in J/(K⋅L), noting the following data are in J/(K⋅cm^3):
So 4180.0 J/(K⋅L) seems reasonable.
But note also that to get the dimensional analysis to work the "waterVolume" has to be a rate:
This is based in part on the observation that the consumer HeatingCalculation does NOT treat the temperature increase as a rate, together with:
Assume you've got all of the power 400 J/s from the heater (ignore any radiative loss) and plugin these values:
specificHeat -> 4.180 J/(K cm^3)
waterVolume -> 0.1 L/s
energy -> 400 J/s
L -> cm^3*1000
This gives a temperature increase of 0.956938 K. We'll see also later when we run the simulation that the assumption of waterVolume = 0.1 L/s corresponds well enough with the equivalent vapor output
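A quick numeric check of that plug-in, written as Python with the same assumed values as above:

```python
specific_heat = 4.180       # J/(K*cm^3), i.e. 4180 J/(K*L)
water_volume_rate = 0.1     # L/s
energy_rate = 400.0         # J/s, assuming all heater power reaches the water

litre_in_cm3 = 1000.0
delta_T = energy_rate / (specific_heat * water_volume_rate * litre_in_cm3)
print(delta_T)              # ~0.956938 K, matching the value quoted above
```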
Statistical Performance of Ice Skating Scoring Methods
The following is a lengthy, fairly technical analysis of the way different methods of scoring ice skating competitions perform under a variety of circumstances. If you are interested in the
mathematics of how scoring systems work, this is it. If you are not much for numbers you may just want to skim this - or run away screaming.
For the past few years, the scoring system used in competitive ice skating has come under attack from one quarter or another for a variety of reasons. In 1996 serious consideration was given to
changing to the system used in professional skating in which the high and low marks are thrown out and the remaining marks averaged. Most recently the current ordinal system has been criticized by
both the President of the ISU and the President of the IOC as being too difficult for the public to understand and unfair because it allows the results of higher placed skaters to change depending on
the results of lower placed skaters who compete later in a competition. In order to compare the characteristics of the ordinal method and several alternate scoring methods a Monte Carlo analysis of
13 scoring methods was undertaken. Compared here are the characteristics of these different methods and descriptions of how they behave in the presence of random errors and systematic errors (biases) in
the assigning of marks.
In choosing which methods to consider it was decided to confine ourselves to methods which do not allow the results of later skaters in a group to change the results of skaters who have already
competed, as this seems to be the major concern of the ISU at the time. Some consideration was also given to limiting the analysis to methods that are easy to understand, but not all the methods
studied might be so described. Given the large number of spectators at events who record the marks and calculate results as events unfold, it is not clear that the current method is actually beyond
the comprehension of the public, at least the public that attends competitive events. Among the TV viewing public confusion is probably more widespread, but that may be as much the fault of the ISU
and the media who make no effort to adequately explain the method to the public as anything else. The methods studied were compared to the current ordinal method in terms of their response to marks
subject only to random errors, marks subject to small systematic biases by 1 to 3 judges, and marks subject to large systematic biases by 1 to 3 judges. The ordinal method is the reference method
against which the others are compared.
While not perfect, the ordinal system is, nevertheless, a remarkable system thanks to the two fundamental characteristics on which it is based, relative judging and the use of the median ordinal.
It is well known that human perceptions are much better at making comparative judgments than absolute judgments. Human senses can frequently compare and contrast observations to better that 1% but
generally can only make absolute judgments to a few percent, and often no better than 5-10%. In the ordinal method the marks are only a preliminary step to establishing an order of finish for the
competitors by each judge. The marks from the individual judges need not be on the same absolute scale (or any absolute scale at all) in order to produce consistent orders of finish among the judges.
In determining the winner, the judges only have to decide who is the best in the group, but not how the skater rates in any absolute sense.
It is the use of relative judging, however, that gives rise to the occasional place switching which now concerns the ISU. Because the ordinal method is a relative judging method, it only gives the
correct final result after everyone has skated and the placements of all the skaters relative to each other are calculated. Intermediate calculations will not necessarily match the final result since
the relative placements of all the skaters are not known until it is all over. The place switching effect is not a flaw in the method, but rather a flaw in the process of releasing intermediate
results. Since the practice of releasing intermediate results is expected by the public, one must either accept the existence of place swapping and try to explain it to the public, or change to a
method that does not allow it to occur in the first place. The latter course of action means changing to a system in which the judges mark on an absolute scale so that intermediate calculations are
not altered by the marks of subsequent skaters. As noted above, since humans are not as good at assigning absolute marks as they are relative marks, this change will add some uncertainty to the
results produced by the scoring method.
Changing to absolute scoring will also have a second impact on the way marks are assigned. No longer would it be possible to give two skaters the same total mark and let the first or second mark
break the tie in determining the ordinal. If placement is based in some fashion on the total mark, then more marks will be needed to separate the skaters. For example, in the ordinal method there are
15 places using marks between 5.6 and 6.0, but only 5 places using the corresponding total marks of 11.6 through 12.0. Consequently, switching to an absolute scoring method will require marking
competitions more finely than the current 0.1 basis. At a minimum, marking to the nearest 0.05 would be necessary. Fortunately, this level of precision appears both numerically adequate and humanly achievable.
Although it is not obvious, and usually not pointed out when explaining the workings of the ordinal method, the key element of the method which makes it successful in minimizing the effects of judges
bias is that the method is based on the median of the ordinals assigned each skater; i.e., when you read a results sheet the majority ordinal is actually the median ordinal. When you see 5/3 for a
skater's majority what this means is that the median ordinal for the skater was third, and five judges placed the skater at or above the median ordinal.
In the ordinal method the median ordinal is the primary determinant for the skaters' places; everything else is tie breakers. If two skaters have the same median ordinal the number of judges placing
the skater at or above the median ordinal breaks the tie. If the skaters have the same number of judges at or above the median ordinal then the sum of the ordinals for the judges in the majority
breaks the tie. If that number is the same, then the sum of all the ordinals breaks the tie. If that number is the same the skaters are tied.
This use of the median to filter out judges' bias is a well known tool in numerical analysis. Whenever a measured parameter is subject to random errors and occasional systematic errors, the median
mark is frequently a better gauge of the characteristic value of the parameter than is a simple average. For a parameter in which positive random errors are as common as negative random errors, the
median value and the average value for the data will be the same in the absence of systematic errors. If, however, systematic errors are present, one large systematic error (a noise spike) can
substantially skew the average value, but will have little impact on the median value, thus making it preferable to use the median value. Because the use of median values can be such a powerful tool
to filter out systematic errors, several of the methods studied here rely heavily on the use of the median mark.
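A two-line numeric illustration of why the median resists a single biased mark (the marks below are invented for the example):

```python
import numpy as np

marks = np.array([5.8, 5.8, 5.7, 5.9, 5.8, 5.8, 5.7, 5.8, 5.2])  # one judge far out of line
print(np.mean(marks))    # ~5.72 - the average is dragged down by the outlier
print(np.median(marks))  # 5.8  - the median is unaffected
```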
Methods Studied
The following methods have thus far been studied.
This is the current method and serves as a reference against which the other methods are compared.
Simple Mean
This consists of taking the average of the judges total marks. This method is not of practical use since it has no immunity to the effects of bias. It is, however, the best method to use when marks
are only affected by random errors and, thus, a good reference against which to compare the performance of other methods in dealing with random errors.
Clipped Mean
This consists of dropping out the high and low marks and then averaging the remaining marks. The first and second marks can be clipped and averaged separately and then summed, or the total marks can
be clipped and averaged. Both approaches have been studied.
Median Mark
Placement based solely on the median mark. The median mark can be calculated for the first and second marks and then summed or the median of the total marks can be used. Both approaches have been studied.
Gaussian Mean
The judge's marks are sorted into a histogram. The mean mark is calculated assuming the histograms are gaussian distributions (a common noise model for random errors). The gaussian means can be
calculated for the first and second marks and then summed, or the gaussian mean for the total marks can be calculated. Both approaches have been studied.
Weighted Mean
The median mark is calculated first. It is then assumed that the farther a judge's mark is from the median the less likely it is to be correct and thus should be given less weight in an average. The
average of the judges' marks is calculated weighted by how far they depart from the median mark. Weighting the scores by the reciprocal of the deviation from the median marks, and the square of the
reciprocal of the deviation from the median marks were both tried, with marks within 0.1 of the median marks having a weight of 1.0. This process can be applied to the individual marks first which
are then added, or to the total marks. Both approaches have been studied.
Median Range
The median mark is calculated first. It is then assumed that marks that are too far off from the median are wrong and should not be counted at all and marks that are within an acceptable range are
valid and should be included in an average; i.e., the average is taken only for those marks within the selected range. This process can be applied to the individual marks first with the results
added, or to the total marks. Both approaches were studied.
Median Mark with Tie Breakers
This method is strictly analogous to the ordinal method, except it makes use of the median mark. The median mark is calculated first for the total marks. Placement is first based on the median mark
(the skaters with the highest median total mark wins). If any skaters have the same median mark the number of judges assigning the median total mark or higher breaks the tie. If skaters have the same
number of judges giving them the same median total mark the average of the total marks in the majority breaks the tie. If skaters are still tied the average of all the judges' total marks breaks the
tie. This method is no easier to understand than the ordinal method, but it represents the least radical change from the current method, and by using the marks as assigned without comparing them to
other skaters it precludes place swapping during an event.
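To make two of these methods concrete, here is a hedged Python sketch of the clipped mean and the median range score as described in this article; it is not the author's original program, and the 0.4 acceptance range is simply the value discussed later in the text:

```python
import numpy as np

def clipped_mean(total_marks):
    """Drop the single highest and lowest total mark, then average the rest."""
    s = np.sort(np.asarray(total_marks, dtype=float))
    return s[1:-1].mean()

def median_range_score(total_marks, allowed=0.4):
    """Average only the total marks lying within +/- `allowed` of the panel's median."""
    m = np.asarray(total_marks, dtype=float)
    kept = m[np.abs(m - np.median(m)) <= allowed]
    return kept.mean()
```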
Method of Analysis
Each of the scoring methods was studied by calculating the results of a large number of synthetic competitions for a variety of scenarios. The following parameters are adjustable for each scenario in
the software used to do the calculations:
1. The number of skaters in the competition is selectable to a maximum of 30. Most test cases were run for either 6 or 12 skaters. Since in each method the maximum error in a skater's place was never
greater than five places, there was no need to consider larger groups.
2. The number of judges on the panel is selectable to be an odd number between 3 through 15. The results here describe the performance of the different methods for a standard panel of 9 judges. A few
other cases were run to see how things vary for different size panels.
3. The spread in the judges marks due to random errors is adjustable in a range of 0.1 through 1.0. Most cases were run for spreads of + 0.1 or 0.2, which is typical for the judges marks in senior
level competitions. The random errors could be selected to be uniformly distributed or to have a gaussian distribution. A few cases of very large spreads were run to see how the methods perform when
a panel's marks are "all over the place", as is frequently found in lower level events.
4. The precision with which the marks are assigned is selectable as either 0.1 or 0.05. Most cases were run for 0.1. In general, the need to use a precision of 0.05 is driven by the need to have
enough marks to separate the skaters on an absolute scale, and not the mathematical properties of the methods.
5. The permitted range of valid marks for the Median Range method is selectable between 0.1 through 0.5. For most examples a spread of 0.2 for individual marks and 0.4 for total marks were used.
To run a scenario, marks were first assigned to the skaters on an absolute scale to define the "truth". These are the marks the skaters would receive if all judges were equally skilled, used identical
judgment in exact accordance with the rules, and their marks were completely error free and bias free. The various adjustable parameters were then assigned values and marks created for some number of
synthetic competitions. In general, 1000 synthetic competitions were run for each scenario. For each synthetic competition the judges' marks were assigned by generating a random error which was
applied to the truth marks. Systematic errors could also be applied to the marks of individual judges to test the impact of systematic biases. From the results of the synthetic competitions several
statistics were then examined to gauge the performance of the methods. These statistics included the fraction of the cases in which skaters ended up in their correct place on a place by place basis,
the fraction of the cases in which all skaters in the group ended up in the correct place (a "perfect sheet"), and the fraction of the cases that produced ties.
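The synthetic-competition machinery can be sketched along the following lines (again a reconstruction in Python, not the software actually used; a uniform error model and a generic scoring function are assumed):

```python
import numpy as np
rng = np.random.default_rng(0)

def synthetic_event(true_totals, n_judges=9, spread=0.2, precision=0.1):
    """True total marks plus uniform random judging error, rounded to the marking precision."""
    t = np.asarray(true_totals, dtype=float)
    marks = t[:, None] + rng.uniform(-spread, spread, size=(t.size, n_judges))
    return np.round(marks / precision) * precision      # shape: (skaters, judges)

def fraction_correct(true_totals, score_fn, n_trials=1000, **kwargs):
    """Fraction of synthetic competitions in which a scoring method reproduces the true order."""
    true_order = np.argsort(-np.asarray(true_totals, dtype=float))
    hits = 0
    for _ in range(n_trials):
        marks = synthetic_event(true_totals, **kwargs)
        scores = np.array([score_fn(row) for row in marks])
        hits += np.array_equal(np.argsort(-scores), true_order)
    return hits / n_trials
```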
The following scenarios were studied.
1. Random errors only, with skaters separated by 0.1 or 0.2 points in total marks, and the spread in the judges marks also either +0.1 or +0.2. The purpose of these tests was to see how well the
different methods give the correct result when the judges are marking a competition on an absolute scale and only make random errors of judgment. [The ranges of marks used here are typical of what
one finds in the marks for senior level events at Nationals and Worlds. Further, the spread in the ordinals calculated here from the spread in the marks is also consistent with what is found for the
ordinals at Nationals and Worlds. One concludes, then, that judges at Nationals and Worlds mark on a roughly absolute (or at least consistent) scale to a level of about 0.2 points (about 3.3%).]
2. The same parameters for random errors as in item 1, with the addition of small systematic biases of 0.1 in each mark nudging the first skater down and the second skater up. Tests were run for 1 through 3 judges biasing their marks in this way. The purpose of these tests was to see how easy it is for a small group of judges to change the outcome for a given place (first place in particular)
by making small adjustments to their marks.
3. The same parameters for random errors as in item 1, with the addition of large systematic biases of 0.5 or greater. Tests were run for 1 through 3 judges biasing their marks in this way. The
purpose of these tests was to see how the methods perform when a small group of judges bias their marks by a significant amount.
4. Random errors only, with skaters separated by 0.1 or 0.2 points in total marks and the spread in the judges marks +0.4, to see the effect of large random errors on the results determined by the different methods.
5. Various random errors combined with various permitted ranges for the median range method to begin to estimate the optimum value for permitted range that works best for that method.
The following describes the performance of each method and the limiting factors that drive that performance. The relative merits of each method are then compared in tabular form which provides a
quick-look qualitative comparison of the methods. Note that for all the methods studied which have two variations, the variation in which a method is applied to the total marks, as opposed to the
individual marks first and then summing them, was found superior and is the variation to which the comments below specifically apply.
Ordinal Method
The ordinal method gives reasonable, though hardly outstanding, performance in dealing with random errors, small biases, and large biases. For each of these general situations it is neither
exceptionally good nor exceptionally bad, merely adequate. For skaters separated by 0.1 in total marks and a small spread in the judges marks of +0.1 the skaters are correctly placed 80-90% of the
time and the method produces only a small number of ties. When the range in the judges' marks is +0.2 the skaters are correctly place 65-80% of the time. For larger spreads in the marks the ordinal
method begins to degrade rapidly. For a spread of +0.4 in the marks the method gives the correct answer only 40-70% of the time, and for panels of fewer than 9 judges it is far worse.
As a sidelight to this study, it is found that for lower level events where there is a large spread in the judges' marks the practice of using fewer than 9 judges strongly degrades the quality of the
results. In such competitions, the practice of using panels of 5 judges should probably be abandoned.
When small biases are added to the mix, the ordinal method does fairly well if only one judge biases his/her marks and the range in the judges' marks is only +0.1, with first place still correctly
determined 81% of the time. But if 2 or 3 judges bias their marks by 0.1, or if the natural spread in the judges' marks is +0.2, and 1 or more judges bias their marks by 0.1, then first place is
incorrectly determined as much as 78% of the time in the worst case. This poor performance is common to all the methods studied for these situations. Because the skaters tend to be separated by
0.1-0.2 points in total marks and the natural spread in the judges' marks is also 0.1-0.2, it is impossible to filter out small biases based on a single set of marks since a small bias of 0.1 cannot
be distinguished from the natural spread in the marks.
When large biases are applied the ordinal method deteriorates as the number of judges with errors increases. With a large bias from one judge the ordinal method gives the correct place for the skater
affected about 60% of the time; for 2 judges about 40%, and for three judges about 20%. Thus, the method is moderately successful at accommodating one large error, but not very good when there are
two or more.
Simple Mean
The simple mean is the best method for dealing with random errors only. For a small spread in the marks (+0.1) the simple mean gives the correct answer more than 96% of the time, and for a moderate
spread (+0.2) 76-89% of the time. For a large spread in the marks (+0.4) it still performs the best, giving the correct about 20% more often than does the ordinal method.
With small biases the simple mean does slightly better than the ordinal method for most of the cases studied and slightly worse for the remainder. Overall, for small biases the results for the simple
mean and the ordinal method can be considered nearly equivalent. For large biases, however, it is another story.
With a large bias from one judge (0.5 each mark) the simple mean gives the correct place for the skater affected only 41% of the time, and for more than one judge virtually never. Because the simple
mean has no immunity to the effects of large errors it is not an appropriate method for use in actual competitions, it does however set the standard for how a method should perform in the presence of
random errors alone.
Clipped Mean
The idea behind the clipped mean is that if the high and low mark are thrown away it will dissuade judges from consciously biasing their marks, and will filter out large errors in the marks by
throwing away the extreme marks. The drawbacks to the method are that there is no guarantee that the high and low marks are erroneous, in most cases it throws away marks that are not in error, and it
does not protect against more than one judge biasing their marks in the same direction.
In terms of random errors the clipped mean performs identically to the simple mean, but with two less judges. In other words, the clipped mean with 9 judges is the same as the simple mean with 7
judges so far as random errors are concerned.
For small biases the clipped mean does worse than the simple mean. Like all the methods it does not filter out the effects of small biases, and with 2 fewer judges going into the average the effects
of random errors do not cancel out as well.
With large biases, the clipped mean performs better than both the ordinal method and the simple mean for 1 judge, but for 2 or more judges it is far worse than the ordinal method and almost as bad as
the simple mean - but not quite. In some of the cases studied, when two judges gang-up on a skater, that skater can be incorrectly placed as much as 89% of the time.
Median Mark
The median mark method performs fairly well for most of the cases studied but is limited by the fact it produces an excessive number of ties. For most scenarios it produces ties for the majority of
the synthetic competitions. This results from the fact that the marks are too coarse and the number of judges too small to get sufficiently precise values for the median marks. To obtain more precise
median marks would require marking to a precision of 0.025 or greater and the addition of more judges than would be practical to employ in a competition. Nevertheless, variations on the use of the
median mark can overcome this weakness, and are discussed below.
Gaussian Mean
This method works fairly well for random errors only, intermediate between the ordinal method and the simple mean. In the case of biases, however, it does not do as well. For small biases it does
worse than the other methods and for large biases it does not as effectively filter out the effects of the marks that are way out of line. This method gives lower weight to marks that are greatly in
error but does not totally ignore them, which results in the unsatisfactory performance found. It is also adversely affected if the histogram of marks does not actually correspond to a gaussian distribution.
Weighted mean
For random errors the weighted mean does better than the ordinal method, and nearly as well as the simple mean. For small biases it also does well when only one judge's marks are biased. If the marks
of 2 or more judges have small biases, however, the results are worse than for both the ordinal method and the simple mean. This method does fairly well dealing with large biases, performing better
than the simple mean, the clipped mean, and the ordinal method. If the performance of the method was a little better in the case of small biases this might be a viable method to use, but since small
biases are probably a more common occurrence in judging than large biases, its deficiency in dealing with the former is a serious limitation.
Median Range
This method is second best of all the methods studied when it comes to dealing with random errors, performing as well or nearly as well as the simple mean. For small biases it is comparable to the
other methods, doing slightly better in some cases, slightly worse in others. It is the best of all the methods in dealing with large biases, placing a skater correctly more than 80% of the time even
when three judges bias their marks substantially. The only weakness to this method is that the range selected over which the marks are averaged must be carefully matched to the expected consistency
of the judges' marks. For these tests several ranges were compared. It was found that averaging total marks within +0.3 or 0.4 worked well. A narrower range does not work as well as it begins to
filter out valid marks within the naturally expected spread, which reduces the statistically accuracy of the average, while a wider spread of +0.5 gives the judges too much latitude to add moderate
biases to their marks. A related concern is that for lower levels the natural range of the judges marks is frequently greater than +0.2.
As the natural range of the judges' marks increases, the number of judges' marks falling outside the acceptable range will increase, and the number going into the average decreases, reducing the
statistical accuracy of the results. To test the performance of this method for events with a large range in the judges' marks, cases were run where random errors of +0.4 were applied to events where
the skaters were separated by 0.1 in total marks. The results, while not pretty, were comparable to the other methods studied and slightly better than the ordinal method. As in the case of the
ordinal method, performance also degrades with the use of panels with fewer than 9 judges.
Based on the overall performance of this method, this approach to scoring appears to be a viable alternative to the ordinal method.
Median Mark with Tie Breakers
Because this method is so closely analogous to the ordinal method, it is not surprising to find that this method performs nearly identically to the ordinal method. In terms of random errors it
actually does about 5-10% better than the ordinal method. For small biases it also does slightly better than the ordinal method in most cases; however, in a few cases it is slightly worse. For large
biases it performs 25-50% better than the ordinal method.
This method represents a small but significant improvement over the current ordinal method. It does not allow place switching, represents the smallest conceptual change from the current system, and
overall performs better than the current system. In principle, the main tie breaking rules could be represented numerically, which would allow the posting of a single composite score that the public
could easily understand. This method appears to be a viable alternative to the current scoring system.
Relative Performance
The following table gives a qualitative comparison of how the different methods studied perform for the cases tested. In the columns for small biases and large biases two grades are given. The first
is for biases with a small range of random errors and the second is for biases with a moderate range of random errors (only the second case was run for the gaussian mean method). The grade given is
based primarily on the frequency with which the methods give the correct place for the skaters under each condition.
│ Method │ Random Errors │ Small Biases │ Large Biases │
│ │ Small Range │ Moderate Range │ Large Range │ 1 Judge │ 2 Judges │ 3 Judges │ 1 Judge │ 2 Judges │ 3 Judges │
│ Ordinal │ A- │ B │ D+ │ B │ C+ │ D │ B+ │ C+ │ D │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ C │ D │ F │ C │ D │ F │
│ Simple Mean │ A+ │ B+ │ C- │ A │ C │ F │ D+ │ F │ F │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ B │ C- │ F │ D │ F │ F │
│ Clipped Mean │ A+ │ B+ │ C- │ A │ C- │ F │ A │ F │ F │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ B │ D+ │ F │ B- │ F │ F │
│ Median Mark │ B │ B- │ C- │ B+ │ C │ D │ A- │ B+ │ C+ │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ C+ │ D │ F │ B- │ C │ D+ │
│ Gaussian Mean │ A │ B │ D+ │ C │ F │ F │ C+ │ D │ F │
│ Weighted Mean │ A │ B │ D+ │ A- │ C │ D- │ A │ A │ A- │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ C │ D- │ F │ B- │ B- │ C- │
│ Median Range │ A+ │ B+ │ D+ │ A │ C │ F │ A+ │ A+ │ A+ │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ B │ C- │ F │ B+ │ B+ │ B+ │
│ Median w. Tie Breakers │ A │ B │ C- │ A- │ C │ F │ A │ B │ C │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ C+ │ D │ F │ C+ │ C │ D │
Based on the test cases tabulated above, the current ordinal method of scoring competitions appears to be superior to the simple mean, the clipped mean, the median mark, and the gaussian mean
methods. Although the median mark method has higher scores overall than the ordinal method, the large number of ties generated by the median mark method renders it useless in its basic form.
The median mark with tie breakers performs roughly 20% better than the ordinal method and is conceptually closest to that method. The weighted mean performs better, in general, than all but the
median range method, but its somewhat weaker performance in dealing with small biases makes it less attractive. In terms of overall performance and ease of understanding for the public, the median
range method seems the clear winner of the methods tested. It offers the easily understandable deterrence of throwing away all marks that are too out of line, but is smart enough not to throw away
marks unnecessarily. In this respect it can be viewed as a more sophisticated version of the clipped mean method.
A limited number of test cases were run with panels of other than 9 judges. For the cases run, no significant benefit was found in increasing panels in size, up to a total of 15 judges. For smaller
panels results degrade significantly, especially with moderate to large ranges in the judges' marks. This effect is least significant for upper level events where the consistency of the marks is
fairly high, but for lower level events where the judges' marks span a greater range it is a great disservice to the skaters to use panels of fewer than 7 judges.
Count the number of selections in multiple drop down lists in one cell.
I am trying to count the number of selections (not caring what they select) in multiple drop down lists on multiple columns. An example: If a person were to select 3 items from the drop down list
in A2, and 4 items from the drop down list in B2, and 1 item from the drop down list in C2, I would want D2 to show 8.
I am tagging Paul as he seems to be an expert based on his other posts. :) @Paul Newcome
Best Answers
• You would need to use the COUNTM function.
=COUNTM([Column A]@row) + COUNTM([Column B]@row)
• Hi @Jamie6325 ,
Try using COUNTM. For example:
=COUNTM([Column 1]@row:[Column 2]@row)
You would use the above if the columns are adjacent. If they aren't, you could use:
=COUNTM([Column 1]@row)+COUNTM([Column 2]@row)
Hope this helps! Let me know if it works.
• I keep getting a syntex error. I didn't have to change the formula at all, correct? I literally just copied and pasted it in there.
• @Jamie6325 You'll need to replace the column names (in [square brackets]) with your actual column names.
• is there a simple way to omit one drop down selection from the total?
• Sorry I probably did a poor job of explaining but I figured it out. I was trying to create a column function that counted the number of elements in an adjacent cell with a column designation of
"drop down list". I was able to use the COUNTM function combined with =IF to return zero for a given element I wanted omitted.
What is the Rule of 72?
The Rule of 72 operates on the concept of compound interest, enabling a swift calculation of the time needed for an initial investment to double at a certain interest rate. It can also be utilized to
estimate the annualized rate of return required within a specific timeframe to achieve the goal of doubling the principal.
Definition of the Rule of 72
The Rule of 72, also known as the "compound interest rule of 72," operates on the principle of compound interest, allowing for a quick estimation of the time required for an investment to double at a
certain interest rate. It can also be applied to approximate the annualized rate of return needed over a specific period to achieve asset doubling.
Origin of the Rule of 72
The roots of the Rule of 72 trace back to 1494, introduced by the renowned Italian mathematician Luca Pacioli in his work "Summa de arithmetica, geometria, proportioni et proportionalità." Pacioli is
also revered as the father of modern accounting.
Calculation Formula of the Rule of 72
Assuming a principal amount of P yuan and investing it in a product with an annual interest rate of r%, the time required for the principal to double after N years can be computed. The calculation
proceeds as follows:
From the second year onwards, the principal becomes the sum of the previous year's principal and interest (P(1+r%)), and the interest is recalculated. The formula for calculation is:
Amount after 2 years = P(1 + r%)^2
Hence, the formula to determine how much the principal accumulates after compounding for N years becomes:
Amount after N years = P(1 + r%)^N
Having grasped this formula, one can calculate the time required for capital doubling: setting P(1 + r%)^N = 2P and solving gives N = ln 2 / ln(1 + r%), which for small interest rates is approximately 69.3 / r.
So why is the Rule of 72 used instead of 69.3, even though the calculated result is 69.3?
Because the number 72 has more factors, making it more likely to yield an integer result during calculations, thus facilitating quick computation. However, there may be some margin of error in the
results, especially as the annual interest rate being calculated increases.
The formula for the Rule of 72 calculation is:
Years to double ≈ 72 ÷ annual interest rate (in %)
Utilization of the Rule of 72
Armed with the understanding of the Rule of 72's computation, investors can employ it to estimate investment returns or forecast the time required for wealth doubling.
For instance, if you possess 500,000 yuan of investment funds and encounter an opportunity with an annualized return rate of 8%, you can calculate the time needed to double your investment to
1,000,000 yuan:
Investment years for doubling assets = 72 ÷ Annualized return rate = 72 ÷ 8 = 9 (years)
According to the calculation derived from the Rule of 72, an investment of 500,000 yuan with an annualized return rate of 8% would take approximately 9 years to double. Although there may be slight
discrepancies between the calculated and actual results, the disparity is minimal.
Suppose you aim to achieve asset doubling within 8 years. In that case, you need to select an investment product with an annualized return rate:
Annualized return rate for doubling assets = 72 ÷ Investment years = 72 ÷ 8 = 9%
Based on the Rule of 72, you would require an investment product with a 9% annualized return rate to realize asset doubling within 8 years.
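For readers who would like to verify the rule themselves, here is a minimal Python sketch (not part of the original article) comparing the exact doubling time under annual compounding with the Rule of 72 estimate; the function names are illustrative only.

import math

def exact_doubling_years(rate_percent):
    # Solve P * (1 + r)^N = 2P  =>  N = ln(2) / ln(1 + r)
    r = rate_percent / 100.0
    return math.log(2) / math.log(1 + r)

def rule_of_72_years(rate_percent):
    return 72.0 / rate_percent

for rate in (2, 4, 8, 9, 12):
    print(rate, round(exact_doubling_years(rate), 2), round(rule_of_72_years(rate), 2))

For an 8% annualized return the exact answer is about 9.01 years, matching the 9-year estimate above; the gap widens somewhat as the rate increases.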
While the Rule of 72 serves as a convenient tool for estimation, several factors warrant consideration. Firstly, the stability of the investment product's annual interest rate is crucial for ensuring
the accuracy of the calculation. Secondly, higher annualized return rates often entail higher risks, necessitating careful evaluation of the risk-return trade-off by investors.
Disclaimer: The views in this article are from the original author and do not represent the views or position of Hawk Insight. The content of the article is for reference, communication and learning
only, and does not constitute investment advice. If it involves copyright issues, please contact us for deletion. | {"url":"https://www.hawkinsight.com/en/article/what-is-the-72-rule","timestamp":"2024-11-04T04:59:34Z","content_type":"text/html","content_length":"413249","record_id":"<urn:uuid:8278f8c9-cf0b-437f-bf26-3abb31a056be>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00129.warc.gz"} |
Jon Aquino's Mental Garden
Proof of Fermat's Last Theorem
It's cool to look over Andrew Wiles'
100-page proof of Fermat's Last Theorem
(i.e., prove that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two).
For mere mortals like myself, the
Wikipedia article
is cool as well. Obviously, the map between R and T is an isomorphism if and only if two abelian groups occurring in the theory are finite and have the same cardinality. | {"url":"http://www.jona.ca/2011/02/proof-of-fermats-last-theorem.html","timestamp":"2024-11-08T09:26:29Z","content_type":"application/xhtml+xml","content_length":"22338","record_id":"<urn:uuid:ee307aab-9be1-4bbd-90cd-37a7dca0ba2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00147.warc.gz"} |
If you’re running an endurance race such as a 10K, half-marathon, or marathon, it might seem obvious that the quickest way of getting to the finishing line is to run continuously over the entire
distance. But some people (notably Jeff Galloway) suggest that, particularly if you’re a slower runner, you might actually finish sooner if you walk some of the way. The rest that walking gives you
can boost your running pace enough to make your overall pace faster. Galloway claims gains of up to 7 minutes in a half-marathon and 13 minutes in a marathon.
Websites such as Galloway’s give list of suggested run/walk ratios, but I haven’t found anything that lets you see what overall pace you’ll do if you follow a given run-walk strategy. Here, I aim to
fill that gap.
The table below tells you how long (in time) your bursts of running need to be to achieve a given overall average pace (left-hand column), for different paces of running (top row). It assumes that
you are going to alternate bursts of running with 1-minute walking breaks, and that you walk at a pace of 15 minutes per mile.
Here are two ways in which you might use the table.
Example 1: Suppose that you aspire to run the race at an average 10 minutes per mile. How fast and how long do your running bursts need to be? Locate 10:00 in the left-hand column of the table. Now
reading across, you come to the time 0:47. Looking at the top of the column, the pace in bold is 7:00 minutes per mile. So you can achieve a 10:00 per mile pace by running at 7:00 per mile for bursts
of 47 seconds, and walking for a minute between them. Proceeding along the same row, you could get the same overall pace by running at 7:30 per mile for bursts of 1m00s, and so on up to a probably
more realistic 9:30 per mile for bursts of 6m20s.
Example 2: Suppose that you think you can run at 9:30 per mile while you are actually running. What is the average pace for different lengths of running burst? Locate 9:30 along the top of the table.
Going down the column to where it says “6:20”, and reading across to the left-hand column, you find that running bursts 6m20s long will give you an average pace of 10:00 per mile. Similarly, running
bursts 2m51s long will give you an average pace of 10:30, and so on.
Note that the table says nothing about what you are capable of. It just tells you what your overall pace will be if you can achieve certain durations and paces for the running bursts.
If you want walking breaks longer than 1 minute, increase the length of the running bursts in the same proportion.
If the combination you want isn’t in the table, or you want to assume a different walking pace, or you want to work in kilometres, there’s a formula below that you can use.
The formula
Let your walking and running paces be p_w and p_r (in minutes per mile).
Let the durations of the walking breaks and running bursts be t_w and t_r (in minutes).
Your average pace (in the same units that you used to specify your running and walking paces) is given by
average pace = (t_w + t_r) / (t_w/p_w + t_r/p_r)
If you're walking for 1-minute breaks, this simplifies to
average pace = (1 + t_r) / (1/p_w + t_r/p_r)
Runners usually express how fast they are running in terms of minutes per mile (or kilometre). I’m going to call this the running pace. However, because we want to average over time, rather than
distance, we need to do the averaging using speeds expressed as miles (or kilometres) per minute.
Let your walking and running paces be p_w and p_r, so the corresponding speeds are 1/p_w and 1/p_r.
We will assume that you alternate running for a time t_r with walking breaks of duration t_w.
We’re going to work out the weighted average of your walking and running paces to work out the overall average pace. Because we’re averaging over time, not distance, we need to do the averaging using
speeds, not paces, and then convert back to a pace.
If your time-average speed is v, then
v = (t_w (1/p_w) + t_r (1/p_r)) / (t_w + t_r)
so your time-average pace (time-per-distance) is 1/v,
which we can tidy up a bit to give
average pace = (t_w + t_r) / (t_w/p_w + t_r/p_r)
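If you would rather compute an entry than read it off the table, here is a minimal Python sketch of the same formula, solved for the running-burst length, assuming 1-minute walking breaks at 15 minutes per mile as above; the function name is just illustrative.

def burst_minutes(target_pace, run_pace, walk_pace=15.0, walk_minutes=1.0):
    # Average pace = (t_w + t_r) / (t_w/walk_pace + t_r/run_pace); solve for t_r
    return walk_minutes * (target_pace / walk_pace - 1) / (1 - target_pace / run_pace)

print(round(burst_minutes(target_pace=10.0, run_pace=7.0) * 60))   # about 47 seconds
print(round(burst_minutes(target_pace=10.0, run_pace=9.5), 2))     # about 6.33 minutes (6m20s)

Both values reproduce the entries used in Example 1 above.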
Many thanks to Graham Rose for his wonderful cartoon. It feels like that from inside, too.
Mathematical typesetting was done using the QuickLatex plugin.
Biofuels in aviation
Boeing 787 Dreamliner. At least 30 football pitches of biofuel crop needed for one full-range flight. Image credit: pjs2005 from Hampshire, UK, CC BY-SA 2.0, via Wikimedia Commons.
Carbon emissions and climate change are a huge story in the news at the moment, and the aviation industry is, quite rightly, often in the spotlight. There is talk of using biofuels to partially or
completely displace fossil fuels in aviation.
That’s easy to say, but how much land would be needed to produce the energy crops? This is a complicated question, but what I want to do here is an order-of-magnitude calculation to show the alarming
scale of the issue. I’m going to ask what area of oil-seed crop we would need to fuel a single full-range flight of a typical long-haul airliner.
For a smallish long-haul airliner, such as the one above, and using the controversial but high-yielding oil palm for fuel, we’d need the annual crop from 20 hectares of land to fuel a single flight.
That’s about 30 football pitches. For one flight.
That figure becomes 100 hectares (a square kilometre, 150 football pitches) if we use the less controversial oil-seed rape. For one flight.
Or to put it into a different context, airports have large areas of grass on them. There’s roughly 2 square kilometres of grass at Heathrow. Let’s suppose that we use all of that area to grow
oil-seed rape instead. We could use that crop to fuel TWO full-range flights of a smallish long-haul airliner each year. About a quarter of a million planes take off from Heathrow annually.
I despair at the refusal of people (often privileged Westerners such as myself) to face up to reality when it comes to flying or transport more generally.
Yes, but… (1)
…isn’t this an unrealistically pessimistic calculation? We won’t necessarily be using dedicated fuel crops for aviation. For example, there are other crop residues that we could use to provide fuels.
About 70% of the land area of the UK is devoted to agriculture, about a third of which is arable land: roughly 60 000 square kilometres. So if we used the whole lot for growing oil-seed rape, it
looks doubtful that we’d keep Heathrow in jet fuel, even allowing for the facts that not every flight is long-haul and that not all planes take off with full tanks. But if, instead of using a crop
optimised for oil production, we use the wastes from crops optimised for food production, the land requirement must increase hugely. And don’t forget that some of those wastes already have uses.
Yes, but…(2)
…can’t we grow the fuels elsewhere and import them?
I haven’t done any sums here. But remember that other countries are likely to want to produce biofuels for their own aviation industries.
The calculation
There’s a table here showing the annual yield of various crops from which we can produce oil. The yields vary from 147 kg of oil per hectare per year for maize, to 1000 kg/ha/yr for oil-seed rape
(common in the UK), to 5000 kg/ha/yr for the highly controversial oil palm. I will assume that the oil can be converted to jet fuel with 100% efficiency.
The fuel capacity of long-haul airliners varies from about 100 tonnes (eg Boeing 787 Dreamliner) up to 250 tonnes (Airbus A380).
Taking the smallest plane and the highest-yielding oil crop, the annual land requirement is
100 000 kg ÷ 5000 kg/ha/yr = 20 hectares per full-range flight.
If we use oil-seed rape instead, the resulting land area is 100 hectares per flight.
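Here is a minimal Python sketch of the same sum for the three crops mentioned, assuming a 100-tonne fuel load and 100% conversion efficiency as above.

FUEL_KG = 100_000          # smallish long-haul airliner, full tanks
YIELD_KG_PER_HA = {"oil palm": 5000, "oil-seed rape": 1000, "maize": 147}

for crop, annual_yield in YIELD_KG_PER_HA.items():
    print(crop, round(FUEL_KG / annual_yield), "hectares per full-range flight")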
Why hillwalkers should love the Comte de Buffon
Part of Beinn a Bhuird, Cairngorms, Scotland.
Wandering the mountains of the UK has been a big part of my life. You won’t be surprised that before I start a long walk I like to know roughly what I’m letting myself in for. One part of this is
estimating how far I’ll be walking.
Several decades ago my fellow student and hillwalking friend David told me of a quick and simple way to estimate the length of a walk. It uses the grid of kilometre squares that is printed on the
Ordnance Survey maps that UK hillwalkers use.
To estimate the length of the walk, count the number of grid lines that the route crosses, and divide by two. This gives you an estimate of the length of the walk in miles.
Yes, miles, even though the grid lines are spaced at kilometre intervals. On the right you can see a made-up example. The route crosses 22 grid lines, so we estimate its length as 11 miles.
Is this rule practically useful? Clearly, the longer a walk, the more grid lines the route is likely to cross, so counting grid lines will definitely give us some kind of measure of how long the walk
is. But how good is this measure, and why do we get an estimate in miles when the grid lines are a kilometre apart?
I’ve investigated the maths involved, and here are the headlines. They are valid for walks of typical wiggliness; the rule isn’t reliable for a walk that unswervingly follows a single compass
• On average, the estimated distance is close to the actual distance: in the long run the rule overestimates the lengths of walks by 2.4%.
• There is, of course, some random variation from walk to walk. For walks of about 10 miles, about two-thirds of the time the estimated length of the walk will be within 7% of the actual distance.
• The rule works because 1 mile just happens to be very close to π/2 kilometres.
The long-run overestimation of 2.4% is tiny: it amounts to only a quarter-mile in a 10-mile walk. The variability is more serious: about a third of the time the estimate will be more than 7% out. But
other imponderables (such as the nature of the ground, or getting lost) will have a bigger effect than this on the effort or time needed to complete a walk, so I don’t think it’s a big deal.
In conclusion, for a rule that is so quick and easy to use, this rule is good enough for me. Which is just as well, because I’ve been using it unquestioningly for the past 35 years.
And the Comte de Buffon?
George-Louis Leclerc, Comte de Buffon, (1707-1788) was a French intellectual with no recorded interest in hillwalking. But he did consider the following problem, known as Buffon’s Needle:
Suppose we have a floor made of parallel strips of wood, each the same width, and we drop a needle onto the floor. What is the probability that the needle will lie across a line between two
Let’s recast that question a little and ask: if the lines are spaced 1 unit apart, and we drop the needle many times, what’s the average number of lines crossed by the dropped needle? It turns out
that it is
2l/π
where l is the length of the needle. Now add another set of lines at right angles (as if the floor were made of square blocks rather than strips). The average number of lines crossed by the dropped
needle doubles to
4l/π
Can you see the connection with the distance-estimating rule? The cracks in the floor become the grid lines, and the needle becomes a segment of the walk. A straight segment of a walk will cross, on average, 4l/π grid lines, where l is its length in kilometres.
So the fact that using a kilometre grid gives us a close measure of distance in miles is just good luck. It's because a mile is very close to π/2 kilometres.
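If you want to convince yourself before the future post, here is a minimal Python sketch of a Monte Carlo check: it drops random straight segments onto a 1 km grid and compares half the crossing count with the true length in miles (the 16 km test length is just an example).

import math, random

def grid_crossings(length_km):
    # A straight segment with random start point and direction on a 1 km grid
    x0, y0 = random.uniform(0, 1), random.uniform(0, 1)
    theta = random.uniform(0, 2 * math.pi)
    x1, y1 = x0 + length_km * math.cos(theta), y0 + length_km * math.sin(theta)
    return abs(math.floor(x1) - math.floor(x0)) + abs(math.floor(y1) - math.floor(y0))

length_km = 16.0       # roughly a 10-mile walk
trials = 100_000
estimate_miles = sum(grid_crossings(length_km) for _ in range(trials)) / (2 * trials)
print(round(estimate_miles, 2), round(length_km / 1.609, 2))   # ~10.2 vs ~9.94 miles

The estimate comes out a few per cent high on average, in line with the 2.4% figure above.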
In a future post, I’ll explore the maths further. We’ll see where the results above come from, and look in more detail at walk-to-walk variability. We’ll also see why results that apply to straight
lines also apply to curved lines (like walks), and in doing so discover that not only did the Comte de Buffon have a needle, he also had a noodle.
Mathematical typesetting by QuickLaTeX.
Here again is the processor package from my old laptop. The processor has a clock in it that delivers electric pulses that trigger the events in the processor. The clock on this processor “ticks” at
2.2 gigahertz, that is, it sends out 2.2 billion pulses per second.
Over two thousand million pulses every second! How can we make sense of such a huge number?
In this post, I’m going to do with time what I did with space in the previous post. I’m going to ask the question:
Suppose that we slow down the processor so that you could just hear the individual “ticks” of the the processor clock (if we were to connect it to a loudspeaker), and suppose that we slow down my
bodily processes by the same amount. How often would you hear my heart beat?
Answer: My heart would beat about once every year and a half.
The calculation
How slow would the processor clock need to tick for me to be able to hear the individual ticks? A sequence of clicks at the rate of 10 per second clearly sounds like a series of separate clicks.
Raise the frequency to 100 per second, and it sounds like a rather harsh tone; the clicks have lost their individual identity. Along the way, the change from sounding like a click-sequence to
sounding like a tone is rather gradual; there’s no clear cutoff.
You can try it yourself using this online tone generator. Choose the “sawtooth” waveform. This delivers a sharp transition once per cycle, which is roughly what a train of very short clicks would do,
and play around with the value in the “hertz” box. (Hertz is the unit of frequency; for example, 20 hertz is 20 cycles per second.)
I found that a 40 hertz sawtooth definitely sounds like a series of pulses, and that a 60 hertz sawtooth has a distinct tone-like quality. So let’s say that the critical frequency is 50 hertz, that
is, 50 ticks per second. I don’t expect you to agree with me exactly.
If I can hear individual pulses at a repetition rate of 50 hertz, then to hear the ticks of a 2.2 gigahertz clock I need to slow down the clock by a factor of
2.2 × 10^9 ÷ 50 = 4.4 × 10^7.
At rest, my heart beats about once per second, so if it was slowed down by the same factor as the processor clock, it would beat every 44 × 10^6 seconds, which is about every 17 months.
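Here is a short Python sketch of the same arithmetic, assuming the 50 hertz threshold and a resting heart rate of one beat per second as above.

clock_hz = 2.2e9        # processor clock
audible_hz = 50         # rough threshold for hearing separate clicks
slowdown = clock_hz / audible_hz
print(slowdown)                           # 4.4e7
print(slowdown / (3600 * 24 * 30.4))      # heart period in months: about 17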
Or should it be twice as long?
The signal from the processor clock is usually a square wave with 50% duty cycle. Try the square wave option on the online signal generator with a 1 hertz frequency (one cycle per second). You’ll
hear two clicks per second, because in each cycle of the wave, there are two abrupt transitions, a rising one and a falling one.
This means that if we did connect a suitably slowed-down processor clock to a loudspeaker, we’d hear clicks at twice the nominal clock rate. Looked at this way, we’d need to slow down the clock, and
my heart, twice as much as we’ve calculated above. My heart would beat once every three years.
However, most processors don’t respond to both transitions of the clock signal. Some processors respond to the rising transition, others to the falling transition. To assume that we hear both of
these transitions is to lose the spirit of what we mean by one “tick” of the processor clock.
Making the micro macro
What is this strange collection of pillars, one of which is propping me up? Read on to find out. Many thanks to Graham Rose for the illustration.
On the right is the processor package from my old laptop. The numbers associated with microelectronic devices like this one are beyond comprehension. The actual processor – the grey rectangle in the
middle – measures only 11 mm by 13 mm and yet, according to the manufacturer, it contains 291 million transistors. That’s about 2 million transistors per square millimetre.
To try to bring these numbers within my comprehension, I asked the following question:
If I were to magnify the processor – the grey rectangle – so that I could just make out the features on its active surface with my unaided eye, how big would it be?
The answer is that the processor would be something like 15 metres across.
Consider that for a moment: an area slightly larger than a singles tennis court, packed with detail so fine that you can only just make it out.
The package that the processor is part of would be over 50 metres across, and the pins on the back of the package (right) would be 3 metres tall, half a metre thick, and about 2 metres apart.
The result above is rather approximate, as you’ll see if you read the details of the calculation below. However, if it inadvertently overstates the case for my processor, which is 10 years old, the
error is made irrelevant by progress in microprocessor fabrication. Processors are available today that are similar in physical size but on which the features are nearly 5 times smaller. If my
processor had that density of features, the magnified version would be around 70 metres across, on a package 225 metres across. And those pins would be 13 metres tall and 2.25 metres thick.
The calculation
The processor is an Intel T7500. According to the manufacturer, the chip is made by the 65-nanometre process. Exactly what this means in terms of the size of the features on the chip is quite hard to
pin down. Printed line widths can be as low as 25 nm, but the pitch of successive lines may be greater than the 130 nm that you might expect. I’ve assumed that the lines on the chip and the gaps
between them are all 65 nm across.
“The finest detail that we can make out” isn’t well defined either. It depends, among other things, on the contrast. But roughly, the unaided human visual system can resolve details as small as 1
minute of arc subtended at the eye in conditions of high contrast. This is about 3 × 10^-4 radians. At a comfortable viewing distance of 30 cm, this corresponds to 0.09 mm.
So to make the features on the processor just visible (taking high contrast for granted) we need to magnify them from 65 nm to 0.09 mm, which is a magnification factor of 1385.
Applying this magnification factor to the whole processor, its dimensions of 11 by 13 millimetres become 15 by 18 metres. The pins are 2 mm high, so they become 2.8 metres high and about half a metre thick.
Some processors are now made using 14 nm technology. This increases the required magnification factor by a factor of 65/14, to 6430, yielding the results given in Caveat above.
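Here is a minimal Python sketch of the magnification sums, using the assumed 65 nm feature size and the 0.09 mm visual limit quoted above.

feature_m = 65e-9          # assumed feature size (65 nm process)
resolvable_m = 0.09e-3     # ~1 minute of arc at a 30 cm viewing distance

magnification = resolvable_m / feature_m
print(round(magnification))                                               # about 1385
print(round(0.011 * magnification, 1), round(0.013 * magnification, 1))   # die: ~15 m by ~18 m
print(round(magnification * 65 / 14))                                     # 14 nm process: about 6430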
The 0.7%
No, this post isn’t about wealth inequality. It’s about daylight inequality.
Today is the day of the spring equinox. For the past three months, the days have gradually been getting longer, and from tomorrow, the sun finally starts to spend more time above the horizon than
below it*.
It’s also the day when everyone in the world enjoys a day of approximately equal duration. But from tomorrow onwards until the autumnal equinox in September, the further north you are, the longer
your days will be, in the sense that the sun will spend more time above the horizon.
I wondered where Edinburgh, where I live, fits into this scale of day lengths. For the next six months, we’ll have longer days than anyone living south of us. What fraction of the world’s population
is that?
I estimate that, over the coming summer, we’ll have longer days than roughly 99.3% of the world’s population.
I was very surprised at how large this number is. And delighted too: it somehow seems to make up for the seemingly endless dark dreich dampness of the Scottish winter.
The calculation
Rather than counting the number of people who live south of Edinburgh, it’s easier to count the much smaller number who live further north.
There are a few countries that are wholly north of Edinburgh, namely Finland, Norway, Latvia, Estonia, Iceland, the Faroes, and Greenland. There are some countries that are partially north of
Edinburgh: Sweden, Denmark, and Lithuania. And then there is Russia, which spans a vast range of latitudes, but which has relatively few cities north of Edinburgh, of which the largest and/or most
well known are St Petersburg, Nishny Novgorod, Perm, Yekaterinburg, Tomsk, Archangelsk, and Murmansk (though I counted a few more). There’s one state of the USA: Alaska. Finally we have Aberdeen,
Inverness, Dundee and Perth, the main centres of population further north in Scotland.
To roughly compensate for the fact that these counts don’t cover minor centres of population, and that the northern part of Moscow probably overlaps somewhat with Edinburgh, I included the whole of
Sweden, Denmark and Lithuania in the sum, despite the fact that the countries have major towns that are south of Edinburgh.
When I added it all up, it came to just under 44 million people living north of Edinburgh, against a world population at the time (I actually did the sums a few years ago) of 6.8 billion. Expressed
as a percentage, 99.35% of the world’s population live south of Edinburgh, which I’ll round to 99.3% to avoid overstating my case.
*Actually, the sun appears to be above the horizon even at times when, geometrically speaking, it is slightly below it. This is because the light rays are refracted by the atmosphere and so travel in
slight curves rather than straight lines.
Puff pastry
NOTE: the video that previously headed this post is no longer available. It shows the mixing of two thick sheets of coloured silicone material that had the apparent consistency of clay. One sheet was
laid on the other sheet, and the pair were rolled up. The roll was squashed flat by passing it through a pair of rollers. It was then rolled up again, squashed flat again, rolled up again, squashed
flat and so on. Remarkably, after only 4 such cycles the mixing was done to the satisfaction of the operators.
I was rather taken by the video above, which I first saw on Core77. I started wondering how many times you have to put the roll of silicone material through the machine to get satisfactory mixing of
the two colours of material. The people in the video consider the job done after four passes. What does that mean in terms of the thickness of the red and white layers within the material?
The roll is a rather complicated object, so I worked with an idealised version of the real process, where the sheet emerging from the rollers isn’t rolled up, but cut into several pieces which are
stacked up before being passed through the rollers again. I came up with the following:
After only 2 passes, the layers in the slab are too thin to see with the naked eye. And by some margin, too: there are over 600 of them and they’re only a fortieth of a millimetre thick. If you made
a perpendicular cut through the slab, it wouldn’t appear to have red and white layers in it.
After only 4 passes, a standard compound microscope operating in visible light wouldn’t be able to resolve the layers in the slab.
After only 6 passes, the layers would be thinner than the width of the molecules of the silicone material. At this stage the concept of red and white layers no longer makes sense.
These results will only apply to material near the centre of the roll. It’s easy to see from the video that material near the edges is not mixed so well.
The calculation
From the video, it looks like there are about 9 turns in the roll. Each time the roll is flattened by the rollers, those 9 turns are converted into 18 layers. The resulting sheet is rolled up and
passed through the rollers again, multiplying the number of layers by 18, and so on.
This doesn’t work at the sides of the roll. We’ll ignore that complication, and work with a flat analogue of the actual situation. We’ll assume that we start with two long rectangular flat sheets of
material, a white one and a red one, laid on top of each other. We’ll cut this assembly into 18 identical pieces, and make a stack of them; this stack will have 36 layers. We now flatten this stack
in the rollers, cut it into 18 pieces, stack them up (giving us 648 layers), and repeat.
On emerging from the roller, the sheet appears, by eye, about 1.5 cm thick. We’ll assume that we start with two layers of half this thickness. The table below shows the number of layers and the
thickness of each layer after 0, 1, 2, 3… passes through the rollers.
Number of passes Number of layers Layer thickness (m)
0 2 7.50 × 10^-3
1 36 4.17 × 10^-4
2 648 2.31 × 10^-5
3 11 664 1.29 × 10^-6
4 209 952 7.14 × 10^-8
5 3 779 136 3.97 × 10^-9
6 68 024 448 2.21 × 10^-10
We can identify various milestones, as follows:
Limit of visual acuity. A person with clinically normal vision can resolve detail that subtends roughly 1 minute of arc at the eye. At a viewing distance of 30 cm, this corresponds to about 0.1 mm
(10^-4 m). The layers of material are much thinner than this after only 2 passes. If you made a perpendicular cut through the slab of material, after two passes you wouldn’t be able to see the
layered structure. (This might not be true if the cut was oblique.)
Limit of standard light microscopy. A compound microscope working in visible light can resolve detail down to about 200 nm (2 × 10^-7 m). The layers become thinner than this after only 4 passes.
Single-molecule layers. The question here is the number of passes needed before the layers are less than a molecule thick (at which point the idea of layers fails). The difficulty is that molecules
of silicones are long chains, and these chains are almost certainly bent, so their size is ill-defined. This part of the calculation will be hugely approximate. We’ll be as pessimistic as possible,
assuming that the molecules are roughly straight and that they lie parallel to the layers in the slab of material.
A common silicone material is polydimethylsiloxane or PDMS. This consists of a silicon-oxygen backbone with methyl groups attached. The lengths of carbon-silicon and carbon-hydrogen bonds are 1.86 ×
10^-10 m and 1.09 × 10^-10 m respectively. So the width of the molecule is going to be, very, very approximately, of the order of 4 × 10^-10 m. The layers are thinner than this after only 6 passes.
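For anyone who wants to regenerate the table above, here is a minimal Python sketch, assuming 18 pieces per pass and a starting slab of two 7.5 mm layers as described.

layers, thickness_m = 2, 7.5e-3        # start: two layers, each 7.5 mm thick
for n_passes in range(7):
    print(n_passes, layers, f"{thickness_m:.2e}")
    layers *= 18                        # each pass cuts the sheet into 18 stacked pieces
    thickness_m /= 18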
Ink cubed
Edwin Pickstone, a colleague at Glasgow School of Art, appointed me as a maths consultant recently. His project was to produce a book which had a black square on each page. The size of the square
and the number of pages had to be such that the envelope of the resulting block of ink was a cube containing 1 kilogram of ink.
His printer provided a sample run so that we knew what the areal density of the ink film would be, and I did the sums to work out how big the squares would need to be and how many pages we’d need. It
turned out that we needed squares of side 19.28 cm, and (coincidentally) 1928 pages.
The responsibility weighed heavily on me, and I was a little nervous as the order went off to the printer. So I was relieved and delighted when Edwin told me that when he’d walked into the printers
to pick the job up, the printer said ‘bloody hell, it took a whole kilo tin of ink to print your job’.
Edwin is Lecturer, Typography Technician and Designer in Residence at Glasgow School of Art.
Pedal-powered geology
What on earth am I doing here? Read on to find out. Many thanks to Simon Gage for the idea and to Graham Rose for the wonderful illustration.
In the previous post, we discovered that the kinetic energy of a drifting continent is of the same general magnitude as that of a moving bicycle and its rider – 1500 joules would be a typical figure.
I went on to calculate that, whereas it takes me only about 10 seconds to get my bike up to full speed, it would take me hundreds of years to get the continent up to its tiny full speed were I to put
my shoulder against it and push (assuming that it was perfectly free to move). How can this be, when the amount of energy that I’m giving each of these objects is the same?
The problem is that when I push the continent, I am, effectively, in the wrong gear.
On a bike with gears, you’ve got a range of choices about how you power it: you can ride in a high gear, pedalling slowly but pushing hard on the pedals, or ride in a low gear, pedalling more quickly
but pushing less hard on the pedals. There’s a simple tradeoff: if you want to pedal half as fast, you’ve got to push twice as hard for the same effect.
But there’s a limit to how hard you can push on the pedals, which means that if you move up too far up through the gears, there comes a point where you can no longer make up for the decreased
pedalling rate by pushing harder on the pedals, and the power that you can supply to the bicycle falls.
Anyone who’s tried to accelerate a bicycle when they are in too high a gear will have experienced this problem, and it’s what I experience when I try to push the continent directly. Because the top
speed of the continent is extremely low (about the speed of a growing fingernail), I’m necessarily pushing it very slowly as I accelerate it. This means that to give it energy at the rate that I want
to (1500 joules in 10 seconds, like the bike) I would have to push it impossibly hard – the force needed is about the same as the weight of a 300-metre cube of solid rock.
Is there a way that we can put me into a lower gear, so that I can push with a force that suits me, over a longer distance, and still apply the very high force over a short distance to the continent?
Yes. Just as we’ve all used a screwdriver as a lever to get the lid of a tin of paint off, so I could use a lever to move the continent. Similarly to the bike gears, the lever allows me to exchange
pushing hard over a small distance with pushing less hard over a longer distance. To do the job, the lever would need to be long enough to allow me to push, with all my might, through a distance of
about 2.5 metres, with the short arm of the lever pushing the continent. We’d need an imaginary immoveable place for me to stand, and we could use the edge of the neighbouring continent as the pivot
(just as we use the rim of a paint tin as the pivot). The catch is the length of the lever: if the short arm was 1 metre long, the long arm would be about 1.5 million kilometres long.
Simon Gage of Edinburgh International Science Festival suggested a more compact arrangement: a bicycle with an extremely low gear ratio, with the front wheel immobilised on the neighbouring
continent (assumed immoveable), and the back wheel resting on the continent we’re trying to accelerate. A transmission giving 17 successive 4:1 speed reductions would do the job nicely. Ten seconds
of hard pedalling would get the continent up to full speed. To me on the saddle, it shouldn’t feel any different to accelerating my bike away from the lights.
A wee caveat. This is a thought experiment, and we’ve swept some fairly significant engineering issues under the carpet. The rearmost parts of the power train would be moving at speeds that are
literally geological, so in reality it would take me years of pedalling to take all of the slack and stretch out of the system. These parts would also be transmitting mountainous forces, and so
they’d need to be supernaturally strong. There will be frictional losses. And then there’s the issue of transmitting a gigantic force to the continent through the contact of a bike tyre on the
The calculations
What force is required to accelerate the Eurasian plate to top speed in 10 seconds?
The top speed of the plate is 3.2 × 10^-10 ms^-1. If I accelerate it uniformly, its average speed will be half of this, and so in the 10 seconds over which I hope to accelerate it, it will travel 1.6
× 10^-9 m.
Now W = fd
where W is the work that I do on the plate (ie the kinetic energy that I give it), f is the force that I apply to it, and d is the distance through which I push the plate. Rearranging gives us
f = W/d
We know W from the previous post (it’s 1500 joules) and we’ve just calculated d. Thus f works out at about 9.4 × 10^11 newtons.
For comparison, a 300-metre cube of rock of density 2700 kg m^-3 will have a weight of (300 m)^3 × 2700 kg m^-3 × 9.81 m s^-2 = 7 × 10^11 newtons roughly.
The lever
When a lever is used to amplify a force, the ratio of the lengths of the arms of the lever needs to be the same as the ratio of the two forces. Suppose that I can push with a force equal to my own
body weight, about 600 newtons. If I’m to use a lever to amplify my push of 600 N to a force of 9.4 × 10^11 N, the ratio of the lengths of the arms needs to be (9.4 × 10^11)/600, or roughly 1.5 × 10^
9. So if the short arm of the lever is 1 metre long, the long arm needs to be about 1.5 × 10^9 metres long, which is 1.5 million kilometres. For comparison, the Moon is about 400,000 kilometres away.
To do 1500 joules of work with a force of 600 N, I’d need to push over a distance of 2.5 metres (because 600 × 2.5 = 1500).
The bicycle gearing
I estimated that it takes me 15 pedal revolutions to get my bike up to full speed. Knowing the length of the pedal cranks, I know the total distance that I have pushed the pedals through, and I know
how much work I have done on the bicycle – 1500 joules. (I’m ignoring energy losses here, because they are small at low speeds on a bike and the calculation is highly approximate anyway). Using work
done = force × distance, this gives an average force on the pedals of about 94 newtons.
The 17 stages of 4:1 reduction mean that the back wheel is rotating 4^17 = 1.7 × 10^10 times slower than I'm pedalling. The pedalling force is amplified in the same ratio, to give a force on the teeth
of the rearmost gear of 1.6 × 10^12 newtons. We now have to allow for the fact that the radius of the rear wheel is about twice the length of the pedal crank. This roughly halves the force available
at the rim of the rear wheel, giving a force of about 8 × 10^11 newtons, which is close to what we need.
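Here is a minimal Python sketch of the gearing sums, using the 94 newton pedal force and the 2:1 wheel-to-crank ratio estimated above.

pedal_force_n = 94
reduction = 4 ** 17                         # 17 stages of 4:1
force_at_rear_gear_n = pedal_force_n * reduction
force_at_rim_n = force_at_rear_gear_n / 2   # wheel radius is roughly twice the crank length
print(f"{reduction:.2e}")                   # about 1.7e10
print(f"{force_at_rim_n:.1e}")              # about 8e11 newtons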
The kinetic energy of a drifting tectonic plate…
…is broadly similar to the kinetic energy of me and my bike as I pedal along.
Map of tectonic plates (United States Geological Survey) http://pubs.usgs.gov/publications/text/slabs.html
According the the theory of plate tectonics, the outer layer of the Earth is divided into a number of separate plates, which very slowly drift around, opening and closing oceans, causing earthquakes,
and thrusting up mountain ranges.
A moving body has energy by virtue of its motion: kinetic energy. Kinetic energy is proportional to a body’s mass and to the square of its speed.
Now tectonic plates move extremely slowly: the usual comparison is with a growing fingernail. But they are also extremely heavy: tens of millions of square kilometres in area, over 100 km thick, and
made of rock. I wondered how the minute speed and colossal mass play out against each other: what’s the kinetic energy of a drifting tectonic plate?
There are so many variables, that vary such a lot, that this calculation is going to be extremely approximate. But the answer is delightfully small: the kinetic energy of the tectonic plate on which
I live, as observed from one of the plates next door, is about the same as the kinetic energy of me and my bike when I’m going at a reasonable pace: about 1500 joules.
Me struggling up one of the many steep roads in north-west Scotland. Here, the kinetic energy of me and my bike is much less than the kinetic energy of a drifting tectonic plate. In fact the speed of
me and my bike is probably much less than that of a drifting tectonic plate ;-).
This is a fun calculation to do, but we shouldn’t get carried away thinking about the kinetic energy of tectonic plates. Plates are driven by huge forces, and their motion is resisted by equally
large forces. The mechanical work done by and against these forces will dominate a plate’s energy budget in comparison to its kinetic energy.
But the calculation does provoke an interesting thought about forces and motion. I can get my bike up to full speed in, say, 10 seconds. If the Eurasian plate were as free to move as my bike, and I
were to put my shoulder against it and shove as hard as I could, it would take me about 500 years to get it up to its (very tiny) full speed.
In both cases, I’m giving the moving object roughly 1500 joules of kinetic energy. How come I can give that energy to my bike in a few seconds, but to give it to the plate would take me centuries?
I’ll return to that thought in a later post.
The calculation
Depending on how you count them, there are 6-7 major tectonic plates, 10 minor plates, and many more microplates. The plates vary hugely in size, from the giant Pacific Plate with an area of 100
million km^2, to the dinky New Hebridean plate, which is a hundred times smaller. The microplates are smaller still. Plates also vary a lot in speed: 10-40 mm per year is typical.
I’m going to be parochial, and choose the Eurasian plate for this calculation.
Let’s call the area of the plate a and its mean thickness t. Its volume is then given by at, and if its mean density is ρ, then its mass m is ρat.
A body of mass m moving at a speed v has kinetic energy ½mv^2. So our plate will have kinetic energy ½ρatv^2.
The area of the Eurasian plate is 67,800,000 km^2 or 6.78 × 10^13 m^2, and its speed relative to the African plate (the only speed I have) is given as 7-14 mm per year. We'll use 10 mm per year,
which is 3.2 × 10^-10 ms^-1. The thickness of tectonic plates in general varies roughly in the range 100-200 km depending upon whether we are talking about oceanic or continental lithosphere; let’s
call it 150 km or 1.5× 10^5 m. The density of lithospheric material varies in the range 2700-2900 kg m^-3; we’ll use 2800 kg m^-3.
Putting all of these numbers into our formula for kinetic energy, we get a value of 1500 joules (to 2 significant figures, which the precision of the input data certainly doesn’t warrant).
Now for me and my bike. I weigh about 57 kg, my bike is probably about 10 kg. Suppose I’m riding at 15 mph, which is 6.7 ms^-1. My kinetic energy is almost exactly…
…1500 joules!
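Here is a minimal Python sketch of both kinetic-energy sums, using the figures quoted above.

# Eurasian plate
area_m2 = 6.78e13
thickness_m = 1.5e5
density_kg_m3 = 2800.0
speed_ms = 3.2e-10                      # about 10 mm per year

plate_mass_kg = density_kg_m3 * area_m2 * thickness_m
plate_ke_j = 0.5 * plate_mass_kg * speed_ms ** 2

# Me and my bike
bike_ke_j = 0.5 * (57 + 10) * 6.7 ** 2

print(round(plate_ke_j), round(bike_ke_j))   # both roughly 1500 joules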
The closeness of these two values is unmitigated luck*, and we shouldn't be seduced by the coincidence. Just varying the speed of the plate in the range 7-14 mm per year would cause a 4-fold change in
kinetic energy, and there’s the variability in plate thickness and rock density to take into account as well. The choice of bike speed was arbitrary, I guessed the mass of the bike, and I’ve since
realised that I didn’t account for the fact that the wheels of my bike rotate as well as translate.
However, what we can say is that the kinetic energy of a drifting continent is definitely on a human scale, which leads to a new question:
Suppose the Eurasian plate were as free to move as my bicycle, and that I put my shoulder against it and shoved, how long would it take me to get it up to speed?
From the figures above, the mass of the plate is 2.85 × 10^22 kg. If I can push with a force equal to my own weight (about 560 newtons) then by Newton’s 2nd Law I can give it an acceleration of about
1.96 × 10^-20 ms^-2. Rearranging the equation of motion v = at, where v is the final speed, a is the acceleration, and t is the time, then t = v/a. Inserting the values for v and a, we get t = 1.6 ×
10^10 seconds, or about 500 years.
* I didn’t tweak my assumptions: what you see above really is the very first version of the calculation! | {"url":"https://bencraven.org.uk/category/calculations/","timestamp":"2024-11-10T12:09:34Z","content_type":"text/html","content_length":"107467","record_id":"<urn:uuid:6920e469-ba05-4118-8697-899a3754d6e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00113.warc.gz"} |
import random
import math

Square(color='white')
pixelSize = 4
for x in range(0, 100 + pixelSize, pixelSize):
    for y in range(0, 100 + pixelSize, pixelSize):
        r = int(math.sqrt(((50.0 - x) * (50.0 - x)) + ((50.0 - y) * (50.0 - y))) * 3.5) % 256
        g = int(math.sqrt(pow(x, 2) + pow(y, 2)) * 1.8) % 256
        b = int(math.sqrt(((100 - x) * (100 - x)) + ((y) * (y))) * 1.8) % 256
        #rnd = lambda: random.randint(0,255)
        color = '#%02X%02X%02X' % (r, g, b)
        Square(x=x, y=y, width=pixelSize + 1, height=pixelSize + 1, color=color)
        #Text(text='#%02X' % int(math.sqrt(10))) | {"url":"https://shrew.app/show/NoorsDad/colors","timestamp":"2024-11-08T09:11:01Z","content_type":"text/html","content_length":"9695","record_id":"<urn:uuid:64ba3f3f-77e6-4250-ada2-ab39fb777aed>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00014.warc.gz"}
How to use this tool?
This free online converter lets you convert code from Matlab to R in a click of a button. To use this converter, take the following steps -
1. Type or paste your Matlab code in the input box.
2. Click the convert button.
3. The resulting R code from the conversion will be displayed in the output box.
The following are examples of code conversion from Matlab to R using this converter. Note that you may not always get the same code since it is generated by an AI language model which is not 100%
deterministic and gets updated from time to time.
Example 1 - Is String Palindrome
Program that checks if a string is a palindrome or not.
Example 2 - Even or Odd
A well commented function to check if a number is odd or even.
Key differences between Matlab and R
Syntax: Matlab uses a syntax that is similar to traditional programming languages, with a focus on matrix operations and numerical computations. R uses a syntax that is focused on statistical analysis and data manipulation, with a wide range of functions and operators for these tasks.
Paradigm: Matlab is primarily a procedural language, but it also supports object-oriented programming. R is a functional programming language, but it also supports procedural and object-oriented programming.
Typing: Both Matlab and R are dynamically typed, meaning that variable types are determined at runtime.
Performance: Matlab is known for its high performance in numerical computations and matrix operations. R can be slower than Matlab in numerical computations and matrix operations, but it has many packages that can improve performance.
Libraries and frameworks: Matlab has a large number of built-in functions and toolboxes for various applications, but it may require additional toolboxes for some tasks. R has a vast library of packages for various applications, including statistical analysis, data visualization, and machine learning.
Community and support: Both Matlab and R have large and active communities, with many resources available for learning and troubleshooting.
Learning curve: Matlab has a relatively steep learning curve, especially for those without a background in programming or numerical analysis. R has a moderate learning curve, with a focus on statistical analysis and data manipulation. | {"url":"https://www.codeconvert.ai/matlab-to-r-converter","timestamp":"2024-11-10T20:38:55Z","content_type":"text/html","content_length":"32106","record_id":"<urn:uuid:8b150871-a51e-4cf2-a045-e392eac374bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/WARC/CC-MAIN-20241110201420-20241110231420-00103.warc.gz"}
37. A correlation coefficient of r = -0.98 between two quantitative variables A and B indicates that
A. As A increases, B tends to increase.
B. Changes in A cause changes in B.
C. As A increases, B tends to decrease.
D. There is a very weak association between A and B, and change in A will not affect B.
Please show work using excel functions!
37. The correlation coefficient between A and B is -0.98.
We can see that the absolute value of the correlation is 0.98, which is close to 1, i.e., the correlation is very strong. It indicates that A and B are strongly associated, i.e., the linear relationship between A and B is very strong.
Also, note that the sign of the correlation coefficient is negative, which indicates that as A increases, B tends to decrease, and vice versa.
Hence, Option (C) is the correct choice. | {"url":"https://justaaa.com/statistics-and-probability/949741-37-a-correlation-coefficient-of-r-098-between-two","timestamp":"2024-11-14T07:46:24Z","content_type":"text/html","content_length":"43624","record_id":"<urn:uuid:2298d43f-4b1e-4304-90d9-5ed861919089>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00496.warc.gz"} |
Java Program to Find Minimum Revolutions to Move Center of a Circle to a Target - BTech Geeks
Java Program to Find Minimum Revolutions to Move Center of a Circle to a Target
In the previous article, we have seen Java Program to Solve Pizza Cut Problem(Circle Division by Lines)
In this article we will discuss about how to find minimum revolutions to move center of a circle to a target using java programming language.
Java Program to Find Minimum Revolutions to Move Center of a Circle to a Target
Before jumping into the program directly, let’s first know how can we find minimum revolutions to move center of a circle to a target .
Formula to Find Minimum Revolutions to Move Center of a Circle to a Target: ceil(d / (2 * r))
When r = 2, P1 = (0,0), and P2 = (0,4), the distance d = 4.
Minimum Revolutions: ceil(d / (2 * r))
=> ceil(4 / (2 * 2))
=> 1
Let’s see different ways to find minimum revolutions to move center of a circle to a target.
Method-1: Java Program to Find Minimum Revolutions to Move Center of a Circle to a Target By Using Static Value
• Declare the values for the coordinates of the target point, the coordinates of the circle's center, and the radius.
• Find the distance between both the points.
• Find the minimum revolutions using the formula ceil(distance/(2*radius))
• Then print the result.
import java.awt.Point;
import java.util.Scanner;
import static java.lang.Math.*;
public class Main
{
    public static void main(String[] args){
        // Static initialization of both points and the radius
        Point rad = new Point(0,0);
        Point p = new Point(0,4);
        double radius = 2;
        // Calculates the distance between the two center points
        double distance = Math.sqrt((rad.x-p.x)*(rad.x-p.x)+(rad.y-p.y)*(rad.y-p.y));
        // Prints the minimum revolutions
        System.out.println("The minimum revolutions required is "+(int)Math.ceil(distance/(2*radius)));
    }
}
The minimum revolutions required is 1
Method-2: Java Program to Find Minimum Revolutions to Move Center of a Circle to a Target By User Input Value
• Take user input for the coordinates of the target point, the coordinates of the circle's center, and the radius.
• Find the distance between both the points.
• Find the minimum revolutions using the formula ceil(distance/(2*radius))
• Then print the result.
import java.awt.Point;
import java.util.Scanner;
import static java.lang.Math.*;
public class Main
{
    public static void main(String[] args){
        Scanner scan = new Scanner(System.in);
        // Asking the user to input both points and the radius
        System.out.println("Enter coordinates of the point");
        Point p = new Point(scan.nextInt(),scan.nextInt());
        System.out.println("Enter coordinates of the radius");
        Point rad = new Point(scan.nextInt(),scan.nextInt());
        System.out.println("Enter the radius");
        double radius = scan.nextDouble();
        // Calculates the distance between the two center points
        double distance = Math.sqrt((rad.x-p.x)*(rad.x-p.x)+(rad.y-p.y)*(rad.y-p.y));
        // Prints the minimum revolutions
        System.out.println("The minimum revolutions required is "+(int)Math.ceil(distance/(2*radius)));
    }
}
Enter coordinates of the point
Enter coordinates of the radius
Enter the radius
The minimum revolutions required is 1
Related Java Articles: | {"url":"https://btechgeeks.com/java-program-to-find-minimum-revolutions-to-move-center-of-a-circle-to-a-target/","timestamp":"2024-11-03T00:13:03Z","content_type":"text/html","content_length":"63788","record_id":"<urn:uuid:d0dd8fed-00b2-4f1e-bb23-0b23b4b458bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00222.warc.gz"} |
Effective Duration: Definition; Formula; Example » YVES BROOKS
Effective Duration: Definition; Formula; Example
When it comes to investing, understanding the concept of duration is crucial. Duration measures the sensitivity of a bond's price to changes in interest rates. It helps investors assess the potential
impact of interest rate fluctuations on their bond investments. While there are different types of duration, one that is particularly useful is effective duration. In this article, we will explore
the definition, formula, and provide examples of effective duration to help you better understand this important concept in finance.
What is Effective Duration?
Effective duration is a measure of a bond's sensitivity to changes in interest rates, taking into account not only the bond's maturity but also its cash flows and the optionality embedded in it. It
provides investors with an estimate of how much the bond's price is likely to change in response to a change in interest rates.
Unlike Macaulay duration, which only considers the timing of cash flows, effective duration incorporates the impact of changes in interest rates on both the timing and size of cash flows. This makes
it a more accurate measure of a bond's price sensitivity to interest rate movements.
Formula for Effective Duration
The formula for effective duration is as follows:
Effective Duration = (P[–] – P[+]) / (2 * P[0] * Δy)
• P[–] represents the price of the bond when interest rates decrease by Δy.
• P[+] represents the price of the bond when interest rates increase by Δy.
• P[0] represents the initial price of the bond.
• Δy represents the change in interest rates.
The formula calculates the percentage change in the bond's price for a given change in interest rates. It takes into account the convexity of the bond, which captures the non-linear relationship
between bond prices and interest rates.
Example of Effective Duration
Let's consider an example to illustrate how effective duration works. Suppose you own a bond with an initial price of $1,000 and you want to know how sensitive its price is to a 1% (Δy = 0.01) change in interest rates.
Using the formula, we can calculate the bond's effective duration and, from it, the estimated percentage change in its price:
Effective Duration = (P[–] – P[+]) / (2 * P[0] * Δy)
Effective Duration = (P[–] – P[+]) / (2 * $1,000 * 0.01)
Let's assume that the bond's price rises to $1,050 when interest rates decrease by 1% and falls to $950 when interest rates increase by 1%. Plugging in these values, we get:
Effective Duration = ($1,050 – $950) / (2 * $1,000 * 0.01)
Effective Duration = $100 / $20
Effective Duration = 5
An effective duration of 5 means that the bond's price is expected to change by roughly 5% for every 1% change in interest rates: it would fall by about 5% if rates rise by 1%, and rise by about 5% if rates fall by 1%.
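For readers who like to check the arithmetic, here is a minimal Python sketch of the calculation above; the function name and figures are illustrative only.

def effective_duration(p_down, p_up, p_0, delta_y):
    # p_down: price if rates fall by delta_y; p_up: price if rates rise by delta_y
    return (p_down - p_up) / (2 * p_0 * delta_y)

duration = effective_duration(p_down=1050, p_up=950, p_0=1000, delta_y=0.01)
print(duration)                       # 5.0
print(-duration * 0.01 * 100, "%")    # estimated price change for a 1% rate rise: -5.0 %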
Why is Effective Duration Important?
Effective duration is an essential tool for bond investors because it helps them assess the potential impact of interest rate changes on their bond holdings. By understanding a bond's effective
duration, investors can make more informed decisions about their portfolio allocation and risk management strategies.
Here are some key reasons why effective duration is important:
• Interest Rate Risk Assessment: Effective duration allows investors to gauge the sensitivity of a bond's price to changes in interest rates. Bonds with longer effective durations are more
sensitive to interest rate movements, making them riskier in a changing interest rate environment.
• Portfolio Diversification: By considering the effective durations of different bonds in a portfolio, investors can diversify their holdings and reduce overall interest rate risk. Combining bonds
with varying effective durations can help offset potential losses in one bond with gains in another.
• Yield Curve Analysis: Effective duration can provide insights into the shape and slope of the yield curve. Bonds with longer effective durations tend to have steeper yield curves, indicating
higher yields for longer maturities.
Limitations of Effective Duration
While effective duration is a valuable measure, it does have some limitations that investors should be aware of:
• Assumes Parallel Shifts in Yield Curve: Effective duration assumes that changes in interest rates affect all maturities equally. In reality, yield curves can shift in different ways, with
short-term rates moving differently from long-term rates. This limitation can impact the accuracy of effective duration calculations.
• Does Not Account for Credit Risk: Effective duration focuses solely on interest rate risk and does not consider credit risk. Bonds with the same effective duration may have different credit
qualities, leading to variations in their price sensitivity to interest rate changes.
• Does Not Capture Optionality: Effective duration does not fully capture the impact of embedded options, such as call or put options, on a bond's price sensitivity. Bonds with embedded options may
exhibit different price behavior compared to bonds without options.
Effective duration is a powerful tool for bond investors to assess the potential impact of interest rate changes on their bond holdings. It provides a more accurate measure of a bond's price
sensitivity by considering both the timing and size of cash flows, as well as the optionality embedded in the bond. By understanding effective duration, investors can make more informed decisions
about portfolio allocation, risk management, and yield curve analysis. However, it is important to recognize the limitations of effective duration and consider other factors, such as credit risk and
embedded options, when evaluating bond investments.
You must be logged in to post a comment. | {"url":"https://yves-brooks.com/glossary/e/effective-duration-definition-formula-example/","timestamp":"2024-11-08T05:57:10Z","content_type":"text/html","content_length":"123832","record_id":"<urn:uuid:480689d8-ad94-4449-9214-a37d33a0b66c>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00264.warc.gz"} |
Get days, hours, and minutes between dates in Excel
To calculate and display the days, hours, and minutes between two dates, you can use the TEXT function with a little help from the INT function. The generic form of the formula is:
=INT(end-start)&" days "&TEXT(end-start,"h"" hrs ""m"" mins """)
In the example shown, the formula in D5 is:
=INT(C5-B5)&" days "&TEXT(C5-B5,"h"" hrs ""m"" mins """)
How this formula works
Most of the work in this formula is done by the TEXT function, which applies a custom number format for hours and minutes to a value created by subtracting the start date from the end date.
TEXT(C5-B5,"h"" hrs ""m"" mins """)
This is an example of embedding text into a custom number format, and this text must be surrounded by an extra pair of double quotes.
The value for days is calculated with the INT function, which simply strips off the integer portion of the end date minus the start date:
=INT(C5-B5)
Although you can use “d” in a custom number format for days, the value will “roll over” back to zero when days is greater than 31.
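For comparison only (this Python sketch is not part of the Excel article, and the dates are made-up examples), the same days/hours/minutes breakdown looks like this:

from datetime import datetime

start = datetime(2024, 1, 1, 8, 30)
end = datetime(2024, 1, 4, 11, 5)
delta = end - start
days = delta.days                                   # like INT(end-start)
hours, rem = divmod(delta.seconds, 3600)            # time-of-day part, like TEXT(end-start, ...)
minutes = rem // 60
print(f"{days} days {hours} hrs {minutes} mins")    # 3 days 2 hrs 35 mins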
Include seconds
To include seconds, you can extend the custom number format like this:
=INT(C5-B5)&" days "&TEXT(C5-B5,"h"" hrs ""m"" mins ""s"" secs""")
Total days, hours, and minutes between dates
To get the total days, hours, and minutes between a set of start and end dates, you can adapt the formula using SUMPRODUCT like this:
=INT(SUMPRODUCT(ends-starts))&" days "&TEXT(SUMPRODUCT(ends-starts),"h"" hrs ""m"" mins """)
where “ends” represents the range of end dates, and “starts” represents the range of start dates. In the example shown, D11 contains this formula:
=INT(SUMPRODUCT(C5:C9-B5:B9))&" days "&TEXT(SUMPRODUCT(C5:C9-B5:B9),"h"" hrs ""m"" mins """) | {"url":"https://www.xlsoffice.com/excel-functions/date-and-time-functions/get-days-hours-and-minutes-between-dates-in-excel/","timestamp":"2024-11-06T10:43:21Z","content_type":"text/html","content_length":"65117","record_id":"<urn:uuid:e92cb4e8-c3ce-43b9-8952-d94d4668f68b>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00383.warc.gz"} |
Problem 1
Solve an IVP using the unilateral Laplace transform. The IVP problem from the recitation on Friday, June 2, 2023, will be solved using a Laplace transform approach. Here is the problem:
y'(t) + 4 y(t) = 4 u(t),   y(0) = 1.3
Note that u is periodic for t ≥ 0; in other words, u(t + T) = u(t) for all t ≥ 0. The period is T = 1 second. Answer the following:
1. First, compute the unilateral Laplace transform of a single rectangular pulse, denoted u_0:
u_0(t) = 1 for t ∈ [0, 0.5],   u_0(t) = 0 for t > 0.5,
so it can be argued that, over one period,
u(t) = 1 for t ∈ [0, 0.5],   u(t) = 0 for t ∈ (0.5, 1),   t ≥ 0.
Argue that the ROC of û_0 is the entire complex plane.
2. Now compute the unilateral Laplace transform of a delayed rectangular pulse (delayed by k seconds, where k is a positive integer):
u_k(t) = 0 for t ∈ [0, k),   u_k(t) = 1 for t ∈ [k, k + 0.5],   u_k(t) = 0 for t > k + 0.5,   k = 1, 2, 3, ...
In fact, show û_k = û_0 e^{-ks}. In other words, the Laplace transform of a time-shifted function has a simple relation with the transform of the unshifted function.
3. The input to the system u can be expressed as a superposition of these shifted pulses, so for t ≥ 0
u(t) = Σ_{k=0}^{∞} u_k(t)   and   û(s) = Σ_{k=0}^{∞} û_k(s).      (1)
What is the ROC associated with û, though? Hint: it is no longer the entire complex plane.
4. If s is in the ROC of û, show that the geometric series formula can be applied to (1). What is the sum in this case?
5. Apply the unilateral Laplace transform to the ODE to write a complete expression for the unilateral Laplace transform of the IVP solution that is valid for t ∈ [0, k]. In other words, compute ŷ.
6. The inverse Laplace transform is
y(t) = lim_{R→∞} (1 / (j 2π)) ∫_{σ−jR}^{σ+jR} ŷ(s) e^{st} ds.
Since s = σ + jω, where σ is in the ROC of ŷ, then ds = j dω and the integral can be written as
y(t) = lim_{R→∞} (e^{σt} / 2π) ∫_{−R}^{R} ŷ(σ + jω) e^{jωt} dω.
This integral can be approximated by a Riemann sum by discretizing ω as ω = k ω_0, k = ..., −2, −1, 0, 1, 2, ... The frequency "step" is ω_0, i.e. dω ≈ ω_0. The Riemann sum shares many similarities with the Fourier series synthesis formula,
y(t) ≈ (ω_0 e^{σt} / 2π) Σ_k ŷ(σ + j k ω_0) e^{j k ω_0 t}
     = (ω_0 e^{σt} / 2π) [ ŷ(σ) + 2 Σ_{k≥1} Re( ŷ(σ + j k ω_0) e^{j k ω_0 t} ) ].
Use Matlab to numerically approximate y on the interval t = [0, 3] using ω_0 = 0.1 rad/s and σ = 1 (confirm this is in the ROC of ŷ), with a suitable upper limit on k. Use a time grid whose time step t_s is small enough that at least four time steps fall within one period of the highest frequency sinusoid in the sum.
Graph the numerical approximations of u and y for t = [0, 3] seconds. Note that Gibbs phenomenon is present in u.
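A Python sketch (rather than Matlab) of the Riemann-sum approximation in part 6 could look like the following. It assumes the transforms that parts 1-5 lead to, and the truncation K and the time grid are illustrative choices, not values given in the problem.

import numpy as np

# Assumed results of parts 1-5 (a sketch, not the official solution):
#   u0_hat(s) = (1 - e^{-0.5 s}) / s              single rectangular pulse
#   u_hat(s)  = u0_hat(s) / (1 - e^{-s})          geometric series, valid for Re(s) > 0
#   y_hat(s)  = (1.3 + 4 u_hat(s)) / (s + 4)      unilateral transform of the IVP solution

def u0_hat(s):
    return (1 - np.exp(-0.5 * s)) / s

def y_hat(s):
    u_hat = u0_hat(s) / (1 - np.exp(-s))
    return (1.3 + 4 * u_hat) / (s + 4)

sigma, w0, K = 1.0, 0.1, 2000          # sigma = 1 lies to the right of both s = -4 and s = 0
t = np.linspace(0, 3, 4001)            # well over 4 samples per period of the fastest term

y = np.full_like(t, y_hat(sigma + 0j).real)                        # k = 0 term
for k in range(1, K + 1):
    y += 2 * np.real(y_hat(sigma + 1j * k * w0) * np.exp(1j * k * w0 * t))
y *= w0 * np.exp(sigma * t) / (2 * np.pi)                          # Riemann-sum prefactor
# y now approximates the IVP solution on [0, 3]; plotting u the same way shows Gibbs ripples.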
0.1 rad/s, o = 1 | {"url":"https://tutorbin.com/questions-and-answers/problem-1-solve-an-ivp-using-the-unilateral-laplace-transform-the-ivp-problem-from-the-recitation-on-friday-june-2-2023","timestamp":"2024-11-04T10:55:13Z","content_type":"text/html","content_length":"72723","record_id":"<urn:uuid:3e7a2634-5849-40bb-a5dd-524c31fe0305>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00227.warc.gz"} |
Going Slideways
Tires aren’t just for cars and trucks. In “inner tubing,” you sit or lie down on a giant donut-shaped balloon and slide down the snowy, slippery hill. It’s called an “inner” tube because it’s from
inside a truck tire. People first started snow tubing in 1820 in Switzerland, a long time before cars or trucks — so where did they get their tubes? More importantly, what happens when you spin as
you slide?
Wee ones: If your inner tube spins once to the left, then once to the right, then once to the left, then once to the right…which way do you spin next?
Little kids: If you, 3 friends, and 2 snow-loving dogs all pile onto an inner tube, how many riders are there? Bonus: You can also ride tubes on waterslides. If you go 10 miles an hour on snow but
twice as fast in water, how fast do you tube on the water?
Big kids: If you start sliding facing downhill, with the hill’s right side on your right, and as you slide you spin 1/2 turn to your left, then 1/4 of a turn to your right, then 3/4 turn to the left,
which way are you facing now? Bonus: If 1/2 the tubes are double tubes (seating 2 people) and the other 1/2 are single tubes, how many tubes are there if they hold 18 people total?
The sky’s the limit: If your tube spins once around every 2 seconds, your friend spins once every 3 seconds, another friend spins once every 4 seconds, and the last friend spins once every 5 seconds,
what’s the soonest you’ll all face forward at the same time if you all started facing forward?
Wee ones: To the left.
Little kids: 6 riders. Bonus: 20 miles an hour.
Big kids: Downhill! The 1/2 turn faced you backwards, the 1/4 turn left you facing left, then the 3/4 turn spun you around to the front. Bonus: 12 tubes in total: 6 singles, and 6 doubles which will
seat 12 more people. If 1/2 are single and 1/2 are double, then each double and single forms a pair, and each pair holds 3 people. Then just divide that into the total of 18.
The sky’s the limit: In 60 seconds, the smallest multiple of 2, 3, 4 and 5. You don’t need to multiply 2 x 3 x 4 x 5 to get the smallest number, because if it’s divisible by 4, it’s already divisible
by 2. | {"url":"https://bedtimemath.org/fun-math-snow-tubing/","timestamp":"2024-11-14T11:37:37Z","content_type":"text/html","content_length":"86829","record_id":"<urn:uuid:3413f847-2b8d-41ca-9233-eb43ec68447e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00680.warc.gz"} |
Wittgenstein's Critique of Gödel and Russell: A Philosophical Inquiry
Written on
Chapter 1: Unpublished Insights from Wittgenstein
In 1956, posthumous writings of Wittgenstein, which he never published during his lifetime, were disclosed to the public. These texts were compiled into the book Remarks on the Foundations of
Mathematics. Within these pages, Wittgenstein expressed dissatisfaction with how philosophers, logicians, and mathematicians interpreted paradoxes, providing several controversial arguments against
accepting Gödel’s incompleteness theorems.
Wittgenstein's Perspective on Logical Paradoxes
Here’s what Wittgenstein had to say regarding troublesome paradoxes in logic and arithmetic:
"If a contradiction were to emerge in arithmetic, it would merely demonstrate that an arithmetic containing such a contradiction could still function effectively. It is more advantageous for us to
adjust our understanding of the certainty required, rather than claim it could never be a legitimate arithmetic. 'But surely this isn't ideal certainty!' — Ideal for what purpose? The rules of
logical inference are merely rules of the language-game." (RMF, Wittgenstein).
Consider this: when determining the square root of 4, we encounter two potential answers: 2 and -2. Does this ambiguity detract from the integrity of arithmetic? Certainly not. Both natural numbers
and integers are employed in economics without issues.
A similar situation arises in Russell’s Paradox. Here, we grapple with whether the “set of all sets that are not members of themselves” is or isn't a member of itself, leading us to a formal
limitation akin to the square root of 4, which results in ambiguity. Is it irrational or impossible for mathematics to deal with such ambiguity? Absolutely not! As Wittgenstein articulated in the Tractatus:
"It is as impossible to express in language anything that 'contradicts logic' as it is to depict a figure in geometry that contradicts the laws of space, or to specify the coordinates of a
non-existent point." (Wittgenstein, Tractatus 3.032).
Thus, the ambiguity inherent in Russell’s paradoxical set appears to reveal a different category of set—one that cannot define its own members, much like our inability to resolve the square root of 4 to a single value.
The Wittgenstein-Gödel Dispute
The ongoing debate between advocates of Wittgenstein and Gödel merits closer examination. For those seeking to delve deeper into this conflict, I recommend the following articles: [1], [2], [3],
which are cited at the conclusion of this text.
First, it is essential to recognize that Wittgenstein largely disregarded Gödel’s theorems. He made considerable efforts to overlook them. However, a student once posed a question to him: “Could
there not be true propositions expressed in this (Gödelian) symbolism that are not provable within Russell’s system?” Wittgenstein’s response was, “Why should propositions—such as those in physics—be
articulated in Russell’s symbolism?” This indicates a rather limited and unusual reaction to Gödel’s first incompleteness theorem.
To grasp the complexities involved, we must first understand how propositional variables function within Gödel’s symbolic language—a system adapted from Russell and Whitehead’s Principia Mathematica
(1910), which illustrates the implications for every closed formal system, including those utilized in physics.
According to Nagel and Newman in Gödel’s Proof (1958), Gödel’s first theorem states that if ‘S’ signifies a formula, its formal negation, non’S’, also qualifies as a formula. Therefore, we arrive at
the following scenario: if “p is a non-demonstrable formula,” it follows that “p is a demonstrable formula.” When operating within a system containing both formulas, we ultimately cannot ascertain
whether p is a non-demonstrable or demonstrable proposition.
But is this truly the case? Let’s consider an illustration involving a liar who asserts:
"Everything I say is a lie."
Before we declare this statement true or false, it’s vital to understand what this self-proclaimed liar is actually lying about. We need examples of statements that can be tested against reality for
their truth, falsity, or ambiguity. Without such examples, we are left with mere indeterminacy.
Wittgenstein's point in response to Gödel’s work is clear: we require propositions that can be verified in “reality” to assess their truth, falsity, or ambiguity. Since propositional variables often
assert the existence of valid negations of fundamental propositions without supporting evidence, it can be argued that Gödel assumed the conditions of undecidability and incompleteness as inherent in
the foundational rules of the language of his theorem. Nevertheless, Gödel's assertions were not erroneous.
Indeed, Wittgenstein’s pragmatic mindset led him to question whether Gödel’s incompleteness could apply to the realm of physics. The answer to this inquiry is both “yes and no”: incompleteness can,
and cannot, be applicable to physics. This is because the rules of physics could evolve if the universe’s rules were to change (suggesting that physics might never achieve complete status). However,
all successful theories in physics today are grounded in facts. Consequently, these theories will invariably remain the successful ones, as they accurately relate to facts (indicating that they will
always correspond to fact-based truths).
Check here the distinction between Platonic completeness and pragmatic completeness.
[1] A Note on Wittgenstein’s “Notorious Paragraph” about the Gödel Theorem (2000). By Juliet Floyd and Hilary Putnam in The Journal of Philosophy, Vol. 97, No 11 (2000), pp. 624–632.
[2] Misunderstanding Gödel: New Arguments about Wittgenstein and New Remarks by Wittgenstein (2003). By Victor Rodych in Dialectica, Vol. 57, No 3 (2003), pp. 279–313.
[3] Wittgenstein and Gödel: An Attempt to Make "Wittgenstein’s Objection" Reasonable (2018). By Timm Lampert in Philosophia Mathematica, Vol. 26, No 3 (2018), pp. 324–345.
Chapter 2: Engaging with Gödel’s Theorems
Russell vs. Wittgenstein: Judgment or Representation?
In this video, we explore the contrasting philosophies of Russell and Wittgenstein, particularly focusing on their differing views regarding judgment and representation in logic.
Wittgenstein versus Gödel part 1
This video provides an in-depth analysis of the conflict between Wittgenstein and Gödel, examining their perspectives on logic, mathematics, and the implications of Gödel’s theorems. | {"url":"https://austinsymbolofquality.com/wittgenstein-godel-russell-critique.html","timestamp":"2024-11-10T14:42:45Z","content_type":"text/html","content_length":"14415","record_id":"<urn:uuid:ce110735-4a3d-4941-bc90-c5eea874558f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00034.warc.gz"} |
Formation of a System of Linear Equations - Applications of Matrices: Solving System of Linear Equations
Formation of a System of Linear Equations
The meaning of a system of linear equations can be understood by formulating a mathematical model of a simple practical problem.
Three persons A, B and C go to a supermarket to purchase same brands of rice and sugar. Person A buys 5 Kilograms of rice and 3 Kilograms of sugar and pays ₹ 440. Person B purchases 6 Kilograms of
rice and 2 Kilograms of sugar and pays ₹ 400. Person C purchases 8 Kilograms of rice and 5 Kilograms of sugar and pays ₹ 720. Let us formulate a mathematical model to compute the price per
Kilogram of rice and the price per Kilogram of sugar. Let x be the price in rupees per Kilogram of rice and y be the price in rupees per Kilogram of sugar. Person A buys 5 Kilograms of rice and 3
Kilograms sugar and pays ₹ 440 . So, 5x + 3y = 440 . Similarly, by considering Person B and Person C, we get 6x + 2 y = 400 and 8x + 5 y = 720 . Hence the mathematical model is to obtain x and y
such that
5x + 3y = 440, 6x + 2 y = 400, 8x + 5 y = 720 .
In the above example, the values of x and y which satisfy one equation should also satisfy all the other equations. In other words, the equations are to be satisfied by the same values of x and y
simultaneously. If such values of x and y exist, then they are said to form a solution for the system of linear equations. In the three equations, x and y appear in first degree only. Hence they are
said to form a system of linear equations in two unknowns x and y . They are also called simultaneous linear equations in two unknowns x and y . The system has three linear equations in two unknowns
x and y .
The equations represent three straight lines in two-dimensional analytical geometry.
In this section, we develop methods using matrices to find solutions of systems of linear equations.
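Before doing so, here is a quick numerical illustration (a Python sketch, not part of the text): the first two equations are solved with matrices and the third is used as a consistency check.

import numpy as np

A = np.array([[5.0, 3.0],
              [6.0, 2.0]])
b = np.array([440.0, 400.0])
x, y = np.linalg.solve(A, b)            # x = price per kg of rice, y = price per kg of sugar
print(x, y)                             # 40.0 80.0
print(np.isclose(8 * x + 5 * y, 720))   # True, so the third equation is satisfied as well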
12th Mathematics : UNIT 1 : Applications of Matrices and Determinants : Formation of a System of Linear Equations | Applications of Matrices: Solving System of Linear Equations | {"url":"https://www.brainkart.com/article/Formation-of-a-System-of-Linear-Equations_39070/","timestamp":"2024-11-04T22:11:13Z","content_type":"text/html","content_length":"43193","record_id":"<urn:uuid:c65e5ea6-a322-4570-99b7-40c21cac2b38>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00212.warc.gz"} |
CopulaDistribution[ker,{dist[1],dist[2],…}] represents a copula distribution with kernel distribution ker and marginal distributions dist[1], dist[2], ….
Background & Context
• CopulaDistribution[ker,{dist[1],dist[2],…,dist[n]}] represents a multivariate statistical distribution whose j-th marginal distribution (MarginalDistribution) is precisely dist[j], and for which the CDF of a dist[j]-distributed random variate follows a uniform distribution (UniformDistribution). For a general copula distribution CopulaDistribution[ker,{dist[1],dist[2],…,dist[n]}], the distribution of Y[j]=TransformedDistribution[F[j][x],x∼dist[j]] is equivalent to UniformDistribution[] whenever F[j][x] is the CDF of dist[j]. While all copula distributions share the above properties, the characteristics and behavior of a specific copula distribution depend both on its kernel ker and on its marginals dist[1],dist[2],…,dist[n].
• In practice, a copula is a tool that describes dependence between variables, and in this context, varying ker allows investigation of different degrees of dependence (for example, {"FGM",α} best models weak variable dependence, whereas "Product" allows analysis of independent variables). There are 11 predefined kernels ker that may be used to parametrize a copula distribution. These 11 can be split into roughly four groups, consisting of the independence-dependence kernels ("Product", "Maximal", and "Minimal"); the Archimedean kernels ({"Frank",α}, {"Clayton",c}, {"GumbelHougaard",α}, and {"AMH",α}, each valid for an appropriate range of its parameter); the distribution-derived kernels ({"Binormal",ρ} for ρ as in BinormalDistribution, {"Multinormal",Σ} for Σ as in MultinormalDistribution, and {"MultivariateT",Σ,ν} for Σ, ν as in MultivariateTDistribution); and the non-associative kernels ({"FGM",α}), members of which share similar qualitative or theoretical properties.
• Sklar's theorem proves the existence of a copula C that "couples" any joint distribution with its univariate marginals via the relation F(x[1],…,x[n])=C(F[1](x[1]),…,F[n](x[n])), and thus demonstrates that copula distributions are ubiquitous in multivariate statistics. Copula distributions date as far back as the 1940s, though much of the terminology and machinery used today were developed in the 1950s and 1960s. Since their inception, copulas have been used to model phenomena in areas including reliability theory, meteorology, and queueing theory, while specially purposed copulas and kernels have been developed to serve as tools in fields such as survival analysis (via survival copulas) and mathematical finance (via panic copulas). Copula distributions are also of independent theoretical interest in Monte Carlo theory and applied mathematics. (A short sampling sketch illustrating the coupling construction follows this list.)
• Many relationships exist between CopulaDistribution[ker,{dist[1],…,dist[n]}] and various other distributions depending on the parameters ker and dist[j]. CopulaDistribution["Product",{dist[1],…,
dist[n]}] is equivalent to ProductDistribution[dist[1],…,dist[n]] for all distributions dist[j], and so the product copula of two instances of NormalDistribution is BinormalDistribution. In
addition, the product copula is equivalent to the binormal copula with zero correlation in the sense that the PDF of CopulaDistribution["Product",{dist[1],…,dist[n]}] is precisely the same as
that of CopulaDistribution[{"Binormal",0},{dist[1],…,dist[n]}] for all distributions dist[j]. Among distribution-derived kernels, a binormal copula with NormalDistribution marginals and a
multivariate -copula with StudentTDistribution marginals are equivalent to BinormalDistribution and MultivariateTDistribution, respectively, while a practically limitless number of qualitatively
similar relationships exist between Archimedean copulas and miscellaneous distributions.
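As a rough illustration of the copula idea outside the Wolfram Language (a Python sketch with arbitrarily chosen correlation and marginals, not an equivalent of the built-in function):

import numpy as np
from scipy import stats

rho = 0.6                                           # assumed correlation of the Gaussian kernel
cov = [[1.0, rho], [rho, 1.0]]
z = np.random.default_rng(0).multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)                               # uniform marginals: this is the copula itself
x = stats.norm.ppf(u[:, 0], loc=2, scale=3)         # first marginal: Normal(2, 3)
y = stats.expon.ppf(u[:, 1], scale=1.5)             # second marginal: Exponential with mean 1.5
# (x, y) now follow the chosen marginals, with dependence supplied only by the kernel.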
Basic Examples (3)
Scope (32)
Basic Uses (6)
Define a product copula using two normal distributions:
Cumulative distribution function:
Define a Frank copula using two uniform distributions:
Define an FGM copula with beta distributions:
Moment and moment-generating function:
Define a maximal copula with discrete components:
Compute probabilities and expectations:
Define a minimal copula with Poisson distributions:
Statistical properties are calculated componentwise:
Copula Kernels (11)
Cumulative distribution function:
Cumulative distribution function:
Cumulative distribution function:
Cumulative distribution function:
Cumulative distribution function:
Cumulative distribution function:
A Farlie–Gordon–Morgenstern copula:
Cumulative distribution function:
Cumulative distribution function:
A multivariate Student copula:
Parametric Distributions (4)
Define a minimal copula with beta distributions as marginals:
Cumulative distribution function:
Define a maximal copula with different continuous marginals:
Cumulative distribution function:
Define a copula with Poisson marginal distributions:
Define a copula with negative binomial distribution marginals:
Nonparametric Distributions (3)
Derived Distributions (8)
Applications (6)
A system is composed of four components, each with lifespan exponentially distributed with parameter per hour. Dependencies in the time to failure are modeled by a Farlie–Gumbel–Morgenstern copula with α = 1/3. Find the probability that no component fails before 500 hours:
Find the probability that one component will fail after 1000 hours:
Assume the values of two assets follow a geometric Brownian motion with drifts and and volatilities and , respectively. Assuming both initial values to be 1, find the bounds for the joint cumulative
distribution function of both assets at time :
Assuming the values below compare the plots of the CDFs:
Two firms have debts and and initial assets both equal to 1. Assume the values of the assets follow a geometric Brownian motion with drifts and and volatilities and , respectively. Find the joint
probability of the default at time assuming a Frank copula:
Default probability depending on α:
A Cauchy copula is a multivariate Student copula with one degree of freedom:
Visualize the density using a scatter plot:
Define a Gumbel–Hougaard copula for different values of the parameter:
Show how the value of the parameter influences the dependence between values:
Gumbel's bivariate logistic distribution is an AMH copula with logistic marginal distributions:
Visualize its probability density function:
Cumulative distribution function has the structure of CDF of the univariate logistic distribution:
Properties & Relations (5)
The product copula distribution of two normal distributions is a binormal distribution:
Product copula is equivalent to binormal copula with zero correlation:
Binormal copula with normal marginals is a BinormalDistribution:
Multivariate copula with Student marginals is a MultivariateTDistribution:
MarginalDistribution of a copula returns the component distributions:
Wolfram Research (2010), CopulaDistribution, Wolfram Language function, https://reference.wolfram.com/language/ref/CopulaDistribution.html (updated 2016).
Wolfram Language. 2010. "CopulaDistribution." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2016. https://reference.wolfram.com/language/ref/CopulaDistribution.html.
Wolfram Language. (2010). CopulaDistribution. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/CopulaDistribution.html | {"url":"https://reference.wolfram.com/language/ref/CopulaDistribution.html","timestamp":"2024-11-03T23:01:41Z","content_type":"text/html","content_length":"244388","record_id":"<urn:uuid:39312330-19a9-4de0-b101-8d06f20e3649>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00523.warc.gz"} |
Lesson 3: deciding MNIST prediction accuracy
I think traditionally it comes from ‘logistic regression’ and the use of the ‘sigmoid function’ (if you want to dig a bit deeper you can have a look at those terms).
Most of the time you would not predict ‘high’ and ‘low’ values but (something that we interpret as) probabilities, so values between 0 and 1. The question the model then tries to answer is 'Is the
given image a 3?" and the prediction 0.8 would mean that the model is 80% sure that the image is a 3.
A way to get from arbitrary values to values in [0,1] is the sigmoid function, sigmoid(x) = 1 / (1 + exp(-x)).
The larger the value you put into the function, the closer the result is to 1 and the ‘higher the probability’ that the image is a 3. Same for lower / close to 0 / not a 3, so a 7.
Now the threshold you would pick here most likely is 0.5. If the prediction is 0.49 that means 49% it’s a 3, 51% it’s a 7, so we obviously go with the 7. The twist is: which value do we need to put
into the sigmoid function to land at 0.5? - That's 0! So we can spare ourselves all that sigmoid business and just say: “values above 0 are label 3, lower than 0 are label 7”
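A tiny check of that equivalence (toy activations, not the lesson's actual code):

import torch

acts = torch.tensor([-2.3, -0.1, 0.0, 0.7, 4.2])    # raw model outputs
probs = torch.sigmoid(acts)                         # squashed into (0, 1)
print(probs)
print((probs > 0.5) == (acts > 0.0))                # all True: the decisions are identical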
Hope that makes sense | {"url":"https://forums.fast.ai/t/lesson-3-deciding-mnist-prediction-accuracy/100483/7","timestamp":"2024-11-07T15:45:47Z","content_type":"text/html","content_length":"16417","record_id":"<urn:uuid:943338ee-d0e5-4320-95d7-e80605dac40d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00811.warc.gz"} |
Fast Fourier Transform API
Brief Forward Function Inverse Function
BFP FFT on single real signal bfp_fft_forward_mono() bfp_fft_inverse_mono()
BFP FFT on single complex signal bfp_fft_forward_complex() bfp_fft_inverse_complex()
BFP FFT on pair of real signals bfp_fft_forward_stereo() bfp_fft_inverse_stereo()
BFP spectrum packing bfp_fft_unpack_mono() bfp_fft_pack_mono()
Low-level decimation-in-time FFT fft_dit_forward() fft_dit_inverse()
Low-level decimation-in-frequency FFT fft_dif_forward() fft_dif_inverse()
FFT on real signal of float fft_f32_forward() fft_f32_inverse() | {"url":"https://www.xmos.com/documentation/XM-014785-PC/html/modules/core/modules/xcore_math/lib_xcore_math/doc/programming_guide/src/reference/fft/fft_index.html","timestamp":"2024-11-15T01:38:57Z","content_type":"text/html","content_length":"150066","record_id":"<urn:uuid:d14983d4-f624-4590-aea3-8a72bad624f7>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00761.warc.gz"} |
Math, Grade 7, Constructions and Angles, Classifying Triangles
Classify Triangles
Work Time
Classify Triangles
Use the Triangles Sketch interactive to explore equilateral, isosceles, scalene, acute, obtuse, and right triangles.
• For each triangle that is classified by sides, state everything you notice about the angles.
• For each triangle that is classified by angles, state everything you notice about the sides.
• Use the Exterior Angles button. What do you notice about the relationship of the exterior angles to the interior angles for each type of triangle?
INTERACTIVE: Triangles Sketch
• How are the angles in each figure related? What do the angles tell you about the figure?
• For each triangle classified by sides, what do you notice about the angles?
• Do any of the triangles that are classified by angles also have one of the characteristics of triangles that are classified by sides?
• What is the sum of the interior angles for each triangle?
• How does the sum of two of the angles compare to the measure of the third angle?
• If one of the angles is a right angle, what is the sum of the other two angles? | {"url":"https://goopennc.oercommons.org/courseware/lesson/5126/student/?section=3","timestamp":"2024-11-08T07:08:04Z","content_type":"text/html","content_length":"30858","record_id":"<urn:uuid:199e6927-c679-4247-94ff-4823d4c216a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00641.warc.gz"} |
The geometry of irrationals
Many years ago I remember reading about irrational numbers and the various surprising things they hide. One nice thing about irrationals is that even though we think of them as being badly behaved and ugly, we can still classify them further. For example, we have irrationals that are also algebraic, that is, they are zeros of a polynomial with integer coefficients.
We can think that rationals are algebraic of degree 1, while algebraic numbers of degree 2 include $\sqrt{2}, \sqrt{3}$, the golden ratio, etc.
Then we have numbers that are not algebraic, that is, they are not zeros of polynomials with integer coefficients. These are called transcendental. Some of the famous ones in this category are $\pi, e$, Liouville's constant, etc.
Another way of studying irrational numbers is by measuring how irrational they are. This really means to study how well a number can be approximated with rational numbers, that is, loosely speaking, how fast the denominators of the fractions approximating the number have to grow to stay close to it.
We can think that this has to do with the approximation of real numbers by rationals. In a more philosophical way, this irrationality measure can tell how well we can approximate numbers in the math world with numbers in the real world.
Liouville was big into approximating irrationals with rationals, trying to do it in the most efficient way, that is, controlling the size of the denominators. More recently, Roth made contributions to a deeper understanding of this, leading him to win the Fields Medal in 1958.
Looking at this Facebook post about the integer lattice in $\mathbb{R}^3$, I remember an attempt at defining another way of measuring the irrationality of a number.
First, consider the integer lattice in $\mathbb{R}^2$. Given a real number $m$, graph the line $y=mx$. Thus the line will hit the lattice if and only if $m$ is rational, so the idea is to analyze what happens when $m$ is irrational.
For this, consider the following: consider a radius $r>0$ and draw the biggest circular sector of radius $r$ that is symmetric around the line $y=mx$ and that does not contain any point of the lattice.
That is, the sector must not contain any of the blue points from the lattice. Now, the idea is to measure this area and treat it as a function of the radius $A_m(r)$. I created a GeoGebra demonstration to show the idea of the sector.
Here you can see how the area of the sector changes when the radius changes, and also how the procedure varies when considering different values for $m$.
Effectively what happens is that we are looking for the best approximation of the number $m$ using a fraction $p/q$ with $p^2+q^2\leq r^2$. This way we control the size of both numbers for the
approximation of $m$.
Thus we have that $A_m(r)$ is governed by $f_m(r)$, where $f_m(r)$ is the best approximation to $m$ with a rational inside the circle of radius $r$, that is
$$f_m(r)=\min \big|m-\frac{p}{q}\big|\,,\quad \text{ where }p^2+q^2\leq r^2\,.$$
Now, the trick is to analyze $A_m(r)$ as $r\to\infty$. One might think that this is trivially zero, since any irrational can be written as the limit of a sequence of rationals, but we also have the
radius growing up to infinity, hence becoming an indeterminate form.
That is, it might be possible to propose another irrationality measure as
$$\nu(m)=\lim_{r\to\infty} \sup A_m(r)\,.$$
The $\sup$ just ensures the existence of $\nu(m)$, we don't even know if there is a limit!
If $m$ is rational, this is exactly zero, but the question would be what happens for $m$ irrational. I did some numerics to have an idea of this measure for a couple of cases. Of course this is no
evidence, as a limit like this cannot be trivially calculated using numerical methods, but at least gives a bit of insight.
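Here is a rough Python version of that experiment (my own reconstruction, so treat the exact normalisation of the sector area as a guess): $f_m(r)$ is found by brute force over lattice points inside the circle, and the area is taken as $r^2$ times the angle between $y=mx$ and the best lattice direction.

import numpy as np

def best_fraction(m, r):
    # best rational p/q with p^2 + q^2 <= r^2 (brute force over denominators q)
    best = None
    for q in range(1, int(r) + 1):
        p = round(m * q)
        if p * p + q * q <= r * r and (best is None or abs(m - p / q) < abs(m - best)):
            best = p / q
    return best

def sector_area(m, r):
    best = best_fraction(m, r)
    theta = abs(np.arctan(m) - np.arctan(best))   # half-angle of the empty sector
    return r * r * theta                          # area of a sector with full angle 2*theta

for r in (50, 100, 200, 400):
    print(r, sector_area(np.sqrt(2), r), sector_area(np.pi, r))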
For example, for $m=\sqrt{2}$ we have that the graph of $A_{\sqrt{2}}(r)$ seems to oscillate between 0.2 and 1.4. We can see here that the peaks happen when a new rational approximation is found.
Naively speaking, $\sqrt{2}$ is not hard to approximate with rationals, and hence we don't have to increase too much in $r$ to find a new approximation.
On the other hand, for $m=\pi$ we have a different behavior. We have a series of good approximations, but then suddenly we cannot find any more good approximations. Even though it looks like $A_\pi
(r)$ went to 0 around $r=400$, if we zoom in we can see that it is slowly rising finding a new approximation.
I believe this is a very interesting topic and I think there is something nice waiting to be discovered here. Even if it leads to the same notion as the Liouville-Roth exponent, this would provide a
more geometric interpretation of the behavior of irrational numbers.
2 comments:
1. This is interesting, could you clarify what are p and q in the lattice? and what is the horizontal axis in the last graphs. thanks
1. The p and q are the coordinates of the points in the integer lattice, that is (q,p) is in the plane and p,q are integers.
In the last graphs, we have the graph of A(r), for different parameters m. The horizontal axis is the radius r of the circular sector. | {"url":"http://towardsthelimitedge.pedromoralesalmazan.com/2015/09/the-geometry-of-irrationals.html","timestamp":"2024-11-11T21:04:47Z","content_type":"application/xhtml+xml","content_length":"84494","record_id":"<urn:uuid:71918d3f-e327-424e-9802-2fb710ea7998>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00658.warc.gz"} |
209 research outputs found
In this paper, we study the Nash dynamics of strategic interplays of n buyers in a matching market setup by a seller, the market maker. Taking the standard market equilibrium approach, upon receiving
submitted bid vectors from the buyers, the market maker will decide on a price vector to clear the market in such a way that each buyer is allocated an item for which he desires the most (a.k.a., a
market equilibrium solution). While such equilibrium outcomes are not unique, the market maker chooses one (maxeq) that optimizes its own objective --- revenue maximization. The buyers in turn change
bids to their best interests in order to obtain higher utilities in the next round's market equilibrium solution. This is an (n+1)-person game where buyers place strategic bids to gain the most from
the market maker's equilibrium mechanism. The incentives of buyers in deciding their bids and the market maker's choice of using the maxeq mechanism create a wave of Nash dynamics involved in the
market. We characterize Nash equilibria in the dynamics in terms of the relationship between maxeq and mineq (i.e., minimum revenue equilibrium), and develop convergence results for Nash dynamics
from the maxeq policy to a mineq solution, resulting an outcome equivalent to the truthful VCG mechanism. Our results imply revenue equivalence between maxeq and mineq, and address the question that
why short-term revenue maximization is a poor long run strategy, in a deterministic and dynamic setting
$f(R)$ gravity, capable of driving the late-time acceleration of the universe, is emerging as a promising alternative to dark energy. Various $f(R)$ gravity models have been intensively tested
against probes of the expansion history, including type Ia supernovae (SNIa), the cosmic microwave background (CMB) and baryon acoustic oscillations (BAO). In this paper we propose to use the
statistical lens sample from Sloan Digital Sky Survey Quasar Lens Search Data Release 3 (SQLS DR3) to constrain $f(R)$ gravity models. This sample can probe the expansion history up to $z\sim2.2$,
higher than what probed by current SNIa and BAO data. We adopt a typical parameterization of the form $f(R)=R-\alpha H^2_0(-\frac{R}{H^2_0})^\beta$ with $\alpha$ and $\beta$ constants. For $\beta=0$
($\Lambda$CDM), we obtain the best-fit value of the parameter $\alpha=-4.193$, for which the 95% confidence interval that is [-4.633, -3.754]. This best-fit value of $\alpha$ corresponds to the
matter density parameter $\Omega_{m0}=0.301$, consistent with constraints from other probes. Allowing $\beta$ to be free, the best-fit parameters are $(\alpha, \beta)=(-3.777, 0.06195)$.
Consequently, we give $\Omega_{m0}=0.285$ and the deceleration parameter $q_0=-0.544$. At the 95% confidence level, $\alpha$ and $\beta$ are constrained to [-4.67, -2.89] and [-0.078, 0.202]
respectively. Clearly, given the currently limited sample size, we can only constrain $\beta$ within the accuracy of $\Delta\beta\sim 0.1$ and thus can not distinguish between $\Lambda$CDM and $f(R)$
gravity with high significance, and actually, the former lies in the 68% confidence contour. We expect that the extension of the SQLS DR3 lens sample to the SDSS DR5 and SDSS-II will make constraints
on the model more stringent.Comment: 10 pages, 7 figures. Accepted for publication in MNRA
We consider markets consisting of a set of indivisible items, and buyers that have sharp multi-unit demand. This means that each buyer $i$ wants a specific number $d_i$ of items; a bundle of
size less than $d_i$ has no value, while a bundle of size greater than $d_i$ is worth no more than the most valued $d_i$ items (valuations being additive). We consider the objective of setting prices
and allocations in order to maximize the total revenue of the market maker. The pricing problem with sharp multi-unit demand buyers has a number of properties that the unit-demand model does not
possess, and is an important question in algorithmic pricing. We consider the problem of computing a revenue maximizing solution for two solution concepts: competitive equilibrium and envy-free
pricing. For unrestricted valuations, these problems are NP-complete; we focus on a realistic special case of "correlated values" where each buyer $i$ has a valuation $v_i q_j$ for item $j$, where $v_i$ and $q_j$ are positive quantities associated with buyer $i$ and item $j$ respectively. We present a polynomial time algorithm to solve the revenue-maximizing competitive equilibrium problem.
For envy-free pricing, if the demand of each buyer is bounded by a constant, a revenue maximizing solution can be found efficiently; the general demand case is shown to be NP-hard.Comment: page2
We consider the optimal pricing problem for a model of the rich media advertisement market, as well as other related applications. In this market, there are multiple buyers (advertisers), and items
(slots) that are arranged in a line such as a banner on a website. Each buyer desires a particular number of {\em consecutive} slots and has a per-unit-quality value $v_i$ (dependent on the ad only)
while each slot $j$ has a quality $q_j$ (dependent on the position only such as click-through rate in position auctions). Hence, the valuation of the buyer $i$ for item $j$ is $v_iq_j$. We want to
decide the allocations and the prices in order to maximize the total revenue of the market maker. A key difference from the traditional position auction is the advertiser's requirement of a fixed
number of consecutive slots. Consecutive slots may be needed for a large size rich media ad. We study three major pricing mechanisms, the Bayesian pricing model, the maximum revenue market
equilibrium model and an envy-free solution model. Under the Bayesian model, we design a polynomial time computable truthful mechanism which is optimum in revenue. For the market equilibrium
paradigm, we find a polynomial time algorithm to obtain the maximum revenue market equilibrium solution. In envy-free settings, an optimal solution is presented when the buyers have the same demand
for the number of consecutive slots. We conduct a simulation that compares the revenues from the above schemes and gives convincing results.Comment: 27page
Recently, remarkable progress has been made by approximating Nash equilibrium (NE), correlated equilibrium (CE), and coarse correlated equilibrium (CCE) through function approximation that trains a
neural network to predict equilibria from game representations. Furthermore, equivariant architectures are widely adopted in designing such equilibrium approximators in normal-form games. In this
paper, we theoretically characterize benefits and limitations of equivariant equilibrium approximators. For the benefits, we show that they enjoy better generalizability than general ones and can
achieve better approximations when the payoff distribution is permutation-invariant. For the limitations, we discuss their drawbacks in terms of equilibrium selection and social welfare. Together,
our results help to understand the role of equivariance in equilibrium approximators.Comment: To appear in ICML 202
The approximation ratio has become one of the dominant measures in mechanism design problems. In light of analysis of algorithms, we define the smoothed approximation ratio to compare the performance
of the optimal mechanism and a truthful mechanism when the inputs are subject to random perturbations of the worst-case inputs, and define the average-case approximation ratio to compare the
performance of these two mechanisms when the inputs follow a distribution. For the one-sided matching problem, Filos-Ratsikas et al. [2014] show that, amongst all truthful mechanisms, random priority
achieves the tight approximation ratio bound of Theta(sqrt{n}). We prove that, despite of this worst-case bound, random priority has a constant smoothed approximation ratio. This is, to our limited
knowledge, the first work that asymptotically differentiates the smoothed approximation ratio from the worst-case approximation ratio for mechanism design problems. For the average-case, we show that
our approximation ratio can be improved to 1+e. These results partially explain why random priority has been successfully used in practice, although in the worst case the optimal social welfare is
Theta(sqrt{n}) times of what random priority achieves. These results also pave the way for further studies of smoothed and average-case analysis for approximate mechanism design problems, beyond the
worst-case analysis | {"url":"https://core.ac.uk/search/?q=author%3A(Deng%2C%20Xiaotie)","timestamp":"2024-11-04T19:09:39Z","content_type":"text/html","content_length":"144580","record_id":"<urn:uuid:cf4b84a2-5777-4dda-a3cc-5d772f48e3a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00315.warc.gz"} |
MathSciDoc: An Archive for Mathematician
Spectral Theory and Operator Algebra
Hanfeng Li Chongqing University, SUNY at Buffalo Andreas Thom TU Dresden
Dynamical Systems Spectral Theory and Operator Algebra mathscidoc:1704.02001
Distinguished Paper Award in 2018
Zhengwei Liu Yau Mathematical Sciences Center and Department of Mathematics, Tsinghua University, Beijing, 100084, China; Beijing Institute of Mathematical Sciences and Applications, Huairou
District, Beijing, 101408, China Sebastien Palcoux Beijing Institute of Mathematical Sciences and Applications, Huairou District, Beijing, 101408, China Jinsong Wu Institute for Advanced Study in
Mathematics, Harbin Institute of Technology, Harbin, 150001, China
Category Theory Functional Analysis Quantum Algebra Rings and Algebras Spectral Theory and Operator Algebra mathscidoc:2207.04003
Jing-Cheng Liu Hunan Normal University Jun Luo Chongqing University
Spectral Theory and Operator Algebra mathscidoc:1702.32001
Zhengwei Liu Harvard University, Cambridge, MA 02138
Mathematical Physics Quantum Algebra Spectral Theory and Operator Algebra mathscidoc:2207.22007
Kaifeng Bu Harvard University, Cambridge, MA, 02138, USA; Zhejiang University, Hangzhou, China Arthur Jaffe Harvard University, Cambridge, MA, 02138, USA Zhengwei Liu Harvard University, Cambridge,
MA, 02138, USA; Tsinghua University, Beijing, China Jinsong Wu Harvard University, Cambridge, MA, 02138, USA; Harbin Institute of Technology, Harbin, China
Functional Analysis Mathematical Physics Probability Spectral Theory and Operator Algebra mathscidoc:2207.12005
Zhengwei Liu Christopher Ryba
Category Theory Mathematical Physics Quantum Algebra Representation Theory Spectral Theory and Operator Algebra mathscidoc:2207.04005
Raymond Mortini Département de Mathématiques et Institut Élie Cartan de Lorraine, UMR 7502, Université de Lorraine Rudolf Rupp Fakultät für Angewandte Mathematik, Physik und Allgemeinwissenschaften,
Functional Analysis Spectral Theory and Operator Algebra mathscidoc:1701.12009
Rupert L. Frank Department of Mathematics, Princeton University Enno Lenzmann Institute for Mathematical Sciences, University of Copenhagen
Mathematical Physics Spectral Theory and Operator Algebra mathscidoc:1701.22001
Zhengwei Liu Scott Morrison David Penneys
Category Theory Quantum Algebra Spectral Theory and Operator Algebra mathscidoc:2207.04002
Zhengwei Liu Department of Mathematics and Department of Physics, Harvard University, Cambridge, MA, 02138, USA Jinsong Wu Institute of Advanced Study in Mathematics, Harbin Institute of Technology,
Harbin, 150001, China
Spectral Theory and Operator Algebra mathscidoc:2207.32002
Zhengwei Liu Harvard University, Cambridge, USA Jinsong Wu IASM, Harbin Institute of Technology and Harvard University, Harbin, China
Mathematical Physics Quantum Algebra Representation Theory Spectral Theory and Operator Algebra mathscidoc:2207.22006
Sorin Popa Mathematics Department, University of California, Los Angeles Stefaan Vaes Department of Mathematics, KU Leuven
Functional Analysis Group Theory and Lie Theory Spectral Theory and Operator Algebra mathscidoc:1701.12005
Junjie YUE Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China Liping Zhang Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China Mei Lu
Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
Spectral Theory and Operator Algebra mathscidoc:1612.32001
Fredrik Johansson Viklund Department of Mathematics, Columbia University Gregory F. Lawler Department of Mathematics and Department of Statistics, University of Chicago
Probability Spectral Theory and Operator Algebra mathscidoc:1701.28001
Chunlan Jiang Zhengwei Liu Jinsong Wu
Spectral Theory and Operator Algebra mathscidoc:1912.431049
Karl-Mikael Perfekt Centre for Mathematical Sciences, Lund University
Functional Analysis Spectral Theory and Operator Algebra mathscidoc:1701.12024
Hiroki Matui Graduate School of Science, Chiba University Yasuhiko Sato Graduate School of Science, Kyoto University
Quantum Algebra Rings and Algebras Spectral Theory and Operator Algebra mathscidoc:1701.29003
Takeshi Miura Department of Mathematics Faculty of Science, Niigata University Thomas Tonev Department of Mathematical Sciences, University of Montana
Spectral Theory and Operator Algebra mathscidoc:1701.32006
Erik Christensen Department of Mathematical Sciences, University of Copenhagen Allan M. Sinclair School of Mathematics, University of Edinburgh Roger R. Smith Department of Mathematics, Texas A&M
University Stuart A. White School of Mathematics and Statistics, University of Glasgow Wilhelm Winter Mathematisches Institut, Universität Münster
Spectral Theory and Operator Algebra mathscidoc:1701.32002
Zhengwei Liu Harvard University, Vanderbilt University Alex Wozniakowski Harvard University Arthur M. Jaffe Harvard University
Mathematical Physics Quantum Algebra Spectral Theory and Operator Algebra mathscidoc:1705.22001
Distinguished Paper Award in 2017
Zhengwei Liu
Spectral Theory and Operator Algebra mathscidoc:1912.431047
Adam Graham-Squire Department of Mathematics, High Point University
Spectral Theory and Operator Algebra mathscidoc:1701.01017
Dragan S. Djordjević Faculty of Sciences and Mathematics, University of Niš Milica Z. Kolundžija Faculty of Sciences and Mathematics, University of Niš
Functional Analysis Spectral Theory and Operator Algebra mathscidoc:1701.12020
Tran Dinh Ke Valeri Obukhovskii Ngai-Ching Wong Jen-Chih Yao
Spectral Theory and Operator Algebra mathscidoc:1910.43667
Denis Potapov School of Mathematics & Statistics, University of New South Wales Fedor Sukochev School of Mathematics & Statistics, University of New South Wales
Analysis of PDEs Spectral Theory and Operator Algebra mathscidoc:1701.03007 | {"url":"https://archive.ymsc.tsinghua.edu.cn/pacm_category/0132?show=view&size=25&from=1&target=searchall","timestamp":"2024-11-05T12:02:24Z","content_type":"text/html","content_length":"152697","record_id":"<urn:uuid:f5e6844f-9132-4120-a2f2-b0c44e356d26>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00340.warc.gz"} |
9th September 2013
I’ve always wondered how complicated it would be to write the code that listens to the numbers you dial on your phone, and works out what numbers you pressed.
I often enter my credit card details over the phone, and think: “If someone had a good enough microphone to record the sounds, they could probably get my information!”
The fancy name for this is: Dual-tone multi-frequency signalling. When you press the buttons, two sine waves are generated with different frequencies. The receiver then has to measure these two
frequencies and can work out what number you’ve pressed. I do love the similarity of this with formants (the way we create the different vowels sounds).
Whilst an FFT would easily do the trick, after a bit of research, it’s obvious that the most common algorithm to use is the Goertzel algorithm.
My time pocket principals say I should have just done an FFT and been done with it. The point here wasn’t to get a working decoder, but to understand how it is done, and to actually write some code
that I can have.
I decided to work in scilab. I was tempted by C/C++, but I fancied learning a bit more about scilab (and not fiddling with audio libraries).
I created a couple of simple functions to generate some pure “tones” to test the algorithm, as well as adding noise, and gaps.
There’s plenty of examples of the Goertzel code out there. I started off doing different overlapping partitions to sample the audio, but in the end took a far simpler approach.
I’m not exactly satisfied with the speed, or with the way I’ve implemented the divisions. Ironically the first time I wrote it was probably a better method, but the second is definitely faster. So I
certainly made some mistakes. Creating a matrix for all the frequencies at all the sample points probably was a bit silly… but I ran out of time.
In addition to the Goertzel I looked at dealing with noise, silence and series of numbers. The algorithm will sample the frequencies (by default) every 0.05 seconds, and will adjust to the sample
rate of the audio data. If there isn’t a strong signal, then it will assume there’s no number being pressed. If the value is the same as the previous check, no new number is written out.
I tested it by recording my pressing buttons on a phone, and it actually worked out great! I was quite surprised.
All in all a good way to spend a few hours!
SciLab code here:
(I would imagine it would be fairly easy to port to Matlab).
// This creates an array of the goertzel's response.
// Use: data x, at frequency f, sampled at points defined by at.
function [res] = goertzelArr(x, f, sampleRate, at)
    a = 0;
    b = 0;
    c = 0;

    val = 2 * cos(2 * %pi * f / sampleRate);
    res = zeros(1, length(at));

    nextAt = 1;
    nextTrig = at(1);

    for i = 1:length(x),
        a = val * b - c + x(i);
        c = b;
        b = a;

        if i == nextTrig then
            res(nextAt) = (b*b + c*c - b*c*val);
            nextAt = nextAt + 1;
            b = 0;
            c = 0;
            if (nextAt > length(at)) then
                return;
            end
            nextTrig = at(nextAt);
        end
    end
endfunction

// Main decoding function.
// Call with: evalToneAt(AUDIO_VECTOR, SAMPLE_RATE_OF_AUDIO [, time between samples])
function [t] = evalToneAt(x, sampleRate, timebetween)
    // Default time between checks is 0.05
    [out, inp] = argn(0);
    if inp < 3 then, timebetween = 0.05, end

    // Work out how many samples per check, and build array
    spaceSamples = timebetween * sampleRate;
    at = [spaceSamples:spaceSamples:length(x)];

    // Relevant frequencies for each key (column tones, then row tones)
    checkComb = [1209, 1336, 1477, 1633, 697, 770, 852, 941];
    chars = ['1','2','3','A','4','5','6','B','7','8','9','C','*','0','#','D'];

    // Create results array
    r = zeros(8, length(at));

    // Get results from goertzels
    for i = 1:8
        r(i,:) = goertzelArr(x, checkComb(i), sampleRate, at);
    end

    lastVal = 0;
    t = '';
    // Loop through measurements
    for i = 1:length(at)
        // Find best tone pair
        col = 1;
        row = 1;
        colval = r(1,i);
        rowval = r(5,i);
        for j = 2:4
            if (r(j,i) > colval) then
                colval = r(j,i);
                col = j;
            end
            if (r(j+4,i) > rowval) then
                rowval = r(j+4,i);
                row = j;
            end
        end
        // This is the best candidate tone pair
        v = col + (row - 1) * 4;

        // If the magnitude doesn't satisfy constraints, ignore.
        if (colval < spaceSamples | rowval < spaceSamples) then
            lastVal = 0;
            continue;
        end

        // If this is the same tone as before, ignore.
        if (v ~= lastVal) then
            t = strcat([t, chars(v)]);
            lastVal = v;
        end
    end
endfunction
These functions create the sample tones:
// Assistant function to create the tones
function [x] = soundChar(c, len, sampleRate)

    [out, inp] = argn(0);
    if inp < 3 then, sampleRate = 22050, end
    if inp < 2 then, len = 0.5, end

    checkCol = [1209, 1336, 1477, 1633];
    checkRow = [697, 770, 852, 941];

    chars = ['1','2','3','A';'4','5','6','B';'7','8','9','C';'*','0','#','D'];

    sig1 = 0;
    sig2 = 0;

    for i = 1:size(chars,1)
        for j = 1:size(chars,2)
            if (chars(i,j) == c) then
                sig2 = checkRow(i);
                sig1 = checkCol(j);
            end
        end
    end

    v1 = len * 2 * %pi;
    v2 = len * sampleRate;

    s1 = sin(linspace(0, v1*sig1, v2));
    s2 = sin(linspace(0, v1*sig2, v2));

    x = s1 + s2;

endfunction

// Create a tone sequence eg: genTone('0123456789#*ABCD',0.2,44100,0.05);
// Will return an audio vector with each tone playing for 0.2 seconds,
// with gaps in between of 0.05 seconds at a sample rate of 44100
function [x] = genTone(s, len, sampleRate, gap)
    [out, inp] = argn(0);
    if inp < 4 then, gap = 0.2, end
    if inp < 3 then, sampleRate = 22050, end
    if inp < 2 then, len = 0.5, end

    x = [];
    for i = 1:length(s)
        x = [x, soundChar(part(s,i), len, sampleRate), zeros(1, round(gap*sampleRate))];
    end

endfunction
New approach to Time Management – Time Pockets
9th September 2013
I’ve been trying a variety of Project/Time Management techniques over the past months to try and improve my efficiency: work breakdown structures, schedules, pomodoro etc… None of them seemed to
really gel with me, personally.
In addition I’ve always had a massive list of other tasks that I wanted to get done/experiment with, and just never got round to it. Which is mildly depressing.
What’s been working so far is time pockets. At the moment I’m using the work days, and the weekend as two distinct pockets. But it might be a good idea to subdivide these further.
It works thus:
1. Pick a project/part of project to do, and an appropriately sized pocket.
2. Adjacent pockets should involve doing completely different work.
3. If the work doesn’t get done in that pocket: tough. You can’t work on it again, until a lot later.
So far it’s been really working! It acts as a great “push” to get things completed, and stops work from getting stale.
I’ve completed the redesign of the site and did some experimentation with some audio stuff.
So far it’s really improving my abilities to:
estimate completion times, focus on the important work first, boil down and simplify a task/project to the essentials.
The feeling of accomplishment of meeting a set deadline (and more importantly – avoiding the punishment of failure) is better than I was expecting. It will be interesting to see how I react if/when I
DO fail.
New Design
7th September 2013
Welcome to the new and improved THJSmith.com!
This has been something I’ve been wanting to do for a long time, but I finally decided to put a few days aside and do a custom design for this site.
Funnily enough the initial “design point” ended up getting scrapped, and I would like to find a better way to utilise the colour series throughout the body.
I enjoyed getting to flex my design, CSS and JavaScript muscles on a smaller project.
Anyway, I hope you find it suitably unsettling?
Whatchu think?
How to bodge an autocue
4th August 2013
Autocues are really quite simple things; consisting of a camera, a screen, and a piece of glass. They allow you to read off your words, whilst looking straight at the camera.
I cobbled mine together from a PS:Eye, piece of glass from a photo-frame, and an iPad.
The most irritating thing was getting the text to display. I didn't check for an app that would scroll the text. In the end I just had a static image (just remember to flip it about the y-axis).
The construction was just from junk I had on my desk. Sometimes a little mess can be helpful (that’s my excuse).
It worked, but wasn’t great. The text I was reading was very small (no scrolling), so you can see my eyes scanning across. Having bigger, scrolling text would have been a bonus. Being further from
the set up with the camera zoomed in more would probably mitigate the “scanning eyes” too.
All in all it was a lot of fun to build but it’s probably easier to just learn your lines!
Javascript Arguments
25th June 2013
It’s always tempting to optimise code before you need to. With Javascript I think it’s basically worth following some best practises as you go along, and then doing the “proper” thing of working out
where the big holes are.
The issue I have with NOT considering speed as you go through, is you end up with death by a thousand cuts. Internet users (Netizens?) are incredibly fickle about speed.
So I’ve been taking reasonable efforts to make the code work fast. I recently stumbled across an article mentioning that using “arguments” was bad.
Specifically that:
function (a) {
    return a;
}
is considerably better than:
function () {
    return arguments[0];
}
Now, I only use the arguments variable if I want to work with a variable (and potentially large) number of arguments. Utility functions and what have you. The alternative to the above would be to
assume one parameter that was a list. Like so:
function (args) {
    return args[0];
}
The cost here is building a list each time you call the function. I wanted to see what’s actually faster here and so I extended a JSPerf to examine:
Moral of the story: if you want to use variable number of arguments in utility functions the use of the arguments variable is better than building a list.
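For context, the kind of variadic utility I mean looks roughly like this (a contrived sketch):

function sum() {
    var total = 0;
    for (var i = 0; i < arguments.length; i++) {
        total += arguments[i];
    }
    return total;
}
sum(1, 2, 3); // 6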
Homemade udon
4th June 2013
Every-now-and-then I think it’s worth putting a little extra work into dinner.
In “Space Brothers”, they get to a point where they start making some udon noodles. Mutta-kun says it’s easy, but just takes some time. Sounds like a plan!
So, off to the Japanese shop to get some stock ingredients (kombu etc.). In the end I just got a bottle of pre-prepared stock (it was cheaper), and some Japanese grape soda. The grape soda is
irrelevant. It’s just delicious. If you get the opportunity to try some, definitely do so. (Calpis/Calpico too!).
Made the noodles according to various recipes online. (I used a mixture of 00 and plain flour). The dough was a lot more satisfying to knead than bread dough. Wrapping the dough in plastic, and then in towels to then stand on it just ended with the plastic tearing and the dough touching the towels. As I didn’t have proper zip-lock bags, I just muscled it to the right consistency.
Toasted sesame, mirin, soya-sauce, pak-choi, mushroom, carrot, spring onions, boiled egg and home made noodles: makes a delicious dinner.
Even better was the “sashimi” made from the excess noodle dough the next day.
I bought nori, but forgot to add it. Oh well. I guess that’s for next time eh! 🙂
Definitely worth the effort, and really not as hard as you might imagine. Biggest issue was slicing the noodles, as I didn’t really have any sharp/fancy knives. I bet you’d have an easier time of it!
Highly recommended. Give it a go! | {"url":"http://www.thjsmith.com/","timestamp":"2024-11-05T06:06:01Z","content_type":"text/html","content_length":"64494","record_id":"<urn:uuid:aa5f50ad-0a1d-4504-a275-d5b4d20b5261>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00243.warc.gz"} |
Plausible Value Procedures
The term "plausible values" refers to imputations of test scores based on responses to a limited number of assessment items and a set of background variables. Rather than require users to directly
estimate marginal maximum likelihood procedures (procedures that are easily accessible through AM), testing programs sometimes treat the test score for every observation as "missing," and impute a
set of pseudo-scores for each observation. The imputations are random draws from the posterior distribution, where the prior distribution is the predicted distribution from a marginal maximum
likelihood regression, and the data likelihood is given by likelihood of item responses, given the IRT models.
This section will tell you about analyzing existing plausible values. To learn more about where plausible values come from, what they are, and how to make them, click here.
If you are interested in the details of a specific statistical model, rather than how plausible values are used to estimate them, you can see the procedure directly:
When analyzing plausible values, analyses must account for two sources of error:
• Sampling error; and
• Imputation error.
This is done by adding the estimated sampling variance to an estimate of the variance across imputations.
The particular estimates obtained using plausible values depend on the imputation model on which the plausible values are based.
To estimate a target statistic using plausible values,
1. Estimate the statistic once for each of the m plausible values. Let these estimates be t_j, j = {1, 2, ..., m}, for the m plausible values.
2. Calculate the average of the m estimates to obtain your final estimate: t-bar = (t_1 + t_2 + ... + t_m) / m.
If you are interested in the details of the specific statistics that may be estimated via plausible values, you can see:
To estimate the standard error, you must estimate the sampling variance and the imputation variance, and add them together:
1. Estimate the sampling variance by averaging the sampling variance estimates across the plausible values. In practice, most analysts (and this software) instead use the sampling variance of the estimate based on the first plausible value. By default, AM estimates the sampling variance via Taylor series approximation. Let V represent this variance.
2. Estimate the imputation variance as the variance of the estimates across plausible values. This is given by B = sum_j (t_j - t-bar)^2 / (m - 1).
3. The final estimate of the standard error is the square root of the total variance, typically computed as V + (1 + 1/m)·B (Rubin, 1987).
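As a generic illustration of these combining rules (this is a sketch in Python, not AM's implementation, and the numbers are hypothetical):

import math

# Point estimates of the target statistic, one per plausible value (hypothetical values)
estimates = [251.3, 250.8, 252.1, 251.0, 251.7]
V = 1.44            # sampling variance, e.g. from the first plausible value (hypothetical)

m = len(estimates)
final_estimate = sum(estimates) / m                                 # average across plausible values
B = sum((t - final_estimate) ** 2 for t in estimates) / (m - 1)     # imputation variance
standard_error = math.sqrt(V + (1 + 1 / m) * B)                     # total variance per Rubin (1987)
print(final_estimate, standard_error)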
Mislevy, R. J. (1991). Randomization-based inferences about latent variables from complex samples. Psychometrika, 56(2), 177-196.
Mislevy, R. J., Johnson, E. G., & Muraki, E. (1992). Scaling procedures in NAEP. Journal of Educational Statistics, 17(2), 131-154.
Rubin, D. B. (1987). Multiple Imputation for Non-response in Surveys. New York: Wiley.
Running the Plausible Values procedures is just like running the specific statistical models: rather than specify a single dependent variable, drop a full set of plausible values in the dependent
variable box. For example, NAEP uses five plausible values for each subscale and composite scale, so NAEP analysts would drop five plausible values in the dependent variables box. Be sure that you
only drop the plausible values from one subscale or composite scale at a time.
Other than that, you can see the individual statistical procedures for more information about inputting them:
NAEP uses five plausible values per scale, and uses a jackknife variance estimation. Currently, AM uses a Taylor series variance estimation method. This results in small differences in the variance
estimates. Most of these are due to the fact that the Taylor series does not currently take into account the effects of poststratification.
NAEP's plausible values are based on a composite MML regression in which the regressors are the principal components from a principal components decomposition. Essentially, all of the background data from NAEP is factor analyzed and reduced to about 200-300 principal components, which then form the regressors for plausible values.
To learn more about the imputation of plausible values in NAEP, click here. | {"url":"https://am.air.org/Manual/Tools/Estimation-Methods/Plausible-Value-Procedures","timestamp":"2024-11-02T14:50:57Z","content_type":"text/html","content_length":"44866","record_id":"<urn:uuid:fc96b6d0-d815-41d9-bbb0-db9ea278be5a>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00651.warc.gz"} |
GAP Integration · Oscar.jl
This section explains how Oscar interacts with GAP.
This package provides a bidirectional interface between GAP and Julia. Its documentation describes how to call GAP functions in Julia code and vice versa, and how low level Julia objects can be
converted to GAP objects and vice versa.
When one works interactively in an Oscar session, calling GAP.prompt() opens a GAP session which has access to the variables in the Julia session, in particular to all Oscar functions and objects;
one can return to the Julia prompt by entering quit; in the GAP session.
For code involving Julia types that are defined in Oscar, GAP.jl cannot provide utility functions such as conversions to and from GAP.
• The GAP package OscarInterface (at gap/OscarInterface) is intended to contain the GAP code in question, for example the declarations of new filters and the installation of new methods.
Note that such code must be loaded at runtime into the GAP session that is started by Julia, and the OscarInterface package gets loaded in Oscar's __init__ function.
• The files in the directory src/GAP are intended to contain the Julia code in question, for example conversions from GAP to ZZRingElem, QQFieldElem, FinFieldElem, etc., and the construction of
isomorphisms between algebraic structures such as rings and fields in GAP and Oscar, via Oscar.iso_oscar_gap and Oscar.iso_gap_oscar.
• In Oscar code, global GAP variables can be accessed as members of GAP.Globals, but for the case of GAP functions, it is more efficient to use Oscar.GAPWrap instead.
For example, if one wants to call GAP's IsFinite then it is recommended to replace the call GAP.Globals.IsFinite(x)::Bool, for some GAP object x (a group or a ring or a list, etc.), by
Oscar.GAPWrap.IsFinite(x). This works only if the method in question gets defined in src/GAP/wrappers.jl, thus methods with the required signatures should be added to this file when they turn out
to be needed.
(The reason why we collect the GAP.@wrap lines in an Oscar file and not inside GAP.jl is that we can extend the list without waiting for releases of GAP.jl.)
• In GAP code, global Julia variables can be accessed as members of Julia, relative to its Main module. For example, one can call Julia.sqrt and Julia.typeof (or Julia.Base.sqrt and
Julia.Core.typeof) in GAP code.
In order to access variables from the Oscar module, it is not safe to use Julia.Oscar because the module Oscar is not always defined in Main. Instead, there is the global GAP variable Oscar.
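A minimal illustrative session (assuming a GAP prompt opened from Oscar with GAP.prompt(); the trailing double semicolons just suppress output, and the calls are examples rather than required usage):

gap> Julia.Base.sqrt(2.0);;           # call a Julia function from GAP code
gap> G := Oscar.symmetric_group(3);;  # reach an Oscar function through the global GAP variable Oscar
gap> quit;                            # return to the Julia prompt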
Oscar.iso_oscar_gap(R::T) -> Map{T, GapObj}
Return an isomorphism f with domain R and codomain a GAP object S.
Elements x of R are mapped to S via f(x), and elements y of S are mapped to R via preimage(f, y).
Matrices m over R are mapped to matrices over S via map_entries(f, m), and matrices n over S are mapped to matrices over R via Oscar.preimage_matrix(f, n).
Admissible values of R and the corresponding S are currently as follows.
R S (in GAP.Globals)
ZZ Integers
QQ Rationals
residue_ring(ZZ, n)[1] mod(Integers, n)
finite_field(p, d)[1] GF(p, d)
cyclotomic_field(n)[1] CF(n)
number_field(f::QQPolyRingElem)[1] AlgebraicExtension(Rationals, g)
abelian_closure(QQ)[1] Cyclotomics
polynomial_ring(F)[1] PolynomialRing(G)
polynomial_ring(F, n)[1] PolynomialRing(G, n)
(Here g is the polynomial over GAP.Globals.Rationals that corresponds to f, and G is equal to Oscar.iso_oscar_gap(F).)
julia> f = Oscar.iso_oscar_gap(ZZ);
julia> x = ZZ(2)^100; y = f(x)
GAP: 1267650600228229401496703205376
julia> preimage(f, y) == x
julia> m = matrix(ZZ, 2, 3, [1, 2, 3, 4, 5, 6]);
julia> n = map_entries(f, m)
GAP: [ [ 1, 2, 3 ], [ 4, 5, 6 ] ]
julia> Oscar.preimage_matrix(f, n) == m
julia> R, x = polynomial_ring(QQ);
julia> f = Oscar.iso_oscar_gap(R);
julia> pol = x^2 + x - 1;
julia> y = f(pol)
GAP: x_1^2+x_1-1
julia> preimage(f, y) == pol
The functions Oscar.iso_oscar_gap and Oscar.iso_gap_oscar are not injective. Due to caching, it may happen that S stores an attribute value of Oscar.iso_gap_oscar(S), but that the codomain of this
map is not identical with or even not equal to the given R.
Note also that R and S may differ w.r.t. some structural properties because GAP does not support all kinds of constructions that are possible in Oscar. For example, if R is a non-simple number field
then S will be a simple extension because GAP knows only simple field extensions. Thus using Oscar.iso_oscar_gap(R) for objects R whose recursive structure is not fully supported in GAP will likely
cause overhead at runtime.
Oscar.iso_gap_oscar(R) -> Map{GapObj, T}
Return an isomorphism f with domain the GAP object R and codomain an Oscar object S.
Elements x of R are mapped to S via f(x), and elements y of S are mapped to R via preimage(f, y).
Matrices m over R are mapped to matrices over S via map_entries(f, m), and matrices n over S are mapped to matrices over R via Oscar.preimage_matrix(f, n).
Admissible values of R and the corresponding S are currently as follows.
S (in GAP.Globals) R
Integers ZZ
Rationals QQ
mod(Integers, n) residue_ring(ZZ, n)[1]
GF(p, d) finite_field(p, d)[1]
CF(n) cyclotomic_field(n)[1]
AlgebraicExtension(Rationals, f) number_field(g)[1]
Cyclotomics abelian_closure(QQ)[1]
PolynomialRing(F) polynomial_ring(G)[1]
PolynomialRing(F, n) polynomial_ring(G, n)[1]
(Here g is the polynomial over QQ that corresponds to the polynomial f, and G is equal to Oscar.iso_gap_oscar(F).)
julia> f = Oscar.iso_gap_oscar(GAP.Globals.Integers);
julia> x = ZZ(2)^100; y = preimage(f, x)
GAP: 1267650600228229401496703205376
julia> f(y) == x
julia> m = matrix(ZZ, 2, 3, [1, 2, 3, 4, 5, 6]);
julia> n = Oscar.preimage_matrix(f, m)
GAP: [ [ 1, 2, 3 ], [ 4, 5, 6 ] ]
julia> map_entries(f, n) == m
julia> R = GAP.Globals.PolynomialRing(GAP.Globals.Rationals);
julia> f = Oscar.iso_gap_oscar(R);
julia> x = gen(codomain(f));
julia> pol = x^2 + x + 1;
julia> y = preimage(f, pol)
GAP: x_1^2+x_1+1
julia> f(y) == pol
The functions Oscar.iso_gap_oscar and Oscar.iso_oscar_gap are not injective. Due to caching, it may happen that S stores an attribute value of Oscar.iso_oscar_gap(S), but that the codomain of this
map is not identical with or even not equal to the given R. | {"url":"https://docs.oscar-system.org/stable/DeveloperDocumentation/gap_integration/","timestamp":"2024-11-06T11:54:51Z","content_type":"text/html","content_length":"57371","record_id":"<urn:uuid:da0beb3d-1aa0-48d9-b6f2-1a9596e543b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00189.warc.gz"} |
RISC Activity Database
author = {Shaoshi Chen and Manuel Kauers and Michael F. Singer},
title = {{Telescopers for Rational and Algebraic Functions via Residues}},
language = {english},
abstract = {We show that the problem of constructing telescopers for functions of m variables is equivalent to the problem of constructing telescopers for algebraic functions of m - 1 variables
and present a new algorithm to construct telescopers for algebraic functions of two variables. These considerations are based on analyzing the residues of the input. According to experiments, the
resulting algorithm for rational functions of three variables is faster than known algorithms, at least in some examples of combinatorial interest. The algorithm for algebraic functions implies a
new bound on the order of the telescopers. },
number = {1201.1954},
year = {2012},
institution = {ArXiv},
length = {8} | {"url":"https://www3.risc.jku.at/publications/show-bib.php?activity_id=4412","timestamp":"2024-11-05T04:28:09Z","content_type":"text/html","content_length":"3516","record_id":"<urn:uuid:a505b9fa-905d-4ac3-b256-e888fc41c0f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00614.warc.gz"} |
American Journal of Industrial and Business Management Vol.09 No.06(2019), Article ID:93448,11 pages
An Available-to-Promise Allocation Decision Model Based on Assemble-to-Order Supply Chain
Manxue Xu, Huaili Chen
Institute of Logistics Science and Engineering, Shanghai Maritime University, Shanghai, China
Copyright © 2019 by author(s) and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
Received: May 27, 2019; Accepted: June 27, 2019; Published: June 30, 2019
This research addresses the ATP (Available-to-Promise) allocation problem in an Assemble-to-Order (ATO) supply chain. For the case of multiple component suppliers, a synchronous planning approach is used to coordinate the supply plan and the demand plan, and an ATP allocation decision model suited to the ATO supply chain production environment is established. Forecast demand produced with an equal-weight combination forecasting method is fed into the ATP allocation model, which is solved by mixed integer linear programming and analyzed in terms of cost and benefit. Finally, the optimal ATP allocation scheme is obtained.
Keywords: ATO Supply Chain, ATP, Combination Forecast Method, Allocation Decision
1. Introduction
In today’s competitive market, the importance of customers is attracting more and more attention from companies. How to maintain the relationship with key customers has become an important part of
enterprise development. As a principle in the supply chain, the accurate and timely commitment of production enterprises to orders has become one of the important indicators to measure their
competitiveness. The factors that affect the timely delivery of enterprises mainly come from the uncertainty of the supply chain. In order to reduce the loss caused by the failure to fulfill orders
on time caused by the uncertainty, it is necessary to allocation in advance the enterprise’s supply chain ATP (Available to Promise).
Manufacturers are focusing on ATP as a retention strategy, in which they are forced to promise customers in advance how many they can deliver on a given delivery date. [1] takes the normal available
capacity, overtime available capacity and external available resources of an enterprise as the ATP of the enterprise, and establish the decision-making model of ATP cost allocation. [2] focuses on
the multiple-product ATP (MATP) strategy to maximize the manufacturer’s net profit. [3] establishes a flexible order configuration model for ATO supply chain by proposing an order configuration
strategy combining secondary delivery and substitution. [4] uses a dynamic short-term pseudo-order prediction to manage a promising assembly system. And [5] studies the supply chain commitment
allocation problem based on customer classification. [6] proposes an ATP model that considers two factors: customer priority and variance of penalty cost. [7] studies how to enhance the robustness of
commitment ability. [8] studies the commitment model of periodic order commitment and propose a dynamic order resource retention strategy. [9] attempts to solve the dynamic order commitment problem
and establish a mixed integer programming model with fuzzy constraints. [10] proposes a promising (ATP) model supporting TFT-LCD manufacturing order execution process decision. [11] proposes a
mathematical planning and ordering model from LHP production to inventory environment [12] studies the rationing of common components among multiple products in the order-configured system under the
uncertainty of order configuration. [13] proposes a mathematical model for processing heterogeneous ATP in fruit Supply Chains and a pricing strategy based on product SL at delivery. [14] discusses
the advanced commitment availability (AATP) in the assembly line scheduling problem of mixed model.
To sum up, although some studies have examined the advance allocation of available-to-promise quantities, their practical applicability is limited and they assume a single supply source for each component. On this basis, this paper establishes an ATP allocation decision model adapted to the ATO supply chain production environment under the condition of multiple supply sources for components. Based on a combination forecasting method for customer demand, the forecast demand is incorporated into the ATP allocation model, which is solved by mixed integer linear programming to find the solution with the maximum profit. The study takes ATO production enterprises in a single-channel supply chain environment as the research object for modeling and quantitative analysis, so it is not directly applicable to ATO production enterprises in a dual-channel supply chain environment. Nevertheless, this paper is of considerable reference value to ATO production enterprises in terms of ATP allocation.
2. Materials and Methods
2.1. Problem Description
In ATO supply chain, the final assembly process of products is driven by customer orders to meet the personalized needs of customers. In the process of customers placing orders and receiving goods,
the manufacturer only carries out the final assembly process of the products, which greatly shortens the production time of the products and realizes customers’ requirement of rapid delivery.
However, the subsequent limitation of assembly process production capacity and material supply capacity has become a new problem for ATO supply chain manufacturers to achieve market supply and demand
balance and improve customer service level. Therefore, this paper expects to solve the problem of rational allocation of resources under the condition of short supply by studying the ATP allocation
problem of ATO type production enterprises, so as to improve the response speed of supply chain and make reliable order fulfillment commitment.
Based on the above description, the problem can be reduced to the scenario described in Figure 1. The figure shows an assembly-type manufacturing enterprise with multiple suppliers and one factory
that can produce different types of products. Because each component has multiple suppliers, each product has multiple customers, and the lead time of enterprise assembly products is very short,
enterprises need to adopt ATO supply chain production operation mode. According to the forecast, the assembled product components are ordered by providing each supplier with a purchase order in
advance and produced and assembled according to the customer order.
2.2. Available-to-Promise Allocation Decision Model
2.2.1. Customer Demand Forecast
In order to improve the accuracy of customer demand forecasting, this paper adopts an equal-weight combination forecasting method composed of two forecasting methods commonly used in production enterprises: the quadratic moving average method and the quadratic exponential smoothing method. Let ${\hat{y}}_{1t}$ and ${\hat{y}}_{2t}$ denote the predicted values of the quadratic moving average method and the quadratic exponential smoothing method in period t, respectively. The predictions of the two methods are combined into a new predicted value using equal weights $w_i = 0.5$ (i = 1, 2). The combination forecasting model is
${\hat{y}}_{t}={w}_{1}{\hat{y}}_{1t}+{w}_{2}{\hat{y}}_{2t}=0.5{\hat{y}}_{1t}+0.5{\hat{y}}_{2t}$ (1)
Figure 1. Flow chart of ATO supply chain manufacturer.
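As a small numerical illustration of the combination step (a Python sketch; the forecast values below are assumed, for illustration only):

f_moving_average = 620.0   # assumed quadratic moving average forecast for period t
f_exp_smoothing = 660.0    # assumed quadratic exponential smoothing forecast for period t
w1, w2 = 0.5, 0.5          # equal weights

combined_forecast = w1 * f_moving_average + w2 * f_exp_smoothing
print(combined_forecast)   # 640.0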
2.2.2. Assumptions and Notations Description
Assumptions. The goal of the ATO supply chain ATP allocation decision is to achieve maximum profit while responding quickly to customer orders and fulfilling contracts on time. The supply chain in
this article consists of three members: a set of suppliers, a factory, and a set of customers. In order to facilitate the solution and study the relationship between ATP configuration and related
costs and other parameters through calculation results, this model has the following assumptions. It is assumed that the component supply capacity, order cost, purchase cost, product supply price,
product and component inventory cost and product transportation cost of each supplier will remain unchanged for a period of time in the future. The production lead time is zero, and the production
capacity and the product component supply capacity are limited. Components are available at the beginning of each period. The same component manufacturer can order from multiple suppliers and bear
the corresponding costs. Manufacturers offer different prices for different levels of customers in different regions. The cost of preparation for production is ignored; Historical sales data are
available. The storage capacity and transportation capacity of the enterprise is unlimited. The safety inventory of finished products and components is ignored. Production and consumption are
synchronized, and no extra production is produced. The input data in the model, such as price, cost, component supply capacity, production capacity and product BOM table, has been obtained. Material
transportation cost shall be borne by the material supplier. Suppose that at the end of each period, the factory distributes goods to the customer. Table 1 describes the notations related to the
parameters and decisions variables of the proposed models.
Objective Function
$\begin{array}{c}\mathrm{max}Y=\underset{t\in T}{\sum }\underset{l\in L}{\sum }\underset{r\in R}{\sum }\underset{\tau \ge t}{\sum }\underset{p\in P}{\sum }\left({Q}_{tlr\tau p}^{\left(p\right)}\cdot {{P}^{\prime }}_{tlrp}^{\left(p\right)}\right)-\underset{t\in T}{\sum }\underset{m\in M}{\sum }{C}_{m}^{\left(h\right)}\cdot \left(\frac{{B}_{tm}^{\left(m\right)}+{E}_{tm}^{\left(m\right)}}{2}\right)\\ -\underset{t\in T}{\sum }\underset{p\in P}{\sum }\left({C}_{p}^{\left(h\right)}\cdot {E}_{tp}^{\left(p\right)}\right)-\underset{t\in T}{\sum }\underset{p\in P}{\sum }\left({C}_{tp}^{\left(p\right)}\cdot {Q}_{tp}^{\left(p\right)}\right)-\underset{t\in T}{\sum }\underset{l\in L}{\sum }\underset{r\in R}{\sum }\underset{\tau \ge t}{\sum }\underset{p\in P}{\sum }\left({Q}_{tlr\tau p}^{\left(p\right)}\cdot {C}_{lp}^{\left(d\right)}\right)\\ -\underset{t\in T}{\sum }\underset{m\in M}{\sum }\underset{s\in S}{\sum }\left({{P}^{\prime }}_{tms}^{\left(m\right)}\cdot {Q}_{tms}^{\left(m\right)}\right)-\underset{t\in T}{\sum }\underset{m\in M}{\sum }\underset{s\in S}{\sum }\left({C}_{tms}^{\left(o\right)}\cdot {y}_{tms}^{\left(m\right)}\right)\end{array}$ (2)
The objective function is total revenue minus material inventory cost, product inventory cost, production cost, product transportation cost, material purchase cost, and material ordering cost.
Constraint Condition 1: capacity constraints.
$\underset{p\in P}{\sum }{Q}_{tp}^{\left(p\right)}\cdot {u}_{p}^{\left({c}^{\prime }\right)}\le {{C}^{\prime }}_{t}$(3)
Table 1. List of parameters, superscripts, sets and decision variables.
$\underset{l\in L}{\sum }\underset{p\in P}{\sum }{Q}_{tlr\tau p}^{\left(p\right)}\cdot {u}_{p}^{\left({c}^{\prime }\right)}={{C}^{\prime }}_{tr\tau }$(4)
Constraint Condition 2: product demand constraints.
$\underset{t\le \tau }{\sum }{Q}_{tlr\tau p}^{\left(p\right)}\le {Q}_{\tau lrp}^{\left(p\right)}$(5)
${Q}_{tp}^{\left(p\right)}=\underset{l\in L}{\sum }\underset{r\in R}{\sum }\underset{\tau \ge t}{\sum }{Q}_{tlr\tau p}^{\left(p\right)}$(6)
$\underset{t\ge 2}{\sum }\underset{l\in L}{\sum }\underset{r\in R}{\sum }\underset{\tau \le t-1}{\sum }{Q}_{tlr\tau p}^{\left(p\right)}=0$(7)
Constraint Condition 3: component supply capability constraints.
$\underset{p\in P}{\sum }{Q}_{tp}^{\left(p\right)}\cdot {u}_{pm}^{\left(m\right)}\le \underset{s\in S}{\sum }{Q}_{tms}^{\left(m\right)}\cdot {y}_{tms}^{\left(m\right)}+{B}_{tm}^{\left(m\right)}$(8)
${Q}_{tms}^{\left(m\right)}\le {U}_{tms}^{\left(m\right)}$(9)
$\underset{l\in L}{\sum }\underset{p\in P}{\sum }{Q}_{tlr\tau p}^{\left(p\right)}\cdot {u}_{pm}^{\left(m\right)}={Q}_{tr\tau m}^{\left(m\right)}$(10)
Constraint Condition 4: component inventory balance constraints.
${B}_{tm}^{\left(m\right)}={E}_{\left(t-1\right)m}^{\left(m\right)}+\underset{s\in S}{\sum }{Q}_{tms}^{\left(m\right)}\cdot {y}_{tms}^{\left(m\right)}$(12)
${E}_{tm}^{\left(m\right)}={B}_{tm}^{\left(m\right)}-\underset{p\in P}{\sum }\left({Q}_{tp}^{\left(p\right)}\cdot {u}_{pm}^{\left(m\right)}\right)$(13)
Constraint Condition 5: product inventory balance constraints.
${E}_{tp}^{\left(p\right)}={B}_{tp}^{\left(p\right)}+{Q}_{tp}^{\left(p\right)}-\underset{i=1}{\overset{t}{\sum }}\underset{l\in L}{\sum }\underset{r\in R}{\sum }\underset{\tau =t}{\sum }{Q}_{ilr\tau p}^{\left(p\right)}$
${B}_{tp}^{\left(p\right)}={E}_{\left(t-1\right)p}^{\left(p\right)},\text{\hspace{0.17em}}t\ge 2$(16)
$\left\{{B}_{tm}^{\left(m\right)},{E}_{tm}^{\left(m\right)},{B}_{tp}^{\left(p\right)},{E}_{tp}^{\left(p\right)},{Q}_{tms}^{\left(m\right)},{Q}_{tp}^{\left(p\right)},{Q}_{tr\tau m}^{\left(m\right)},{{C}^{\prime }}_{tr\tau },{Q}_{tlr\tau p}^{\left(p\right)}\right\}\ge 0,\text{ and integer}$(17)
${y}_{tms}^{\left(m\right)}\in \left\{0,1\right\}$(18)
In the above model, $s\in S$ ; $m\in M$ ; $t\in T$ ; $p\in P$ ; $\tau \in T$ ; $l\in L$ . It mainly considers the constraints of production capacity, component supply capacity, inventory and customer
product demand.
3. Results and Discussion
According to the established ATO supply chain ATP allocation decision model, the relevant data of an assembly manufacturing enterprise was selected for simulation verification. Based on geographical
location, the enterprise divides customers into three regions and divides customers in each region into two priorities according to their importance. The production capacity of this enterprise is
fixed in each phase. It produces two products, P1 and P2, with a total production capacity of 4500 units. The production capacity required for each product of these two products is 3 units and 2
units respectively. There are three suppliers for each component. The BOM diagram of the two products is shown in Figure 2.
The initial inventory quantity, procurement cost, order cost, inventory cost of each material component and the supply limit of each supplier for each period of time for each material component are
shown in Table 2 and Table 3. Relevant cost and supply price information of each product are shown in Table 4 and Table 5. The demand data of the past ten periods of customers of two levels in three
regions are shown in Table 6.
Take the exponential smoothing coefficient a = 0.9, and the number of periods of quadratic exponential smoothing and quadratic moving average n = 4. First, use EXCEL software to obtain the predicted
product demand of the enterprise in the next five periods, as shown in Table 7. According to the equal weight linear combination prediction model, the demand of customers for the two products in the
next five years is obtained, and the production capacity and supplier component supply capacity of the enterprise are combined. Then, the ATP allocation in advance model proposed in this paper is
used to allocate the production capacity and components for customers of all levels in all regions in advance. The data in the case is imported into the model and solved by programming with
Lingo11.0. The procurement information of components in the next five phases is shown in Table 8 and the allocation information of production capacity in the next five phases is shown in Table 9.
Table 2. Procurement costs, ordering costs and component inventory costs for components purchased from supplier S1, S2, and S3.
Table 3. Initial inventory of component M and component supply limit tables of supplier S1, S2 and S3.
Table 4. Production cost, Inventory cost and Transportation cost of Product P.
Table 5. Supply prices of products to different rate customers in different regions.
Table 6. Historical demand of the first 1-10 periods of the customers.
Table 7. The result of equal weight combination prediction model for the next five periods.
Table 8. Next five periods of component procurement information.
Table 9. Next five periods of capacity allocation information.
For the example in this paper, it can be found from Table 7 that, for the manufacturer, the fluctuation range of product demand of customers at all levels of the region in the next five periods is
small. As can be seen from Table 8, in the next five periods, all components need to be purchased from multiple suppliers, so the enterprise should make a reasonable procurement plan and communicate
with corresponding suppliers about procurement issues as early as possible. It can be seen from Table 9 that the production capacity of the enterprise in the next five periods is sufficient in the
short term, and the vacant production capacity is not much. If the enterprise has new production planning in the future, it should consider adjusting the production capacity accordingly.
After the ATP allocation decision, the enterprise can make a reasonable procurement plan and arrange its production capacity according to the needs of different levels of customers in different regions, quickly allocating resources to customer demands, maintaining its relationships with high-quality customers, and ensuring its profit and long-term development. The total customer demand is 8680 units of product. With multiple supply sources for components, the enterprise can supply 8677 units, the order satisfaction rate is 99.85%, and the total profit of the enterprise is 2005.77 million yuan. At the same time, having multiple supply sources for components not only reduces supply chain risk but also makes full use of the enterprise's production capacity, improving efficiency and the level of customer service.
4. Conclusion
In the ATP advance-allocation model, this paper starts from the forecast demand and, considering production capacity, component supply, customer level, transportation cost, storage cost, and other constraints, allocates material components to customers of different levels in different periods according to the allocation rules, which speeds up enterprise order allocation and enhances the reliability of order commitments. In the order allocation process, whether the ATP allocation is reasonable has a direct impact on the order fulfillment rate and corporate income, as well as on the enterprise's responsiveness to customer orders and its competitiveness. Therefore, it is of profound theoretical and practical significance to discuss how to allocate ATP reasonably and scientifically.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
Cite this paper
Xu, M.X. and Chen, H.L. (2019) An Available-to-Promise Allocation Decision Model Based on Assemble-to-Order Supply Chain. American Journal of Industrial and Business Management, 9, 1475-1485. https:/ | {"url":"https://file.scirp.org/Html/12-2121524_93448.htm","timestamp":"2024-11-11T03:50:00Z","content_type":"application/xhtml+xml","content_length":"94821","record_id":"<urn:uuid:21a4800d-0236-4141-8436-094ba47f2d3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00567.warc.gz"} |
A projectile is shot from the ground at a velocity of #1 m/s# and at an angle of #(pi)/2#. How long will it take for the projectile to land?
1 Answer
${v}_{f} = {v}_{i} + a t$
$\frac{\pi}{2}$ is straight up
$0 = 1 - \left(9.81 \times t\right)$ gives $t = \frac{1}{9.81} \approx 0.1 s$ to reach the top, so the total time to land is about $0.2 s$
graph{-(50x)^2 +5 [-1, 1, 0, 5]}
Observe the arbitrary graph provided. It shows that the projectile climbs and then comes back down after it reaches maximum height. At maximum height the speed is ${v}_{f} = 0$; knowing this, write ${v}_{f} = {v}_{i} + a t$, where ${v}_{i} = 1 \text{ m/s}$ and $a = -9.81 \text{ m/s}^2$.
Thus $0 = 1 - \left(9.81 \times t\right)$ and $t = \frac{1}{9.81} \approx 0.1 s$ is the time to reach maximum height. Since the fall back to the ground takes the same time, the projectile lands after about $2 \times 0.1 = 0.2 s$.
Making the Metric System Engaging
When my husband and I lived in New Zealand for two years, we definitely had to get used to using the Metric System for everyday things like baking and paper size and recording our weights at doctors
But, as an architect, my husband LOVED using the Metric System once he made the leap from the (silly) system that he was used to here in the States (inches, feet, pounds, etc.).
At the beginning of the school year, I really drill how important it is that my students learn and understand the Metric System. This is a foundational science process skill that they will use all
year long and for the rest of their ‘careers’ as students of science!
For some reason there seems to be a totally unfounded stigma around the Metric System.
Usually when they first see the words ‘Metric System’ on the agenda board, my students suddenly have a look of dread and start making groaning sounds. Little do they know how EASY it is and that as
U.S. residents, they’ve been deprived of simple units measurement all their lives!
In this post, I am providing some ideas for teaching your science students how to use the Metric System!
Why the Imperial System is Silly
I have found it very effective to have students come to the conclusion themselves that the Imperial System of measurement is DIFFICULT, CONFUSING, and NOT INTUITIVE! This helps them to understand why
we don’t use the Imperial System in science, and it also helps them to buy in to why they DO need to learn METRICS!
To help students see these truths, I provide them with the conversions for the Imperial System.
I tell them that for mass, one ton divided by 2000 equals one pound and one pound divided by 16 equals one ounce. They look puzzled.
Then they usually know that there are 12 inches in a foot and some know that there are 3 feet in a yard.
But then I share that there are 1760 yards and 5280 feet in a mile (what? why?!).
Then we get to fluid and dry volume. There are 4 quarts in a gallon and 2 pints in a quart. There are 2 cups in a pint. Okay. But then there are 16 tablespoons in a cup and 3 teaspoons in a
And one bushel of apples divided by 4 equals a peck of apples.
At this point they are like, “Where did these random numbers come from?!”
Well, to be frank, these numbers came from antiquated things like the length of the King’s foot and the size of a barleycorn. These relationships are completely irrelevant and random now!
By this point, they are hoping there is a simpler system. When I ask if they’d like to learn a simpler system, they’re all like, “Heck YES!”.
NASA’s Measuring Failure
Newspaper cartoon depicting the measurement mix up by NASA and Lockheed Martin scientists that led to the Mars Climate Orbiter disaster (Source: Slideplayer.com)
Next I like to prove to my students that in the real world outside of the science classroom, people actually USE the Metric System. This is not something that they are memorizing for science class or
some rote and irrelevant skill that they are being forced to learn that they’ll never use again.
This is a great time to discuss NASA’s big giant measuring failure. In 1999, because of an unbelievable math error, NASA lost its $125-million Mars Climate Orbiter!!!
Why?! Because spacecraft engineers FAILED TO CONVERT FROM IMPERIAL TO METRIC MEASUREMENTS when exchanging vital data before the craft was launched!
This video explains what happened. And this unfortunate and extremely expensive measurement mix up is a great introduction to the SIMPLICITY of the Metric System!
Breaking Down the Ruler – The Magic Number 10
Break out your rulers here! When I first started teaching middle school science, I skipped over this part of the lesson. I assumed that 13 year-olds know how a ruler works and how to measure the
length of something. BIG MISTAKE. Most of them do not.
It’s a worthy investment to have a ruler for EACH student in your class for this part. You might even want to have little magnifying glasses to really drive home the big idea here. If you have a
document camera in your classroom, use it! This way you can zoom in on the ruler increments.
Begin by pointing out that many rulers contain BOTH the Imperial AND the Metric System. They need to notice the ‘inches’ and ‘mm’ or ‘cm’ units on their rulers and double-check that they are using
Next, have students use a piece of paper to trace their ruler and then use the ruler to carefully draw out all of the lines on the ruler onto their paper. They should now count the number of tiny
(millimeter) lines in each centimeter. Review that the MAGIC NUMBER is 10!
King Henry Slider – Converting Between Metric Units
King Henry Died By Drinking Chocolate Milk. This is my personal mnemonic device of choice for the prefixes of the Metric System — the words of course stand for Kilo, Hecto, Deka, Base Unit, Deci,
Centi, Milli. I call the process of moving the decimal point in numbers to the left and right “The King Henry Slider”.
So once my students memorize the prefixes, we begin doing ‘The Slide’. If you want to go from a smaller unit to a larger one, simply slide to the left! If you want to go from a larger unit to a
smaller one, simply slide to the right! (Sounds like we’re at a wedding dance party, I know!).
To teach students to use “The King Henry Slider,” have them first write out the prefixes (KHDBDCM). Then, teach them to put their finger on the unit they are starting with and hop to the left or
right to the unit they need to convert to, counting the number of hops. They’ll move their decimal place this many times in that direction.
Inevitably in every class, you will have someone raise their hand and offer that one could instead divide or multiply by multiples of 10 to convert between units. Yes, of course you can!
But, because of my totally mixed math-ability science classes, I find that this method gets too icky and stressful for many students. I like to teach them “The King Henry Slider” to keep them feeling
successful and invested!
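If you want to see the slider spelled out as a calculation, here is a rough sketch in Python (the prefix list and examples are just illustrations of the idea that each hop multiplies or divides by 10):

prefixes = ["kilo", "hecto", "deka", "base", "deci", "centi", "milli"]

def convert(value, from_prefix, to_prefix):
    hops = prefixes.index(to_prefix) - prefixes.index(from_prefix)
    return value * (10 ** hops)      # sliding right multiplies by 10 per hop

print(convert(2.5, "kilo", "base"))    # 2.5 km -> 2500.0 m
print(convert(340, "milli", "centi"))  # 340 mm -> 34.0 cm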
Accuracy and Precision
Here is a science process skill that a lot of teachers seem to breeze right over. But I drill this one because it’s simple but POWERFUL: understanding Accuracy versus Precision.
Many people would tell you that these two words mean the same thing but they don’t! So what is the difference and why is making this distinction even important?
• ACCURACY is how close a measurement is to the “true” or “accepted” value.
• PRECISION is how close a group of measurements are to each other.
Targets are a great analogy to use to help students visualize the difference between accuracy and precision. Here are the four scenarios depicted in the diagram above where I have thrown three darts
at a target:
• ACCURATE but NOT PRECISE: My darts all land around the ring just outside the bullseye. I have accuracy (close to the bullseye), but not precision (my three darts landed in different spots).
• PRECISE but NOT ACCURATE: My darts all land far from the bullseye but they have all hit the same spot. I have precision (my darts landed really close to each other), but not accuracy (my darts
are nowhere near the bullseye).
• BOTH ACCURATE and PRECISE: My darts all land right on the bullseye. I have both accuracy (I hit the intended target — the bullseye!) and precision (all three darts landed on the same spot).
• NEITHER ACCURATE NOR PRECISE: My darts all land in different random places far from the bullseye. I have neither accuracy (my darts are far from the intended target) nor precision (my darts are
far from each other).
Students like this analogy because it’s simple and it makes sense. I like to show this TEDed video at this point too.
Now, getting back to the Metric System… how do we make accurate and precise measurements using metrics?
Well, accuracy is of course going to come down to knowing how to use the measuring tool (ruler, graduated cylinder, balance, etc.) and taking our time with our measurements.
Precision is going to mean we will be estimating one additional digit for every measurement that we make. But this is EASY using the metric system and this is the part that I DRILL!
If I measure a pencil and the graphite tip lands between the 17.5 centimeters and 17.6 centimeters mark, then my pencil is definitely 17.5 centimeters but it’s not 17.6 centimeters. It’s somewhere in
between. I NEED TO ESTIMATE ONE ADDITIONAL DIGIT here.
This is where the magnifying glasses may come in handy! If I look closely, I can estimate that my pencil is about 17.57 centimeters. The “7” is the estimated digit that gives this measurement
Now once I teach this mini-lesson, I expect my students to estimate one additional digit for every measurement they take from here on out whether it be for length with a ruler, for volume with a
graduated cylinder, or for mass with a balance! I want them to get the hang of this so they begin doing this as good practice.
No Naked Numbers!
I like to DRILL this at the beginning of the school year because it applies ALL YEAR! There shall be NO NAKED NUMBERS in this classroom! In other words, every number needs to have a unit. We need to
distinguish between 11 centimeters and 11 poke bowls, 297 grams and 297 zebras, 173 milliliters and 173 ways to say I Love You!
Teach the Metric System with Doodle Notes
Like what you’re seeing from the images above and think your students might too? Check out these Metric System Cornell Doodle Notes! These scaffolded notes will help you to explain these concepts.
Not sure what Cornell Doodle Notes are? Check out this blog post to learn more!
80’s Party!: Measuring with the Metric Ruler
Now for the hands on. Let your students master working with length in metrics before jumping into volume or mass. The visual breakdown of the ruler will help them understand the increments and how
they are related by the MAGIC NUMBER 10. They will more easily be able to apply this to understanding metric volume and mass!
This fun FREE ’80’s party’ measuring length activity has students measure various objects on a piece of paper and then around the classroom. This activity also provides students a chance to practice
making BOTH ACCURATE and PRECISE measurements!
The students use the metric units of millimeters and centimeters. They need to pay attention to which unit is requested. While ‘getting ready for the party’, students make 20 measurements of length
of objects on paper.
Then, students measure actual things in the classroom. This part of the activity depends on what materials you have available.
Grab some mustaches, party hats, fake glasses, beach balls, and other toys from your local dollar store! Put all of these items on a few tables in your classroom and either tell the students what
they should measure or have the students choose items to measure in whatever way they’d like. For example, they may choose to measure the diameter of a party hat, or the circumference of a beach ball
(they will need string for this!), or the length of a faux mustache.
This is the time to really STRESS PRECISION! For every single measurement, students should be ESTIMATING ONE ADDITIONAL DIGIT!
Color Fashion Show Challenge: Measuring with the Graduated Cylinder
This activity is so much fun to do at the beginning of the school year. It helps students to practice measuring with graduated cylinders in a creative way.
Students use richly-colored water in the primary colors (red, yellow, blue) to create new secondary shades (orange, green, purple). This is an inquiry with just one parameter: their secondary colors’
volume MUST be 25 mL. So for example, they could try mixing 12 mL of red with 13 mL of yellow to make an orange, then they could try 10 mL of red and 15 mL of yellow to make a different orange.
They will create at least 3 shades of each secondary color and write down the quantities of each color in the provided data table.
After experimenting and creating 3 to 4 shades, each group will choose its favorite shade of orange, green, and purple to use for its Rainbow Rack.
Once each group of students has created a new ‘Rainbow Rack’ containing the primary colors and three new secondary colors, then they use these colors to create their ‘designer’ colors! For example,
they might create colors called ‘Purple Haze’ or ‘Tiger Lily Orange’ or ‘Jupiter Red’. They write ‘recipes’ for the colors that they create. Students collaborate and share their designer color
recipes on a Google Sheets data table.
You can host a Designer Color Fashion Show and have your students share and vote on their favorite colors!
Mass Mini Labs: Measuring with the Balance
Once my students have mastered measuring length and liquid volume using the metric system, I jump into mass with these Mini Labs activities!
First, I teach my students how to properly use a triple beam balance. They have to understand how to read the rider values and how to estimate one additional digit to make a precise measurement using
the balance. Then, three fun hands-on mini labs allow them to practice their measuring skills!
For the “Bowl of Fun!” lab, I gather fun random objects from the dollar store (this usually falls in October so I try to include Halloween themed objects like mini jack-o-lanterns and spider rings!).
The students practice making precise massings of objects of their choice from the bowls!
Bowl of Fun Measuring Mass Mini Lab
For the “Clay Creations” lab, I give groups of students a hunk of modeling clay. They work as a team to build a creation, continually adding to the figure and massing it six times. They find the
average mass of the clay that was added each round. This is great practice to review finding an average.
For the “Small Change” lab, I provide the groups pennies from various decades. They find the average mass of their pennies, compare it to the average of other groups’ pennies, hypothesize why the
averages are so different, and design an experimental procedure to test their hypothesis. This is a fun way to review the scientific method, also!
I hope that these ideas and activities help you to relieve your students’ anxiety about the Metric System! I’d love to hear more ideas that you have for teaching the Metric System in the comments
You may also be interested in checking out this bundle of resources, which includes the hands-on activities above, plus Cornell Doodle Notes on The Metric System and Measuring Length, Volume and
Mass! By the way, the Cornell Doodle Notes contain the doodles included in this post!
[Solved] Raju's father's age is 5 years more than three times
Raju's father's age is 5 years more than three times of Raju's age. Raju's father is 44 years old, then age of Raju is -
Answer (Detailed Solution Below)
Option 3 : 13 years
Raju's father's age is 5 years more than three times Raju's age.
The age of Raju's father = 44 years
Let the age of Raju be x
According to the question
⇒ 3x + 5 = 44
⇒ 3x = (44 – 5)
⇒ 3x = 39
⇒ x = 13 years
∴ The age of Raju is 13 years.
Education News & Blogs – NASA Jet Propulsion Laboratory
Update: March 16, 2020 – The answers to the 2020 NASA Pi Day Challenge are here! View the illustrated answer key (also available as a text-only doc).
In the News
Our annual opportunity to indulge in a shared love of space exploration, mathematics and sweet treats has come around again! Pi Day is the March 14 holiday that celebrates the mathematical constant
pi – the number that results from dividing any circle's circumference by its diameter.
Besides providing an excuse to eat all varieties of pie, Pi Day gives us a chance to appreciate some of the ways NASA uses pi to explore the solar system and beyond. You can do the math for yourself
– or get students doing it – by taking part in the NASA Pi Day Challenge. Find out below how to test your pi skills with real-world problems faced by NASA space explorers, plus get lessons and
resources for educators.
How It Works
The ratio of any circle's circumference to its diameter is equal to pi, which is often rounded to 3.14. But pi is what is known as an irrational number, so its decimal representation never ends, and
it never repeats. Though it has been calculated to trillions of digits, we use far fewer at NASA.
Pi is useful for all sorts of things, like calculating the circumference and area of circular objects and the volume of cylinders. That's helpful information for everyone from farmers irrigating
crops to tire manufacturers to soup-makers filling their cans. At NASA, we use pi to calculate the densities of planets, point space telescopes at distant stars and galaxies, steer rovers on the Red
Planet, put spacecraft into orbit and so much more! With so many practical applications, it's no wonder so many people love pi!
In the U.S., 3.14 is also how we refer to March 14, which is why we celebrate the mathematical marvel that is pi on that date each year. In 2009, the U.S. House of Representatives passed a resolution
officially designating March 14 as Pi Day and encouraging teachers and students to celebrate the day with activities that teach students about pi.
The NASA Pi Day Challenge
This year's NASA Pi Day Challenge poses four puzzlers that require pi to compare the sizes of Mars landing areas, calculate the length of a year for one of the most distant objects in the solar
system, measure the depth of the ocean from an airplane, and determine the diameter of a distant debris disk. Learn more about the science and engineering behind the problems below or click the link
to jump right into the challenge.
› Take the NASA Pi Day Challenge
› Educators, get the lesson here!
Mars Maneuver
Long before a Mars rover touches down on the Red Planet, scientists and engineers must determine where to land. Rather than choosing a specific landing spot, NASA selects an area known as a landing
ellipse. A Mars rover could land anywhere within this ellipse. Choosing where the landing ellipse is located requires compromising between getting as close as possible to interesting science targets
and avoiding hazards like steep slopes and large boulders, which could quickly bring a mission to its end. In the Mars Maneuver problem, students use pi to see how new technologies have reduced the
size of landing ellipses from one Mars rover mission to the next.
Cold Case
In January 2019, NASA's New Horizons spacecraft sped past Arrokoth, a frigid, primitive object that orbits within the Kuiper Belt, a doughnut-shaped ring of icy bodies beyond the orbit of Neptune.
Arrokoth is the most distant Kuiper Belt object to be visited by a spacecraft and only the second object in the region to have been explored up close. To get New Horizons to Arrokoth, mission
navigators needed to know the orbital properties of the object, such as its speed, distance from the Sun, and the tilt and shape of its orbit. This information is also important for scientists
studying the object. In the Cold Case problem, students can use pi to determine how long it takes the distant object to make one trip around the Sun.
Coral Calculus
Coral reefs provide food and shelter to many ocean species and protect coastal communities against extreme weather events. Ocean warming, invasive species, pollutants, and acidification caused by
climate change can harm the tiny living coral organisms responsible for building coral reefs. To better understand the health of Earth's coral reefs, NASA's COral Reef Airborne Laboratory, or CORAL,
mission maps them from the air using spectroscopy, studying how light interacts with the reefs. To make accurate maps, CORAL must be able to differentiate among coral, algae and sand on the ocean
floor from an airplane. And to do that, it needs to calculate the depth of the ocean at every point it maps by measuring how much sunlight passes through the ocean and is reflected upward from the
ocean floor. In Coral Calculus, students use pi to measure the water depth of an area mapped by the CORAL mission and help scientists better understand the status of Earth's coral reefs.
Planet Pinpointer
Our galaxy contains billions of stars, many of which are likely home to exoplanets – planets outside our solar system. So how do scientists decide where to look for these worlds? Using data gathered
by NASA's Spitzer Space Telescope, researchers found that they're more likely to find giant exoplanets around young stars surrounded by debris disks, which are made up of material similar to what's
found in the asteroid belt and Kuiper Belt in our solar system. Sure enough, after discovering a debris disk around the star Beta Pictoris, researchers later confirmed that it is home to at least two
giant exoplanets. Learning more about Beta Pictoris' debris disk could give scientists insight into the formation of these giant worlds. In Planet Pinpointer, put yourself in the role of a NASA
scientist to learn more about Beta Pictoris' debris disk, using pi to calculate the distance across it.
Join the conversation and share your Pi Day Challenge answers with @NASAJPL_Edu on social media using the hashtag #NASAPiDayChallenge
Blogs and Features
Related Lessons for Educators
Related Activities for Students
NOAA Video Series: Coral Comeback
Facts and Figures
Missions and Instruments
TAGS: K-12 Education, Math, Pi Day, Pi, NASA Pi Day Challenge, Events, Space, Educators, Teachers, Parents, Students, STEM, Lessons, Problem Set, Mars 2020, Perseverance, Curiosity, Mars rovers, Mars
landing, MU69, Arrokoth, New Horizons, Earth science, Climate change, CORAL, NASA Expeditions, coral reefs, oceans, Spitzer, exoplanets, Beta Pictoris, stars, universe, space telescope, Climate TM
In the News
We visited Pluto!
On July 14, 2015 at 4:49 a.m. PDT, NASA's New Horizons spacecraft sped past Pluto – a destination that took nearly nine and a half years to reach – and collected scientific data along with images of
the dwarf planet.
Pluto, famous for once being the ninth planet, was reclassified as a dwarf planet in 2006 after new information emerged about the outer reaches of our solar system. Worlds similar to Pluto were
discovered in the region of our solar system known as the Kuiper Belt. The Kuiper Belt – named for astronomer Gerard Kuiper – is a doughnut-shaped area beyond the orbit of Neptune that is home to
Pluto, other dwarf planets such as Eris, Makemake, and Haumea, as well as hundreds of thousands of other large icy bodies, and perhaps trillions of comets orbiting our sun. Over the next several
years, the New Horizons spacecraft is expected to visit one to two more Kuiper Belt objects.
Even though it will take 16 months for New Horizons to return all the Pluto science data to Earth, we have already made some interesting and important discoveries about Pluto.
Why It's Important
Through careful measurements of new images, scientists have determined that Pluto is actually larger than previously thought: 2,370 kilometers in diameter. This is important information for
scientists because it helps them understand the composition of Pluto. Because of the orbital interactions between Pluto and its moon Charon, Pluto’s mass is well known and understood. Having a more
precise diameter gives scientists the ability to more accurately calculate the average density. A greater diameter means Pluto’s density is less than we thought.
If you do the math, you’ll see that Pluto’s calculated density dropped from 2,051 kg/m^3 to 1,879 kg/m^3 with this new finding. Most rock has a density between 2000-3000 kg/m^3 and ice at very cold
temperatures has a density of 927 kg/m^3, so we can conclude that Pluto is a bit more icy than previously believed. In addition to helping scientists calculate the density of Pluto, this measurement
confirms Pluto as the largest known object in the Kuiper Belt.
Teach It
We’ve provided some math problems (and answers) for you to use in the classroom. They’re a great way to provide students with real-world examples of how the math they’re learning in class is used by
scientists. There are also some additional resources below that you can use to integrate the Pluto flyby into your lessons, or use the flyby as a lesson opener!
Pluto Math Problems
1. Find the radius(r) of Pluto.
2,370 kilometers ÷ 2 = 1,185 km
2. Find the circumference of Pluto.
C = 2 π r = 7,446 km
3. Find the surface area of Pluto.
SA = 4 π r^2 = 17,646,012 km^2
4. Find the volume of Pluto.
4/3 π r^3 = 6,970,174,651 km^3
5. Find the density of Pluto in kg/m^3.
Pluto mass = 1.31 × 10^22kg
Convert volume in km^3 to m^3: 6,970,174,651 × 1,000,000,000 = 6.970174651 × 10^18m^3
1.31 × 10^22kg / 6.970174651 × 10^18m^3 = 1,879 kg/m^3
6. How does this new density calculation compare to the previous calculation (2051 kg/m^3) when Pluto’s diameter was thought to be 2,302 km?
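For readers who want to reproduce the answers above, here is a small illustrative Java program (not part of the original article) that works problems 1–6 from the 2,370 km diameter and the 1.31 × 10^22 kg mass given earlier; the tiny differences from the published figures come only from how far π is rounded:

public class PlutoMath {
    public static void main(String[] args) {
        double diameterKm = 2370.0;                    // newly measured diameter
        double massKg = 1.31e22;                       // Pluto's mass

        double rKm = diameterKm / 2.0;                 // 1. radius: 1,185 km
        double circumferenceKm = 2.0 * Math.PI * rKm;  // 2. about 7,445 km
        double surfaceAreaKm2 = 4.0 * Math.PI * rKm * rKm;            // 3. about 1.76e7 km^2
        double volumeKm3 = (4.0 / 3.0) * Math.PI * Math.pow(rKm, 3);  // 4. about 6.97e9 km^3

        double volumeM3 = volumeKm3 * 1.0e9;           // convert km^3 to m^3
        double densityKgPerM3 = massKg / volumeM3;     // 5. about 1,879 kg/m^3
        double previousDensity = 2051.0;               // 6. earlier estimate for comparison

        System.out.printf("radius        = %,.0f km%n", rKm);
        System.out.printf("circumference = %,.0f km%n", circumferenceKm);
        System.out.printf("surface area  = %,.0f km^2%n", surfaceAreaKm2);
        System.out.printf("volume        = %,.0f km^3%n", volumeKm3);
        System.out.printf("density       = %,.0f kg/m^3 (previously %,.0f kg/m^3)%n",
                densityKgPerM3, previousDensity);
    }
}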
Explore More
Take a look at some of the lessons, videos, activities and interactives related to Pluto. They’re a great way to engage students in STEM and learning more about their solar system!
• Video: What is a Dwarf Planet? (K-12)
Dwarf planets are a lot like regular planets. What’s the big difference? Find out in 60 seconds.
• Activity: Solar System Bead Activity (4-8)
The solar system is big, and Pluto is way out there! Students calculate scale distances to create a model of objects in our solar system.
Next Generation Science Standards: MS-ESS1-3
Common Core Math: 4.MD.A.2, 5.NBT.B.7
• Activity: Calculating Solar Power in Space (6-8)
Calculate how much light Pluto receives from the Sun, compared to Earth.
Common Core Math: 6.EE.A.1, 6.EE.A.2, 8.EE.A.1, 8.EE.A.2.C
• Resource: Pluto Facts and Figures
Get lots of facts and figures about this dwarf planet in the Kuiper belt!
• Interactive: Eyes on Pluto
Explore Pluto and find out where New Horizons is now!
• Participate: Pluto Time
Though Pluto is a distant world with very different characteristics from Earth, for just a moment near dawn and dusk each day, you can experience “Pluto Time.” This is when the amount of light
reaching Earth matches that of noon on Pluto. Find out exactly when Pluto Time happens in your area and share your photos online!
• News and Images: NASA New Horizons Website
Get the latest news and images from NASA's New Horizons mission.
TAGS: Pluto, New Horizons, Math, Teachable Moment
The first set of close-up images returned from NASA's New Horizons mission reveal surprises and new insights about Pluto and its moons.
See the latest images on NASA's New Horizons website
New Horizons became the first mission to explore Pluto on July 14, 2015 when it flew within 7,800 miles (12,500 kilometers) of the dwarf planet.
Stay tuned for related educational resources and activities.
After a 3-billion-mile journey and a decade of space flight, NASA's New Horizons mission became the first to explore the dwarf planet Pluto on Tuesday, passing within 7,800 miles (12,500 kilometers)
of the distant and mysterious world.
Scientists are awaiting a series of status updates from the spacecraft that indicate it survived the flyby and is in good health -- scheduled for about 6 p.m. PDT -- as well as data and images
collected during the closest approach. | {"url":"https://www.jpl.nasa.gov/edu/news/tag/New+Horizons","timestamp":"2024-11-04T18:23:05Z","content_type":"text/html","content_length":"167529","record_id":"<urn:uuid:cb3d673d-9328-4fd5-aba3-302b796e3d7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00308.warc.gz"} |
does it make a different if I just read as 19.
@HeatherNewton wrote: read in numeric values with decimal if I am sure my numeric data is 17.2 does it make a different if I just read as 19.
I do not know what you mean. SAS has two types of variables. Fixed length character strings and floating point numbers.
Normally you read in TEXT STRINGS and if you use a NUMERIC informat you get a floating point number.
You can attach a format to the numeric variable to tell SAS how to PRINT the values, but it has no effect on what values are actually stored. So if you used the format 17.2 it will print two digits
to the right of decimal point and leave 14 character spaces to print the whole number part of the value (including the negative sign for values less than zero) for a total of 17 characters.
When reading in values from a text file you normally just use the normal numeric informat, WITHOUT ANY DECIMAL WIDTH. SAS will know where to place the decimal point based on the appearance of the
period in the string, so strings that already contain a decimal point are read correctly as-is.
Only if the strings explicitly do not include a decimal point (to save one character in the text file) would you include a decimal width on the informat you use to read the value. The decimal width
says what power of 10 to use to move the decimal point.
So if you wanted to represent the number 12,345.67 with only 7 characters you could write it as 1234567, with no decimal point.
And then read it using an informat like 7.2 and the result would be the number 12,345.67.
And you want to know if there are any disadvantages in imputing a decimal known to be between the 17th and 18th digits. Is that correct?
The short answer is: neither one of these is a good idea if absolute precision is required. The largest consecutive whole number stored as a numeric with complete accuracy is 9,007,199,254,740,992 -
which is only 16 digits. BTW, this is true for all software using double precision floating point storage - not just SAS.
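To see that this 2^53 boundary comes from IEEE-754 double-precision storage rather than from SAS itself, here is a quick illustrative check in Java (any language that uses 64-bit doubles behaves the same way):

public class DoublePrecisionDemo {
    public static void main(String[] args) {
        long exact = 9007199254740992L;   // 2^53, the largest consecutive whole number a double can hold
        long next = exact + 1;            // 2^53 + 1 has no exact double representation

        System.out.println((double) exact);                   // 9.007199254740992E15
        System.out.println((double) next);                    // also 9.007199254740992E15 - the +1 is lost
        System.out.println((double) exact == (double) next);  // true
    }
}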
BUT... are these id variables, or are they actual measures of something? If ID variables, then store them as character values, which will maintain absolute precision. You could make a
twenty-character variable with a decimal character inserted in position 18 if you want it for readability.
It's often worthwhile to just write a little demo program to see what happens and then eventually ask questions if the result isn't as expected and you can't find an explanation.
I'm a bit guessing what you're really asking for so here just a few "things":
1. SAS can only store numerical values with full precision up to 15 digits (and partially 16 digit numbers; the biggest integer with full precision is 9007199254740992 ).
2. If your source string is longer than 15 digits then you need of course to use an informat that matches - but SAS will not store the number with full precision (=exactly as the source string).
Numeric Precision
3. For a w.d informat like 17.2: NEVER use the d (decimal) component of an informat unless you understand exactly why you're doing this.
Does the result of below sample code make sense to you? If not then please ask targeted questions.
data demo;
  infile datalines truncover;
  input
    @1 source_string $19.
    @1 var1 19.
    @1 var2 17.
    @1 var3 17.2
    @1 var4 best32.
  ;
  format var1-var4 best32.;
  label
    var1="Informat: 19."
    var2="Informat: 17."
    var3="Informat: 17.2"
    var4="Informat: best32."
  ;
  datalines;
123.12
123.123
123
9007199254740992
900719925474099.2
9007199254740992111
;
proc print data=demo label;
run;
As you can see in above results if using informat 17.2 the source string gets divided by 10**2 if it's an integer - which is almost always not what you want to happen.
You can also see in row 4 and beyond what happens if the informat is shorter than the source string - or what happens if the source string exceeds the biggest number SAS can store with full precision.
Apart from the points from @Patrick about numeric precision, there is one more thing: If your data is represented as 17.2, you should not read them as "19.", but as "17." - the format/informat
specification "17.2" means that the total length of the representation is 17, not 19 (17+2), including decimals and decimal separator.
Other than that, if the data representation is not always with decimals, you may be better off using the informat "17.", as "17.2" will add a decimal comma if there is none. (e.g. "1234567890" gets
read as 12345678.90).
| {"url":"https://communities.sas.com/t5/SAS-Programming/decimal/td-p/914790","timestamp":"2024-11-11T01:29:19Z","content_type":"text/html","content_length":"232061","record_id":"<urn:uuid:87ed334c-c591-4d59-b999-40fdf296c920>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00606.warc.gz"} |
MHT-CET | Syllabus | 2025 Mock Test Free | Online Test Series & Study Materials | AI Teacher
EaseToLearn is a dedicated platform for MHT CET 2024 engineering entrance exam preparation. Please go through easetolearn.com/engineering to learn about and assess your knowledge of the MHT CET
Technical Education exam pattern, syllabus, reviews, analyses, etc.
Click here to join the online MHT CET mock- test series of EASETOLEARN
For MHT-CET Engineering, Pharmacy, Pharm D., B. Planning courses:
• The syllabus and standard of questions will be based on the Maharashtra State Board of Secondary and Higher Secondary Education.
• The approximate weightage will be 20% from the Std. XI curriculum and 80% from the Std. XII curriculum for the MHT-CET examination.
Unlike JEE (Main), there will be no negative marking, but the difficulty level for Physics, Chemistry and Mathematics will be the same as JEE (Main). The difficulty level of the Biology examination will also
be at par with NEET.
The examination questions will be based on:
(i) Std. XII – Physics, Chemistry, Biology, and Mathematics.
(ii) Std. XI – Same subjects as mentioned above.
Physics: Measurements, Scalars, and Vectors, Force, Friction in solids and liquids, Refraction of Light, Ray optics, Magnetic effect of electric current, Magnetism.
Chemistry: Some basic concepts of chemistry, States of Matter: Gases and liquids, Redox reaction, Surface chemistry, Nature of chemical bond, Hydrogen, s-Block elements(Alkali and alkaline earth
metals), Basic principles and techniques in organic chemistry, Alkanes.
Mathematics : Trigonometric functions, Trigonometric functions of Compound Angles, Factorization Formulae, Straight Line, Circle and Conics, Sets, Relations and Functions, Probability, Sequences and Series.
Biology : Section I - Botany
Diversity in organisms, Biochemistry of cell, Plant Water Relations, and Mineral Nutrition, Plant Growth and Development.
Section II – Zoology
Organization of Cell, Animal tissues, Human Nutrition, Human Respiration
Below is a brief outline of the syllabus of Physics, Chemistry, and Mathematics for the HSC Board Examination (for exact details of the syllabus, contact the relevant authorities). Learn about these selected
topics, subjects, and keywords on easetolearn.com
HSC Syllabus (Std. XII) of HSC Boards of Maharashtra
1. Circular motion
Angular displacement, Angular velocity and angular acceleration, Relation between linear velocity and angular velocity, Uniform circular motion, Radial acceleration, Centripetal and centrifugal
forces, Banking of roads, Vertical circular motion due to earth’s gravitation, Equation for velocity and energy at different positions of vertical circular motion. Kinematical equations for circular
motion in analogy with linear motion.
2. Gravitation
Newton’s law of gravitation, Projection of satellite, Periodic time, Statement of Kepler’s laws of motion, Binding energy and escape velocity of a satellite, Weightlessness condition in orbit,
Variation of ‘g’ due to altitude, latitude, depth and motion, Communication satellite and its uses.
3. Rotational motion
Definition of M.I., K.E. of rotating body, Rolling motion, Physical significance of M.I., Radius of gyration, Torque, Principle of parallel and perpendicular axes, M.I. of some regular shaped bodies
about specific axes, Angular momentum and its conservation.
4. Oscillations
Explanation of periodic motion, S.H.M., Differential equation of linear S.H.M. Projection of U.C.M. on any diameter, Phase of S.H.M., K.E. and P.E. in S.H.M., Composition of two S.H.M.’s having same
period and along same line, Simple pendulum, Damped S.H.M.
5. Elasticity
General explanation of elastic property, Plasticity, Deformation, Definition of stress and strain, Hooke’s law, Poisson’s ratio, Elastic energy, Elastic constants and their relation, Determination of
‘Y’, Behaviour of metal wire under increasing load, Applications of elastic behaviour of materials.
6. Surface tension
Surface tension on the basis of molecular theory, Surface energy, Surface tension, Angle of contact, Capillarity and capillary action, Effect of impurity and temperature on surface tension.
7. Wave motion
Simple harmonic progressive waves, Reflection of transverse and longitudinal waves, Change of phase, Superposition of waves, Formation of beats, Doppler Effect in sound.
8. Stationary waves
Study of vibrations in a finite medium, Formation of stationary waves on string, Study of vibrations of air columns, Free and Forced vibrations, Resonance.
9. Kinetic theory of gases and Radiation
Concept of an ideal gas, Assumptions of kinetic theory, Mean free path, Derivation for pressure of a gas, Degrees of freedom, Derivation of Boyle’s law, Thermodynamics- Thermal equilibrium and
definition of temperature, 1st law of thermodynamics, 2nd law of thermodynamics, Heat engines and refrigerators, Qualitative idea of black body radiation,Wein’s displacement law, Green house effect,
Stefan’s law, Maxwell distribution, Law of equipartition of energy and application to Specific heat capacities of gases.
10. Wave theory of light
Wave theory of light, Huygens’ Principle, Construction of plane and spherical wave front, Wave front and wave normal, Reflection at plane surface, Refraction at plane surface, Polarisation,
Polaroids, Plane polarised light, Brewster’s law, Doppler effect in light.
11. Interference and diffraction
Interference of light, Conditions for producing steady interference pattern, Young’s experiment, Analytical treatment of interference bands, Measurement of
wavelength by biprism experiment, Diffraction due to single slit, Rayleigh’s criterion, Resolving power of a microscope and telescope, Difference between interference and diffraction.
12. Electrostatics
Gauss’ theorem proof and applications, Mechanical force on unit area of a charged conductor, Energy density of a medium, Dielectrics and electric polarisation, Concept of condenser, Capacity of
parallel plate condenser, Effect of dielectric on capacity, Energy of charged condenser, Condensers in series and parallel, van-de- Graaff generator.
13. Current electricity
Kirchhoff’s law, Wheatstone’s bridge, Meter bridge, Potentiometer.
14. Magnetic effects of electric current
Ampere’s law and its applications, Moving coil galvanometer, Ammeter, Voltmeter, Sensitivity of moving coil galvanometer, Cyclotron.
15. Magnetism
Circular current loop as a magnetic dipole, Magnetic dipole moment of revolving electron, Magnetisation and magnetic intensity, Diamagnetism, Paramagnetism, Ferromagnetism on the basis of domain
theory, Curie temperature.
16. Electromagnetic inductions
Laws of electromagnetic induction, proof of e = –dΦ/dt, Eddy currents, Self induction and mutual induction, Need for displacement current, Transformer, Coil rotating in uniform magnetic induction,
Alternating currents, Reactance and impedance, LC oscillations (qualitative treatment only), Power in a.c. circuit with resistance, inductance and capacitance, Resonant circuit, Wattless current, AC generator.
17 Electrons and photons
Photoelectric effect, Hertz and Lenard’s observations, Einstein’s equation, Particle nature of light.
18 Atoms, Molecules and Nuclei
Alpha particle scattering experiment, Rutherford’s model of atom. Bohr’s model, Hydrogen spectrum, Composition and size of nucleus, Radioactivity, Decay law, mass-energy relation, mass defect, B.E.
per nucleon and its variation with mass number, Nuclear fission and fusion, de Broglie hypothesis, Matter waves – wave
nature of particles, Wavelength of an electron, Davisson and Germer experiment, Continuous and characteristic X-rays.
19 Semiconductors
Energy bands in solids, Intrinsic and extrinsic semiconductors, P-type and N-type semiconductors, P-N junction diode, I-V characteristics in forward and reverse bias, Rectifiers, Zener diode as a
voltage regulator, Photodiode, Solar cell, I-V characteristics of LED, Transistor action and its characteristics, Transistor as an amplifier (CE mode), Transistor as a switch, Oscillators and Logic
gates (OR, AND, NOT, NAND, NOR)
20 Communication systems
Elements of communication system, bandwidth of signals, bandwidth of transmission medium, Need for modulation, Production and detection of an amplitude modulated wave, space communication,
Propagation of electromagnetic waves in atmosphere.
Unit 1: Solid State
Classification of solids based on different forces; molecular, ionic, covalent and metallic solids, amorphous and crystalline solids (elementary idea), unit cell in two dimensional and three
dimensional lattices, calculation of density of unit cell, packing in solids, voids, number of atoms per unit cell in a cubic unit cell, point defects, electrical and magnetic properties, Band theory
of metals, conductors and semiconductors and insulators and n and p type semiconductors.
Unit 2 : Solutions and colligative properties
Types of solutions, expression of concentration of solids in liquids, solubility of gases in liquids, solid solutions, colligative properties – relative lowering of vapor pressure, Raoult’s law,
elevation of boiling point, depression of freezing point, osmotic pressure, determination of molecular masses using colligative properties, abnormal molecular mass. Van’t Hoff factor and calculations
involving it.
Unit 3: Chemical thermodynamics and energetics
Concepts of system: Types of systems, surroundings. Work, heat, energy, extensive and intensive properties, state functions. First law of thermodynamics – internal energy and enthalpy,
Hess’ law of constant heat summation, enthalpy of bond dissociation, combustion, formation, atomization, sublimation. Phase transition,
ionization, solution and dilution. Introduction of entropy as a state function, free energy change for spontaneous and non-spontaneous processes, and equilibrium constant. Second and third laws of thermodynamics.
Unit 4: Electrochemistry
Redox reactions, conductance in electrolytic solutions, specific and molar conductivity, variations of conductivity with concentration, Kohlrausch’s Law, electrolysis and laws of electrolysis
(elementary idea), dry cell –electrolytic and galvanic cells; lead accumulator, EMF of a cell, standard electrode potential, Nernst equation and its application to chemical cells, fuel cells;
corrosion. Relation between Gibb’s energy change and emf of a cell.
Unit 5: Chemical kinetics
Rate of reaction (average and instantaneous),
factors affecting rate of reaction; concentration, temperature, catalyst; order and molecularity of a reaction; rate law and specific rate constant, integrated rate equations and half life (only for
zero and first order reactions); concept of collision theory (elementary idea, no mathematical treatment). Activation energy, Arrhenius equation.
Unit 6 :General principles and processes of isolation of elements
Principles and methods of extraction – concentration, oxidation, reduction electrolytic method and refining; occurrence and principle of extraction of aluminium, copper, zinc and iron
Unit 7: p-Block elements
Group 15 elements:
General introduction, electronic configuration, occurrence, oxidation states, trends in physical and chemical properties; nitrogen – preparation, properties and uses; compounds of nitrogen;
preparation and properties of ammonia and nitric acid, oxides of nitrogen (structure only); Phosphorus – allotropic forms; compounds of phosphorus; preparation and properties of phosphine, halides
(PCl3,PCl5) and oxoacids (elementary idea only).
Group 16 elements
General introduction, electronic configuration, oxidation states, occurrence, trends in physical and chemical properties; dioxygen; preparation, properties and uses;
Classification of oxides, simple oxides; Ozone. Sulphur – allotropic forms; compounds of sulphur; preparation, properties and uses of sulphur dioxide; sulphuric acid; industrial process of
manufacture, properties and uses, oxoacids of sulphur (structures only).
Group 17 elements:
General introduction, electronic configuration, oxidation states, occurrence, trends in physical and chemical properties; compounds of halogens; preparation, properties and uses of chlorine and
hydrochloric acid, interhalogen compounds, oxoacids of halogens (structure only).
Group 18 elements:
General introduction, electronic configuration. Occurrence, trends in physical and chemical properties, uses.
Unit 8 : d and f Block Elements
d-Block Elements - General introduction, electronic configuration, occurrence and characteristics of transition metals, general trends in properties of the first row transition metals – metallic
character, ionization enthalpy, oxidation states, ionic radii, colour, catalytic property, magnetic properties, interstitial compounds, alloy formation, preparation and properties of K2Cr2O7 and KMnO4.
f-Block elements- Lanthanoids – Electronic configuration, oxidation states, chemical reactivity and lanthanoid contraction and its consequences. Actinoids – Electronic configuration, oxidation
states. Comparison with lanthanoids.
Unit 9: Coordination compounds
Coordination compounds – Introduction, ligands, coordination number, colour, magnetic properties and shapes, IUPAC nomenclature of mononuclear coordination compounds, bonding; Werner’s theory, VBT,
CFT. isomerism, (structural and stereo) importance of coordination compounds (in qualitative analysis, extraction of metals and biological systems).
Unit 10 : Halogen derivatives of alkanes (and arenes)
Haloalkanes : Nomenclature, nature of C-X bond, physical and chemical properties, mechanism of substitution reactions. Stability of carbocations,R-S and d-l configuration
Haloarenes : Nature of C-X bond, substitution reactions (directive influence of halogen for monosubstituted compounds only) stability of carbocations, R-S and d-l configurations. Uses and
environmental effects of – dichloromethane, thrichloromethane, tetrachloromethane, iodoform, freons, DDT.
Unit 11 : Alcohols, phenols and ethers
Alcohols : Nomenclature, methods of preparation, physical and chemical properties (of primary alcohols only); identification of primary, secondary and tertiary alcohols; mechanism of dehydration,
uses of methanol and ethanol.
Phenols: Nomenclature, methods of preparation, physical and chemical properties, acidic nature of phenol, electrophillic substitution reactions, uses of phenols.
Ethers : Nomenclature, methods of preparation, physical and chemical properties, uses.
Unit 12 : Aldehydes, ketones and carboxylic acids
Aldehydes and ketones : Nomenclature, nature of carbonyl group, methods of preparation. Physical and chemical properties, mechanism of nucleophilic addition, reactivity of alpha hydrogen in
aldehydes; uses.
Carboxylic acids : Nomenclature, acidic nature, methods of preparation, physical and chemical properties; uses.
Unit 13: Organic compounds containing nitrogen
Nitro compounds-General methods of preparation and chemical reactions
Amines : Nomenclature, classification, structure, methods of preparation, physical and chemical properties, uses, identification of primary, secondary and tertiary amines. Cyanides and isocyanides:
Will be mentioned at relevant places in context.
Diazonium salts: Preparation, chemical reactions and importance in synthetic organic chemistry.
Unit 14: Biomolecules
Carbohydrates: Classification (aldoses and ketoses), monosaccharides, d-l configuration (glucose and fructose), oligosaccharides (sucrose, lactose, maltose), polysaccharides (starch, cellulose,
glycogen), importance.
Proteins: Elementary idea of α-amino acids, peptide linkage, polypeptides, proteins; structure of proteins – primary, secondary, tertiary and quaternary structures (qualitative idea only),
denaturation of proteins; enzymes.
Lipids and hormones (elementary idea) excluding structure, their classification and functions.
Vitamins: Classification and functions.
Nucleic acids: DNA and RNA
Unit 15: Polymers
Classification - natural and synthetic, methods of polymerization (addition and condensation), copolymerization. Some important polymers; natural and synthetic like polythene, nylon, polyesters,
bakelite, and rubber. Biodegradable and non biodegradable polymers.
Unit 16: Chemistry in everyday life
1. Chemicals in medicines : analgesics, tranquilizers, antiseptics, disinfectants, antimicrobials, antifertility drugs, antibiotics, antacids, antihistamines elementary idea of antioxidants
2. Chemicals in food : Preservatives, artificial sweetening agents.
3. Cleansing agents : Soaps and detergents, cleansing action.
Section I – BOTANY
Unit 1: Genetics and Evolution
Chapter 1 - Genetic Basis of Inheritance:
Mendelian inheritance. Deviations from Mendelian ratio (gene interaction – incomplete dominance, co-dominance, multiple alleles and Inheritance of blood
groups), Pleiotropy, Elementary idea of polygenic inheritance.
Chapter 2 - Gene: its nature, expression and regulation: Modern concept of gene in brief-cistron, muton and recon. DNA as genetic material, structure of DNA as given by
Watson and Crick’s model, DNA Packaging, semi-conservative replication
of eukaryotic DNA.
RNA: General structure, types and functions. Protein Synthesis; central dogma,
Transcription; Translation-Genetic Code,
Gene Expression and Gene Regulation (The Lac operon as a typical model of
gene regulation).
Unit 2: Biotechnology and its application
Chapter 3 - Biotechnology: Process and Application :
Genetic engineering (Recombinant DNA technology):
Transposons, Plasmids, Bacteriophages; Producing Restriction Fragments, Preparing and cloning a DNA Library, Gene Amplification (PCR).
Application of Biotechnology in Agriculture – BT crops
Biosafety Issues (Biopiracy and patents)
Unit 3: Biology and Human Welfare
Chapter 4 - Enhancement in Food Production
Plant Breeding Tissue Culture: Concept of Cellular Totipotency, Requirements of Tissue Culture (in brief), Callus Culture, Suspension Culture. Single Cell Protein. Biofortification.
Chapter 5 - Microbes in Human Welfare:
Microbes in Household food processing. Microbes in Industrial Production. Microbes in Sewage Treatment. Microbes in Biogas (energy) Production. Microbes as Biocontrol Agents.
Microbes as Biofertilizers.
Unit 4: Plant Physiology :
Chapter 6 - Photosynthesis
Autotrophic nutrition. Site of Photosynthesis. Photosynthetic Pigments and their role. Light-Dependent Reactions (Cyclic and non-cyclic photophosphorylation).
Light-Independent Reactions (C3 and C4 Pathways)
Chemiosmotic hypothesis, Photorespiration, Factors affecting Photosynthesis. Law of limiting factors.
Chapter 7 - Respiration: ATP as currency of Energy. Mechanism of Aerobic (Glycolysis, TCA Cycle and Electron Transport System) and Anaerobic Respiration. Fermentation.
Exchange of gases. Amphibolic pathway. Respiratory quotient of Nutrients. Significance of Respiration.
Unit 5: Reproduction in Organisms
Chapter 8 - Reproduction in Plants: Modes of Reproduction (Asexual and Sexual).
Asexual reproduction: uniparental modes – vegetative propagation, micropropagation. Sexual Reproduction: structure of flower, Development of male gametophyte,
Structure of anatropous ovule. Development of female Gametophyte.
Pollination: Types and Agencies. Outbreeding devices; pollen-pistil interaction. Double Fertilization: Process and Significance.
Post-fertilization changes (development of endosperm and embryo, development of seed and formation of fruit)
Special modes-apomixis, parthenocarpy, polyembryony. Significance of seed and fruit formation.
Unit 6: Ecology and Environment
Chapter 9: Organisms and Environment – I: Habitat and Niche. Ecosystems: Patterns, components, productivity and decomposition, energy flow; pyramids of number, biomass,
energy; nutrient cycling (carbon and phosphorus). Ecological succession, Ecological services – carbon fixation, pollination, oxygen release. Environmental issues:
agrochemicals and their effects, solid waste management, Green house effect and global warming, ozone depletion, deforestation, case studies (any two).
Section II - ZOOLOGY
Unit 1: Genetics and Evolution :
Chapter 10 - Origin and the Evolution of Life :
Origin of Life: Early Earth, Spontaneous assembly of organic compounds. Evolution: Darwin’s contribution, Modern Synthetic Theory of evolution, Biological Evidences, Mechanism of evolution; Gene
flow and genetic drift;Hardy- Weinberg principle; Adaptive radiation. Origin and Evolution of Human being.
Chapter 11 - Chromosomal Basis of Inheritance
The Chromosomal Theory. Chromosomes. Linkage and Crossing Over.
Sex-linked Inheritance (Haemophilia and colour blindness).
Sex Determination in Human being, birds, honey bee. Mendelian disorders in humans – Thalassemia. Chromosomal disorders in human: Down’s syndrome,
Turner’s syndrome and Klinefelter’s syndrome.
Unit 2: Biotechnology and its application
Chapter 12 - Genetic Engineering and Genomics: DNA Fingerprinting. Genomics and Human Genome Project. Biotechnological Applications in Health: Human insulin and vaccine production, Gene Therapy.
Transgenic animals.
Unit 3: Biology and Human Welfare
Chapter 13- Human Health and Diseases Concepts of Immunology: Immunity Types, Vaccines, Structure of Antibody, Antigen-Antibody Complex, Antigens on blood cells. Pathogens and Parasites (Amoebiasis,
Malaria, Filariasis, Ascariasis, Typhoid, Pneumonia, Common cold and ring worm). Adolescence, drug and alcohol abuse. Cancer and AIDS.
Chapter 14- Animal Husbandry
Management of Farms and Farm Animals. Dairy. Poultry. Animal Breeding. Bee-Keeping. Fisheries. Sericulture Lac culture
Unit 4: Human Physiology
Chapter 15- Circulation Blood composition and coagulation, Blood groups. Structure and pumping action of Heart. Blood Vessels.
Pulmonary and Systemic Circulation. Heart beat and Pulse. Rhythmicity of Heart beat. Cardiac output, Regulation of cardiac activity.
Blood related disorders: Hypertension, coronary artery disease, angina pectoris, and heart failure. ECG, Lymphatic System (Brief idea): Composition of lymph and its functions.
Chapter 16- Excretion and osmoregulation
Modes of excretion-Ammonotelism, ureotelism, uricotelism. Excretory System.
Composition and formation of urine. Role of Kidney in Osmoregulation.
Regulation of kidney function: renin-angiotensin, atrial natriuretic factor, ADH and Diabetes insipidus, role of other organs in excretion. Disorders: Kidney failure, Dialysis, Kidney stone (renal
calculi). Transplantation. Uraemia, nephritis.
Chapter 17- Control and Co-ordination
Nervous System: Structure and functions of brain and Spinal cord, brief idea about PNS and ANS. Transmission of nerve impulse. Reflex action. Sensory receptors (eye and ear), Sensory perception,
general idea of other sense organs. Endocrine System: Endocrine glands, Hormones and their functions, Mechanism of hormone action. Hormones as messengers and regulators.
Hormonal imbalance and diseases: Common disorders (Dwarfism, Acromegaly, cretinism, goiter, exophthalmic goiter, Diabetes mellitus, Addison’s disease).
Unit 5: Reproduction in Organisms
Chapter 18- Human Reproduction
Reproductive system in male and female. Histology of testis and ovary. Reproductive cycle. Production of gametes, fertilization, implantation. Embryo development up to three germinal layers.
Pregnancy, placenta, parturition and lactation (Elementary idea). Reproductive health – birth control, Contraception and sexually transmitted diseases. MTP, Amniocentesis; Infertility
and assisted reproductive technologies- IVF, ZIFT, GIFT (elementary idea for general awareness).
Unit 6: Ecology and Environment
Chapter 19-Organisms and Environment-II :
Population and ecological adaptations: population interactions-mutualism, competition, predation, parasitism, population attributes- growth, birth rate
and death rate, age distribution. Biodiversity and its conservation- Biodiversity- concept, patterns, importance, loss. Threats to and need for biodiversity conservation, Hotspots, endangered
organisms, extinction, red data book, biosphere reserves, national parks and sanctuaries. Environmental issues: air pollution and its control, water pollution and its control, and radioactive waste
management. (Case studies any two)
Unit 1: Sets, Relations And Functions
Sets and their representation; Union, intersection and complement of sets and their algebraic properties; Power set; Relation, Types of relations, equivalence relations, functions; one-one, into and
onto functions, composition of functions.
Unit 2: Complex Numbers And Quadratic Equations
Complex numbers as ordered pairs of reals, Representation of complex numbers in the form a+ib and their representation in a plane, Argand diagram, algebra of complex numbers, modulus and argument (or
amplitude) of a complex number, square root of a complex number, triangle inequality, Quadratic equations in real and complex number system and their solutions. Relation between roots and
co-efficients, nature of roots, formation of quadratic equations with given roots.
Unit 3: Matrices And Determinants
Matrices, algebra of matrices, types of matrices, determinants and matrices of order two and three. Properties of determinants, evaluation of determinants, area of triangles using determinants.
Adjoint and evaluation of inverse of a square matrix using determinants and elementary transformations, Test of consistency and solution of simultaneous linear equations in two or three variables
using determinants and matrices.
Unit 4: Permutations And Combinations
Fundamental principle of counting, permutation as an arrangement and combination as selection, Meaning of P (n,r) and C (n,r), simple applications.
Unit 5: Principle of Mathematical Induction and Its Simple Applications
Unit 6: Binomial Theorem and Its Simple Applications:
Binomial theorem for a positive integral index, general term and middle term, properties of Binomial coefficients and simple applications.
Unit 7: Sequences And Series
Arithmetic and Geometric progressions, insertion of arithmetic, geometric means between two given numbers. Relation between A.M. and G.M. Sum up to n terms of special series: Σn, Σn², Σn³.
Arithmetic-Geometric progression.
Unit 8: Limit, Continuity and Differentiability
Real - valued functions, algebra of functions, polynomials, rational, trigonometric, logarithmic and exponential functions, inverse functions. Graphs of simple functions. Limits, continuity and
differentiability. Differentiation of the sum, difference, product and quotient of two functions. Differentiation of trigonometric, inverse trigonometric, logarithmic, exponential, composite and
implicit functions; derivatives of order up to two. Rolle’s and Lagrange’s Mean Value Theorems. Applications of derivatives: Rate of change of quantities, monotonic - increasing and decreasing
functions, Maxima and minima of functions of one variable, tangents and normals.
Unit 9: Integral Calculus
Integral as an anti - derivative. Fundamental integrals involving algebraic, trigonometric, exponential and logarithmic functions. Integration by substitution, by parts and by partial fractions.
Integration using trigonometric identities.
Evaluation of simple integrals of standard types.
Integral as limit of a sum. Fundamental Theorem of Calculus. Properties of definite integrals. Evaluation of definite integrals, determining areas of the regions bounded by simple curves in standard form.
Unit 10: Differential Equations
Ordinary differential equations, their order and degree. Formation of differential equations. Solution of differential equations by the method of separation of variables, solution of homogeneous and
linear differential equations of the type:
dy/dx + p(x) y = q(x)
Unit 11: Co-Ordinate Geometry
Cartesian system of rectangular co-ordinates in a plane, distance formula, section formula, locus and its equation, translation of axes, slope of a line, parallel and perpendicular lines, intercepts
of a line on the coordinate axes.
Straight lines
Various forms of equations of a line, intersection of lines, angles between two lines, conditions for concurrence of three lines, distance of a point from a line, equations of internal and external
bisectors of angles between two lines, coordinates of centroid, orthocentre and circumcentre of a triangle, equation of family of lines passing through the point of intersection of two lines.
Circles, conic sections
Standard form of equation of a circle, general form of the equation of a circle, its radius and centre, equation of a circle when the end points of a diameter are given, points of intersection of a
line and a circle with the centre at the origin and condition for a line to be tangent to a circle, equation of the tangent. Sections of cones, equations of conic sections (parabola, ellipse and
hyperbola) in standard forms, condition for y = mx + c to be a tangent and point (s) of tangency.
UNIT 12: Three Dimensional Geometry
Coordinates of a point in space, distance between two points, section formula, direction ratios and direction cosines, angle between two intersecting lines. Skew lines, the shortest distance between
them and its equation. Equations of a line and a plane in different forms, intersection of a line and a plane, coplanar lines.
UNIT 13: Vector Algebra
Vectors and scalars, addition of vectors, components of a vector in two dimensions and three dimensional space, scalar and vector products, scalar and vector triple product.
Unit 14: Statistics And Probability
Measures of Dispersion: Calculation of mean, median, mode of grouped and ungrouped data. Calculation of standard deviation, variance and mean deviation for grouped and ungrouped data.
Probability: Probability of an event, addition and multiplication theorems of probability, Bayes theorem, probability distribution of a random variate, Bernoulli trials and Binomial distribution.
Unit 15: Trigonometry
Trigonometrical identities and equations. Trigonometrical functions. Inverse trigonometrical functions and their properties. Heights and Distances.
Unit 16: Mathematical Reasoning
Statements, logical operations and, or, implies, implied by, if and only if. Understanding of tautology, contradiction, converse and contrapositive. | {"url":"https://easetolearn.in/engineering/mht-cet/syllabus","timestamp":"2024-11-03T15:31:33Z","content_type":"text/html","content_length":"165455","record_id":"<urn:uuid:eaf5a284-e61a-4ab9-81b7-3b224f216f84>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00833.warc.gz"} |
Eire 1971 2p Coin Value
How much is an Eire 1971 2p coin worth? The Eire 1971 2p coin is a rare and valuable coin. It was minted in very small numbers, and as a result, it is highly sought-after by collectors.
Editor’s Notes: The value of an Eire 1971 2p coin can vary depending on a number of factors, including its condition and rarity. However, in general, these coins are worth a significant amount of
We’ve done extensive research and analysis to put together this guide to help you determine the value of your Eire 1971 2p coin.
Key Differences/Takeaways:
Eire 1971 2p Coin
Mintage 1,000,000
Composition Bronze
Weight 7.12 grams
Diameter 25.9 mm
Obverse A harp
Reverse The denomination “2p”
Main Article Topics:
• History of the Eire 1971 2p coin
• Factors that affect the value of an Eire 1971 2p coin
• How to determine the value of your Eire 1971 2p coin
• Where to sell your Eire 1971 2p coin
Eire 1971 2p Coin Value
The Eire 1971 2p coin is a rare and valuable coin. It was minted in very small numbers, and as a result, it is highly sought-after by collectors. The value of an Eire 1971 2p coin can vary depending
on a number of factors, including its condition and rarity. However, in general, these coins are worth a significant amount of money.
• Mintage: 1,000,000
• Composition: Bronze
• Weight: 7.12 grams
• Diameter: 25.9 mm
• Obverse: A harp
• Reverse: The denomination “2p”
These key aspects all contribute to the value of the Eire 1971 2p coin. The mintage number is low, which makes the coin scarce. The bronze composition gives the coin a distinctive appearance and
makes it more durable than coins made of other metals. The weight and diameter of the coin are also important factors, as they determine the coin’s size and feel. The obverse and reverse designs of
the coin are also important, as they add to the coin’s overall aesthetic appeal.
The mintage of a coin refers to the number of coins that were produced. The mintage of the Eire 1971 2p coin was 1,000,000, which is a relatively low number. This low mintage number is one of the
factors that contributes to the coin’s value.
The mintage number is important because it determines the rarity of a coin. The lower the mintage number, the rarer the coin is. Rare coins are more valuable than common coins because there are fewer
of them available to collectors.
In the case of the Eire 1971 2p coin, the low mintage number has made it a valuable coin. Collectors are willing to pay a premium for these coins because they are rare and difficult to find.
Here is a table that summarizes the key points about the mintage of the Eire 1971 2p coin:
Eire 1971 2p Coin
Mintage 1,000,000
Rarity Rare
Value Valuable
The composition of the Eire 1971 2p coin is bronze, which is an alloy of copper and tin. Bronze is a strong and durable metal, which makes it ideal for use in coins. It is also a relatively
inexpensive metal, which makes it a cost-effective choice for coin production.
The composition of the Eire 1971 2p coin has a significant impact on its value. Bronze is a valuable metal, and as a result, coins made of bronze are worth more than coins made of other metals. The
bronze composition of the Eire 1971 2p coin is one of the factors that contributes to its high value.
In addition to its value, the bronze composition of the Eire 1971 2p coin also gives it a distinctive appearance. Bronze coins have a warm, golden color that is different from the color of coins made
of other metals. This distinctive appearance makes the Eire 1971 2p coin more attractive to collectors.
Overall, the composition of the Eire 1971 2p coin is an important factor that contributes to its value and appeal.
Eire 1971 2p Coin
Composition Bronze
Value Valuable
Appearance Distinctive golden color
The weight of a coin is an important factor in determining its value. The weight of a coin can indicate its metal content, which can in turn affect its value. In the case of the Eire 1971 2p coin,
its weight of 7.12 grams is significant because it indicates that the coin is made of bronze.
• Bronze Content: The weight of the Eire 1971 2p coin is consistent with the weight of other bronze coins of the same size and denomination. This indicates that the coin is made of bronze, which is
a valuable metal. The bronze content of the coin is one of the factors that contributes to its high value.
• Durability: Bronze is a strong and durable metal, which makes it ideal for use in coins. Coins made of bronze are less likely to be damaged or worn than coins made of other metals. The durability
of the Eire 1971 2p coin is another factor that contributes to its high value.
• Rarity: The weight of the Eire 1971 2p coin can also be used to determine its rarity. Coins that are made of valuable metals are often melted down and reused, which can reduce their availability.
The fact that the Eire 1971 2p coin is made of bronze and weighs 7.12 grams suggests that it is a rare coin. The rarity of the coin is another factor that contributes to its high value.
Overall, the weight of 7.12 grams is an important factor in determining the value of the Eire 1971 2p coin. The weight of the coin indicates that it is made of bronze, which is a valuable metal. The
bronze content of the coin, its durability, and its rarity all contribute to its high value.
The diameter of a coin is the distance across the coin from one edge to the other. The diameter of the Eire 1971 2p coin is 25.9 mm. This is a relatively large diameter for a coin of this denomination.
The diameter of a coin is important because it can affect the coin’s value. Larger coins are often worth more than smaller coins, because they contain more metal. In the case of the Eire 1971 2p
coin, the large diameter is one of the factors that contributes to its high value.
In addition to its value, the diameter of the Eire 1971 2p coin also affects its appearance. Larger coins are more noticeable than smaller coins, and they can make a more impressive statement. The
large diameter of the Eire 1971 2p coin makes it a visually appealing coin that is sure to attract attention.
Overall, the diameter of 25.9 mm is an important factor in determining the value and appearance of the Eire 1971 2p coin. The large diameter of the coin makes it more valuable and more visually
Eire 1971 2p Coin
Diameter 25.9 mm
Value Valuable
Appearance Visually appealing
The obverse of the Eire 1971 2p coin features a harp, which is a national symbol of Ireland. The harp has been used on Irish coins for centuries, and it is a reminder of the country’s rich musical
• Symbol of Ireland: The harp is a powerful symbol of Ireland, and its presence on the Eire 1971 2p coin is a reminder of the country’s proud history and culture.
• National pride: The harp is a source of national pride for the Irish people, and its presence on the coin is a way to express that pride.
• Artistic merit: The harp is a beautiful and iconic image, and its presence on the coin adds to the coin’s overall aesthetic appeal.
• Collectibility: The harp is a popular design element on coins, and the Eire 1971 2p coin is no exception. The coin’s unique design makes it a popular choice for collectors.
Overall, the harp on the obverse of the Eire 1971 2p coin is a significant design element that contributes to the coin’s value and appeal.
The reverse of the Eire 1971 2p coin features the denomination “2p”. This is an important design element because it tells us the value of the coin. The denomination is also important for collectors,
as it helps them to identify and categorize the coin.
The denomination of a coin can have a significant impact on its value. In general, coins with a higher denomination are worth more than coins with a lower denomination. This is because coins with a
higher denomination contain more metal. In the case of the Eire 1971 2p coin, the denomination of “2p” indicates that the coin is worth two pence. This is a relatively low denomination, which means
that the coin is not worth a lot of money. However, the coin’s rarity and historical significance make it valuable to collectors.
Overall, the denomination “2p” on the reverse of the Eire 1971 2p coin is an important design element that contributes to the coin’s value and appeal.
Eire 1971 2p Coin
Denomination 2p
Value Low denomination
Collectibility Valuable to collectors
FAQs about Eire 1971 2p Coin Value
This section provides answers to frequently asked questions about the Eire 1971 2p coin value.
Question 1: How much is an Eire 1971 2p coin worth?
The value of an Eire 1971 2p coin can vary depending on a number of factors, including its condition and rarity. However, in general, these coins are worth a significant amount of money.
Question 2: What is the mintage of the Eire 1971 2p coin?
The mintage of the Eire 1971 2p coin is 1,000,000.
Question 3: What is the composition of the Eire 1971 2p coin?
The composition of the Eire 1971 2p coin is bronze.
Question 4: What is the weight of the Eire 1971 2p coin?
The weight of the Eire 1971 2p coin is 7.12 grams.
Question 5: What is the diameter of the Eire 1971 2p coin?
The diameter of the Eire 1971 2p coin is 25.9 mm.
Question 6: What is the value of the Eire 1971 2p coin based on its design?
The design of the Eire 1971 2p coin contributes to its value. The harp on the obverse of the coin is a national symbol of Ireland, and the denomination “2p” on the reverse of the coin is important
for collectors. These design elements add to the coin’s overall aesthetic appeal and collectibility.
Summary: The Eire 1971 2p coin is a rare and valuable coin. Its value is determined by a number of factors, including its condition, rarity, composition, weight, diameter, and design. This coin is
highly sought-after by collectors and is considered to be a valuable addition to any collection.
Next Article Section: The History of the Eire 1971 2p Coin
Tips for Determining the Value of an Eire 1971 2p Coin
The Eire 1971 2p coin is a rare and valuable coin. Its value can vary depending on a number of factors, including its condition, rarity, composition, weight, diameter, and design. Here are some tips
for determining the value of your Eire 1971 2p coin:
Tip 1: Check the condition of the coin. The condition of a coin is one of the most important factors in determining its value. Coins that are in mint condition are worth more than coins that are damaged or worn. When checking the condition of a coin, look for scratches, dents, and other signs of wear and tear.
Tip 2: Determine the rarity of the coin. The rarity of a coin is another important factor in determining its value. Coins that are rare are worth more than coins that are common. The mintage of a coin can give you an idea of its rarity. The mintage is the number of coins that were produced. Coins with a low mintage are rarer than coins with a high mintage.
Tip 3: Identify the composition of the coin. The composition of a coin can also affect its value. Coins that are made of valuable metals, such as gold or silver, are worth more than coins that are made of less valuable metals, such as copper or nickel. The Eire 1971 2p coin is made of bronze, which is a valuable metal.
Tip 4: Weigh the coin. The weight of a coin can also be an indicator of its value. Coins that are made of valuable metals are typically heavier than coins that are made of less valuable metals. The Eire 1971 2p coin weighs 7.12 grams.
Tip 5: Measure the diameter of the coin. The diameter of a coin can also be an indicator of its value. Coins that are larger in diameter are typically worth more than coins that are smaller in diameter. The Eire 1971 2p coin has a diameter of 25.9 mm.
Tip 6: Examine the design of the coin. The design of a coin can also affect its value. Coins that have intricate or unique designs are worth more than coins that have simple or common designs. The Eire 1971 2p coin has a harp on the obverse and the denomination “2p” on the reverse.
Summary: By following these tips, you can get a good idea of the value of your Eire 1971 2p coin. However, it is important to remember that the value of a coin can also be affected by other factors,
such as its historical significance and its demand among collectors. If you are unsure about the value of your coin, you can always consult with a professional coin dealer.
Conclusion: The Eire 1971 2p coin is a valuable and collectible coin. By understanding the factors that affect its value, you can make an informed decision about how to sell or trade your coin.
The Eire 1971 2p coin is a valuable and collectible coin, with a numismatic value that far exceeds its face value. The coin is in high demand among collectors, and careful examination is needed to
establish its precise value.
Several important factors determine the value of these coins, including their condition, rarity, composition, weight, diameter, and design. Coins that are in mint condition, rare, made of valuable
metals, and have intricate or unusual designs are worth more than those with less exceptional characteristics. By understanding these factors, coin collectors, historians, and investors can make
well-informed decisions regarding the acquisition and valuation of these coins. | {"url":"https://coinfyp.com/eire-1971-2p-coin-value/","timestamp":"2024-11-01T20:29:07Z","content_type":"text/html","content_length":"149505","record_id":"<urn:uuid:0798784f-6cc3-4cad-8a0d-2206783f5746>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00052.warc.gz"} |
To better control heap memory allocations, three existing flags based on fractions, 1/N for a provided value of N, are deprecated (-XX:MaxRAMFraction=xxx, -XX:MinRAMFraction=xxx and -XX:InitialRAMFraction=xxx) and three new flags based on percentages, from 0.0 to 100.0, are being introduced (-XX:MaxRAMPercentage, -XX:MinRAMPercentage and -XX:InitialRAMPercentage).
Using the -XX:MaxRAMFraction option, we can only set fractional values such as 1/2, 1/3, 1/4, etc. Customers would like the ability to select larger amounts, beyond 1/2 of available RAM. This can be accomplished by setting hard-coded amounts using -Xmx, but the requesting customer would like this value to be based on the amount of available memory. In the case where 60% of available host RAM is desired, the user would like a flag which would allow them to specify 60.
Deprecate three existing Hotspot flags (-XX:MaxRAMFraction=xxx, -XX:MinRAMFraction=xxx and -XX:InitialRAMFraction=xxx) and add three new flags (-XX:MaxRAMPercentage, -XX:MinRAMPercentage and -XX:InitialRAMPercentage) which allow floating-point values to be used to specify the percentage of available host memory to be used for the maximum, minimum and initial heap sizes.
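For illustration, the 60% scenario described above would then be expressed directly on the command line (the application name app.jar and the initial-percentage value are made-up examples; only MaxRAMPercentage=60.0 corresponds to the request above):
java -XX:MaxRAMPercentage=60.0 -XX:InitialRAMPercentage=10.0 -jar app.jar
With the deprecated fraction flags, the closest selectable value short of a hard-coded -Xmx is -XX:MaxRAMFraction=2, i.e. one half of available RAM.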
Here is my proposed patch for the flag name changes:
product(uintx, MaxRAMFraction, 4,
"Maximum fraction (1/n) of real memory used for maximum heap "
- "size")
+ "size. "
+ "Deprecated, use MaxRAMPercentage instead")
range(1, max_uintx)
product(uintx, MinRAMFraction, 2,
"Minimum fraction (1/n) of real memory used for maximum heap "
+ "size on systems with small physical memory size. "
+ "Deprecated, use MinRAMPercentage instead")
+ range(1, max_uintx)
+ product(uintx, InitialRAMFraction, 64,
+ "Fraction (1/n) of real memory used for initial heap size. "
+ "Deprecated, use InitialRAMPercentage instead")
+ range(1, max_uintx)
+ product(double, MaxRAMPercentage, 25.0,
+ "Maximum percentage of real memory used for maximum heap size")
+ range(0.0, 100.0)
+ product(double, MinRAMPercentage, 50.0,
+ "Minimum percentage of real memory used for maximum heap"
"size on systems with small physical memory size")
- range(1, max_uintx)
- product(uintx, InitialRAMFraction, 64,
- "Fraction (1/n) of real memory used for initial heap size")
- range(1, max_uintx)
+ range(0.0, 100.0)
+ product(double, InitialRAMPercentage, 1.5625,
+ "Percentage of real memory used for initial heap size")
+ range(0.0, 100.0)
The new flags, if specified, will override the deprecated older flags. | {"url":"https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8186315","timestamp":"2024-11-04T17:51:02Z","content_type":"text/html","content_length":"28735","record_id":"<urn:uuid:b8263058-555e-4610-9e3b-b2c8b574dc1d>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00520.warc.gz"} |
The Knight's Tour Problem (using Backtracking Algorithm)
Ever wondered how a computer playing as a chess player improves its algorithm to defeat you in the game? In this article, we will learn about the Knight's Tour Problem, the various ways to solve it
along with their time and space complexities. So, let's get started!
What is Knight's Tour Problem?
Just for a reminder, the knight is a piece in chess that usually looks like a horse and moves in an L-shaped pattern. This means it will first move two squares in one direction and then one square in
a perpendicular direction.
The Knight's Tour problem is about finding a sequence of moves for the knight on a chessboard such that it visits every square on the board exactly one time. It is a type of Hamiltonian path problem
in graph theory, where the squares represent the vertices and the knight's moves represent the edges of the graph.
This problem has fascinated many mathematicians for centuries, and the solutions they found were often very involved. The simplest approach is to enumerate every conceivable sequence of moves and select a valid one, but that takes an enormous amount of time.
One popular solution to solve the Knight's tour problem is Warnsdorff's rule, which involves choosing the next move that leads to a square with the fewest available moves. There is also a
backtracking approach.
But first moving to all that, let's take a quick understanding of the Hamiltonian path problem.
Hamiltonian Path Problem
The Hamiltonian path problem is a well-known problem in graph theory that asks whether a given graph contains a path that visits every vertex exactly once.
A path that visits every vertex exactly once is called a Hamiltonian path, and a graph that contains a Hamiltonian path is called a Hamiltonian graph.
Let's take an example of the Hamiltonian path problem. Suppose we have a graph with five vertices and the following edges:
This graph can be represented as:
The problem is to find a path that visits each vertex exactly once. In this case, a Hamiltonian path would be A-C-D-B-E, which visits each vertex exactly once. However, if we remove the edge between
D and E, the graph is no longer Hamiltonian, as there is no path that visits every vertex exactly once.
Knight's Tour Backtracking Algorithm
There are various ways to solve the Knight's Tour Problem. In the programming world, backtracking can be one answer; it is worth learning the basics of backtracking, along with some other popular problems that use it, before continuing.
The backtracking algorithm works by exploring all possible moves for the knight, starting from a given square, and backtracking to try different moves if it reaches a dead end.
Here's the basic outline of the backtracking algorithm to solve the Knight's tour problem:
1. Choose a starting square for the knight on the chessboard.
2. Mark the starting square as visited.
3. For each valid move from the current square, make the move and recursively repeat the process for the new square.
4. If all squares on the chessboard have been visited, we have found a solution. Otherwise, undo the last move and try a different move.
5. If all moves have been tried from the current square and we have not found a solution, backtrack to the previous square and try a different move from there.
6. If we have backtracked to the starting square and tried all possible moves without finding a solution, there is no solution to the problem.
We have given the full C++ program for Backtracking Algorithm to solve Knight's Tour Problem below:
#include <iostream>
#include <vector>
#include <utility>
using namespace std;

const int N = 8;
int board[N][N];          // 0 = unvisited, otherwise the move number placed on that square
int moveCount = 0;        // length of the tour found (64 on success)

// Return all on-board, unvisited squares the knight can reach from (row, col).
vector<pair<int, int>> validMoves(int row, int col) {
    vector<pair<int, int>> moves;
    int rowMoves[] = {-2, -1, 1, 2, 2, 1, -1, -2};
    int colMoves[] = {1, 2, 2, 1, -1, -2, -2, -1};
    for (int i = 0; i < 8; i++) {
        int newRow = row + rowMoves[i];
        int newCol = col + colMoves[i];
        if (newRow >= 0 && newRow < N && newCol >= 0 && newCol < N && board[newRow][newCol] == 0) {
            moves.push_back(make_pair(newRow, newCol));
        }
    }
    return moves;
}

// Place move number moveNum on (row, col), then recursively try to complete the tour.
bool solve(int row, int col, int moveNum) {
    board[row][col] = moveNum;
    if (moveNum == N * N) {       // every square visited: tour complete
        moveCount = moveNum;
        return true;
    }
    vector<pair<int, int>> moves = validMoves(row, col);
    for (pair<int, int> move : moves) {
        int newRow = move.first;
        int newCol = move.second;
        if (solve(newRow, newCol, moveNum + 1)) {
            return true;
        }
    }
    board[row][col] = 0;          // dead end: undo the move and backtrack
    return false;
}

int main() {
    int startRow = 0, startCol = 0;
    solve(startRow, startCol, 1);
    cout << "Number of moves: " << moveCount << endl;
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            cout << board[i][j] << " ";
        }
        cout << endl;
    }
    return 0;
}
Number of moves: 64
In this implementation, we first define a function validMoves which takes the current row and column of the knight as input and returns a vector of pairs representing all the valid moves that the
knight can make from that position.
The solve function is a recursive function that takes the current row and column of the knight, as well as the current move number, as input. We mark the current square as visited by setting board[row][col] to the current move number, and then we recursively try all possible moves from the current position.
If we reach the last move, then we found a solution and return true. If no valid move is found from the current position, we backtrack by setting the current square to 0 and returning false.
In the main function, we start the solve function at a specified starting position and then output the number of moves it took to find a solution and print the final chessboard with the solution.
Time & Space Complexity for Backtracking
The backtracking algorithm used to solve the Knight's Tour problem has an exponential time complexity. The number of possible paths for the knight grows very quickly as the size of the chessboard
increases, which means that the time taken to explore all possible paths grows exponentially.
In the worst case, the time complexity of the Knight's Tour backtracking algorithm is bounded by O(8^(n^2)), where n is the size of the chessboard. This is because each move has a maximum of 8 possible directions, and we may have to explore all possible moves until we find a solution.
The space complexity of the backtracking algorithm is O(n^2), where n is the size of the chessboard. So, we can say that the backtracking algorithm is efficient for smaller chessboards.
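To put this bound in perspective (simple arithmetic, not a measured running time): for the standard n = 8 board, 8^(n^2) = 8^64 is roughly 6 × 10^57 candidate move sequences, which is why the pruning that backtracking performs (abandoning a branch as soon as no valid move remains) is what makes the search finish in practice.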
Warnsdorff's Rule
Warnsdorff's rule is a greedy heuristic algorithm used to solve this problem. It tries to move the knight to the square with the fewest valid moves, hoping that this will lead to a solution.
Here's an overview of how Warnsdorff's rule works:
1. Start with a random square on the chessboard.
2. From the current square, consider all possible moves and count the number of valid moves from each adjacent square.
3. Move to the square with the lowest number of valid moves. In case of a tie, move to the square with the lowest number of valid moves from its adjacent squares.
4. Repeat steps 2 and 3 until all squares on the chessboard have been visited.
Here is the pseudocode for Warnsdorff's rule algorithm:
initialize the chessboard
choose a random starting square
while there are unvisited squares:
    mark the current square as visited
    for each adjacent square to the current square:
        count the number of valid moves from that square
    move to the adjacent square with the lowest count
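To make the rule concrete, the following minimal C++ sketch implements one greedy step. It is written to sit alongside the backtracking program above and reuses its board, N and validMoves; the helper names (degree, nextSquare, warnsdorffTour) are illustrative, and the tie-breaking refinement from step 3 is omitted for brevity.
// Number of onward moves available from square (row, col): its "degree".
int degree(int row, int col) {
    return validMoves(row, col).size();
}

// Pick the unvisited neighbour with the fewest onward moves; {-1, -1} means the knight is stuck.
pair<int, int> nextSquare(int row, int col) {
    pair<int, int> best = make_pair(-1, -1);
    int bestDegree = 9;                          // more than the 8 possible knight moves
    for (pair<int, int> m : validMoves(row, col)) {
        int d = degree(m.first, m.second);
        if (d < bestDegree) {
            bestDegree = d;
            best = m;
        }
    }
    return best;
}

// Greedy tour: apply the rule until all N*N squares are visited (or the knight gets stuck).
bool warnsdorffTour(int row, int col) {
    board[row][col] = 1;
    for (int moveNum = 2; moveNum <= N * N; moveNum++) {
        pair<int, int> next = nextSquare(row, col);
        if (next.first == -1) return false;      // stuck before completing the tour
        row = next.first;
        col = next.second;
        board[row][col] = moveNum;
    }
    return true;
}
Each step inspects at most 8 candidate squares and, for each of them, at most 8 onward moves, which is why the greedy rule is so much cheaper than exhaustive backtracking.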
The time complexity of Warnsdorff's rule algorithm is O(n^2 log n), where n is the size of the chessboard. This is because we have to visit each square once, and for each square, we have to compute
the number of valid moves from each adjacent square. The space complexity of the algorithm is O(n^2) since we need to store the chessboard and the current position of the knight.
Overall, The Knight's Tour problem is related to chess, and solving it can help chess players improve their strategy and become better at the game. In the real world, you can also use it to design
efficient algorithms for finding the shortest path between two points in a network.
Now we know The Knight's Tour Problem and its solutions using Backtracking and Warnsdorff's Rule. It has several applications in various fields such as Game theory, Network Routing etc, making it an
important problem to study in computer science and mathematics. | {"url":"https://favtutor.com/blogs/knight-tour-problem","timestamp":"2024-11-12T07:42:20Z","content_type":"text/html","content_length":"91578","record_id":"<urn:uuid:d628d21f-6efd-4172-9dc6-a12cf181ab63>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00555.warc.gz"} |
Design of Experiments - 230414 Flashcards
What is design of Experiments?
Influence of multiple inputs (factors) on outputs (responses); experiment with all the factors at the same time.
Goal: More results, less experiments.
Agriculture: what factors are important to get an outcome. Medicine, Pharmaceutical companies: which of the factors influence the other ones?
Factors: temp, humidity. Response: thickness of coating.
Trial and error, not very good method:
* Temperature, time -> Yield.
* Adjust one of them -> Yield. Adjust another one -> Yield.
OFAT (One Factor at a Time):
* More efficient, but could still fail. Not exploring the whole behavior of the system in the design space.
Full factorial DOE: Test the extremal points (+1 max and -1 min values) and its combinations (2^k).
Discover trends, not necessarily the optimal solution, less tests.
Do it twice to find if the order of the experiments does not generate any unexpected interaction and evaluate statistical scattering.
Shorter testing, cost effective, statistical tolerancing.
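A quick worked illustration (using the coating example from these cards, chosen only for illustration): with k = 2 factors, temperature and humidity, the full factorial design has 2^2 = 4 runs, namely (-1, -1), (+1, -1), (-1, +1) and (+1, +1), i.e. every combination of each factor's low and high setting; with k = 3 factors it grows to 2^3 = 8 runs.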
Expert knowledge (or random numbers and evaluate after that), select factors and response, design space, choose DOE design,
Run experiments, Find optimal settings, Test runs with optimal parameter, Modify process
1. Measurability: pressure, temperature, material types, fibre content.
2. Adjustability:
Factors: construction of design spaces.
Responses: Real effects vs. Random results, prediction of process results (approximation function)
1. Each probability lies between 0 and 1.
2. The certain (sure) event has probability 1.
3. The probability that two mutually exclusive events occur together: 0 (so the probability of their union is the sum of their individual probabilities).
How does the density function relate to the cumulative frequency (distribution function)?
1. The cumulative frequency (e.g., the failure probability) is the integral of the density function.
square root of the number of elements
Why is the Weibull distribution so widely used?
It is a single distribution family whose parameters can be varied to produce quite different shapes (exponential-like and normal-like).
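For reference (standard definition, not part of the original card): the two-parameter Weibull density is f(x) = (k/λ)·(x/λ)^(k-1)·exp(-(x/λ)^k) for x ≥ 0. With shape k = 1 it reduces exactly to the exponential distribution, while for k around 3 to 4 its shape is close to that of a normal distribution, which is why a single family can cover both behaviours.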
Hypothesis tests. Only for normally distributed data.
Null hypothesis: mean values are identical. Otherwise: there is an actual effect on the factor. | {"url":"https://www.brainscape.com/flashcards/design-of-experiments-230414-13280449/packs/21052633","timestamp":"2024-11-07T05:50:50Z","content_type":"text/html","content_length":"92653","record_id":"<urn:uuid:1ce55149-7014-4550-bf7d-7880af457319>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00051.warc.gz"} |
IOM Free Sample PDF Papers 2024 for Class 5 (Updated)
SilverZone - IOM PDF Sample Papers for Class 5
Class 5 sample paper & practice questions for International Olympiad of Mathematics (IOM) level 1 are given below. Syllabus for level 1 is also mentioned for these exams. You can refer these sample
paper & quiz for preparing for the exam.
Sample Questions from Olympiad Success:
Q.1 You have 10 tenths in your bank account. As monthly interest, you get 5 tens added. Your father added 25 tens and your mother added 30 tens to your account. You spend 2 tens on your mobile recharge. What is the amount left in your bank account if another 60 tens are added by you?
a) Rs. 1150
b) Rs. 1199
c) Rs. 1181
d) Rs. 1201
Q.2 A person sold 148.19 litres of apple juice on Wednesday and 17.12 litres more than this quantity on Thursday. In total, how many litres of apple juice did he sell?
a) 313.5 litres
b) 323.5 litres
c) 330 litres
d) 350 litres
Q.3 A sloth crawls along the circumference of a circle at a speed which lets it complete one circle in 1:18:00 hours. Find the angle that it will cover in 195 seconds:
a) 15°
b) 17°
c) 19°
d) 29°
Q.4 Sunil drove 63.9823 km to visit his grandmother. When he reached his grandmother's house he noticed that the car had used up 3.31 litre of petrol. How many km can the car go in 1 litre?
a) 19.65 km/l
b) 19.33 km/l
c) 19.5 km/l
d) 20.33 km/l
Q.5 Bhakti started boiling water for tea, and she simultaneously started boiling milk for coffee. After some time she found that the temperature of the water had risen 20 degrees above its boiling point and the temperature of the milk was 15 degrees below its boiling point. Now calculate the difference between the current temperatures of the water and the milk. (The boiling point of milk is 100.16 degrees and of water is 100 degrees.)
a) 34.84 degrees
b) 134.84 degrees
c) 194.84 degrees
d) 94.84 degrees
Q.6 In the morning, the temperature was -24℃. It decreased by 6 degrees in the afternoon and increased by 20 degrees in the evening. What was the temperature in the evening?
a) 4℃
b) (-4℃)
c) 24℃
d) (-10℃)
Q.7 Shilpa's father gave her Rs. 154 to spend. She spent Rs. 74.43 to buy some toys and Rs. 9.78 to buy some books. How much money is she left with in the end?
a) Rs 69.85
b) Rs 79.69
c) Rs 69.79
d) Rs 79.79
Q.8 Iqbal and Sachin have bookshelves of the same size and holding books of the same sizes. Iqbal's shelf is 7/8 full and Sachin's shelf is 12/15 full. If 351 more books of the same size are
distributed among them, then they can fill their shelves completely. How many books does Sachin have now?
a) 864
b) 865
c) 800
d) 850
Q.9 What will be the temperature of the 4th month if it is the average temperature of 3 months? The average temperature of the 1st month is 24℃, the average temperature of month 2 is 11℃ more than
the average of the 1st month and the average temperature of month 3 is 8 less than the sum of the average temperature of the 1st month and the second month?
a) 40.66℃
b) 36℃
c) 26.66℃
d) 36.66℃
Q.10 Your friend walks 1.749 metres in 3.3 hours. You start walking along with him but your speed is slower than his speed. You both have a common destination. It took you 5 hours to reach the
destination then what will be the distance that your friend will cover in 10 hours?
a) 5.3 m
b) 5.5 m
c) 3.599 m
d) None of the above
Sample PDF of SilverZone - International Olympiad of Mathematics (IOM) PDF Sample Papers for Class 5:
Answers to Sample Questions from Olympiad Success:
Q.1 : c | Q.2 : a | Q.3 : a | Q.4 : b | Q.5 : a | Q.6 : d | Q.7 : c | Q.8 : a | Q.9 : d | Q.10 : a | {"url":"https://www.olympiadsuccess.com/silverzone-iom-free-pdf-sample-papers-class-5","timestamp":"2024-11-11T02:01:22Z","content_type":"text/html","content_length":"160689","record_id":"<urn:uuid:09767efe-31fc-420d-9092-6434512329b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00282.warc.gz"} |
Hierarchical large-scale elastic metamaterials for passive seismic wave mitigation
Issue EPJ Appl. Metamat.
Volume 8, 2021
Frontiers in microwave, photonic, and mechanical metamaterials
Article Number 14
Number of page(s) 8
DOI https://doi.org/10.1051/epjam/2021009
Published online 08 June 2021
EPJ Appl. Metamat. 8, 14 (2021)
Research Article
Hierarchical large-scale elastic metamaterials for passive seismic wave mitigation
^1 CNRS, Centrale Lille, ISEN, Univ. Lille, Univ. Valenciennes, UMR 8520 - IEMN, 59000 Lille, France
^2 Dipartimento di Fisica, Università degli Studi di Torino, Via Pietro Giuria 1, 10125 Torino, Italy
^3 Department of Mechanical Engineering, CU Boulder, 1111 Engineering Drive, UCB 427 Boulder, CO 80309, USA
^4 Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10124 Torino, Italy
^5 Department of Physics, University of Torino, Via Pietro Giuria 1, 10125 Torino, Italy
^6 University of Trento, Laboratory of Bio-Inspired, Bionic, Nano, Meta Materials & Mechanics, Department of Civil, Environmental and Mechanical Engineering, Via Mesiano 77, 38123 Trento, Italy
^* e-mail: marco.miniaci@gmail.com; marco.miniaci@univlille.fr
^a Also at: Queen Mary University of London, School of Engineering & Materials Science, Mile End Road, London E1 4NS, UK.
Received: 5 November 2020
Accepted: 11 May 2021
Published online: 8 June 2021
Large scale elastic metamaterials have recently attracted increasing interest in the scientific community for their potential as passive isolation structures for seismic waves. In particular,
so-called “seismic shields” have been proposed for the protection of large areas where other isolation strategies (e.g. dampers) are not workable solutions. In this work, we investigate the
feasibility of an innovative design based on hierarchical design of the unit cell, i.e. a structure with a self-similar geometry repeated at different scales. Results show how the introduction of
hierarchy allows the conception of unit cells exhibiting reduced size with respect to the wavelength while maintaining the same or improved isolation efficiency at frequencies of interest for
earthquake engineering. This allows to move closer to the practical realization of such seismic shields, where low-frequency operation and acceptable size are both essential characteristics for
Key words: Seismic phononic crystals / metamaterials / hierarchical organization / transient-dynamic analysis / vibration isolation
© M. Miniaci et al., Published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Structural integrity and adherence to civil engineering requirements is of paramount importance in the construction of all types of buildings, especially when considering the protection from seismic
hazards in large or strategic constructions, such as hospitals, nuclear sites, long span bridges, dams, etc. Seismic waves derive from the superposition of different types of elastic waves, including
bulk waves (longitudinal and shear), and waves localized at the surface, also known as surface acoustic waves (SAWs) [1]. The latter are the most destructive, because of their reduced attenuation
during propagation. Indeed, their slower decay with respect to bulk waves is the main cause for partial or even total collapse of buildings and structures during earthquakes [2]. As a consequence,
the design of buildings and structures capable of withstanding large ground vibrations has been the focus of research for many decades, and a great number of isolating strategies has been proposed,
including passive, active, hybrid and semi-active approaches [3–5]. However, despite these efforts, a commonly accepted method for the design of seismic-resistant buildings has not yet been
In 1999, Meseguer et al. [6] theoretically proposed for the first time to attenuate seismic SAWs through band gaps (BGs) obtained by drilling huge periodic holes in the ground. The dimensions of the
holes proposed at that time were of the order of hundreds of meters in diameter and thousands of meters in depth. Such an extreme proposed solution derived from the measurement of the attenuation of
SAWs in a scaled experiment of a marble quarry, performed in the kHz frequency regime. After this initial intuition, many years passed without substantial further investigation linking phononic
crystals (PCs) and seismic wave attenuation, up to 2014, when Brûlé et al. [7] performed large scale experiments on meter-size periodic holes drilled in the ground. In this occasion the attenuation
potential of periodic structures was measured in the 50Hz frequency range and for bulk waves.
From then on, researchers have considered more and more types of PCs as a potential solution for reducing the displacement induced in buildings by seismic waves, including surface localized modes.
Several designs have been proposed, based on both large-scale PCs and resonant structures [8,9], trying to meet the stringent requirements of wave attenuation at extremely low frequencies (below
5Hz) [10], practical manufacturing designs [8], and other technological constraints. Among the resonator-based approaches, the most investigated consists in constructing pillar-like structures at
the surface with the goal of redirecting seismic SAWs towards the bulk at a specific angle. This approach is also known in the literature as the “resonant metawedge” [9,11]. Its main drawback is the
fact that the effect is rather narrow in frequency, since it relies on the resonance frequency of the resonators composing the system. Krödel et al. [12] showed that it was possible to enlarge the
effective frequency range of these structures by using resonators with slightly different resonance frequencies. Other researchers explored the possibility of using buried resonators [13], or
drilling periodic trench barriers [14,15], also including the possibility of saturated soil [16]. These methods also provided good attenuation, in larger frequency ranges, since their performance
derives from Bragg scattering.
In the majority of these numerical studies, the analysis has been restricted to 2D models, assuming the geometry of the resonators/cavities to be infinite in one of the two horizontal directions. As
a consequence, the full potential of the geometry of the resonators/cavities has been scarcely explored so far. Here, we overcome this limitation and, exploiting full 3D finite element modeling, we
propose an innovative design strategy based on a hierarchical organization of large-scale PCs in the horizontal plane of the ground. We show that the introduction of hierarchy allows the conception
of structures achieving isolation efficiency at lower frequencies if compared to the corresponding non hierarchical geometry, allowing to reduce the operating frequencies of the seismic barrier down
to half their “non hierarchical” values, while leaving the unit cell size unaltered. Hierarchy also allows to manipulate the effective unit cell mechanical properties without the need of multiple
materials with different mechanical characteristics. This is proved through band diagram calculation and transient dynamic analysis. Finally, the difference in terms of behavior adopting different
boundary conditions (plate-like or in a half-space assumption) is also discussed.
2 Description of the models
We consider hierarchical structures, where hierarchy is understood in the sense that a representative unit cell comprises multiple arrangements of inhomogeneities at various scales. If the same
arrangement occurs at every scale, the pattern is called self-similar [17]. In this work, we consider structures with self-similar cross-like cavities at two different hierarchical levels, as
presented in Figure 1A and B, in a sandy soil matrix with Young's modulus E[soil]=20MPa, Poisson's ratio ν[soil]=0.3, mass density ρ[soil]=1800 kg·m^−3, as reported in Table 1.
At first, we consider unit cells under the simplified assumption of free-free boundary conditions applied to the top and bottom surfaces, for the purpose of comparing the dynamic behavior of the
standard and the hierarchical unit cells. Finite element models of the ordinary and hierarchical unit cells that compose the seismic hierarchical shield are shown in Figure 1A and B. The ordinary
unit cell consists of a cross-like cavity dug into the soil (Fig. 1A). The cross-like cavity divides the unit cell into two types of regions, referred in the following as (i) masses and (ii)
connectors (see black arrows in Fig. 1A). The hierarchical unit cell is obtained by adding 4 additional cross-like cavities (see Fig. 1B and its inset), scaled down by a factor of 20, so as to fit in
the regions connecting the four large square masses of the non-hierarchical structure. The corresponding geometrical parameters are: A=5 m, B=0.9A, C=0.2A and H=A for the ordinary unit cell
and a=A/20, b=0.9a, c=0.2a and H=A for the hierarchical one. Further geometrical details are reported in Table 2. Contrary to previous studies dealing with hierarchical organization, where
the hierarchy was reproduced over the entire ordinary unit cell [17], here we introduce hierarchy only in the connectors, which are mainly responsible for the low frequency alteration of the BG [17].
The design strategy is implemented for only a single hierarchical level, but the procedure, in principle, can be recursively extended to n hierarchical levels. The inclusions have been chosen in the
shape of a cross-like cavity, due to its excellent capability to open low frequency large BGs [18,19].
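For concreteness, substituting A = 5 m into the relations above (a simple restatement of the stated ratios, not additional data) gives B = 4.5 m, C = 1 m and H = 5 m for the first hierarchical level, and a = 0.25 m, b = 0.225 m and c = 0.05 m for the scaled-down crosses of the second level.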
The band structures are computed for an infinite array of cells in the in-plane directions, i.e. with periodic conditions along the x- and y-directions, and adopting standard Bloch-Floquet formalism,
assuming a linear elastic behavior for the soil. The resulting eigenvalue problem is solved by varying the non-dimensional wavevector k^* along the irreducible path [Γ−X], with Γ ≡ (0, 0) and X ≡ (π/A, 0), with A the lattice parameter, using the commercial software COMSOL 5.5.
As a second step, we assume free boundary conditions applied to the top surface of the unit cell and of infinite half-space at the bottom surface of the unit cell, which better reflects the condition
of surface seismic wave propagation. This is modeled by adding a 8λ-long soil-like domain plus a 2λ-long PML region (where λ is the wavelength) below the unit cell, as shown in Figure 2A, where for
the sake of synthesis, only the hierarchical unit cell model is reported. Dispersion diagrams are calculated by applying the same aforementioned procedure.
Figure 2B reports a schematic representation of the finite element models used to perform dynamic transient calculations to evaluate and compare the attenuation performances of the ordinary and
hierarchical unit cells. The model consists of a soil slab of length L[x] = 300 m, infinite in width in the y-direction (obtained by applying continuity conditions) and of height L[z] = 8λ + H ≈ 100
m (λ has been chosen according to the longest wavelength for the lowest frequency of 2Hz). Elastic waves are excited at the top-left edge of the model by means of a unitary imposed displacement in
the z-direction (red arrows in Fig. 2B). Two input signals are considered: (i) a triangular-like excitation (left panel of Fig. 2C) and (ii) a 15-cycle Hanning window modulated sinusoidal tone burst
excitation (right panel of Fig. 2C), exhibiting a rather broad-band and narrow-band frequency content, respectively. Absorbing boundary (low-reflecting) conditions are imposed at the bottom and right
edges of the model, using material data from the neighboring domain to develop a perfect impedance match for pressure and shear waves, to avoid unwanted reflections. Symmetry conditions are applied
to the left edge of the model in order to be consistent with the type of induced excitation.
Fig. 1
Dispersion diagrams I − free/free assumption. (A) Schematic representation of the FE models for the ordinary and (B) hierarchical large-scale metamaterial (the inset provides an enlargement of the
region where the second hierarchical level is introduced) under the assumption of free-free boundary conditions (BCs) applied to the top and bottom surfaces of the unit cell and periodic boundary
conditions (PBCs) at lateral surfaces. Dispersion curves of the (C) ordinary and (D) hierarchical large-scale elastic metamaterial, showing a remarkable enhancement of the width of the BGs (orange
rectangles) and their shift towards lower frequencies. Geometrical parameters of the unit cells for both ordinary and hierarchical structure are provided in the text.
Table 1
Material constants for the sandy soil model used in the simulations [8].
Table 2
Geometrical parameters of the unit cells presented in Figure 1A,B and Figure 2A. All the given parameters are in [m].
Fig. 2
Numerical models. (A) Schematic representation of the FE models used for the calculation of the dispersion curves localized at the surface. Free boundary conditions are applied to the top surface
of the unit cell and a semi-infinite half-space is modeled through 8λ-long soil-like domain+a 2λ-long PML region. For the sake of synthesis, only the hierarchical unit cell is reported. (B)
Schematic representation of the FE models used for the transient calculations. In this case, only the non hierarchical model is reported. (C) Time evolution and frequency content of the input
signals used in simulations: (left panel) a triangular-like excitation and 21-cycle Hanning window modulated sinusoidal tone bursts centered at (central panel) 3.8Hz and (right panel) 2.4Hz,
respectively. The frequency spectrum clearly shows a rather broad-band and narrow-band frequency content, respectively. Displacement units are in decimeters [dm].
3 Results
Figure 1C and D reports the band structure of the ordinary and hierarchical unit cells, respectively, under the assumption of free-free boundary conditions applied to the top and bottom surfaces. We
observe that hierarchy allows to obtain (i) a remarkable enhancement of the BG width (orange rectangles), as well as (ii) a BG shift towards lower frequencies. This twofold effect is a consequence of
(i) the flattening of the dispersion curve (indicated by the black arrows in Fig. 1C,D) separating the two BGs of the ordinary structure and (ii) of a general decrease of the stiffness of the
connectors, implying a downshift of the dispersion curves in the [0−4] Hz range.
Next, the SAW band structures for the ordinary and hierarchical unit cells are investigated and reported in Figure 3A and D for the [2−8] Hz frequency range. Given the approximation of half-space
condition as a 8λ-long soil-like domain plus a 2λ-long PML region below the unit cell, a plethora of modes appear in the diagrams. However, the majority of them are not localized at the surface. To
highlight this, curves are color-coded according to the parameter p, which is an indicator of the normalized center of energy distribution along the z-axis [20]:
$p = 1 - \frac{\int_V E \cdot z \, dV}{H \int_V E \, dV}$,   (1)
having taken the integrals over the unit cell volume V, and where E is the free elastic energy density and H the total height of the unit cell. The parameter p varies from 0 to 1, where SAW-like modes are those with p approaching 0, as suggested by Graczykowski et al. [20]. We chose here p ∈ [0, 0.3]. Once this sorting rule is applied, the diagrams can be simplified, selecting only SAW modes,
as shown in Figure 3B and E. Some of the dispersion curves are not continuous along the whole Brillouin path Γ−X because they undergo a transformation from true-SAWs to pseudo-SAWs (i.e., waves
propagating with exponential attenuation, due to energy leakage into the bulk), as detailed in [20,21]. Figure 3A and D also reports the longitudinal and shear bulk wave velocities as dashed black
and red lines, respectively. This allows to evaluate the frequency of transition from surface to longitudinal/shear bulk waves and to make the distinction between true- and pseudo-SAWs.
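As a simple worked check of equation (1) (not taken from the paper): a mode whose elastic energy density is uniform over the cell depth has its centre of energy at z = H/2, so p = 1 − 1/2 = 0.5 and it is rejected by the p ≤ 0.3 criterion, whereas a mode whose energy is concentrated at the free surface drives p towards 0, consistent with the statement above that SAW-like modes have p approaching 0.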
Figure 3B and E includes SAW-type modes with a predominantly out-of-plane polarization, with a weak out-of-plane polarization, and mainly in-plane polarized (see mode shapes reported in Fig. 3C and
F). Due to the nature of the excitation source considered in this study (forced out-of-plane displacement), we are interested in SAW-type modes with mainly out-of-plane polarization and we expect
that while modes of the first type will be strongly excited in our system, those with a weak out-of-plane polarization will only provide a limited contribution to the vertical displacement, while
those of the third type (in-plane polarized) will hardly be activated. As a consequence, we can identify two types of BGs that have been color-coded, using grey for those inhibiting the propagation
of waves regardless of their polarization (in-plane, out-of-plane, mixed), and yellow to indicate BGs mainly inhibiting the propagation of in-plane polarized modes, referred to as total and partial
BGs, respectively. Comparing the two band structures of the ordinary unit cell (Fig. 3B) and of the hierarchical one (Fig. 3E), we observe that the introduction of hierarchy also leads to (i) an
enhancement of the BG width and (ii) a BG shift towards lower frequencies, as in the case of free-free boundary conditions. Specifically, the lowest BG is shifted from the [3.6−4.0] Hz range down
to [2.2−2.7] Hz, while the highest one, starting at 5.68Hz in the non-hierarchical structure, begins at 5.1Hz in the hierarchical case. Notice that this is obtained while keeping the unit cell
lattice parameter size unaltered. However, it is also important to remark that BGs are strongly reduced in width and appear in rather different frequency ranges, if compared to those deriving from
the application of free-free boundary conditions.
The confirmation of the lowest BG opening/shift is then verified through a transient numerical simulation of wave propagation conducted on a finite size domain comprising a sandy soil block and 4
unit cells disposed in the x-direction, as shown in Figure 2B. Elastic waves are excited at the top-left edge of the model by means of a unitary imposed displacement in the z-direction (red arrows in
Fig. 2B), according to the time-histories reported in Figure 2C of Section 2. Such pulses have been chosen according to the band structure reported in Figure 3B and E and to highlight the filtering
capabilities of the designed hierarchical large scale PC compared with the non-hierarchical one. In both excitation cases, transient explicit simulations have been run for T=8 s in order to allow
the wave coming from the left side to reach the right end of the simulation domain.
The out-of-plane time transient displacements for both types of excitation are recorded at the end of the periodic structure (see Fig. 2B) and Fourier transformed to highlight the differences of the
two responses for the ordinary and hierarchical large-scale PCs both in the time and frequency domains. Figure 4A reports the displacement along the z-direction, as well as its frequency content (
Fig. 4B) for the excitation case of the triangular pulse reported in Figure 2C, characterized by a broad-band frequency content. The strongest achieved attenuation is shifted from slightly below 6Hz
for the ordinary structure to 5.1Hz in the hierarchical case (corresponding to the beginning of the total BG). Examining the frequency content of the transmitted displacement field at lower
frequencies, it emerges that the hierarchical case is also capable of providing large attenuation in the [2.2−2.7] Hz frequency regime, in good agreement with the dispersion diagrams reported in
Figure 3B and E. The slightly higher amplitude level observed in the partial BG frequency regions (i.e., those covered by the yellow rectangles) derives from modes that, although mainly in-plane
polarized, still exhibit some out-of-plane displacement components. This is in good agreement with the two different dispersion diagrams and clearly confirms the possibility of using hierarchical
unit cell organization to downshift stop-band behavior.
To gain further insight into this behavior, the time-domain simulations are repeated, this time with the previously defined narrow band signal (see Sect. 2), with modulations centered at 3.8 and
2.4Hz for the non-hierarchical and hierarchical cases, respectively. Figure 5 shows the instantaneous out-of-plane displacement fields at specific times, namely before and after the wave has reached
the periodic structures, for both the non-hierarchical and hierarchical cases. The wave pulse is strongly reflected in both cases. However, from a quantitative point of view, the maximum recorded
displacements with respect to a model where the surface wave propagates freely (i.e. without any PC) are almost 8 and 3 times larger for the non-hierarchical and hierarchical cases, respectively.
Note that part of the attenuation efficiency difference between the two cases is directly related to the fact that the hierarchical structure is evaluated at a lower frequency, namely 2Hz instead of
Fig. 3
Dispersion diagrams II − free/half-space assumption. Row dispersion diagrams deriving from the Bloch-Floquet calculations for the (A) ordinary and (D) hierarchical large-scale metamaterial, under
the hypothesis of free boundary condition applied to the top surface of the unit cell and semi-infinite half-space modeled through 8λ-long soil-like domain+a 2λ-long PML region on the bottom
surface of the unit cell. Curves are color-coded allowing to highlight modes localized at the surface (p≤0.3) and those radiating in the bulk (p>0.3). (B, E) Dispersion diagrams reporting only
the modes localized at the surface for the dispersion diagrams reported in (A) and (D), respectively. Total and partial BGs are reported as grey and yellow rectangles, respectively. (C, F) Mode
shapes with large vertical components predominantly out-of-plane, in-plane and mixed polarization are indicated with corresponding symbols in the (B, E) band diagrams.
Fig. 4
Time histories and Fourier transform. Out-of-plane time transient displacement and its Fourier transform for the broad-band excitation for both the ordinary (A, black line) and hierarchical (B,
blue line) structures. The signals are acquired immediately after the end of the periodic structure. The Fourier transforms of the signals allow to show the difference of the two responses in terms
of frequency content, showing the ability of the hierarchical unit cell to provide large attenuation in the [2.2−2.7] Hz frequency range and above 5.1Hz, in good agreement with the dispersion
diagrams reported in Figure 3B and E. BGs calculated in Figure 3B and E are superimposed onto each frequency response, to visually highlight how the frequency responses matches the dispersion
Fig. 5
Full wavefield reconstruction. Out-of-plane full wavefield for the (A) ordinary and (B) hierarchical seismic barrier at two different time steps: before (top panels) and after (bottom panels) the
wave has reached the periodic structure.
4 Conclusion
This study contributes to the conception of a new generation of earthquake-proof barriers capable of protecting sensitive or strategic structures, introducing the concept of hierarchical design. We
have investigated the effect of hierarchy on dispersion diagrams and verified results through time-transient simulations, and highlighted the possibility of realizing unit cells capable of opening
frequency BGs below 5Hz, despite being similar in size to previous designs. Results prove the strategy to be practical for civil structures.
MaMi has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement N. 754364. NK, FB, ASG, MO have received funding
from the project “Metapp” (No. CSTO160004) funded by Fondazione San Paolo. MaMi, MaMO, FB, ASG and NMP are funded by the European Union's Horizon 2020 FET Open (“Boheme”) under grant agreement No.
Cite this article as: Marco Miniaci, Nesrine Kherraz, Charles Cröenne, Matteo Mazzotti, Maryam Morvaridi, Antonio S. Gliozzi, Miguel Onorato, Federico Bosia, Nicola Maria Pugno, Hierarchical
large-scale elastic metamaterials for passive seismic wave mitigation, EPJ Appl. Metamat. 8, 14 (2021)
Current usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on
Vision4Press platform.
Data correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days.
Initial download of the metrics may take a while. | {"url":"https://epjam.edp-open.org/articles/epjam/full_html/2021/01/epjam200018/epjam200018.html","timestamp":"2024-11-02T04:45:59Z","content_type":"text/html","content_length":"113910","record_id":"<urn:uuid:b1693ed2-0750-4862-895e-db626cc91ad4>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00898.warc.gz"} |
Lab Report 1: PHASE VOCODER PITCH-SHIFT
Oliver Marlan 311271383 (SID)
Lab Report 1: PHASE VOCODER PITCH-SHIFT
Digital Audio Systems, DESC9115, Semester 1 2012
Graduate Program in Audio and Acoustics
Faculty of Architecture, Design and Planning, The University of Sydney
For this Lab Report I will be looking at a Matlab function that performs a pitch shift effect using Phase Vocoder. I have supplied the main function ‘pitchshift_oly.m’ along with ’fusionFrames.m’ and
‘createFrames’ which are required for certain parts of the function to work.
The audio I have supplied to be used with the function is a short segment of dialogue called ‘heroin’.
The script I have supplied to run the function is called ’script_for_pitchshift.m’
This script will import the audio file ‘heroin’ using the wavread function. It will then perform the function pitchshift_oly, play the pitch-shifted output of ‘heroin’ and make it into a wave file
called ‘ShiftedHeroin’ using the wavwrite function.
Finally, it will plot the output ‘ShiftedHeroin’ in a simple frequency vs. time plot.
What does the function do?
The simple answer is that this function will alter the pitch of an input audio file by an amount specified by the user.
There are, however, a number of processes that occur within the function that make this happen.
The main steps are:
- The audio signal is separated into frames
- Phase Vocoder method applied
- Resampled to be played at chosen sample rate.
Firstly, Input signal is divided into frames of size determined by input argument ‘windowSize’ with the frames overlapping each other. In this case, I have set the frame size to 1024 samples, as this
is appropriate with a sampling frequency of 44,100 samples per second.
The distance between the beginnings of consecutive frames is called the ‘hop’, and it determines the number of samples by which the frames overlap each other. The size of the hop is set by the input argument ‘hopSize’. By setting the hop to 256 samples, consecutive frames overlap each other by 75%.
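To spell out the numbers used here (simple arithmetic on the values already given): with a 1024-sample window and a 256-sample hop, consecutive frames share 1024 − 256 = 768 samples, i.e. 768/1024 = 75% overlap, and at 44,100 samples per second each hop advances the analysis by roughly 5.8 ms.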
A Hanning window is applied to each frame, and then each window is analyzed in the frequency domain using the Fast Fourier Transform (FFT).
The FFT takes N consecutive samples out of the signal z(n) and performs a mathematical operation to yield N samples X(k) of the spectrum of the signal. [2]
Now that we are in the frequency domain, the phase difference between frames, called the phase-shift, is calculated.
The difference between frames is now removed so that they can be shifted in time without too many discontinuities or undesired artifacts in later processing.
The inverse discrete Fourier transform (IDFT) is performed on each frame spectrum.
The IDFT allows the transform from the discrete-frequency domain to the discrete-time domain. The IDFT algorithm is given by:
x(n) = (1/N) · Σ_{k=0}^{N−1} X(k) · e^{j2πkn/N},  n = 0, 1, …, N−1
The result is then windowed with a Hanning window to obtain the synthesis frame. Windowing is used this time to smooth the signal. This process is shown in the synthesis-stage equation of [1].
(Synthesis stage equation)
Each frame is then overlap-added as shown in the overlap-add equation of [1]; one index counts the frame number and a unit step function positions each frame in time.
(Overlap-add of the synthesis frames equation)
Finally, the signal is resampled to the new pitch as specified by the input argument ‘step’.
One step is equal to one semitone. So if you input a value of 4 for ‘step’, the pitch will be shifted by 2 whole tones, and if you input a value of 12, the pitch will be shifted by an octave. The ratio by which the pitch is shifted is called the ‘scaling factor’ and is computed in the function as:
alpha = 2^(step/12);
This can be used to find out what the final frequency will be after the pitch shifting has taken place, as expressed by f_shifted = alpha · f_original = 2^(step/12) · f_original.
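As a quick check of this relation (step values chosen purely for illustration, not the ones used in the supplied script): step = 12 gives alpha = 2^(12/12) = 2, so the pitch rises by one octave; step = -12 gives alpha = 0.5, one octave down; and step = 0 leaves the pitch unchanged.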
A procedure called interpolation is used to perform the resampling task. This involves placing a number of zero-valued samples between each of the samples obtained from memory. The resulting signal is a digital impulse train containing the desired voice band. [3]
Plots and Graphs
FIG 1. Original ‘heroin’ waveform FIG 2. Output ‘ShiftedHeroin’ waveform
Figures 1 and 2 clearly show that some amplitude content has been altered, while the duration of the signal has not changed at all, which is the desired affect.
FIG 3. Original ‘heroin’ Freq. Response FIG 4. Output ‘ShiftedHeroin’ freq. response
Figures 3 and 4 show the frequency response before and after the pitch shift effect.
We can clearly see that the pitch of the signal has been lowered as desired, while the harmonic structure of the signal has been maintained as desired.
[1] François Grondin, “Guitar Pitch Shifter”, Matlab code and mathematical equations
[2] DAFX: Digital Audio Effects, Chapter 1, West Sussex: John Wiley & Sons, 2002.
[3] The Scientist and Engineer's Guide to DSP, Chapter 3, Stephen Smith, Digital Book. | {"url":"https://docsbay.net/doc/606872/lab-report-1-phase-vocoder-pitch-shift","timestamp":"2024-11-10T10:49:51Z","content_type":"text/html","content_length":"17322","record_id":"<urn:uuid:752ee268-a28f-40b3-b0b4-d5e300cca6bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00637.warc.gz"} |
drifter 0.2.1
• tiny corrections requested by CRAN
drifter 0.2
• DALEX2 is replaced by DALEX
• ceterisParibus2 is replaced by ingredients
drifter 0.1
• calculate_covariate_drift() calculates 1d inverse intersection distances between two datasets
• calculate_residuals_drift() calculates 1d inverse intersection distances between two residuals calculated on old and new data
• calculate_model_drift() calculates distances between PDP curves calculated for new and old model
• check_drif() executes all tests for drift | {"url":"http://cran.stat.auckland.ac.nz/web/packages/drifter/news/news.html","timestamp":"2024-11-12T20:20:32Z","content_type":"application/xhtml+xml","content_length":"1980","record_id":"<urn:uuid:96aed5b6-d469-43b5-b7ed-03173747093b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00589.warc.gz"} |
Design of the class-e power amplifier with finite dc feed inductance under maximum-rating constraints
This paper proposes an analytical expression set to determine the maximum values of currents and voltages in the Class-E Power Amplifier (PA) with Finite DC-Feed Inductance (FDI) under the following
assumptions—ideal components (e.g., inductors and capacitors with infinite quality factor), a switch with zero rise and fall commutation times, zero on-resistance, and infinite off-resistance, and an
infinite loaded quality factor of the output resonant circuit. The developed expressions are the average supply current, the RMS (Root Mean Square) current through the DC-feed inductance, the peak
voltage and current in the switch, the RMS current through the switch, the peak voltages of the output resonant circuit, and the peak voltage and current in the PA load. These equations were obtained
from the circuit analysis of this ideal amplifier and curve-fitting tools. Furthermore, the proposed expressions are a useful tool to estimate the maximum ratings of the amplifier components. The
accuracy of the expressions was analyzed by the circuit simulation of twelve ideal amplifiers, which were designed to meet a wide spectrum of application scenarios. The resulting Mean Absolute
Percentage Error (MAPE) of the maximum-rating constraints estimation was 2.64%.
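For reference, the MAPE figure quoted above is normally computed as the mean of the relative absolute errors between simulated and estimated values; a minimal sketch (the variable names are illustrative, not from the paper):

```python
import numpy as np

def mape(simulated, estimated):
    """Mean Absolute Percentage Error, in percent."""
    simulated = np.asarray(simulated, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    return 100.0 * np.mean(np.abs((simulated - estimated) / simulated))

# e.g. mape([10.0, 20.0], [10.5, 19.0]) == 5.0
```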
ECCC - Reports beginning with G
Other G
TR24-070 | 10th April 2024
Xinyu Mao, Guangxu Yang, Jiapeng Zhang
Gadgetless Lifting Beats Round Elimination: Improved Lower Bounds for Pointer Chasing
The notion of query-to-communication lifting theorems is a generic framework to convert query lower bounds into two-party communication lower bounds. Though this framework is very generic and
beautiful, it has some inherent limitations such as it only applies to lifted functions. In order to address this issue, we propose gadgetless ... more >>>
TR00-068 | 13th July 2000
Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, Robert E. Schapire
Gambling in a rigged casino: The adversarial multi-armed bandit problem
In the multi-armed bandit problem, a gambler must decide which arm
of K non-identical slot machines to play in a sequence of trials
so as to maximize his reward.
This classical problem has received much attention because of the
simple model it provides of the trade-off between
exploration ... more >>>
TR15-149 | 8th September 2015
Mohammad Hajiabadi, Bruce Kapron
Gambling, Computational Information, and Encryption Security
We revisit the question, originally posed by Yao (1982), of whether encryption security may be characterized using computational information. Yao provided an affirmative answer, using a
compression-based notion of computational information to give a characterization equivalent to the standard computational notion of semantic security. We give two other equivalent characterizations.
... more >>>
TR15-021 | 5th February 2015
Stephen A. Fenner, Daniel Grier, Jochen Messner, Luke Schaeffer, Thomas Thierauf
Game Values and Computational Complexity: An Analysis via Black-White Combinatorial Games
A black-white combinatorial game is a two-person game in which the pieces are colored either black or white. The players alternate moving or taking elements of a specific color designated to them
before the game begins. A player loses the game if there is no legal move available for his ... more >>>
TR24-053 | 10th March 2024
Noam Mazor, Rafael Pass
Gap MCSP is not (Levin) NP-complete in Obfustopia
Revisions: 1
We demonstrate that under believable cryptographic hardness assumptions, Gap versions of standard meta-complexity problems, such as the Minimum Circuit Size problem (MCSP) and the Minimum
Time-Bounded Kolmogorov Complexity problem (MKTP) are not NP-complete w.r.t. Levin (i.e., witness-preserving many-to-one) reductions.
In more detail:
- Assuming the existence of indistinguishability obfuscation, and ... more >>>
TR98-026 | 5th May 1998
Richard Beigel
Gaps in Bounded Query Hierarchies
Prior results show that most bounded query hierarchies cannot
contain finite gaps. For example, it is known that
$P_{(m+1)\text{-}tt}^{SAT} = P_{m\text{-}tt}^{SAT}$ implies $P_{btt}^{SAT} = P_{m\text{-}tt}^{SAT}$
and for all sets $A$
- $FP_{(m+1)\text{-}tt}^{A} = FP_{m\text{-}tt}^{A}$ implies $FP_{btt}^{A} = FP_{m\text{-}tt}^{A}$
- $P_{(m+1)\text{-}T}^{A} = P_{m\text{-}T}^{A}$ implies $P_{bT}^{A} = $ ... more >>>
TR17-067 | 21st April 2017
Benny Applebaum
Garbled Circuits as Randomized Encodings of Functions: a Primer
Yao's garbled circuit construction is a central cryptographic tool with numerous applications. In this tutorial, we study garbled circuits from a foundational point of view under the framework of \emph{randomized encoding} (RE) of functions. We review old and new constructions of REs, present some lower bounds, and describe some applications. ... more >>>
TR18-027 | 8th February 2018
Jaroslaw Blasiok, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, Madhu Sudan
General Strong Polarization
Revisions: 1
Arıkan's exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix $M$, a family of polar codes can be
associated with this matrix and its ability to approach capacity follows from the $\textit{polarization}$ of an associated $[0,1]$-bounded martingale, ... more >>>
TR14-040 | 30th March 2014
Hamed Hatami, Pooya Hatami, Shachar Lovett
General systems of linear forms: equidistribution and true complexity
Revisions: 1
The densities of small linear structures (such as arithmetic progressions) in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier
analysis examines such averages by approximating the indicator function of a subset by a function of bounded number of polynomials. Then, to approximate ... more >>>
TR08-027 | 4th December 2007
Till Tantau
Generalizations of the Hartmanis-Immerman-Sewelson Theorem and Applications to Infinite Subsets of P-Selective Sets
The Hartmanis--Immerman--Sewelson theorem is the classical link between the exponential and the polynomial time realm. It states that NE = E if, and only if, every sparse set in NP lies in P. We
establish similar links for classes other than sparse sets:
1. E = UE if, and only ... more >>>
TR05-142 | 1st December 2005
Vadim Lyubashevsky, Daniele Micciancio
Generalized Compact Knapsacks are Collision Resistant
The generalized knapsack problem is the following: given $m$ random
elements $a_1,\ldots,a_m\in R$ for some ring $R$, and a target $t\in
R$, find elements $z_1,\ldots,z_m\in D$ such that $\sum{a_iz_i}=t$
where $D$ is some given subset of $R$. In (Micciancio, FOCS 2002),
it was proved that for appropriate choices of $R$ ... more >>>
TR04-095 | 3rd November 2004
Daniele Micciancio
Generalized compact knapsacks, cyclic lattices, and efficient one-way functions from worst-case complexity assumptions
We investigate the average case complexity of a generalization of the compact knapsack problem to arbitrary rings: given $m$ (random) ring elements a_1,...,a_m in R and a (random) target value b in
R, find coefficients x_1,...,x_m in S (where S is an appropriately chosen subset of R) such that a_1*x_1 ... more >>>
TR18-074 | 23rd April 2018
Daniel Kane, Shachar Lovett, Shay Moran
Generalized comparison trees for point-location problems
Let $H$ be an arbitrary family of hyper-planes in $d$-dimensions. We show that the point-location problem for $H$
can be solved by a linear decision tree that only uses a special type of queries called \emph{generalized comparison queries}. These queries correspond to hyperplanes that can be written as a linear
... more >>>
TR97-061 | 12th November 1997
Eli Biham, Dan Boneh, Omer Reingold
Generalized Diffie-Hellman Modulo a Composite is not Weaker than Factoring
The Diffie-Hellman key-exchange protocol may naturally be
extended to k>2 parties. This gives rise to the generalized
Diffie-Hellman assumption (GDH-Assumption).
Naor and Reingold have recently shown an efficient construction
of pseudo-random functions and reduced the security of their
construction to the GDH-Assumption.
In this note, we ... more >>>
TR18-064 | 3rd April 2018
Markus Bläser, Christian Ikenmeyer, Gorav Jindal, Vladimir Lysikov
Generalized Matrix Completion and Algebraic Natural Proofs
Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk (Proc. of the 49th Annual {ACM} {SIGACT} Symposium on Theory of Computing (STOC), pages {653--664}, 2017) and
independently by Grochow, Kumar, Saks and Saraf~(CoRR, abs/1701.01717, 2017) as an attempt to transfer Razborov and Rudich's famous barrier result (J. Comput. ... more >>>
TR13-103 | 24th July 2013
Gábor Ivanyos, Marek Karpinski, Youming Qiao, Miklos Santha
Generalized Wong sequences and their applications to Edmonds' problems
We design two deterministic polynomial time algorithms for variants of a problem introduced by Edmonds in 1967: determine the rank of a matrix $M$ whose entries are homogeneous linear polynomials
over the integers. Given a linear subspace $\mathcal{B}$ of the $n \times n$ matrices over some field $\mathbb{F}$, we consider ... more >>>
TR12-170 | 30th November 2012
Scott Aaronson, Travis Hance
Generalizing and Derandomizing Gurvits's Approximation Algorithm for the Permanent
Around 2002, Leonid Gurvits gave a striking randomized algorithm to approximate the permanent of an n×n matrix A. The algorithm runs in $O(n^2/\varepsilon^2)$ time, and approximates Per(A) to within $\pm\varepsilon\|A\|^n$
additive error. A major advantage of Gurvits's algorithm is that it works for arbitrary matrices, not just for nonnegative matrices. ... more >>>
TR96-007 | 29th January 1996
Miklos Ajtai
Generating Hard Instances of Lattice Problems
Comments: 1
We give a random class of n dimensional lattices so that, if
there is a probabilistic polynomial time algorithm which finds a short
vector in a random lattice with a probability of at least 1/2
then there is also a probabilistic polynomial time algorithm which
solves the following three ... more >>>
TR13-185 | 24th December 2013
Fu Li, Iddo Tzameret
Generating Matrix Identities and Proof Complexity Lower Bounds
Revisions: 3
Motivated by the fundamental lower bounds questions in proof complexity, we investigate the complexity of generating identities of matrix rings, and related problems. Specifically, for a field $\mathbb{F}$ let $A$ be a non-commutative (associative) $\mathbb{F}$-algebra (e.g., the algebra Mat$_d(\mathbb{F})$ of $d\times d$ matrices over $\mathbb{F}$). We say that a non-commutative ... more >>>
TR04-037 | 14th April 2004
Elmar Böhler, Christian Glaßer, Bernhard Schwarz, Klaus W. Wagner
Generation Problems
Given a fixed computable binary operation f, we study the complexity of the following generation problem: The input consists of strings a1,...,an,b. The question is whether b is in the closure of
{a1,...,an} under operation f.
For several subclasses of operations we prove tight upper and lower bounds for the ... more >>>
TR05-060 | 30th May 2005
Philippe Moser
Generic Density and Small Span Theorem
We refine the genericity concept of Ambos-Spies et al, by assigning a real number in $[0,1]$ to every generic set, called its generic density.
We construct sets of generic density any E-computable real in $[0,1]$.
We also introduce strong generic density, and show that it is related to packing ... more >>>
TR05-162 | 23rd December 2005
Yunlei Zhao, Jesper Buus Nielsen, Robert H. Deng, Feng Dengguo
Generic yet Practical ZK Arguments from any Public-Coin HVZK
Revisions: 1
In this work, we present a generic yet practical transformation from any public-coin honest-verifier zero-knowledge (HVZK) protocols to normal zero-knowledge (ZK) arguments. By ``generic", we mean
that the transformation is applicable to any public-coin HVZK protocol under any one-way function (OWF) admitting Sigma-protocols. By ``practical" we mean that the transformation ... more >>>
TR19-060 | 18th April 2019
Scott Aaronson, Guy Rothblum
Gentle Measurement of Quantum States and Differential Privacy
In differential privacy (DP), we want to query a database about $n$ users, in a way that "leaks at most $\varepsilon$ about any individual user," even conditioned on any outcome of the query.
Meanwhile, in gentle measurement, we want to measure $n$ quantum states, in a way that "damages the ... more >>>
TR96-053 | 6th August 1996
Yosi Ben-Asher, Ilan Newman
Geometric Approach for Optimal Routing on Mesh with Buses
Revisions: 1
The architecture of 'mesh of buses' is an important model in parallel computing. Its main advantage is that the additional broadcast capability can be used to overcome the main disadvantage of the
mesh, namely its relatively large diameter. We show that the addition of buses indeed accelerates routing times. Furthermore, ... more >>>
TR20-029 | 6th March 2020
Swastik Kopparty, Guy Moshkovitz, Jeroen Zuiddam
Geometric Rank of Tensors and Subrank of Matrix Multiplication
Motivated by problems in algebraic complexity theory (e.g., matrix multiplication) and extremal combinatorics (e.g., the cap set problem and the sunflower problem), we introduce the geometric rank as
a new tool in the study of tensors and hypergraphs. We prove that the geometric rank is an upper bound on the ... more >>>
TR96-052 | 2nd October 1996
Martin Dietzfelbinger
Gossiping and Broadcasting versus Computing Functions in Networks
The fundamental assumption in the classical theory of
dissemination of information in interconnection networks
(gossiping and broadcasting) is that atomic pieces of information
are communicated. We show that, under suitable assumptions about
the way processors may communicate, computing an n-ary function
that has a "critical input" (e.g., ... more >>>
TR05-116 | 12th October 2005
Alex Samorodnitsky, Luca Trevisan
Gowers Uniformity, Influence of Variables, and PCPs
Gowers introduced, for $d\geq 1$, the notion of dimension-$d$ uniformity $U^d(f)$
of a function $f: G \to \mathbb{C}$, where $G$ is a finite abelian group and $\mathbb{C}$ are the
complex numbers. Roughly speaking, if a function has small Gowers uniformity
of dimension d, then it ``looks random'' on ... more >>>
TR12-050 | 25th April 2012
Avraham Ben-Aroya, Gil Cohen
Gradual Small-Bias Sample Spaces
Revisions: 3
A $(k,\epsilon)$-biased sample space is a distribution over $\{0,1\}^n$ that $\epsilon$-fools every nonempty linear test of size at most $k$. Since they were introduced by Naor and Naor [SIAM J.
Computing, 1993], these sample spaces have become a central notion in theoretical computer science with a variety of applications.
When ... more >>>
TR15-162 | 9th October 2015
Eric Allender, Joshua Grochow, Cris Moore
Graph Isomorphism and Circuit Size
Revisions: 1
We show that the Graph Automorphism problem is ZPP-reducible to MKTP, the problem of minimizing time-bounded Kolmogorov complexity. MKTP has previously been studied in connection with the Minimum
Circuit Size Problem (MCSP) and is often viewed as essentially a different encoding of MCSP. All prior reductions to MCSP have applied ... more >>>
TR10-050 | 25th March 2010
Samir Datta, Prajakta Nimbhorkar, Thomas Thierauf, Fabian Wagner
Graph Isomorphism for $K_{3,3}$-free and $K_5$-free graphs is in Log-space
Graph isomorphism is an important and widely studied computational problem, with
a yet unsettled complexity.
However, the exact complexity is known for isomorphism of various classes of
graphs. Recently [DLN$^+$09] proved that planar graph isomorphism is complete for log-space.
We extend this result of [DLN$^+$09] further
to the ... more >>>
TR02-037 | 21st May 2002
Vikraman Arvind, Piyush Kurur
Graph Isomorphism is in SPP
We show that Graph Isomorphism is in the complexity class
SPP, and hence it is in $\oplus P$ (in fact, it is in $\mathrm{Mod}_k P$ for
each $k\geq 2$). We derive this result as a corollary of a more
general result: we show that a {\em generic problem} $\FINDGROUP$ has
an $\FP^{\SPP}$ ... more >>>
TR99-033 | 19th August 1999
Vikraman Arvind, J. Köbler
Graph Isomorphism is Low for ZPP$^{\mbox{\rm NP}}$ and other Lowness results.
We show the following new lowness results for the probabilistic
class ZPP$^{\mbox{\rm NP}}$.
1. The class AM$\cap$coAM is low for ZPP$^{\mbox{\rm NP}}$.
As a consequence it follows that Graph Isomorphism and several
group-theoretic problems known to be in AM$\cap$coAM are low for
ZPP$^{\mbox{\rm ... more >>>
TR10-117 | 22nd July 2010
Arkadev Chattopadhyay, Jacobo Toran, Fabian Wagner
Graph Isomorphism is not AC^0 reducible to Group Isomorphism
We give a new upper bound for the Group and Quasigroup
Isomorphism problems when the input structures
are given explicitly by multiplication tables. We show that these problems can be computed by polynomial size nondeterministic circuits of unbounded fan-in with $O(\log\log n)$ depth and $O(\log^2 n)$ nondeterministic bits, ... more >>>
TR15-032 | 21st February 2015
Vikraman Arvind, Johannes Köbler, Gaurav Rattan, Oleg Verbitsky
Graph Isomorphism, Color Refinement, and Compactness
Revisions: 2
Color refinement is a classical technique used to show that two given graphs $G$ and $H$
are non-isomorphic; it is very efficient, although it does not succeed on all graphs. We call a graph $G$ amenable to color refinement if the color-refinement procedure succeeds in distinguishing $G$
from any non-isomorphic ... more >>>
TR11-077 | 8th May 2011
Albert Atserias, Elitza Maneva
Graph Isomorphism, Sherali-Adams Relaxations and Expressibility in Counting Logics
Two graphs with adjacency matrices $\mathbf{A}$ and $\mathbf{B}$ are isomorphic if there exists a permutation matrix $\mathbf{P}$ for which the identity $\mathbf{P}^{\mathrm{T}} \mathbf{A} \mathbf{P}
= \mathbf{B}$ holds. Multiplying through by $\mathbf{P}$ and relaxing the permutation matrix to a doubly stochastic matrix leads to the notion of fractional isomorphism. We show ... more >>>
TR98-075 | 9th December 1998
Adam Klivans, Dieter van Melkebeek
Graph Nonisomorphism has Subexponential Size Proofs Unless the Polynomial-Time Hierarchy Collapses.
We establish hardness versus randomness trade-offs for a
broad class of randomized procedures. In particular, we create efficient
nondeterministic simulations of bounded round Arthur-Merlin games using
a language in exponential time that cannot be decided by polynomial
size oracle circuits with access to satisfiability. We show that every
language with ... more >>>
TR06-116 | 19th July 2006
Amin Coja-Oghlan
Graph partitioning via adaptive spectral techniques
We study the use of spectral techniques for graph partitioning. Let G=(V,E) be a graph whose vertex set has a ``latent'' partition V_1,...,V_k. Moreover, consider a ``density matrix'' E=(E_vw)_{v,w
in V} such that for v in V_i and w in V_j the entry E_{vw} is the fraction of all possible ... more >>>
TR98-031 | 4th May 1998
Dimitris Fotakis, Paul Spirakis
Graph Properties that Facilitate Travelling
In this work, we study two special cases of the metric Travelling Salesman
Problem, Graph TSP and TSP(1,2). At first, we show that dense instances of
TSP(1,2) and Graph TSP are essentially as hard to approximate as general
instances of TSP(1,2).
Next, we present an NC algorithm for TSP(1,2) that ... more >>>
TR99-047 | 10th November 1999
Wolfgang Slany
Graph Ramsey games
We consider combinatorial avoidance and achievement games
based on graph Ramsey theory: The players take turns in coloring
still uncolored edges of a graph G, each player being assigned a
distinct color, choosing one edge per move. In avoidance games,
completing a monochromatic subgraph isomorphic to ... more >>>
TR14-121 | 22nd September 2014
Sebastian Mueller
Graph Structure and Parity Games
We investigate the possible structural changes one can perform on a game graph without destroying the winning regions of the players playing a parity game on it. More precisely, given a game graph $
(G,p)$ for which we can efficiently compute winning regions, can we remove or add a vertex or ... more >>>
TR11-032 | 11th March 2011
Fabian Wagner
Graphs of Bounded Treewidth can be Canonized in AC$^1$
In recent results the complexity of isomorphism testing on
graphs of bounded treewidth is improved to TC$^1$ [GV06] and further to LogCFL [DTW10].
The computation of canonical forms or a canonical labeling provides more information than
isomorphism testing.
Whether canonization is in NC or even TC$^1$ was stated ... more >>>
TR18-049 | 14th March 2018
Stasys Jukna, Hannes Seiwert
Greedy can also beat pure dynamic programming
Revisions: 1
Many dynamic programming algorithms are ``pure'' in that they only use min or max and addition operations in their recursion equations. The well known greedy algorithm of Kruskal solves the minimum
weight spanning tree problem on $n$-vertex graphs using only $O(n^2\log n)$ operations. We prove that any pure DP algorithm ... more >>>
TR16-145 | 16th September 2016
Markus Bläser, Gorav Jindal, Anurag Pandey
Greedy Strikes Again: A Deterministic PTAS for Commutative Rank of Matrix Spaces
Revisions: 2
We consider the problem of commutative rank computation of a given matrix space, $\mathcal{B}\subseteq\mathbb{F}^{n\times n}$. The problem is fundamental, as it generalizes several computational
problems from algebra and combinatorics. For instance, checking if the commutative rank of the space is $n$, subsumes problems such as testing perfect matching in graphs ... more >>>
TR10-151 | 30th September 2010
Raghunath Tewari, N. V. Vinodchandran
Green’s Theorem and Isolation in Planar Graphs
We show a simple application of Green’s theorem from multivariable calculus to the isolation problem in planar graphs. In particular, we construct a skew-symmetric, polynomially bounded, edge weight
function for a directed planar graph in logspace such that the weight of any simple cycle in the graph is non-zero with ... more >>>
TR05-149 | 7th December 2005
Eric Allender, David Mix Barrington, Tanmoy Chakraborty, Samir Datta, Sambuddha Roy
Grid Graph Reachability Problems
Revisions: 1
We study the complexity of restricted versions of st-connectivity, which is the standard complete problem for NL. Grid graphs are a useful tool in this regard, since
* reachability on grid graphs is logspace-equivalent to reachability in general planar digraphs, and
* reachability on certain classes of grid graphs gives ... more >>>
TR14-073 | 14th May 2014
Shachar Lovett, Cris Moore, Alexander Russell
Group representations that resist random sampling
Revisions: 1
We show that there exists a family of groups $G_n$ and nontrivial irreducible representations $\rho_n$ such that, for any constant $t$, the average of $\rho_n$ over $t$ uniformly random elements
$g_1, \ldots, g_t \in G_n$ has operator norm $1$ with probability approaching 1 as $n \rightarrow \infty$. More quantitatively, we ... more >>> | {"url":"https://eccc.weizmann.ac.il/title/G","timestamp":"2024-11-07T16:09:07Z","content_type":"application/xhtml+xml","content_length":"58893","record_id":"<urn:uuid:799012cc-862e-4356-919a-8c1614b4993a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00777.warc.gz"} |
Confusion with the Lesson
So I am very confused with this lesson. I am not sure how to explain it to my students when I don’t understand it for myself.
I have a Master’s degree in computer science and I don’t understand all the math involved in this. I freely admit that to my students. I think the key take away is that you can have a public key
that’s used to encode a message but unless you have the private key you can’t decode it. The internet is secure because it would take far too long to guess the private key and decode the message.
What do others think?
The math is hard to understand and students do not need to know the math. I don't spend too much time on the activity guide except demonstrating the beans and cups activity. I stress how the mod function is a one-way function that makes it hard to solve the problem backwards, i.e. to determine the original number from the remainder alone. Here is a video that has a simple explanation of public and private key encryption. I hope this helps.
Sangeeta (@bhatnagars) is right. Students don't need to know the math, only that asymmetric encryption is a thing that allows anyone to encrypt things with a public key, but only the person in
possession of a private key can decrypt.
As an author of this lesson I SO WANT people to understand the math though – actually what I want is really just for anyone to take one step further to understand why/how it's possible (with math) to create these public and private keys, because I find it unsatisfying any time someone says what I just said above.
I don't think it's the math itself that is hard to understand (it's just multiplication + modulo), but rather the application and certain properties of numbers and how they work. Also, there are a lot of steps and ideas you have to string together, and I can see how it's easy to get lost in the steps. So this is making me want to take another crack at it, just for the satisfaction.
Would it be worth it to do this?
Maybe if there was just a video solving a few simple versions of this problem without the widget it would be helpful
I can appreciate your passion for the math involved, and what students’ understanding of that math might accomplish, but I personally feel that that level of complexity should be explored in an
actual math class. My understanding has always been that CSP is more of a survey course, one that allows students to be exposed to the “big ideas” of computer science, with the hope that they will
get excited about what they’re learning and possibly go on to major in CS or pursue some related degree, where perhaps understanding concepts like these at a much deeper level is necessitated. My 2¢
I am a first-time AP CSP teacher. In the beginning I was very excited to teach this lesson and my kids were very excited too. But even with my degree in computer science and engineering, it took me one week to wrap my head around the math and the properties of the modulo operation.
So I focused more on the modulo operation and the takeaways from the worksheet. I thought it's fine as long as the students understand the MOD operation, prime factorization, and how the use of large numbers makes public key encryption possible.
I had the students play around with the public key crypto widget and had them see the difference when they choose small numbers vs. big numbers for the public modulus. I didn't go into the algorithm and math itself, but I did put out the resource included in the lesson "How and Why Does the Public Key Crypto Really Work?". It actually explains the math behind it very well.
With all this being said, I too agree that, just like "TSP" and "MST", it is a very complicated topic to be handled in a high school computer science class.
The problem I’m having with this lesson has less to do with the math and more to do with how all of the discussion, activities and their associated parts are all supposed to tie nicely together to
make kids say, ‘oh, I get it’.
For instance, before getting into the math, the last abstract vision of encryption came from the cup and beans activity. In that activity, there was a private key and a public ‘cup’, so the kids are
primed to make connections back to that activity once modulo makes an appearance.
Things seem promising when doing the clock arithmetic thought experiment. They totally get that there are infinite possibilities for the amount of time that has passed between 4:00 and 3:00. And then
they (and I) get lost in the modulus clock rabbit hole. My kids understand the modulo operation, and can calculate it without the widget. They breeze through Step 1 of the Multiplication + Modulo
activity guide. At this point everyone is still trusting that we’re going someplace with this…
The kids move on to Step 2 of the activity guide. The first part (top half, labelled ‘Experiment’) is just a calculator exercise. And the point was that…um…there is no pattern to the computation of
each modulus problem even after holding either A or B constant and incrementing the other one up in a pattern? Not sure, so let’s go finish out the second part (bottom half, labelled ‘Experiment 2’).
The first question I always get is, “What was so hard about that?” The kids easily figure out that they need to find a number that when divided by 101 will leave a remainder of 1, and start to
generate a list before even reading the ‘Takeaways’ section: 102, 203, 304, etc. So then they use logic to tackle the first problem: Is there a number that when multiplied by 2 will give a result of
102 or 203, or 304, etc? Since they get a nice whole number of 51 on the first try, they stop there and move on to the next problem, using the same logic.
However, the confusion comes in when they read the Takeaway. It says, “Thanks to some special properties of prime numbers with a prime clock size there’s only one solution to each modulus equation.”
Huh? Why can’t 152 also be a solution to the first problem? Reading further there is an implication that the solution has to be less than the clock size (which would then jibe with the theorem that
the above bolded statement is based on), but since they didn’t know where the activity was taking them, there is no ‘aha’ moment. They’re just confused. They don’t know why solving the equations was
supposed to be hard, especially since none of them just randomly guessed answers. They also have no clue how all the arithmetic relates to private and public keys (i.e. How is the private and public
key generated?), although there still seems to be a promise that the explanation is coming. In other words, we get so far into the weeds that by the time Activity 3 rolls around, everyone is so
exhausted from trying to figure out where it’s all going that no one cares anymore. Keep in mind that it’s not the complexity of the math (with such small numbers), but with trying to connect it back
to a cup and beans. After 2 years of doing this part of the lesson, I’m thinking that just giving them a quicker intro to the modulus, along with having them entertain the thought of a prime clock
size of 50 digits, would help to get their minds in a better place for the last activity with the public key crypto widget.
Just trying to make sense of it all. I suspect many teachers would rather skip this lesson than try to do the same and risk coming up short in a classroom full of kids (you can only do so much
hand-waving), but I’m going to keep trying.
@asalas thanks for your take on the task! I also notice that students feel comfortable with cups/beans as a demo and they can see that modulo is kinda cool and that it is a “one way function” which
makes it harder to “undo” than simple addition/subtraction from cups and beans.
I have moved to demoing the widget in class where the whole class is “Eve” and myself and another student play Alice and Bob. I focus on the takeaway as “see, Eve has no idea what is going on!” and
then I share the “How does this actually work?” handout with the class as optional reading.
Again, this is just another “twist” in the lesson. If you open up the purple book, this is what it says about the knowledge needed for the AP test:
So… as a math teacher, I am thinking “WHAT?! But the math is the fun part!” But I also know that by spending too much time talking math, some of my students will think “This is NOT for me, I am not a
math person” (unfortunately, student identities as “mathematicians” have already been formed by the time they get to me… and that identity is hard to change, BUT their identities as computer
scientists are being formed in my class each day).
Again, I am always tweaking this lesson too - I haven’t found the perfect balance but I do think students get what they need to out of the lesson for the AP test. I also know my die-hard math kids
will read the document that talks about how it really works and feel fulfilled.
I am wondering how others adapt this lesson for their contexts. What do you add, take out, tweak or ignore?
Thanks again Ms. Salas - your “making sense of it” helped me think about the lesson too - I am sure I will tweak it again next year based on your thoughts here!
Thanks so much for responding Kaitie. It’s always great to meet a fellow math teacher --I like the math too!!! It’s fun to introduce kids who have taken calculus to an operation they have never heard
of and to be able to answer the proverbial question, ‘when is anyone ever going to use this?’ They all agree at this point that encryption is fundamentally important to their lives, and they have
been hooked enough over the previous weeks to actually care (or at least be curious) about the mathematical detail.
However, I am a curriculum writer at heart. And I love, love, love this curriculum. Even all the typos are charming to me, and I interpret them as a sign that the writers care more about content than
spelling (although one day I might just submit every spelling correction to this forum in one big file). Occasionally though, there is a gap. And my brain wants to fill that gap. Such is the case
with parts of this lesson.
The irony is that there are more addendums and resources for this lesson than any other one in the entire curriculum! All, I’m sure, in an attempt to fill gaps and answer questions. The resources are
great, and you can tell they are well researched and someone probably spent days laboring over how to present the ideas in them. But there are a few key unaddressed parts that remain that if I
understood conceptually, I could then edit the activity guide and make this lesson one that both teachers and kids find both powerful and empowering.
So I presented this lesson today. It wasn’t bad, but I wouldn’t say it was good. There was some side-eye from the kids. Mostly there was obedient respect. That might be the dream of some teachers,
but I prefer the opportunity to answer the targeted questions of kids who mostly understand the concept while just needing a few points of clarification. If that’s not happening, it means they mostly
don’t understand the concept and don’t want to look dumb by asking too many questions. I can relate
Hi Baker,
So I have finally pored through the entire lesson again and I can now articulate something that is really bothering me. I’m hoping you can help me out.
The equation for finding Alice’s public key is: (private * public) mod clock = 1.
An example given in the ‘How and Why does Public Key Crypto Work?’ resource is the equation:
(x * 5) mod 17 = 1.
The narrative goes on to say that the only way to solve this is through brute force trial and error, that there is no way to approximate or get close to or narrow down the answer in any way. (The
same language was used for the examples in the ‘Multiplication + Modulo Activity Guide’, going so far as to state that solving these types of equations was ‘hard’).
However, what I see is an equation that can be solved by using a little more than trial and error. For instance, you can start by calculating (1 times 17) + 1 to get a value that would yield a result of 1 when reduced mod 17. Then you would check to see if this value is divisible by 5. It is not, so the next step is to try (2 times 17) + 1 and check for divisibility by 5 again. Bingo, I found x.
Almost every kid used this method to solve the equations in the activity guide, which were: (2 * x) mod 101 =1 and (3 * x) mod 101 =1 and (4 * x) mod 101 = 1. If they hadn’t read ahead to the
Takeaway, they generated the list 102, 203, 304, etc. by calculating (1 times 101) + 1, (2 times 101) + 1, (3 times 101) + 1, etc and then checking to see if these values were divisible by the 2, 3,
and 4 in the equations.
I know this process is much more laborious with large numbers, but since the language used in the narrative made it seem like wild guesses were the only way to solve these, I keep thinking that I’m
missing something.
Thank you for any insight you can provide.
I think you’ve pretty much got it. The process you are describing IS simply trial and error. You’re using a little human intuition along the way, but if you had to write a program to try to find a
solution, you would do something that would run through a set of possibilities. And for very small modulus(es) you can do this very easily. However, you are not “solving for x” using any formula. You
are using a procedure (an algorithm) for trying out some possibilities until you find a solution. Even if you have some way to cut down the possible values quickly, it’s still brute force. The number
of possibilities you have to try to crack the secret value is at least proportional to the size of the modulus, whereas Alice and Bob’s calculations stay the same in terms of the amount of time they
need to compute.
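A minimal sketch of the kind of brute-force search being described (the function name is made up; 5 and 17 are the example values from earlier in the thread):

```python
def crack_private_key(public, clock):
    """Eve's brute-force attack: try every candidate x until (public * x) mod clock == 1.
    The number of candidates grows with the size of the public modulus (clock)."""
    for x in range(1, clock):
        if (public * x) % clock == 1:
            return x
    return None

print(crack_private_key(5, 17))  # 7, since (5 * 7) mod 17 == 1
```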
So as your adversary here I would ask: (1) did you try some of the 4-digit values for modulus? (2) what if the public modulus was a 20-digit number, or even a 100-digit number? It would take Alice &
Bob just about the same amount of time to do their calculations with big numbers but it would take Eve much much much longer. It might be unreasonable, in fact.
As mentioned in that doc I think you were referring to, this procedure of using (private * public) mod clock = 1 is actually NOT how the real thing works – it is a function that shares many of the
same properties as RSA encryption. You can see the real math here: https://simple.wikipedia.org/wiki/RSA_algorithm At its core you'll see that RSA uses multiplication + modulo, but also properties
of exponentiation and prime numbers and a few more calculations to arrive at the public/private key pair. Not to mention, much much, larger numbers.
However, we think that using the modular multiplicative inverse is a good facsimile of the public-private key scenario that is accessible to high schoolers. That said, this lesson is indeed
complicated because to understand what’s happening you do need to daisy-chain a lot of ideas together. The whole notion that Bob can send Alice a secret message using publicly available info that
only Alice can decrypt takes a while to wrap your head around in the first place. We see a lot of videos for novices (including our own!) that use analogies for how public and private keys work, that
don’t use numbers at all. Our aim here was to make a widget that shows some numbers and allows a student to see what it takes to crack some simple values. It’s not too bad on a small scale, but
hopefully you could imagine with some really big numbers (as real encryption works) it would be hard.
Anyway, I hope that helps. Happy to keep chatting about it. Let me know if you have questions.
This was my second year trying this lesson. I pretty much did what you did and had two students be Alice and Bob. I first had Alice guess Bob’s number and we did this a few times with small numbers,
each time using slightly larger numbers. Then I had the rest of the class be Eve, and made Alice and Bob use small numbers again. Each time we gradually increased the size of the numbers.
I made this Walk Through, this helped me visualize the instructions.
For the cups and beans activity, although it seems like every time I do this, the students miscount!!! BUT I can’t even count how many times I have referred back to this activity.
If we don’t worry about the modulo being a prime number, there’s an easy (and non trial & error) approach to this problem. Pick the prikey and pubkey and solve for the modulo (clock).
Set some value for pubkey and prikey (I picked these out of the air)
prikey = 29
pubkey = 39
clock = prikey * pubkey -1
So, clock = 1130
If you want to select a pubkey and generate a modulo such that the modulo is a prime number, it’s a bit more work. You can write a program that sets a prikey and solves for a pubkey with a prime
number modulo. I wrote this up in Python, if anyone is interested (I think it works reliably | {"url":"https://forum.code.org/t/confusion-with-the-lesson/19797?u=bhatnagars","timestamp":"2024-11-08T18:46:44Z","content_type":"text/html","content_length":"60705","record_id":"<urn:uuid:43b7e255-c0fb-4512-8e0a-0f99af1eb093>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00150.warc.gz"} |
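A possible sketch of the approach described (this is not the poster's actual program; the function names and the starting search value are made up):

```python
def is_prime(n):
    """Simple trial-division primality test; fine for classroom-sized numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def make_keys(prikey, start=2):
    """Fix a private key, then search for a public key such that
    clock = prikey * pubkey - 1 is prime, so (prikey * pubkey) mod clock == 1."""
    pubkey = start
    while True:
        clock = prikey * pubkey - 1
        if is_prime(clock):
            return pubkey, clock
        pubkey += 1

pubkey, clock = make_keys(29)
print(pubkey, clock, (29 * pubkey) % clock)  # e.g. 6 173 1
```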
Learning and Testing Junta Distributions with Subcube Conditioning
We study the problems of learning and testing junta distributions on {-1, 1}^n with respect to the uniform distribution, where a distribution p is a k-junta if its probability mass function p(x)
depends on a subset of at most k variables. The main contribution is an algorithm for finding relevant coordinates in a k-junta distribution with subcube conditioning [Bhattacharyya and Chakraborty (2018); Canonne et al. (2019)]. We give two applications:
• An algorithm for learning k-junta distributions with Õ(k/ε^2) log n + O(2^k/ε^2) subcube conditioning queries, and
• An algorithm for testing k-junta distributions with Õ((k + √n)/ε^2) subcube conditioning queries.
All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour et al. (2016).
Bibliographical note
Publisher Copyright:
© 2021 X. Chen, R. Jayaram, A. Levi & E. Waingarten.
ASJC Scopus subject areas
• Artificial Intelligence
• Software
• Control and Systems Engineering
• Statistics and Probability
Adjustable Robust Nonlinear Network Design under Demand Uncertainties
We study network design problems for nonlinear and nonconvex flow models under demand uncertainties. To this end, we apply the concept of adjustable robust optimization to compute a network design
that admits a feasible transport for all, possibly infinitely many, demand scenarios within a given uncertainty set. For solving the corresponding adjustable robust mixed-integer nonlinear
optimization problem, we show that a given network design is robust feasible, i.e., it admits a feasible transport for all demand uncertainties, if and only if a finite number of worst-case demand
scenarios can be routed through the network. We compute these worst-case scenarios by solving polynomially many nonlinear optimization problems. Embedding this result for robust feasibility in an
adversarial approach leads to an exact algorithm that computes an optimal robust network design in a finite number of iterations. Since all of the results are valid for general potential-based flows,
the approach can be applied to different utility networks such as gas, hydrogen, or water networks. We finally demonstrate the applicability of the method by computing robust gas networks that are
protected from future demand fluctuations.
In mathematics, the indefinite orthogonal group, O(p, q) is the Lie group of all linear transformations of an n-dimensional real vector space that leave invariant a nondegenerate, symmetric bilinear
form of signature (p, q), where n = p + q. It is also called the pseudo-orthogonal group^[1] or generalized orthogonal group.^[2] The dimension of the group is n(n − 1)/2.
The indefinite special orthogonal group, SO(p, q) is the subgroup of O(p, q) consisting of all elements with determinant 1. Unlike in the definite case, SO(p, q) is not connected – it has 2
components – and there are two additional finite index subgroups, namely the connected SO^+(p, q) and O^+(p, q), which has 2 components – see § Topology for definition and discussion.
The signature of the form determines the group up to isomorphism; interchanging p with q amounts to replacing the metric by its negative, and so gives the same group. If either p or q equals zero,
then the group is isomorphic to the ordinary orthogonal group O(n). We assume in what follows that both p and q are positive.
The group O(p, q) is defined for vector spaces over the reals. For complex spaces, all groups O(p, q; C) are isomorphic to the usual orthogonal group O(p + q; C), since the transform ${\displaystyle
z_{j}\mapsto iz_{j}}$ changes the signature of a form. This should not be confused with the indefinite unitary group U(p, q) which preserves a sesquilinear form of signature (p, q).
In even dimension n = 2p, O(p, p) is known as the split orthogonal group.
Squeeze mappings, here r = 3/2, are the basic hyperbolic symmetries.
The basic example is the squeeze mappings, which is the group SO^+(1, 1) of (the identity component of) linear transforms preserving the unit hyperbola. Concretely, these are the matrices ${\displaystyle \left[{\begin{smallmatrix}\cosh(\alpha )&\sinh(\alpha )\\\sinh(\alpha )&\cosh(\alpha )\end{smallmatrix}}\right],}$ and can be interpreted as hyperbolic rotations, just as the group SO(2)
can be interpreted as circular rotations.
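A quick numerical check (an illustrative sketch, not part of the original article) that such a matrix preserves the form x^2 - y^2, and hence the unit hyperbola:

```python
import numpy as np

alpha = 0.7                                    # an arbitrary "rapidity" parameter
L = np.array([[np.cosh(alpha), np.sinh(alpha)],
              [np.sinh(alpha), np.cosh(alpha)]])

v = np.array([1.0, 0.0])                       # a point on the unit hyperbola x^2 - y^2 = 1
w = L @ v
print(w[0]**2 - w[1]**2)                       # ~1.0, so the hyperbola is preserved
```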
In physics, the Lorentz group O(1,3) is of central importance, being the setting for electromagnetism and special relativity. (Some texts use O(3,1) for the Lorentz group; however, O(1,3) is
prevalent in quantum field theory because the geometric properties of the Dirac equation are more natural in O(1,3).)
Matrix definition
One can define O(p, q) as a group of matrices, just as for the classical orthogonal group O(n). Consider the ${\displaystyle (p+q)\times (p+q)}$ diagonal matrix ${\displaystyle g}$ given by
${\displaystyle g=\mathrm {diag} (\underbrace {1,\ldots ,1} _{p},\underbrace {-1,\ldots ,-1} _{q}).}$
Then we may define a symmetric bilinear form ${\displaystyle [\cdot ,\cdot ]_{p,q}}$ on ${\displaystyle \mathbb {R} ^{p+q}}$ by the formula
${\displaystyle [x,y]_{p,q}=\langle x,gy\rangle =x_{1}y_{1}+\cdots +x_{p}y_{p}-x_{p+1}y_{p+1}-\cdots -x_{p+q}y_{p+q}}$ ,
where ${\displaystyle \langle \cdot ,\cdot \rangle }$ is the standard inner product on ${\displaystyle \mathbb {R} ^{p+q}}$ .
We then define ${\displaystyle \mathrm {O} (p,q)}$ to be the group of ${\displaystyle (p+q)\times (p+q)}$ matrices that preserve this bilinear form:^[3]
${\displaystyle \mathrm {O} (p,q)=\{A\in M_{p+q}(\mathbb {R} )|[Ax,Ay]_{p,q}=[x,y]_{p,q}\,\forall x,y\in \mathbb {R} ^{p+q}\}}$ .
More explicitly, ${\displaystyle \mathrm {O} (p,q)}$ consists of matrices ${\displaystyle A}$ such that^[4]
${\displaystyle gA^{\mathrm {tr} }g=A^{-1}}$ ,
where ${\displaystyle A^{\mathrm {tr} }}$ is the transpose of ${\displaystyle A}$ .
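As a small illustrative sketch (not from the article), this condition can be checked numerically; the condition g Aᵀ g = A⁻¹ stated above is equivalent to Aᵀ g A = g, which is what the test below verifies:

```python
import numpy as np

def in_O_pq(A, p, q, tol=1e-9):
    """Check whether A preserves the signature-(p, q) form, i.e. A^T g A = g
    with g = diag(1, ..., 1, -1, ..., -1) (p ones, q minus-ones)."""
    g = np.diag([1.0] * p + [-1.0] * q)
    return np.allclose(A.T @ g @ A, g, atol=tol)

# Example: a hyperbolic rotation lies in O(1, 1)
a = 0.3
boost = np.array([[np.cosh(a), np.sinh(a)],
                  [np.sinh(a), np.cosh(a)]])
print(in_O_pq(boost, 1, 1))   # True
```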
One obtains an isomorphic group (indeed, a conjugate subgroup of GL(p + q)) by replacing g with any symmetric matrix with p positive eigenvalues and q negative ones. Diagonalizing this matrix gives a
conjugation of this group with the standard group O(p, q).
The group SO^+(p, q) and related subgroups of O(p, q) can be described algebraically. Partition a matrix L in O(p, q) as a block matrix:
${\displaystyle L={\begin{pmatrix}A&B\\C&D\end{pmatrix}}}$
where A, B, C, and D are p×p, p×q, q×p, and q×q blocks, respectively. It can be shown that the set of matrices in O(p, q) whose upper-left p×p block A has positive determinant is a subgroup. Or, to
put it another way, if
${\displaystyle L={\begin{pmatrix}A&B\\C&D\end{pmatrix}}\;\mathrm {and} \;M={\begin{pmatrix}W&X\\Y&Z\end{pmatrix}}}$
are in O(p, q), then
${\displaystyle (\operatorname {sgn} \det A)(\operatorname {sgn} \det W)=\operatorname {sgn} \det(AW+BY).}$
The analogous result for the bottom-right q×q block also holds. The subgroup SO^+(p, q) consists of matrices L such that det A and det D are both positive.^[5]^[6]
For all matrices L in O(p, q), the determinants of A and D have the property that ${\textstyle {\frac {\det A}{\det D}}=\det L}$ and that ${\displaystyle |{\det A}|=|{\det D}|\geq 1.}$ ^[7] In
particular, the subgroup SO(p, q) consists of matrices L such that det A and det D have the same sign.^[5]
Assuming both p and q are positive, neither of the groups O(p, q) nor SO(p, q) are connected, having four and two components respectively. π₀(O(p, q)) ≅ C₂ × C₂ is the Klein four-group, with each factor being whether an element preserves or reverses the respective orientations on the p and q dimensional subspaces on which the form is definite; note that reversing orientation on only one of these subspaces reverses orientation on the whole space. The special orthogonal group has components π₀(SO(p, q)) = {(1, 1), (−1, −1)}, each of which either preserves both orientations or reverses both orientations, in either case preserving the overall orientation.
The identity component of O(p, q) is often denoted SO^+(p, q) and can be identified with the set of elements in SO(p, q) that preserve both orientations. This notation is related to the notation O^+
(1, 3) for the orthochronous Lorentz group, where the + refers to preserving the orientation on the first (temporal) dimension.
The group O(p, q) is also not compact, but contains the compact subgroups O(p) and O(q) acting on the subspaces on which the form is definite. In fact, O(p) × O(q) is a maximal compact subgroup of O(
p, q), while S(O(p) × O(q)) is a maximal compact subgroup of SO(p, q). Likewise, SO(p) × SO(q) is a maximal compact subgroup of SO^+(p, q). Thus, the spaces are homotopy equivalent to products of
(special) orthogonal groups, from which algebro-topological invariants can be computed. (See Maximal compact subgroup.)
In particular, the fundamental group of SO^+(p, q) is the product of the fundamental groups of the components, π₁(SO^+(p, q)) = π₁(SO(p)) × π₁(SO(q)), and is given by:
│ π₁(SO^+(p, q)) │ p = 1 │ p = 2 │ p ≥ 3 │
│ q = 1 │ C₁ │ Z │ C₂ │
│ q = 2 │ Z │ Z × Z │ Z × C₂ │
│ q ≥ 3 │ C₂ │ C₂ × Z │ C₂ × C₂ │
Split orthogonal group
In even dimensions, the middle group O(n, n) is known as the split orthogonal group, and is of particular interest, as it occurs as the group of T-duality transformations in string theory, for
example. It is the split Lie group corresponding to the complex Lie algebra so[2n] (the Lie group of the split real form of the Lie algebra); more precisely, the identity component is the split Lie
group, as non-identity components cannot be reconstructed from the Lie algebra. In this sense it is opposite to the definite orthogonal group O(n) := O(n, 0) = O(0, n), which is the compact real form
of the complex Lie algebra.
The group SO(1, 1) may be identified with the group of unit split-complex numbers.
In terms of being a group of Lie type – i.e., construction of an algebraic group from a Lie algebra – split orthogonal groups are Chevalley groups, while the non-split orthogonal groups require a
slightly more complicated construction, and are Steinberg groups.
Split orthogonal groups are used to construct the generalized flag variety over non-algebraically closed fields.
Some papers have disputed when cost differentials among privately provided public goods make income transfer policy effective. This paper clarifies the different assumptions underlying this dispute and shows that original cost equalization is a necessary and sufficient condition for transfer neutrality to hold. Keywords: artificial cost differential.
While conventional approaches to fiscal decentralization suggest that decentralization lowers the power of redistribution among regions, recent theories argue that fiscal decentralization works as a
commitment device. In this manner, where the budget in a given region is highly dependent on transfers from the central government, there is an incentive for effort following fiscal decentralization.
The former effect is argued to increase regional inequality, while the latter suggests a decrease in regional inequality. However no known empirical work has directly examined the relationship
between fiscal decentralization and regional inequality. In this paper, cross-sectional data for the United States, excluding the convergence of regional income, are used to derive the net
relationship. It is also the case that the direction of this effect on regional inequality depends on how fiscal decentralization is promoted. While the former distribution effect directly depends on
the central government's share of power, the latter incentive effect depends on autonomy. Two measures that represent the power of the central government and autonomy are used to identify these
effects. The results indicate that local expenditure or revenue share in fiscal decentralization has no significant effect on regional inequality, while the achievement of autonomy by fiscal
decentralization has a negative effect on regional inequality. This supports the theory that fiscal decentralization works as a commitment device. The results also show that how fiscal
decentralization is promoted is important for how it impacts on regional inequality.
A possible rationale for institutional conservatism, i.e., reluctance to adjust actions in accordance with external environmental changes, may be found in the payoff stabilisation effect it
strategically affords. Suppose, for example, that one of the duopolists is capable of adjusting its action, either price or quantity, in response to unexpected demand fluctuations. Then the other
duopolist, if incapable of such adjustments, recuperates some of the meager opportunities when the shock is negative whilst forgoing lucrative profit opportunities when the demand shock is positive,
thereby "smoothes" its profits across varying states of demand in exchange for a small loss in expected profits, as opposed to when being as adjustable as its competitor. Similar qualitative results
hold true both in Cournot and in Bertrand, and by extension, in a larger class of situations where economic decision makers interact through either strategic substitution or strategic
Frequent revision of firms' actions helps sustain tacit collusion. Even when some, not all, firms revise their actions with enhanced frequency, the change contributes positively to collusive
sustainability, i.e., lowering the critical discount factor. In this sense, the added frequency in revising actions can be viewed as a common good shared among oligopolists. Particularly noteworthy
is the fact that, in a large class of environments, a firm's deviation can be deterred by no more than one punisher, implying that at most two firms need to invest in frequent revision in order to
sustain collusion.
The electronic and magnetic properties of concentrated and diluted ferromagnetic semiconductors are investigated by using the Kondo lattice model, which describes an interband exchange coupling
between itinerant conduction electrons and localized magnetic moments. In our calculations, the electronic problem and the local magnetic problem are solved separately. For the electronic part an
interpolating self-energy approach together with a coherent potential approximation (CPA) treatment of a dynamical alloy analogy is used to calculate temperature-dependent quasiparticle densities of
states and the electronic self-energy of the diluted local-moment system. For constructing the magnetic phase diagram we use a modified RKKY theory by mapping the interband exchange to an effective
Heisenberg model. The exchange integrals appear as functionals of the diluted electronic self-energy being therefore temperature- and carrier-concentration-dependent and covering RKKY as well as
double exchange behavior. The disorder of the localized moments in the effective Heisenberg model is solved by a generalized locator CPA approach. The main results are: 1) extremely low carrier
concentrations are sufficient to induce ferromagnetism; 2) the Curie temperature exhibits a strikingly non-monotonic behavior as a function of carrier concentration with a distinct maximum; 3) $T_C$
curves break down at critical $n/x$ due to antiferromagnetic correlations and 4) the dilution always lowers $T_C$ but broadens the ferromagnetic region with respect to carrier concentration. Comment:
11 pages, 5 figures
In his seminal work on fiscal federalism, Oates (1972) addressed the so-called Decentralization Theorem, which states that, if such factors as scale economies and spillovers are left out of
consideration, a decentralized system is always more efficient than a centralized system for the supply of local public goods. Based on his analytical framework, we contrarily show that a centralized
system is at times more efficient than a decentralized system under a democratic decision rule (Proposition 2). The key to such a possibility is the interests of minorities that may be sacrificed in
each lower district under decentralization. That is, when the majority adopts an extreme policy that is far from minorities' tastes in a lower district under decentralization, if instead a moderate
policy which is closer to minorities' tastes were chosen under centralization, then the interests of minorities would be saved. As a result, centralization could attain higher social welfare than decentralization.
In this paper, it is shown that real indeterminacy of stationary equilibria generically arises in most matching models with perfectly divisible fiat money. In other words, the real indeterminacy
follows from the condition for stationarity of money holdings, and surprisingly it has nothing to do with the other specifications, e.g., the bargaining procedures, of the models. Thus if we assume
the divisibility of money in money search models, it becomes quite difficult to make accurate predictions of the effects of some policies. | {"url":"https://core.ac.uk/search/?q=authors%3A(Akai)","timestamp":"2024-11-11T08:54:00Z","content_type":"text/html","content_length":"120617","record_id":"<urn:uuid:f3e547e3-1d20-4bb9-8c19-128853149b8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00598.warc.gz"} |
Gross: Function Definitions and Calls
Let us start turning the previous language, Fraud, into a real programming language. Let us say you write an expression to compute a particular value. Writing an expression for that will only work
for the single computation you have written down. For example, if you want to compute the factorial of 5, you will have to write the program (* 5 (* 4 (* 3 (* 2 1)))) that will compute the result as 120. If you want to find the factorial of any number, you will have to write down an infinite number of expressions in Fraud. This means the expressiveness of our language is still severely limited.
The solution is to bring in the computational analog of inductive data. Just like we can describe our programs by inductively composing the structs we defined, our language needs to be able to define
arbitrarily long running computations. Crucially, these arbitrarily long running computations need to be described by finite sized programs. The analog of inductive data are recursive functions.
So let’s now remove this limitation by incorporating functions, and in particular, recursive functions, which will allow us to compute over arbitrarily large data with finite-sized programs. We will
call this language Gross.
Concrete Syntax
Note: Complete implementation of the code shown in this unit can be found in the gross implementation on GitHub.
We will extend the syntax by introducing a new syntactic category of programs, which consists of a sequence of function definitions followed by an expression:

(define (f0 x00 ...) e0)
(define (f1 x10 ...) e1)
...
e
And the syntax of expressions will be extended to include function calls:

(fi e0 ...)

where fi is one of the function names defined in the program.
Note that functions can have any number of parameters and, symmetrically, calls can have any number of arguments. A program consists of zero or more function definitions followed by an expression.
An example concrete Gross program is:
(define (fact n)
(if (zero? n) 1
(* n (fact (sub1 n)))))
(fact 5)
Thus the complete grammar for Gross reflects these additions: we have added definitions and function application to expressions, and programs are a sequence of definitions followed by an expression.
Abstract Syntax
We will define new structs for the AST corresponding to the new additions we made to the concrete syntax. The grammar for the AST nodes is captured in the type comments in the code below. Defining the new structs, our AST definition looks like:
#lang racket
(provide Val Var UnOp BinOp If Let App Defn Prog Err Err?)
;; type Expr =
;; | (Val v)
;; | (Var x)
;; | (UnOp u e)
;; | (BinOp b e e)
;; | (If e e e)
;; | (Let x e e)
;; | (App f es)
(struct Val (v) #:prefab)
(struct Var (x) #:prefab)
(struct UnOp (u e) #:prefab)
(struct BinOp (b e1 e2) #:prefab)
(struct If (e1 e2 e3) #:prefab)
(struct Let (x e1 e2) #:prefab)
(struct App (f args) #:prefab)
;; type Definition =
;; | (Defn f xs e)
(struct Defn (f args e) #:prefab)
;; type Prog =
;; | (Prog ds e)
(struct Prog (defns e) #:prefab)
(struct Err (err) #:prefab)
If you notice carefully, you will see our programs are not just a single s-expression any more. A program is a sequence of s-expressions that needs to be parsed into the AST. All but the last should be definitions, with the final one being an expression, as in all previous languages. So we will have to change our parser accordingly.
We update the parse function to add a match clause for function application, where the first item is matched to be a symbol, i.e., the name of the function, followed by the list of all arguments that
are parsed to make the App struct.
;; S-Expr -> Expr
(define (parse s)
(match s
[(? integer?) (Val s)]
[(? boolean?) (Val s)]
[(? symbol?) (Var s)]
[(list (? unop? u) e) (UnOp u (parse e))]
[(list (? binop? b) e1 e2) (BinOp b (parse e1) (parse e2))]
[`(if ,e1 ,e2 ,e3) (If (parse e1) (parse e2) (parse e3))]
[`(let ((,x ,e1)) ,e2) (Let x (parse e1) (parse e2))]
[`(,(? symbol? f) ,@args) (App f (map parse args))]
[_ (error "Parse error!")]))
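As a quick check, parsing a call expression produces an App node wrapping the parsed arguments (fact is simply treated as a symbol here; nothing in the parser checks that it is actually defined):

> (parse '(fact 5))
'#s(App fact (#s(Val 5)))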
Now, to parse function definitions, we define a new function parse-defn that, given an s-expression, matches an s-expression beginning with define, extracting the name of the function, the list of argument names, and the body of the function definition. These are used to create the final Defn struct.
;; S-Expr -> Defn
(define (parse-defn s)
(match s
[`(define (,(? symbol? f) ,@args) ,e) (Defn f args (parse e))]
[_ (error "Parse error!")]))
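For example, assuming * is among the operators recognized by binop? (as in the factorial program above), parsing a small definition gives:

> (parse-defn '(define (double x) (* x 2)))
'#s(Defn double (x) #s(BinOp * #s(Var x) #s(Val 2)))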
Finally, we define a parse-prog function to put these two pieces together to parse it into a full program. Remember, parse-prog now has to deal with a list of s-expressions:
;; List S-Expr -> Prog
(define (parse-prog s)
(match s
[(cons e '()) (Prog '() (parse e))]
[(cons defn rest) (match (parse-prog rest)
[(Prog d e) (Prog (cons (parse-defn defn) d) e)])]))
The parse-prog function has two cases. First is the base case, where only the final expression remains and it is parsed with an empty definitions list. Second is the recursive case, which parses the rest of the program, parses the current definition, and builds the Prog AST node from the intermediate results.
Let us try parsing the factorial function through our new parser:
> (parse-prog '((define (fact n)
(if (zero? n) 1
(* n (fact (sub1 n)))))
(fact 5)))
'#s(Prog (#s(Defn fact (n) #s(If #s(UnOp zero? #s(Var n)) #s(Val 1) #s(BinOp * #s(Var n) #s(App fact (#s(UnOp sub1 #s(Var n)))))))) #s(App fact (#s(Val 5))))
Meaning of Functions
With parsing handled, we can now start giving meaning to Gross programs:
• The meaning of (define (f x0 ... xn) e) is the definition of a function f that takes arguments x0 through xn and executes its body e. The act of defining the function does not produce any result.
• The meaning of (f e0 ... en) is application of an already defined function f, with arguments e0 through en.
Before we go further, let’s establish a few definitions. It will help us understand what we mean by function application. In the definition of (fact n) we refer to n as a formal parameter. When we
apply fact to an expression as in (fact 5) we refer to 5 as an actual parameter or an argument. We refer to the function applied to an expression such as (fact 5) as an application of fact. Finally,
we refer to the expression that defines fact, (if (zero? ...) ...) as the body of fact and the scope of n as the body of fact.
Given all these nice definitions, we can describe the application of fact to any actual parameter as substituting its formal parameter, n, with the actual parameter in the body of fact. We can be just a little more formal and general. The application of any function to an actual parameter is the substitution of its formal parameter with its actual parameter in its body. Looking back to (fact 5)
in light of this definition, applying fact to 5 is substituting n with 5 in (if (zero? n) 1 (* n (fact (sub1 n)))).
We can rephrase the above in terms of an environment. We describe the application of fact to any actual parameter as creating a new environment in which its formal parameter, n, is bound to the actual parameter for evaluating the body of fact. The application of any function to an actual parameter is the binding of its formal parameter to its actual parameter in a fresh environment for its body. Looking back to (fact 5) in light of this definition, applying fact to 5 is binding n to 5 in an environment for (if (zero? n) 1 (* n (fact (sub1 n)))).
Formally stated, the rule looks as shown below. Notice that this time, on the left of ⊢, we have D and E, denoting the list of definitions and the environment respectively.
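A plain-text sketch of the rule, consistent with the description that follows (eb stands for the body of f, and the new environment binds the formal parameters x1 ... xn to the argument values v1 ... vn; this is a reconstruction, not the original typeset figure):

D, E ⊢ e1 ⇓ v1    ...    D, E ⊢ en ⇓ vn
(Defn f (x1 ... xn) eb) ∈ D
D, [x1 ↦ v1, ..., xn ↦ vn] ⊢ eb ⇓ v
─────────────────────────────────────── [E-App]
D, E ⊢ (f e1 ... en) ⇓ v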
The rule states to apply a function, we have to evaluate the arguments, check if the function we are calling is in the set of defined functions, and then create the new environment and evaluate the
body of the defined function.
As we have both definitions D and the environment E on the left of the ⊢, the interp function also accepts these two things in addition to the expression it is evaluating. We add a case for function
application App. The interp-app function implements the E-App rule above. It looks up the function in the list of the definitions, followed by evaluating all the arguments in the given environment.
Once done, it makes a new environment with the formal and actual parameters bound together and executes the body of the function.
;; interp :: Defns -> Env -> Expr -> Val
(define (interp defn env e)
(match e
[(Val v) v]
[(Var x) (lookup env x)]
[(UnOp u e) (interp-unop defn env u e)]
[(BinOp b e1 e2) (interp-binop defn env b e1 e2)]
[(If e1 e2 e3) (interp-if defn env e1 e2 e3)]
[(Let x e1 e2) (interp defn
                       (store env x (interp defn env e1))
                       e2)]
[(App f actual) (interp-app defn env f actual)]))
(define (interp-app defn env f actual-args)
  (match (lookup-defn f defn) ; look up the function definition
    [(cons formal-args body) (let ((interped-args (map (λ (arg)
                                                          (interp defn env arg))
                                                        actual-args)))
                               (interp defn (zip formal-args interped-args) body))]))
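The interp-app function above relies on a zip helper to build the fresh environment, and interp relies on the lookup and store functions carried over from Fraud. Their definitions are not repeated in these notes; a minimal sketch, assuming the environment is represented as an association list of (name . value) pairs (the Fraud implementation may differ in detail), looks like:

;; zip :: Listof Symbol -> Listof Val -> Env
;; Pair each formal parameter with the corresponding argument value.
(define (zip xs vs)
  (if (null? xs)
      '()
      (cons (cons (car xs) (car vs))
            (zip (cdr xs) (cdr vs)))))

;; lookup :: Env -> Symbol -> Val
;; Find the value bound to x, raising an error if it is unbound.
(define (lookup env x)
  (match env
    ['() (raise (Err (string-append "Unbound variable: " (symbol->string x))))]
    [(cons (cons y v) rest) (if (eq? x y) v (lookup rest x))]))

;; store :: Env -> Symbol -> Val -> Env
;; Extend the environment with a new binding for x.
(define (store env x v)
  (cons (cons x v) env))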
The lookup-defn function is similar to the lookup function we wrote for the environment in Fraud. It traverses the list of definitions and, if the name matches, returns the pair of formal parameters and body of the function.
;; lookup-defn :: Symbol -> Listof Defn -> (Symbols, Expr)
(define (lookup-defn f defns)
(match defns
['() (raise (Err (string-append "Definition not found: " (symbol->string f))))]
[(cons d rest) (match d
[(Defn name args body) (if (eq? name f)
(cons args body)
(lookup-defn f rest))])]))
Finally, as our programs are not just expressions, but are represented by the Prog struct, we write a function to interpret Prog called interp-prog. It calls the interp function with the list of definitions and an empty environment. We also update interp-err (the error handling wrapper function) to now call the interp-prog function.
;; interp-prog :: Prog -> Val
(define (interp-prog prog)
(match prog
; '() is the empty environment
[(Prog defns e) (interp defns '() e)]))
;; interp-err :: Expr -> Val or Err
(define (interp-err e)
(with-handlers ([Err? (λ (err) err)])
(interp-prog e)))
With that we can run the example of factorial function through our implementation of Gross:
> (interp-err (parse-prog '((define (fact n)
(if (zero? n) 1
(* n (fact (sub1 n)))))
(fact 5))))
120
We can convert the above factorial function to a test case:
(check-equal? (interp-err (parse-prog '((define (fact n)
(if (zero? n) 1
(* n (fact (sub1 n)))))
(fact 5)))) 120)
We can add a test for a mutually recursive function that tests if a given argument is odd or even:
(check-equal? (interp-err (parse-prog '((define (odd? x)
(if (zero? x) #f
(even? (sub1 x))))
(define (even? x)
(if (zero? x) #t
(odd? (sub1 x))))
(odd? 45)))) #t)
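Because definitions and calls accept any number of arguments, we can also test a function of two parameters using only the primitives already seen above:

(check-equal? (interp-err (parse-prog '((define (pow b n)
                                          (if (zero? n) 1
                                              (* b (pow b (sub1 n)))))
                                        (pow 2 5)))) 32)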
A couple of things to note:
• since the function definition environment is passed along even when interpreting the body of function definitions, this interpretation supports recursion, and even mutual recursion.
• functions are not values (yet). We cannot bind a variable to a function. We cannot make a list of functions. We cannot compute a function. The first position of a function call is a function
name, not an arbitrary expression. Nevertheless, we have significantly increased the expressivity of our language. | {"url":"https://sankhs.com/eecs662/notes/09-functions/","timestamp":"2024-11-09T18:48:02Z","content_type":"text/html","content_length":"54042","record_id":"<urn:uuid:d59d39c6-56c9-4ca3-a64c-05d4503fddb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00183.warc.gz"} |
predict.TBFcox.sep: Prediction methods for CoxTBF objects with separate estimates
Predicts survival probabilities at given times. Compatible with the predictSurvProb functions required by the pec package. Used for objects fitted with sep=TRUE.
# S3 method for TBFcox.sep
predict(object, newdata, times, ...)
newdata: a dataframe with the same variables as the original data used to fit the object
times: a vector of times to predict survival probability for
Value: a data frame of survival probabilities with rows for each row of newdata and columns for each time. | {"url":"https://www.rdocumentation.org/packages/glmBfp/versions/0.0-60/topics/predict.TBFcox.sep","timestamp":"2024-11-08T15:51:54Z","content_type":"text/html","content_length":"55505","record_id":"<urn:uuid:7269c006-4de7-4716-9816-255d91b3394c>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00743.warc.gz"}
Students must complete 31 credits: 13 credits of core courses and 18 credits of elective courses. Full-time students are expected to complete the entire program within one year (two regular terms: Fall and Spring); part-time students are expected to complete it within two years.
MSDM 5001
Introduction to Computational and Modeling Tools
The basics about CPU, GPU and their applications in high performance computing; introduction of the operating systems; introduction of the parallel program design, implementation and applications in
physics and other areas; basics about quantum computation: the concept, algorithm and future hardware.
MSDM 5002
Scientific Programming and Visualization
The Python programming language and its application to scientific programming (packages such as Scipy, Numpy, Matplotlib); introduction to Matlab, Mathematica, Excel and R; visualization techniques
for data from scientific computing, everyday life, social media, business, medical imaging, etc. (stock price, housing price, highway traffic data, weather data, fluid dynamics data) (3 hours lecture
in computer lab)
MSDM 5003
Stochastic Processes and Applications
Probability theory; maximum likelihood; Bayesian techniques; principal component analysis, data transformation and filtering; Brownian motion and stochastic processes; cross-correlations; power laws;
log-normal distribution and extreme value distributions; Maxwell-Boltzmann distribution; Monte Carlo methods; agent-based models; evolutionary games; Black-Scholes equation.
MSDM 5004
Numerical Methods and Modeling in Science
Fundamental numerical techniques: error, speed and stability, integrals, derivatives, interpolation and extrapolation, least squares fitting, solution of linear algebraic equations, mathematical
optimization, ordinary differential equations, partial differential equations; Fourier and spectral applications, random processes, Monte Carlo simulations, simulated annealing.
MSDM 6771
Data-Driven Modeling Seminars and Tutorials
All students in the MSc in Data-Driven Modeling program are required to take this course. Appropriate seminars and small group tutorials are scheduled to expose students to a variety of issues in
data science and industry, and to enhance students' communication with industry experts and faculty. This course lasts for one year. The students are required to attend the seminars and tutorials in
two regular terms. For students of MSc in Data-Driven Modeling only. Graded PP, P or F.
MSDM 5005
Innovation in Practice
Three topics will be selected each term. For each topic, specialists from the industry will be invited to introduce the industrial landscape and related issues. Students will then form groups to
explore methodology of collecting useful data and propose innovative solutions related to the topics based on real data. This course enables students to apply mathematical theories to real context
and gives students hands-on experience on data science.
MSDM 5051
Algorithm and Object-Oriented Programming for Modeling
Data structures (such as list, queue, stack), algorithms (such as recursion, sorting and searching), concepts and design patterns of object-oriented programming are introduced. Students are expected
to understand and use these techniques to handle data.
MSDM 5053
Quantitative Analysis of Time Series
The course introduces some fundamental concepts of time series, including strict stationarity and weak stationarity, and series correlation. Students will study some classical time series models,
including autoregressive model, moving averages model and ARMA model, seasonal ARIMA models, multivariate time series models, and some new financial time series models, including ARCH and GARCH
models. Students will also learn the forecasting techniques based on those time series models and build up time series models for real time series data in natural science, engineering and economics.
MSDM 5054
Statistical Machine Learning
This course introduces modern methodologies in machine learning, including tools in both supervised learning and unsupervised learning. Examples include linear regression and classification,
tree-based methods, kernel methods and principal component analysis. Students will practice R or Python, and apply them to real data analysis.
MSDM 5055
Deep Learning for Modeling: Concepts, tools, and techniques
This course introduces deep learning methodologies, including basic concepts, programming frameworks, and practical techniques. Topics include regression neural network, convolutional neural network,
generative adversarial network, variational autoencoder, normalizing flow, reinforcement learning, and sequential models. Students will learn to implement typical models in PyTorch and apply them to
various datasets in many real-world applications.
MSDM 5056
Network Modelings
Empirical study of networks in social science, economics, finance, biology and technology, network models: random networks, small world networks, scale free networks, spatial and hierarchical
networks, evolving networks, methods to generate them with a computer, dynamical processes on complex networks: network search, epidemic spreading, rumor and information spreading, community
detection algorithms, applications of network theory.
MSDM 5057
Business Literacy for Data Professionals
This course is designed to build essential business acumen to equip students with the attitudes, skills and knowledge needed to thrive in today’s data-driven world. Diving into specialized modules
covering management, decision making, business communications and collaborations, the course will help students to develop a foundational understanding of key business concepts and terminology.
Students will work in groups and get exposure to industry experts to gain authentic learning experience.
MSDM 5058
Information Science
This course will cover: (1) decision theory and its applications to finance; options and payoff diagrams, binomial trees; (2) portfolio management of financial time series using mean variance
analysis; (3) evolutionary computation for optimization, with applications in finding good prediction rules in finance; (4) measure of information, various information entropies, and methods of
maximum entropy; (5) game theory and its applications in competitive situations; (6) multi-agent systems modelling and applications to social networks and financial systems.
MSDM 5059
Operations Research and Optimization
This course will introduce the concepts and techniques of optimization and modeling in systems and applications with many variables and constraints. Topics to be discussed include pivot tables,
linear programming, network flow models, project management, support vector algorithms, kernel methods, convex sets, duality, Lagrange multipliers, 1-D optimization algorithms, unconstrained
optimization, guided random search methods, and constrained optimization.
MSDM 6980
Computational Modeling and Simulation Project
Under the supervision of a faculty member, students will carry out an independent research project on computational modeling and simulation. At the end of the course, students need to summarize their
results in the form of short theses and give oral presentations. Enrollment in the course requires approval by the course coordinator and supervisor.
PHYS 5120
Computational Energy Materials and Electronic Structure Simulations
This course will introduce atomistic computational methods to model, understand, and predict the properties and behavior of real materials including solids, liquids, and nanostructures. Their
applications to sustainable energy will be discussed. Specific topics include: density-functional theory (DFT), Kohn-Sham equations, local and semi-local density approximations and hybrid
functionals, basis sets, pseudopotentials; Hartree-Fock method; ab initio molecular dynamics with interatomic interactions derived on the fly from DFT, Car-Parrinello molecular dynamics; Monte-Carlo
sampling; computational spectroscopy from first principles, IR and Raman. Students will learn how to use free open-source packages to do materials simulations on a Linux computer cluster. Students
should have basic knowledge of quantum mechanics. The instructor's approval is required for taking this course.
In addition to the elective courses of the MSc(DDM) program, students may, subject to the approval of the Program Director, also take 3-credit elective courses offered by the MSc in Financial Mathematics program.
(Applicable to students admitted in 2024-25 and thereafter) | {"url":"https://msdm.hkust.edu.hk/zh-hans/curriculum","timestamp":"2024-11-01T19:29:56Z","content_type":"text/html","content_length":"101871","record_id":"<urn:uuid:bfffeb13-14d9-4b16-a7dd-23cbd652d697>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00663.warc.gz"}
How do you convert 315 degrees into radians? | Socratic
How do you convert 315 degrees into radians?
3 Answers
With this proportion:

$\alpha_d : 180 = \alpha_r : \pi$

in which $\alpha_d$ is the measure of the angle in degrees,
and $\alpha_r$ is the measure of the angle in radians.

So, if you want to convert an angle from radians to degrees:

$\alpha_d = \frac{180 \cdot \alpha_r}{\pi}$

and if you want to convert an angle from degrees to radians:

$\alpha_r = \frac{\pi \cdot \alpha_d}{180}$

In our case:

$\alpha_r = \frac{\pi \cdot 315}{180} = \frac{7}{4} \pi$ radians
To change from degrees to radians use the following formula:
degrees $\cdot \left(\frac{\pi \ \text{radians}}{180 \ \text{degrees}}\right)$
the $\left(\frac{\pi}{180}\right)$ means that for every $\pi$ radians you go around the unit circle, you've gone $180$ degrees.
So, taking our $315$ degrees and plugging into our equation we get:
$315$ degrees $\cdot \left(\frac{\pi \ \text{radians}}{180 \ \text{degrees}}\right)$
The "degrees" cancel out, then we are left with:
$\frac{315}{180} \cdot \pi$ radians
$315$ and $180$ are both divisible by $45$, so
$\frac{315}{180} = \frac{7}{4}$
So, then we just need to multiply by $\pi$ radians and we get:
$\frac{7}{4} \cdot \pi$ radians = $\frac{7}{4} \pi$ radians
$\frac{7}{4} \pi$ radians
The formula for converting degrees to radians is
$\text{radians} = \frac{\text{degrees} \cdot \pi}{180}$
$\rightarrow \text{radians} = \frac{315 \cdot \pi}{180}$
$\rightarrow \text{radians} = \frac{\cancel{315}^{\,7} \cdot \pi}{\cancel{180}_{\,4}} = \frac{7}{4} \pi$
Hope this helps! :)
85528 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-convert-315-degrees-into-radians#389659","timestamp":"2024-11-01T19:06:10Z","content_type":"text/html","content_length":"37760","record_id":"<urn:uuid:f93df181-7ef8-4a27-ba80-06305df76db0>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00081.warc.gz"} |
complex fractions worksheet 7th grade
Alg - Mixed Expressions & Complex Fractions Worksheet | PDF
Complex Fractions in Algebra Lesson Plans & Worksheets
50+ Fractions worksheets for 7th Grade on Quizizz | Free & Printable
Complex Fractions Worksheet 7th Grade - Fill Online, Printable ...
Fractions Worksheet
Complex Fractions in Algebra Lesson Plans & Worksheets
Complex Fractions Worksheet 7th Grade - Fill Online, Printable ...
Simplifying Fractions Worksheets - Math Monks
Simplifying Fractions Worksheets - Math Monks
Lesson 2 Skills Practice Complex Fractions And Unit Rates - Fill ...
Free Complex Fractions Worksheet Collection for Kids
Complex Fractions Practice, Math, Elementary, Math, Fractions
Complex Fractions Fashion Themed Math Worksheets | Aged 9-11
COMPLEX FRACTIONS to UNIT RATE Maze, Riddle, Coloring Page by ...
7th Grade 1-2: Complex Fractions and Unit Rates
50+ Fractions worksheets for 7th Grade on Quizizz | Free & Printable
Solve - Exponents and order of operations;complex fractions
Simplifying Fractions Worksheets - Math Monks
More complex fraction problems | 5th grade Math Worksheet ...
50+ Fractions worksheets for 7th Class on Quizizz | Free & Printable
Worksheets for fraction addition
50+ Fractions worksheets for 7th Grade on Quizizz | Free & Printable
Complex Fractions Definition and Example | Math, Pre-algebra ...
Simplifying Complex Fractions | Math, Arithmetic, Fractions ...
Complex Fractions worksheet | Live Worksheets
Complex Fractions Fashion Themed Math Worksheets | Aged 9-11 | {"url":"https://worksheets.clipart-library.com/complex-fractions-worksheet-7th-grade.html","timestamp":"2024-11-04T18:29:19Z","content_type":"text/html","content_length":"22514","record_id":"<urn:uuid:c3998656-befd-4f39-b939-56d587c15bf6>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00836.warc.gz"} |
Profit Margin vs Markup: Learn the Difference
Also, determine if the software offers any free trials, free versions or discounts. This information can usually be found in the frequently asked questions (FAQs) section of the software’s pricing
website page. When you hire a QuickBooks Online Accountant, you access the expertise QuickBooks offers them. Some real estate accounting software offers access to multiple parties, such as property
owners, property management companies, accountants, vendors, and more. Stessa currently integrates with AppFolio, allowing users to import income and expense transactions […] | {"url":"http://www.i-liveradio.com/profit-margin-vs-markup-learn-the-difference","timestamp":"2024-11-04T05:06:34Z","content_type":"text/html","content_length":"66665","record_id":"<urn:uuid:bcc576d5-581f-4fa9-aa19-efe7f604ceab>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00346.warc.gz"} |
A view on coupled cluster perturbation theory using a bivariational Lagrangian formulation
We consider two distinct coupled cluster (CC) perturbation series that both expand the difference between the energies of the CCSD (CC with single and double excitations) and CCSDT (CC with single,
double, and triple excitations) models in orders of the Møller-Plesset fluctuation potential. We initially introduce the E-CCSD(T–n) series, in which the CCSD amplitude equations are satisfied at the
expansion point, and compare it to the recently developed CCSD(T–n) series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)], in which not only the CCSD amplitude, but also the CCSD
multiplier equations are satisfied at the expansion point. The computational scaling is similar for the two series, and both are term-wise size extensive with a formal convergence towards the CCSDT
target energy. However, the two series are different, and the CCSD(T–n) series is found to exhibit a more rapid convergence up through the series, which we trace back to the fact that more
information at the expansion point is utilized than for the E-CCSD(T–n) series. The present analysis can be generalized to any perturbation expansion representing the difference between a parent CC
model and a higher-level target CC model. In general, we demonstrate that, whenever the parent parameters depend upon the perturbation operator, a perturbation expansion of the CC energy (where only
parent amplitudes are used) differs from a perturbation expansion of the CC Lagrangian (where both parent amplitudes and parent multipliers are used). For the latter case, the bivariational
Lagrangian formulation becomes more than a convenient mathematical tool, since it facilitates a different and faster convergent perturbation series than the simpler energy-based expansion.
Coupled cluster (CC) theory^1–4 is perhaps the most powerful method for describing dynamical electron correlation effects within the realm of modern quantum chemistry. The CC singles and doubles
(CCSD) model,^5 in which the cluster operator is truncated at the level of double excitations, is a robust and useful model, but it is well-known that the effects of triple (and higher-level)
excitations need to be taken into account in order to obtain highly accurate results that may compete with the accuracy of experiments.^6 However, the steep scaling of the CC singles, doubles, and
triples^7,8 (CCSDT) and CC singles, doubles, triples, and quadruples^9,10 (CCSDTQ) models limits their use to rather modest molecular systems. For this reason, a computationally tractable alternative
to the expensive iterative CCSDT and CCSDTQ models is to develop approximate models, for which the important triples and/or quadruples contributions are determined from a perturbation analysis and
hence included in a cheap and non-iterative fashion. A plethora of different models for the approximate treatment of triples and/or quadruples effects have been suggested, and we refer to Ref. 11 for
a recent theoretical overview of approximate non-iterative triples and quadruples models and Refs. 12 and 13 for a numerical comparison of many of these.
In the present work, we focus on perturbation theory within a CC framework, where a Møller-Plesset (MP) partitioning of the Hamiltonian is performed,^14 and the energy difference between a
zeroth-order (parent) CC model and a higher-level (target) CC model is expanded in orders of the perturbation (the fluctuation potential). In particular, we will base the perturbation analysis on a
bivariational CC Lagrangian obtained by adding to the CC target energy the CC amplitude equations with associated Lagrange multipliers. We note that the linearly parametrized state formally spanned
by the Lagrange multipliers is often referred to as the CC Λ-state,^15,16 and that this is, in general, different from the exponentially parametrized CC state. As pointed out by Arponen,^17,18
extensively exploited in the CC Lagrangian formulation,^19 and recently discussed by Kvaal,^20,21 the CC energy may be interpreted as a CC functional in both the CC amplitudes and the Λ-state
parameters. We will show that the fastest convergence is obtained when these two sets of state parameters are treated on an equal footing in the perturbative expansion of the energy difference
between a parent and a target CC model. Thus, we will distinguish between a perturbation expansion of the CC energy, for which only parent amplitudes are used at the expansion point, and a
perturbation expansion of the CC Lagrangian, for which both parent amplitudes and parent multipliers are used at the expansion point. At first sight, the bivariational Lagrangian formulation might
seem like an unnecessary complication, since, for the target model, the Lagrangian formally equals the energy. The purpose of this work is, however, to highlight and exemplify that not only is the
Lagrangian formulation a convenient mathematical tool that may simplify the derivation of various perturbation expansions; in many cases, the Lagrangian formulation will actually lead to different
perturbation series than the corresponding energy formulation.
To exemplify this difference, we consider two perturbation series which both expand the difference between the energies of the CCSD and CCSDT models. We initially introduce the E-CCSD(T–n) series, in
which the CCSD amplitude equations are satisfied at the expansion point, and next compare it to the recently developed CCSD(T–n) series,^11 in which not only the CCSD amplitude equations are
satisfied at the expansion point, but also the CCSD multiplier equations. Despite depending on the fluctuation potential to infinite order in the space of all single and double excitations, the CCSD
amplitudes are formally considered as zeroth-order parameters in both the E-CCSD(T–n) and CCSD(T–n) series, since the CCSD model represents the expansion point. Similarly, the CCSD multipliers, which
too depend on the fluctuation potential, are considered as zeroth-order parameters for the CCSD(T–n) series, but not so for the E-CCSD(T–n) series. The path from the CCSD expansion point towards the
CCSDT target energy, as defined by a perturbation expansion, is thus different within the E-CCSD(T–n) and CCSD(T–n) series, and as will be shown in the present work, the CCSD(T–n) series is the more
rapidly converging of the two, since more information is utilized at the expansion point. We finally reiterate that the lowest-order contribution of the CCSD(T–n) series (that of the CCSD(T–2) model)
is identical to the triples-only part of the CCSD(2) model of Gwaltney and Head-Gordon^22,23 and the second-order model of the CC(2)PT(m) series of Hirata et al.,^24–26 the CCSD(T–3) model is
identical to the triples-only part of the third-order CC(2)PT(m) model, while for fourth and higher orders, the CCSD(T–n) and CC(2)PT(n) series are different.^27 The E-CCSD(T–n) series, however,
differs from the aforementioned perturbation series for all corrections.
The present study is outlined as follows. In Section II, we derive the E-CCSD(T–n) series and compare it to the CCSD(T–n) series in order to illustrate the importance of treating parent amplitudes
and multipliers on an equal footing. In Section III, we present numerical results for the E-CCSD(T–n) and CCSD(T–n) energies, while some concluding remarks are given in Section IV.
In this section, we consider two perturbation series that expand the difference between the CCSD and CCSDT energies in orders of the perturbation. In Section II A, we develop a new energy-based
perturbation series denoted the E-CCSD(T–n) series, which can be formulated in terms of CC amplitudes without the need for invoking a CC Lagrangian. Next, in Section II B, we develop a common
bivariational framework, in which we recast the E-CCSD(T–n) and CCSD(T–n) series. Finally, we present a theoretical comparison between the two series in Section II C in order to exemplify how the CC
Lagrangian framework may lead to a perturbation series that is inherently different from that which arise from the corresponding energy formulation.
A. Perturbation expansion based on the CCSDT energy
In this work, we use a MP partitioning of the Hamiltonian
where $\hat{f}$ is the Fock operator and $\hat{\Phi}$ is the fluctuation potential. We consider first the CCSD model, which we choose as the common reference point for the perturbation expansions to follow. The CCSD energy, $E^{\text{CCSD}}$, and associated amplitude equations may be written as
where $\langle \mu_1 |$ and $\langle \mu_2 |$ represent a singly and a doubly excited state with respect to the HF determinant, $| \text{HF} \rangle$, and the (non-Hermitian) CCSD similarity-transformed Hamiltonian is given by
with ${}^{*}\hat{T}_1$ and ${}^{*}\hat{T}_2$ being the CCSD singles and doubles cluster operators. Throughout the paper, we will use asterisks to denote CCSD quantities and generally use the generic notation $\hat{T}_i = \sum_{\mu_i} t_{\mu_i} \hat{\tau}_{\mu_i}$ for a cluster operator at excitation level $i$, where $\hat{\tau}_{\mu_i}$ is an excitation operator and $t_{\mu_i}$ is the associated amplitude.
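The display equations referenced in this passage are missing from the text above; in standard CC notation, and consistent with the surrounding description (this is a reconstruction, not the original typeset equations), the MP partitioning, the CCSD energy and amplitude equations, and the similarity-transformed Hamiltonian read

$\hat{H} = \hat{f} + \hat{\Phi},$

$E^{\text{CCSD}} = \langle \text{HF} | \hat{H}^{{}^{*}\hat{T}} | \text{HF} \rangle, \qquad \langle \mu_i | \hat{H}^{{}^{*}\hat{T}} | \text{HF} \rangle = 0, \quad i = 1, 2,$

$\hat{H}^{{}^{*}\hat{T}} = e^{-{}^{*}\hat{T}} \hat{H} e^{{}^{*}\hat{T}}, \qquad {}^{*}\hat{T} = {}^{*}\hat{T}_1 + {}^{*}\hat{T}_2 .$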
We now parametrize the difference between the CCSD and CCSDT energies in terms of correction amplitudes, $\delta t_{\mu_i}$, which represent the difference between the CCSD and CCSDT amplitudes. The correction amplitudes are expanded in orders of the fluctuation potential, and the CCSDT amplitudes, $t_{\mu_i}$, may thus be written as
where $t_{\mu_i}^{(0)} = {}^{*}t_{\mu_i}$ for $i = 1, 2$ and $t_{\mu_3}^{(0)} = 0$. We emphasize that, since we have chosen to expand the CCSDT amplitudes around the CCSD reference point, the CCSD amplitudes, $\{ {}^{*}t_{\mu_1}, {}^{*}t_{\mu_2} \}$, are zeroth-order by definition. The $\{ \delta t_{\mu_1}, \delta t_{\mu_2} \}$ amplitudes thus represent corrections to the CCSD amplitudes, while $\{ \delta t_{\mu_3} \}$ are the CCSDT triples amplitudes.
The CCSDT cluster operator may now be written as $\hat{T} = {}^{*}\hat{T} + \delta\hat{T}$, where $\delta\hat{T}$ contains the correction amplitudes, and the CCSDT energy may be obtained by projecting the CCSDT Schrödinger equation, $e^{-\delta\hat{T}} \hat{H}^{{}^{*}\hat{T}} e^{\delta\hat{T}} | \text{HF} \rangle = E^{\text{CCSDT}} | \text{HF} \rangle$, against $\langle \text{HF} |$,
where we have carried out a Baker-Campbell-Hausdorff (BCH) expansion, while the CCSDT amplitude equations are obtained by projection against the combined excitation manifold of all single, double,
and triple excitations out of the HF reference,
The order analysis of Eq. (2.1.7) is identical to the one performed in Ref. 11 (orders are counted in $\hat{\Phi}$), and the resulting amplitudes are thus the same. Compactly, these are given by
where $\epsilon_{\mu_i}$ is the orbital energy difference between the virtual and occupied spin-orbitals of excitation $\mu_i$, and the right-hand sides of the equations contain all terms of order $n$, i.e., the sum of the orders of all $\delta\hat{T}$ operators plus one (for the fluctuation potential) equals $n$. For example, the first-order singles and doubles corrections are zero, $\delta t_{\mu_1}^{(1)} = \delta t_{\mu_2}^{(1)} = 0$, while the first-order triples corrections are given as $\delta t_{\mu_3}^{(1)} = - \epsilon_{\mu_3}^{-1} \langle \mu_3 | \hat{\Phi}^{{}^{*}\hat{T}} | \text{HF} \rangle$. By collecting terms in orders of the fluctuation potential, the CCSDT energy in Eq. (2.1.6) may now be expanded as
where we have used the fact that the first-order singles and doubles amplitudes vanish to restrict the summations. We denote the perturbation series defined by Eqs. (2.1.9) as the E-CCSD(T–n) series
to emphasize that it is based on a perturbation expansion of the CCSDT energy around the CCSD energy point, at which the CCSD amplitude equations are satisfied. We note that this notation is not to
be confused with ECC, which is usually used as an acronym for extended coupled cluster theory in the literature.^17,18
From Eq. (2.1.9), it follows that the first non-vanishing energy correction to the E-CCSD(T–n) series is of third order. The two lowest-order corrections are given in Eqs. (A4a) and (A4b) of the
Appendix. The E-CCSD(T–n) series is evidently different from the CCSD(T–n) series developed in Ref. 11, which starts at second order. Both series, however, describe the difference between the CCSD
and CCSDT energies using a MP partitioning of the Hamiltonian, and the correction amplitudes are identical. The only apparent difference is that the CCSDT energy is the central quantity for the
E-CCSD(T–n) series, while the CCSDT Lagrangian forms the basis for the CCSD(T–n) series. In Section II B, we develop a general Lagrangian framework to enable a direct comparison of the E-CCSD(T–n)
and CCSD(T–n) series in Section II C.
B. A general bivariational Lagrangian framework
The CCSDT Lagrangian may be obtained by adding to the CCSDT energy the amplitude equations in Eq. (2.1.7) with associated (undetermined) multipliers,
where we have chosen the following parametrization of the CCSDT multipliers in analogy with Eq. (2.1.5):
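The displays for Eqs. (2.2.1) and (2.2.2) are likewise missing; a reconstruction consistent with the description (the CCSDT energy plus the amplitude equations of Eq. (2.1.7) contracted with multipliers, and a multiplier parametrization analogous to Eq. (2.1.5)) is

$L^{\text{CCSDT}}(t^{(0)}, \bar{t}^{(0)}, \delta t, \delta\bar{t}) = \langle \text{HF} | e^{-\hat{T}} \hat{H} e^{\hat{T}} | \text{HF} \rangle + \sum_{i=1}^{3} \sum_{\mu_i} \bar{t}_{\mu_i} \langle \mu_i | e^{-\hat{T}} \hat{H} e^{\hat{T}} | \text{HF} \rangle, \qquad \hat{T} = \hat{T}^{(0)} + \delta\hat{T},$

$\bar{t}_{\mu_i} = \bar{t}_{\mu_i}^{(0)} + \delta\bar{t}_{\mu_i}, \qquad \delta\bar{t}_{\mu_i} = \sum_{n} \delta\bar{t}_{\mu_i}^{(n)} .$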
If the expansion of the CCSDT multipliers in Eq. (2.2.2) is left untruncated, these will equal the parameters of the linearly parametrized CCSDT Λ-state. The notation for the Lagrangian, $L^{\text{CCSDT}}(t^{(0)}, \bar{t}^{(0)}, \delta t, \delta\bar{t})$, highlights that the amplitudes and multipliers at the expansion point are $t^{(0)}$ and $\bar{t}^{(0)}$, respectively, while $\delta t$ and $\delta\bar{t}$ represent the correction amplitudes and correction multipliers, respectively. By setting $t^{(0)} = \bar{t}^{(0)} = 0$, we arrive at a MP-like perturbation expansion, albeit one that has the CCSDT energy as the target energy instead of the full configuration interaction (FCI) energy. In this work, however, we focus on the case for which the CCSD amplitudes are used as zeroth-order amplitudes ($t_{\mu_1}^{(0)} = {}^{*}t_{\mu_1}$, $t_{\mu_2}^{(0)} = {}^{*}t_{\mu_2}$, $t_{\mu_3}^{(0)} = 0$), while considering different choices of zeroth-order multipliers. In particular, we show below how the E-CCSD(T–n) series may be recovered by choosing $\bar{t}_{\mu_i}^{(0)} = 0$ ($i = 1, 2, 3$), while the CCSD(T–n) series corresponds to using CCSD multipliers as zeroth-order multipliers ($\bar{t}_{\mu_1}^{(0)} = {}^{*}\bar{t}_{\mu_1}$, $\bar{t}_{\mu_2}^{(0)} = {}^{*}\bar{t}_{\mu_2}$, $\bar{t}_{\mu_3}^{(0)} = 0$). Equations for the CCSD multipliers are obtained by requiring the CCSD Lagrangian to be stationary with respect to variations in the CCSD amplitudes,
In the following, we let $\bar{t}_{\mu_i}^{(0)}$ represent a general zeroth-order multiplier to treat the two series on an equal footing. Equations for the correction amplitudes are determined by requiring $L^{\text{CCSDT}}({}^{*}t, \bar{t}^{(0)}, \delta t, \delta\bar{t})$ to be stationary with respect to variations in the correction multipliers,
which reproduces the CCSDT amplitude equations in Eq. (2.1.7). It follows that the equations for the correction amplitudes are independent of the choice of zeroth-order multipliers, and the correction amplitudes for the E-CCSD(T–n) and CCSD(T–n) series are therefore identical to all orders. Equations for the CCSDT multipliers are obtained by requiring $L^{\text{CCSDT}}({}^{*}t, \bar{t}^{(0)}, \delta t, \delta\bar{t})$ to be stationary with respect to variations in the amplitudes,
Unlike for the correction amplitudes in Eq. (2.1.7), the precise form of the multiplier equations will depend upon the choice of zeroth-order multipliers, and the correction multipliers for the E-CCSD(T–n) and CCSD(T–n) series will therefore be different. To keep this distinction clear, we will refer to the correction multipliers associated with the choices $\bar{t}^{(0)} = 0$ and $\bar{t}^{(0)} = {}^{*}\bar{t}$ as $\delta\bar{t}^{E}$ and $\delta\bar{t}^{L}$, respectively, while the generic notation $\delta\bar{t}$ may refer to either of the series. We note that at infinite order, the multipliers, as defined within either the E-CCSD(T–n) or CCSD(T–n) series, are identical (assuming that both expansions converge), i.e.,
where $\bar{t}$ is the final set of converged CCSDT multipliers (Λ-state parameters).
To simplify the comparison of the two series, it proves convenient to expand the Lagrangian in Eq. (2.2.1) in a form that emphasizes the dependence on the CCSD multiplier equation in Eq. (2.2.4),
where we have set $t^{(0)} = {}^{*}t$ and used the CCSD amplitude equations in Eq. (2.1.3).
By choosing $\bar{t}^{(0)} = 0$ in Eq. (2.2.8), we arrive at the Lagrangian $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$, which reads
Since the $\delta\bar{t}^{E}$ multipliers multiply the CCSDT amplitude equations in Eq. (2.1.7), they may be eliminated from Eq. (2.2.9), and it follows that $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ equals the CCSDT energy in Eq. (2.1.6).
By evaluating $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ to different orders in the fluctuation potential, we thus arrive at the E-CCSD(T–n) series in Eq. (2.1.9), for which the energy corrections may be expressed exclusively in terms of correction amplitudes. The evaluation of the E-CCSD(T–n) energy corrections using Eq. (2.1.9) corresponds to using the n + 1 rule, where amplitudes to order n determine the energy to order n + 1. Alternatively, by exploiting that $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ is variational in the amplitudes as well as the multipliers, it may be evaluated to different orders in the fluctuation potential by using the 2n + 1 and 2n + 2 rules^28–30 for the amplitudes and multipliers (i.e., the amplitudes/multipliers to order n determine the Lagrangian to order 2n + 1/2n + 2). These two approaches are of course equivalent, but the use of the 2n + 1 and 2n + 2 rules allows for an easy comparison of the E-CCSD(T–n) series to the CCSD(T–n) series (cf. the Appendix).
By setting $\bar{t}^{(0)} = {}^{*}\bar{t}$ in Eq. (2.2.8), the resulting Lagrangian $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ becomes
where we have used the CCSD multiplier equations in Eq. (2.2.4). An expansion of $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ in orders of the fluctuation potential defines the recently proposed CCSD(T–n) series.^11 The E-CCSD(T–n) series begins at third order (see Eq. (2.1.9)), while the CCSD(T–n) series starts already at second order. The series are evidently different, and in Section II C, we perform an explicit comparison of them.
C. Comparison of the E-CCSD(T–n) and CCSD(T–n) series
In this section, we compare the E-CCSD(T–n) and CCSD(T–n) series—first, we discuss the origin of the difference between the series from a formal point of view, and next, we compare the two
lowest-order multipliers and energy corrections for the two series.
As shown in Section II B, both series can be derived from Eq. (2.2.8) with different choices of parent multipliers, resulting in the Lagrangians $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ and $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ for the E-CCSD(T–n) and CCSD(T–n) series, respectively. The $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ Lagrangian reduces to the standard CCSDT energy expression, because no zeroth-order multipliers enter the Lagrangian and because the correction multipliers, $\delta\bar{t}$, are multiplied by the CCSDT amplitude equations (cf. Eqs. (2.2.9) and (2.2.10)). However, the $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ Lagrangian cannot be subject to a similar reduction, since certain terms in the CCSDT amplitude equations were cancelled when the CCSD multiplier equations were used to manipulate Eq. (2.2.8) to arrive at Eq. (2.2.11). Consequently, the CCSD multipliers cannot be removed from Eq. (2.2.11), and $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ cannot be reduced to the standard CCSDT energy expression.
In the same way as the CCSD amplitudes formally depend on the fluctuation potential to infinite order in the space of all single and double excitations, so do the CCSD multipliers (or CCSD Λ-state
parameters). Thus, since the CCSD multipliers are not counted as zeroth-order parameters in the E-CCSD(T–n) series (unlike in the CCSD(T–n) series), orders are necessarily counted differently in the
perturbative expansions of $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ and $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$; however, we may compare the two series by comparing their leading-order, next-to-leading-order, etc., corrections to one
another, which we will do numerically in Section III.
On that note, we have theoretically compared the lowest- and next-to-lowest-order corrections of the two series in the Appendix. In summary, we find that for the $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ Lagrangian, a good zeroth-order description of the CCSDT Λ-state in the space of all single and double excitations is used (the CCSD Λ-state), and the lowest-order multiplier correction therefore occurs in the triples space (cf. Eq. (A8)). For the $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ Lagrangian, on the other hand, there is no representation of the CCSDT Λ-state at zeroth order, and the leading-order contributions to this state (in the form of the first-order multipliers, $\delta\bar{t}_{\mu_i}^{E(1)}$) hence occur within the singles and doubles space, while triples multipliers first enter the E-CCSD(T–n) series at the following (second) order (cf. Eq. (A3)). In fact, we find that the first- and second-order singles and doubles multipliers, $\{ \delta\bar{t}_{\mu_i}^{E(1)}, \delta\bar{t}_{\mu_i}^{E(2)} \}$ for $i = 1, 2$, are nothing but the two lowest-order contributions to the CCSD Λ-state parameters, if the CCSD multiplier equations in Eq. (2.2.4) are solved perturbatively (cf. Eq. (A12)). Furthermore, the lowest-order E-CCSD(T–n) and CCSD(T–n) energy
corrections are found to have the same structural form; however, they are expressed in terms of different sets of multipliers (cf. Eqs. (A7) and (A11)). Thus, the E-CCSD(T–n) series may be viewed as
attempting to compensate for the poor (non-existing) guess at the CCSDT Λ-state by mimicking the CCSD(T–n) series as closely as possible within a perturbational framework. For both series, a
perturbative solution of the CCSDT Λ-state is embedded into the energy corrections, and the E-CCSD(T–n) series is thus trailing behind the CCSD(T–n) series from the onset of the perturbation
expansion. Again, this motivates the claim that it is advantageous to consider the CC energy as the stationary point of an energy functional in both the CC and Λ-state parameters, and hence that
perturbative expansions are optimally carried out whenever the two states are treated on an equal footing.^17,18,20,21 For higher orders in the perturbation, a direct comparison of the E-CCSD(T–n)
and CCSD(T–n) series is more intricate, but we note that they will differ to all orders. Only in the infinite-order limit are the two bound to agree ($L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ and $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$
both equal the CCSDT energy), but this is a natural consequence of the fact that the CC energy may be described in terms of fully converged CC amplitudes alone.
In conclusion, the $L^{\text{CCSDT}}({}^{*}t, 0, \delta t, \delta\bar{t}^{E})$ and $L^{\text{CCSDT}}({}^{*}t, {}^{*}\bar{t}, \delta t, \delta\bar{t}^{L})$ Lagrangians have the same parent energy (CCSD) and the same target energy (CCSDT), and all contributions of both series will be
trivially term-wise size extensive to all orders, as they are all expressed in terms of (linked) commutator expressions. However, the paths between these two CC energies, as defined by a perturbation
expansion, are obviously very different for the two series. Conceptually, the expansion point for the E-CCSD(T–n) series is formally the CCSD energy (only the CCSD amplitude equations are satisfied),
while the expansion point for the CCSD(T–n) series is the CCSD Lagrangian (the CCSD amplitude and CCSD multiplier equations are satisfied). For the energy-based E-CCSD(T–n) series, the Lagrangian is
thus merely a mathematical tool that allows for correction energies to be obtained using amplitude and multiplier corrections that satisfy the 2n + 1 and 2n + 2 rules. On the contrary, the CCSD(T–n)
series is deeply rooted within a bivariational Lagrangian formulation and has no energy-based analogue.
The performance of the models of the CCSD(T–n) series has recently been theoretically as well as numerically compared to a variety of alternative triples models for two sets of closed-shell^12 and
open-shell species,^31,32 and the formal convergence of the series (through sixth order in the perturbation) has been confirmed. Furthermore, the performance of the higher-order CCSD(T–n) models with
respect to the target CCSDT model was found to be essentially independent of the HF reference used, and thus, independent of the spin of the ground state. In this section, we assess the numerical
performance of the E-CCSD(T–n) models (once again measured against results obtained with the target CCSDT model) in order to compare the rate of convergence throughout the series to that of the CCSD
(T–n) series.
We here use the two test sets previously used in Refs. 12, 13, 31, and 32: (i) 17 closed-shell molecules, all optimized at the all-electron CCSD(T)/cc-pCVQZ level of theory, and (ii) 18 open-shell
atoms and radicals, all optimized at the all-electron CCSD(T)/cc-pVQZ level of theory. For a specification of the members of the closed- and open-shell test sets as well as tabularized molecular
geometries, cf. Ref. 6 and the papers describing the HEAT thermochemical model,^33–35 respectively. All of the closed-shell calculations are based on a restricted HF (RHF) reference, while
unrestricted HF (UHF) as well as restricted open-shell HF (ROHF) trial functions has been used for the open-shell calculations. The correlation-consistent cc-pVTZ basis set^36 is used throughout for
all of the reported valence-electron (frozen-core) results, and the Aquarius program^37 has been used for all of the calculations.
In Figure 1, we consider the performance of the five lowest-order models of the E-CCSD(T–n) and CCSD(T–n) hierarchies. Mean recoveries (in %) of the triples correlation energy, E^T = E^CCSDT − E^
CCSD, are presented in Figures 1(a), 1(c), and 1(e), while mean deviations from E^T (in kcal/mol) are presented in Figures 1(b), 1(d), and 1(f). In all cases, we report statistical error measures
generated from the individual results, cf. the supplementary material.^38 As noted in Section II C, the E-CCSD(T–n) and CCSD(T–n) series start at third and second orders, respectively, but we may
group these together like E-CCSD(T–3)/CCSD(T–2), E-CCSD(T–4)/CCSD(T–3), etc.
The results in Figure 1 show that the CCSD(T–n) models in general yield smaller mean and standard errors than their E-CCSD(T–n) counterparts, and the CCSD(T–n) series furthermore exhibits a more
stable convergence than the E-CCSD(T–n) series. In other words, the rate of convergence is improved in the CCSD(T–n) series over the E-CCSD(T–n) series. For most of the considered molecules, the
E-CCSD(T–n) corrections are negative/positive for uneven/even orders, which leads to the oscillatory convergence behavior observed for the E-CCSD(T–n) series in Figure 1. Some oscillatory behavior is
also observed for the CCSD(T–n) series, but this is much less prominent, and primarily observed beyond fourth order. The superior stability of the CCSD(T–n) series compared to the E-CCSD(T–n) series
is also manifested in the smaller standard deviations for the former (in Figure 1 represented in terms of standard errors of the mean). Some of the molecules, however, differ considerably from the
mean trends in Figure 1. For example, methylene (CH[2]) and ozone (O[3]) are notoriously difficult cases due to significant multireference character,^6,12 and, for both, we observe a rather slow
convergence throughout either of the series. While for O[3], the convergence towards the CCSDT limit is oscillatory, for CH[2], the convergence is stable, yet slow, cf. Table S4 of the supplementary
material^38 for the E-CCSD(T–n) results. Similar, but significantly less pronounced problems, are observed for the CCSD(T–n) series.^31 Finally, we note how the results obtained using the two
open-shell references (UHF and ROHF) are similar, and also that the general behavior for the mean deviations is similar to the RHF results.
From an application point of view, the CCSD(T–n) series is practically converged onto the CCSDT limit at the CCSD(T–4) model (robust for closed- and open-shell systems), while two additional
corrections (two additional orders in the perturbation) are needed in the E-CCSD(T–n) series in order to match these results (the E-CCSD(T–7) model). However, even for the E-CCSD(T–7) model, the
standard deviations (for both recoveries and deviations) are larger than for the CCSD(T–4) model. If results more accurate than those provided by the CCSD(T–4)/E-CCSD(T–7) models are desired, it is
in general necessary to also account for the effects of quadruple excitations, as the quadruples energy contribution may easily exceed the difference between the CCSDT and CCSD(T–4) energies. In such
cases, the recently proposed CCSDT(Q–n) models^11,13 may offer attractive alternatives to the iterative CCSDTQ model. These models are theoretically on par with the CCSD(T–n) models, but expand the
CCSDTQ–CCSDT energy difference, rather than the CCSDT–CCSD difference, in orders of the MP fluctuation potential.
In conclusion, a number of similarities exist between the E-CCSD(T–n) and CCSD(T–n) series, but both the magnitude of the (individual) errors as well as the oscillatory convergence pattern are
significantly reduced in the CCSD(T–n) series, as compared to the E-CCSD(T–n) series. This improvement is in line with the theoretical analysis in Section II C. More information about the expansion
point is used in the CCSD(T–n) series, where the CCSD amplitudes and CCSD multipliers are both built into the perturbative corrections, to yield a faster and more balanced rate of convergence than
that observed for the E-CCSD(T–n) series, in which only the CCSD amplitudes are used to construct the energy corrections. Based on the results for the five lowest-order models in Figure 1, the E-CCSD
(T–n) and CCSD(T–n) series both appear to converge for all of the considered molecules, although slowly for some of the notoriously difficult cases. However, a formal convergence analysis is required
to firmly establish whether the series indeed converge (i.e., establish the radius of convergence for the series). This will be the subject of a forthcoming paper.
We have developed the E-CCSD(T–n) perturbation series and compared it to the recently proposed CCSD(T–n) series in order to gain new insights into the importance of treating amplitudes and
multipliers (parameters of the Λ-state) on an equal footing whenever perturbation expansions are developed within CC theory. Both series represent a perturbation expansion of the difference between
the CCSD and CCSDT energies, and they share the same common set of correction amplitudes. The E-CCSD(T–n) series formally describes an expansion around the CCSD energy point (CCSD amplitude equations
are satisfied), while the CCSD(T–n) series may be viewed as an expansion around the CCSD Lagrangian point (CCSD amplitude equations and CCSD multiplier equations are satisfied). The two series are
therefore different, and the CCSD(T–n) series is found to converge more rapidly towards the CCSDT target energy, since all available information at the CCSD expansion point is utilized.
The presented analysis may be generalized to any perturbation expansion representing the difference between a parent CC model and a higher-level target CC model. For developments of CC perturbation
expansions, we thus generally advocate the use a bivariational Lagrangian CC formulation to ensure an optimal rate of convergence in terms of term-wise size extensive corrections towards the target
energy. For example, two perturbation series formulated around the CCSDT energy (E-CCSDT(Q–n)) and CCSDT Lagrangian (CCSDT(Q–n)) expansion points, respectively, to describe an expansion towards the
CCSDTQ target energy in orders of the Møller-Plesset fluctuation potential, are also bound to exhibit different rates of convergence, following a similar line of arguments.
In quantum chemistry, a Lagrangian energy functional has traditionally been viewed merely as a convenient mathematical tool for deriving perturbative expansions, however, one that would give rise to
expansions that are identical to those based on the standard energy. The present work highlights how this equivalence between energy- and Lagrangian-based perturbation theory only holds whenever the
zeroth-order parameters do not depend on the perturbing operator, as is, for example, the case for standard MP perturbation theory where the zeroth-order parameters vanish. Thus, when the
zeroth-order parameters are independent of the perturbing operator, a Lagrangian formulation is merely of mathematical convenience, but, for perturbation-dependent zeroth-order parameters (e.g., like
those of the right- (CC) and left-hand (Λ) eigenstates of a non-Hermitian CC similarity-transformed Hamiltonian), a bivariational Lagrangian formulation is in general expected to lead to a faster and
more stable convergence than a corresponding energy formulation. This is an important point to keep in mind for future developments and applications involving perturbation expansions.
K.K., J.J.E., and P.J. acknowledge support from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement No. 291371. J.O.
acknowledges support from the Danish Council for Independent Research, No. DFF-4181-00537, and D.A.M. acknowledges support from the US National Science Foundation (NSF) under Grant No. ACI-1148125/
APPENDIX: EXPLICIT LOWEST-ORDER E-CCSD(T–n) AND CCSD(T–n) CORRECTION ENERGIES
In the present appendix, we compare the lowest- and next-to-lowest-order corrections of the E-CCSD(T–n) and CCSD(T–n) series. For this, we need a closed-form expression for the CCSDT multipliers
(i.e., the CCSDT Λ-state parameters), which, from Eq. (2.2.6), reads
where we will again partition the CCSDT cluster operator, $\hat{T}$, as $\hat{T} = {}^{*}\hat{T} + \delta\hat{T}$. Eq. (A1) may now be expanded in orders of the fluctuation potential (cf. Eq. (2.2.2)),
If $\bar{t}^{(0)}=0$, the multiplier corrections in Eq. (A2) will be those belonging to the E-CCSD(T–n) series, and the two lowest-order corrections are given by
It follows that the E-CCSD(T–n) series has non-vanishing first-order multipliers only in the singles and doubles space ($\delta\bar{t}_{\mu_3}^{E(1)}=0$), and second-order multipliers for all excitation levels ($i$ = 1, 2, 3).
Using Eq. (2.1.9), the two leading-order corrections of the E-CCSD(T–n) series may be evaluated using the n + 1 rule for the amplitudes,
Alternatively, using the Lagrangian in Eq. (2.2.9) and the 2n + 1/2n + 2 rules^28–30 for the amplitudes/multipliers, E^(3) and E^(4) may be written as
The expressions in Eqs. (A4) and (A5) are of course equivalent as may be verified by explicit comparison. Finally, the E^(4) energy in Eq. (A5b) may be further recast by expanding the second-order
correction amplitudes, δt^(2), given in Eq. (2.1.8),
By taking the sum of the third- and fourth-order E-CCSD(T–n) energies in Eqs. (A4a) and (A6), respectively, we can write
To evaluate the two lowest-order energy corrections of the CCSD(T–n) series ($\bar{t}^{(0)}={}^{*}\bar{t}$), we only need to consider the first-order multipliers, which read^11
By applying the 2n + 1/2n + 2 rules to Eq. (2.2.11), the two leading corrections of the CCSD(T–n) series are given as
We may now compare the energy sums in Eqs. (A7) and (A11). Since the two leading-order multiplier corrections of the E-CCSD(T–n) series are independent of the triple excitations in the CCSDT ansatz,
these will equal the two lowest-order contributions to the CCSD Λ-state parameters, i.e.,
where $\mathcal{O}(3)$ denotes terms of third and higher orders in the fluctuation potential. For this reason, the first term on the right-hand side of Eq. (A7) may be viewed as mimicking the first term on the right-hand side of Eq. (A11), and the same applies for the second term on the right-hand side of the two equations, by noticing that the $\bar{t}_{3}^{E(2)}$ multipliers of Eq. (A3b) are similar to the $\bar{t}_{3}^{L(1)}$ multipliers of Eq. (A8b), with the notable exception that $\bar{t}^{E(1)} \rightarrow {}^{*}\bar{t}$ in moving from the E-CCSD(T–n) to the CCSD(T–n) series.
© 2016 AIP Publishing LLC.
AIP Publishing LLC | {"url":"https://pubs.aip.org/aip/jcp/article/144/6/064103/194302/A-view-on-coupled-cluster-perturbation-theory","timestamp":"2024-11-13T06:12:32Z","content_type":"text/html","content_length":"381085","record_id":"<urn:uuid:2031e201-3007-4d2a-acf7-7f8cd2aa8771>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00324.warc.gz"} |
Representations of Graph
Executive Summary
In this comprehensive exploration of sequential and linked representations of graphs, we traverse the depths of both structures, their strengths, weaknesses, and applicability. We kick-off by
establishing the core problem—efficiently storing and manipulating graph data, where both representations find their relevance. This thesis uncovers the basics before diving into complex facets like
adjacency matrix and list, incidence matrix, and list, alongside the C++ code implementation. Further, we cross-examine these concepts with Object-Oriented Programming (OOP), revealing surprising
links. Lastly, we steer our lens to real-world applications, clarifying the theoretical fog with practical examples. Be prepared to grapple with advanced-level concepts, carefully broken down for
seamless comprehension. The curtain will rise on lesser-known topics like multi-graphs and hyper-graphs in subsequent discussions.
Imagine you are building a large-scale social network. You find yourself facing a computational challenge—how do you efficiently store and process information about connections between millions of
users? One potential solution would be to leverage data structures known as graphs, more specifically, the sequential or linked representation of graphs.
A graph, in computer science parlance, is an abstract data type that aims to implement the graph concept from mathematics—a graph is composed of vertices (also called nodes) and edges (connections
between nodes). Two main types of graph representation, sequential and linked, offer different trade-offs regarding memory usage and operation speed. Understanding these can dramatically impact how
you design and implement your social network.
Before diving deep, let's lay the foundation by briefly examining the basics of graphs, their types, and components. After this, we'll progress towards the complexities of sequential and linked
representation in graphs, spiced up with appropriate C++ implementation. Ready to embark on this exhilarating journey? Let's begin!
1. Graph Basics
A graph G is an ordered pair G := (V, E), comprising a set V of vertices or nodes, together with a set E of edges or arcs. Each edge is a 2-element subset of V. Every graph must have at least one
vertex. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. If the edges in a graph are all one-way, the graph is a directed
graph or a digraph.
Graphs are used to represent many real-world things such as systems of roads, airline flights from city to city, how the Internet is connected, etc. They can be used to model many types of relations
and processes in various scientific structures.
2. Graph Representations
There are two common ways to represent a graph, sequential (also known as adjacency matrix) and linked (also known as adjacency list). Each has its advantages and disadvantages, and the right choice
depends on the specific parameters of the problem at hand.
2.1 Adjacency Matrix
Adjacency Matrix is a 2D array of size V x V where V is the number of vertices in a graph. A slot adj[i][j] = 1 indicates that there is an edge from vertex i to vertex j. The adjacency matrix for an
undirected graph is always symmetric. This representation is also used for representing weighted graphs.
An adjacency matrix is a way of representing a graph as a matrix of booleans. If the value of matrix[i][j] is '1', it means there is an edge connecting vertices i and j, otherwise, there is no edge.
It is particularly convenient for dense graphs, where the number of edges is close to the number of vertices squared.
The adjacency matrix provides constant-time access to determine if there is an edge between two nodes, but it requires O(V^2) space, where V is the number of vertices. It's ideal for dense graphs,
which contain a large number of edges.
Advantages:
• Easy to implement
• Removing an edge takes O(1) time
• Edge lookup (whether an edge exists between vertex ‘u’ and vertex ‘v’) is efficient and can be done in O(1) time
Disadvantages:
• Consumes more space O(V^2)
• Adding a vertex takes O(V^2) time
#include <iostream>
using namespace std;

class Graph {
    int V;          // number of vertices
    int** adj;      // V x V adjacency matrix
public:
    Graph(int V);
    void addEdge(int v, int w);
    void printGraph();
};

Graph::Graph(int V) {
    this->V = V;
    adj = new int*[V];
    for (int i = 0; i < V; i++) {
        adj[i] = new int[V];
        for (int j = 0; j < V; j++)
            adj[i][j] = 0;
    }
}

void Graph::addEdge(int v, int w) {
    adj[v][w] = 1;
    adj[w][v] = 1;      // undirected graph: mark both directions
}

void Graph::printGraph() {
    for (int v = 0; v < V; ++v) {
        for (int w = 0; w < V; ++w)
            cout << adj[v][w] << " ";
        cout << "\n";
    }
}
This C++ code creates an adjacency matrix and defines methods to add edges and print the graph. Notice the nested loops in the printGraph method, reflecting the O(V^2) time complexity.
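A minimal driver to exercise the class above might look like this (the vertex count and edges are arbitrary example values):

int main() {
    Graph g(4);        // a small example graph with 4 vertices
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(1, 2);
    g.addEdge(2, 3);
    g.printGraph();    // prints the 4 x 4 matrix of 0s and 1s
    return 0;
}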
C Implementation
#include <stdio.h>
#include <string.h>

int main() {
    int n, m;                          /* n = number of vertices, m = number of edges */
    scanf("%d %d", &n, &m);
    int adjMat[n + 1][n + 1];          /* 1-based adjacency matrix (VLA) */
    memset(adjMat, 0, sizeof(adjMat));
    for (int i = 0; i < m; i++) {
        int u, v;
        scanf("%d %d", &u, &v);
        adjMat[u][v] = 1;
        adjMat[v][u] = 1;              /* undirected edge */
    }
    return 0;
}
2.2 Adjacency List
Contrarily, an adjacency list represents a graph as an array of linked lists. The index of the array represents a vertex and each element in its linked list represents the other vertices that form an
edge with the vertex. The adjacency list is preferable for sparse graphs, where the number of edges is far less than the number of vertices squared.
Adjacency List is an array of linked lists. The size of the array is equal to the number of vertices. An entry array[i] represents the linked list of vertices adjacent to the ith vertex. This
representation can also be used to represent a weighted graph where the weights of edges can be stored as lists of pairs.
The adjacency list requires less space, O(V + E), where V is the number of vertices and E is the number of edges. However, to find out whether an edge exists between two vertices requires traversing
a linked list, taking O(V) time in the worst case.
Advantages:
• Saves space, especially for sparse graphs. Space taken is O(|V|+|E|).
• Adding a vertex is easier.
• Can accommodate weights on edges efficiently.
#include <iostream>
#include <list>
using namespace std;

class Graph {
    int V;              // number of vertices
    list<int>* adj;     // array of adjacency lists
public:
    Graph(int V);
    void addEdge(int v, int w);
    void printGraph();
};

Graph::Graph(int V) {
    this->V = V;
    adj = new list<int>[V];
}

void Graph::addEdge(int v, int w) {
    adj[v].push_back(w);    // add w to v's list
    adj[w].push_back(v);    // undirected graph: add v to w's list
}

void Graph::printGraph() {
    for (int v = 0; v < V; ++v) {
        cout << "\n Adjacency list of vertex " << v << "\n head ";
        for (auto x : adj[v])
            cout << "-> " << x;
    }
}
This C++ code creates an adjacency list and defines methods to add edges and print the graph. Observe the O(V + E) space complexity in the printGraph method.
3. Incidence Matrix and Incidence List
Besides adjacency matrix and adjacency list, graphs can also be represented using incidence matrix and incidence list. An incidence matrix is a 2D boolean matrix, where the row represents vertices
and the column represents edges. The entry of row i and column j is '1' if the vertex i is part of edge j, otherwise '0'. Incidence list, on the other hand, is similar to adjacency list, but instead
of lists of vertices, we have lists of edges for every vertex.
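As a rough sketch of the incidence matrix idea (the vertex count and edge list below are made-up example values), one possible C++ construction is:

#include <iostream>
#include <utility>
#include <vector>
using namespace std;

int main() {
    int V = 4;                                               // vertices 0..3
    vector<pair<int, int>> edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}};
    int E = edges.size();
    vector<vector<int>> inc(V, vector<int>(E, 0));           // V rows, E columns
    for (int j = 0; j < E; j++) {
        inc[edges[j].first][j] = 1;                          // edge j touches its first endpoint
        inc[edges[j].second][j] = 1;                         // and its second endpoint
    }
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < E; j++)
            cout << inc[i][j] << " ";
        cout << "\n";
    }
    return 0;
}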
4. Graphs and Object-Oriented Programming
Now, let's shift our lens to view graphs from an Object-Oriented Programming (OOP) perspective. In OOP, a graph can be seen as an object, with vertices and edges being its member variables. The
operations like adding an edge, deleting a vertex, etc., can be member functions. This encapsulation of data and operations within an object aligns well with the graph data structure. The adjacency
list and adjacency matrix can be considered different strategies for implementing the same object - the graph. This is a classical example of the Strategy Pattern in OOP.
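To make the Strategy Pattern connection concrete, here is one possible sketch (the class and method names are our own illustrative choices, not a standard interface):

#include <list>
#include <vector>
using namespace std;

class GraphStorage {                                         // the common "strategy" interface
public:
    virtual void addEdge(int v, int w) = 0;
    virtual bool hasEdge(int v, int w) const = 0;
    virtual ~GraphStorage() = default;
};

class MatrixStorage : public GraphStorage {                  // dense strategy: adjacency matrix
    vector<vector<int>> adj;
public:
    explicit MatrixStorage(int V) : adj(V, vector<int>(V, 0)) {}
    void addEdge(int v, int w) override { adj[v][w] = adj[w][v] = 1; }
    bool hasEdge(int v, int w) const override { return adj[v][w] == 1; }
};

class ListStorage : public GraphStorage {                    // sparse strategy: adjacency lists
    vector<list<int>> adj;
public:
    explicit ListStorage(int V) : adj(V) {}
    void addEdge(int v, int w) override { adj[v].push_back(w); adj[w].push_back(v); }
    bool hasEdge(int v, int w) const override {
        for (int x : adj[v])
            if (x == w) return true;
        return false;
    }
};

Client code can then work against GraphStorage and swap the dense or sparse strategy without changing the rest of the program.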
5. Applications of Graphs
Graphs have widespread applications in various domains. From computer networks, where nodes represent computers and edges represent connections, to transportation networks, where nodes represent
intersections and edges represent roads. In fact, social networks like Facebook and LinkedIn use graphs to represent users as nodes and friendships as edges. Here, efficient graph representations,
traversal, and manipulation algorithms are critical for performance and scalability.
After this whirlwind exploration of sequential and linked representations of graphs, we hope you're left with a sense of awe and curiosity. Understanding these representations is like unlocking a new
level in the game of data structures, opening the doors to more advanced algorithms and real-world applications.
What Lies Beyond?
In the subsequent article, we'll be charting our course through the intriguing waters of multi-graphs and hyper-graphs. Buckle up, because this journey will expand your horizons, deepening your
understanding of the vast and complex world of graphs. Stay tuned and keep exploring! | {"url":"https://dmj.one/edu/su/course/csu1051/class/sequential-linked-graph-representations","timestamp":"2024-11-10T22:40:42Z","content_type":"text/html","content_length":"17108","record_id":"<urn:uuid:3ff3fc69-c4f5-4cab-a692-09dd203ddd7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00109.warc.gz"} |
Financial Functions
ACCRINT The accrued interest for a security that pays interest periodically.
ACCRINTM The accrued interest for a security that pays interest at maturity.
AMORDEGRC The depreciation of an asset in a single period (straight-line, implicit coefficient).
AMORLINC The depreciation of an asset in a single period (straight-line).
COUPDAYBS The number of days between the previous coupon date and the settlement date.
COUPDAYS The number of days between the coupon dates on either side of the settlement date.
COUPDAYSNC The number of days between the settlement date and the next coupon date.
COUPNCD The next coupon date after the settlement date.
COUPNUM The number of coupons between the settlement date and the maturity date.
COUPPCD The previous coupon date before the settlement date.
CUMIPMT The cumulative interest payment on a loan between two dates.
CUMPRINC The cumulative principal paid on a loan between two dates.
DB The depreciation of an asset in a single period (declining balance method).
DDB The depreciation of an asset in a single period (double or triple declining balance method).
DISC The interest rate (or discount rate) for a security held to maturity.
DOLLARDE The dollar fraction expressed as a decimal.
DOLLARFR The dollar decimal expressed as a fraction.
DURATION The annual duration of a security that pays interest periodically.
EFFECT The effective interest rate given a nominal interest rate and compounding frequency.
FV The future value of a series of equal cash flows at regular intervals.
FVSCHEDULE The future value of an initial principal after applying compound interest rates.
INTRATE The interest rate for a security held to maturity.
IPMT The interest amount paid on a given period on a loan with fixed interest.
IRR The interest rate for a series of unequal cash flows at regular intervals (implicit reinvestment rate).
ISPMT (Compatibility) The interest paid for a given period in a series of equal cash flows at regular intervals (incorrectly).
MDURATION The modified duration for a security that pays interest periodically.
MIRR The interest rate for a series of unequal cash flows at regular intervals (explicit reinvestment rate).
NOMINAL The nominal interest rate over a period given an annual interest rate.
NPER The number of periods for an investment.
NPV The net present value of a series of unequal cash flows at regular intervals.
ODDFPRICE The price per $100 face value of a security with an odd first period.
ODDFYIELD The yield of a security with an odd first period.
ODDLPRICE The price per $100 face value of a security with an odd last period.
ODDLYIELD The yield of a security with an odd last period.
PDURATION The number of periods required by an investment to reach a specified value.
PMT The full amount (principal + interest) paid every period on a loan with fixed interest.
PPMT The principal amount paid on a given period on a loan with fixed interest.
PRICE The price of a security that pays periodic interest.
PRICEDISC The price of a discounted security (no interest payments).
PRICEMAT The price of a security that pays interest at maturity.
PV The present value of a series of equal cash flows at regular intervals.
RATE The interest rate for a series of equal cash flows at regular intervals.
RECEIVED The amount received at the end when a security is held to maturity.
RRI The equivalent interest rate for the growth of an investment.
SLN The depreciation of an asset in a single period (straight-line method).
STOCKHISTORY (2024) The historical data about a financial instrument.
SYD The depreciation of an asset in a single period (sum-of-years digits method).
TBILLEQ The yield (bond-equivalent) for a treasury bill.
TBILLPRICE The price per $100 face value for a treasury bill.
TBILLYIELD The yield for a treasury bill.
VDB The depreciation of an asset in a single period (variable declining balance method).
XIRR The interest rate for a series of unequal cash flows at irregular intervals (implicit reinvestment rate).
XNPV The net present value of a series of unequal cash flows at irregular intervals.
YIELD The interest rate (annual) for a series of equal cash flows at regular intervals.
YIELDDISC The interest rate (annual) for a discounted security (no interest payments).
YIELDMAT The interest rate (annual) for a security that pays interest at maturity.
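For example, typical calls look like this (the argument values are arbitrary illustrative numbers and the optional arguments are omitted):
=PMT(6%/12, 360, 250000) returns the monthly payment on a 250,000 loan over 30 years at 6% annual interest.
=FV(4%/12, 120, -200) returns the future value of saving 200 per month for 10 years at 4% annual interest.
=NPER(5%, -1000, 10000) returns the number of periods needed to pay off 10,000 with payments of 1,000 at 5% per period.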
© 2024 Better Solutions Limited. All Rights Reserved. © 2024 Better Solutions Limited Top | {"url":"https://bettersolutions.com/excel/functions/financial-category.htm","timestamp":"2024-11-03T06:13:02Z","content_type":"text/html","content_length":"55121","record_id":"<urn:uuid:5d74a353-b933-435c-9606-f56c9f3aa3cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00103.warc.gz"} |
Posts for the month of February 2021
Goal: Figure out how to make the probe velocity and range normally distributed within our FORTRAN model.
Method: First added a local variable called 'technology' to the program. Then used random walks to have it be normally distributed around the technological capabilities of the abiogenesis seed (first
habited system). Finally, I made the probe range and velocity relativistic functions of this new technology variable. The table below shows the initial values inputted into the model.
v0 0.0001c=30 km/s Initial Probe Velocity
r0 10 lyr Initial Probe Range
t0=r0/v0 100,000 years Initial Probe Lifetime
Click here to see the PDF which summarizes calculation/implementation of the normally distributed probe ranges/distances
Click here for more information about the input variables for the models shown below.
• The pink boxes surround systems that are uninhabited.
• All simulations begin with 10 systems originally habited.
• Galaxy Model shows time with unit Myr. Total runtime being 1000 Myr.
• In contrast, the periodic box model has a total runtime of 1 Myr. Thus I show frames instead of time, where each frame is approximately 1/1000 Myr.
• x=(number of habited systems)/(total number of systems)
Galaxy Model
Periodic Box Model
Analytic Model
Notes about the Federrath+14 jet/outflow model:
Jet parameters in our model:
Jet Data
jet_radius 16 size of outflow region in finest level cells
jet_collimation .2618 pi/12 !collimation of outflow
jet_temp 30000. jet temp in Kelvin
jet_index 1. exponent of collimation
jet_masslossrate 2e0 solar masses / yr
lcorrect T Apply conservative correction
jet_vrad 430.75 km/s radial velocity of jet, use Keplerian velocity for m2 and 1 solar radius
jet_vphi .5 km/s approximate rotation speed of jet
spin_axis 0d0,0d0,1d0 outflow axis
1. understand the feedback module. what are the initial profiles of density and velocity inside the spherical cones launching the jets?
2. compute how much of the jet material is unbound, using the spline potential, velocity profile and density profile.
3. plot the jet tracer density
Early Asteroid Magnetization
Adding lineouts of the day side to illustrate the issues of theoretical estimates of amplification.
Moon Impact Magnetization
Settled on a spherical field distribution of the form:
Will add another blog post about the equations and plot for current, and magnetic potential.
Hot Neptunes
I have forgotten how to compile code it seems. Should be an easy fix.
Goal: Figure out how to make the probe velocity and range normally distributed within our FORTRAN model.
Method: First added a local variable called 'technology' to the program. Then used random walks to have it be normally distributed around the technological capabilities of the abiogenesis seed (first
habited system). Finally, I made the probe range and velocity relativistic functions of this new technology variable. See the pdf below for more details about calculation/implementation.
Click here to see the PDF which summarizes my progress so far and potential steps.
Current Model Output (Colored by Technology, Pink=Unsettled)
v0 0.0001c=30 km/s Initial Probe Velocity
r0 10 lyr Initial Probe Range
t0=r0/v0 100,000 years Initial Probe Lifetime
Figure: The above gif shows the temporal evolution of a model galaxy. Beginning with 10 habited solar systems (i.e., abiogenesis seeds), these systems send probes out to nearby systems, thus making
those systems inhabited and repeating the process until the entire galaxy is filled with life. The uninhabited systems are shown as the pink boxes. Note that a selection effect results in the systems
on the outer edge of the galaxy gaining high technological abilities before systems near the center.
Based on the impactor conditions from Oran et al. (2020) https://advances.sciencemag.org/content/6/40/eabb1475 and its supplementary material https://advances.sciencemag.org/content/suppl/2020/09/
28/6.40.eabb1475.DC1/abb1475_SM.pdf we have most impact plasma conditions apart from the magnetic field (which they do not seem to inject by any mechanism).
Impact Plasma conditions from iSALE-2D :
1. Initial Vapour temperature: 2000 K (varies down to 500 K for some models)
2. Wind Speed: 400 km/s to 1000 km/s
3. Wind Density: Plots in S4, but no analytical form. Looks proportional to y.
4. Magnetic field: Probably several equations work.
Assuming the field distribution is similar to that of a very thick (radius ), finite length (), current carrying wire, the field and vector potential in cylindrical coordinates are:
5. Resistivity Profile: Same as used for the NSF proposal.
We can start | {"url":"https://bluehound.circ.rochester.edu/astrobear/blog/2021/2","timestamp":"2024-11-08T04:27:26Z","content_type":"text/html","content_length":"60762","record_id":"<urn:uuid:bf818592-2592-4a39-8229-ef1c95978228>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00828.warc.gz"} |
OpenAlgebra.com: Free Algebra Study Guide & Video Tutorials
As long as the indices are the same, we can multiply the radicands together using the following property.
Since multiplication is commutative, you can multiply the coefficients and the radicands together and then simplify.
Take care to be sure that the indices are the same before multiplying. We will assume that all variables are positive.
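For example, to multiply 2√6 by 5√3: multiply the coefficients and the radicands to get 10√18, then simplify √18 = 3√2, giving 30√2.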
Divide radicals using the following property.
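For example, √50 divided by √2 equals √(50/2) = √25 = 5.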
Divide. (Assume all variables are positive.)
Rationalizing the Denominator
A simplified radical expression cannot have a radical in the denominator. When the denominator has a radical in it, we must multiply the entire expression by some form of 1 to eliminate it. The basic
steps follow.
Rationalize the denominator:
Multiply numerator and denominator by the 5th root of factors that will result in 5th powers of each factor in the radicand of the denominator.
Notice that all the factors in the radicand of the denominator have powers that match the index. This was the desired result, now simplify.
Rationalize the denominator.
This technique does not work when dividing by a binomial containing a radical. A new technique is introduced to deal with this situation.
Rationalize the denominator:
Multiply numerator and denominator by the conjugate of the denominator.
And then simplify. The goal is to eliminate all radicals from the denominator.
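For example, to rationalize 1/(√5 − √2), multiply the numerator and denominator by the conjugate √5 + √2 to obtain (√5 + √2)/((√5)^2 − (√2)^2) = (√5 + √2)/3.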
Rationalize the denominator.
Video Examples on YouTube:
It turns out that we have all the tools necessary to simplify complex algebraic fractions.
Rational Expressions and Equations Playlist on YouTube
The numerator and denominator of these rational expressions contain fractions and look very intimidating. We will outline two methods for simplifying them.
Method 1: Obtain a common denominator for the numerator and denominator, multiply by the reciprocal of the denominator, then factor and cancel if possible.
Method 2: Multiply the numerator and denominator of the complex fraction by the LCD of all the simple fractions then factor and cancel if possible.
To illustrate what happened after we multiplied by the LCD we could add an extra step.
For the following solved problems, both methods are used. Choose whichever method feels most comfortable for you.
Simplify using method 1. Simplify using method 2.
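For instance, applying method 2 to (1/x + 1/y) divided by (x + y)/(xy): multiplying the numerator and the denominator by the LCD xy gives (y + x)/(x + y), which simplifies to 1.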
Video Examples on YouTube: | {"url":"https://www.openalgebra.com/search/label/divide","timestamp":"2024-11-08T11:07:58Z","content_type":"application/xhtml+xml","content_length":"90521","record_id":"<urn:uuid:f65102d8-00d1-437b-bd09-73456e2f8d86>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00182.warc.gz"} |
On the supercritically diffusive magnetogeostrophic equations
We address the well-posedness theory for the magneto-geostrophic equation, namely an active scalar equation in which the divergence-free drift velocity is one derivative more singular than the active
scalar. In the presence of supercritical fractional diffusion given by (−Δ)^γ with 0<γ<1, we discover that for γ>1/2 the equations are locally well-posed, while for γ<1/2 they are ill-posed, in the
sense that there is no Lipschitz solution map. The main reason for the striking loss of regularity when γ goes below 1/2 is that the constitutive law used to obtain the velocity from the active
scalar is given by an unbounded Fourier multiplier which is both even and anisotropic. Lastly, we note that the anisotropy of the constitutive law for the velocity may be explored in order to obtain
an improvement in the regularity of the solutions when the initial data and the force have thin Fourier support, i.e. they are supported on a plane in frequency space. In particular, for such
well-prepared data one may prove the local existence and uniqueness of solutions for all values of γ ∈ (0, 1). In fact, these solutions are global in time when γ ∈ [1/2, 1).
ASJC Scopus subject areas
• Statistical and Nonlinear Physics
• Mathematical Physics
• General Physics and Astronomy
• Applied Mathematics
Dive into the research topics of 'On the supercritically diffusive magnetogeostrophic equations'. Together they form a unique fingerprint. | {"url":"https://nyuscholars.nyu.edu/en/publications/on-the-supercritically-diffusive-magnetogeostrophic-equations","timestamp":"2024-11-12T16:16:33Z","content_type":"text/html","content_length":"51811","record_id":"<urn:uuid:20a63bf5-4e24-4b43-b804-223f60dea845>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00060.warc.gz"} |
\(\mathrm{\LaTeX}\) in VS Code
Table of Contents
• Nowadays most mathematicians use \(\mathrm{\LaTeX}\). Since it is quite straightforward software to use, one may not think much about how to write \(\mathrm{\LaTeX}\) more efficiently. In this course I will walk you through how to use \(\mathrm{\LaTeX}\) in VS Code. This course is targeted at those who already know how to write \(\mathrm{\LaTeX}\) but want to improve their efficiency. I'll teach you only the basics. I hope that after this course you can explore and enjoy more interesting and advanced techniques on your own.
Lecture 1
• We learn basics on VS Code and LaTeX Workshop.
• Preparations for Lecture 1
Lecture 2
• We learn BibTeX and Vim.
• Preparation for Lecture 2
Lecture 3
• We learn Git and GitHub for version control and collaboration.
Preparations for Lecture 3
Installing Git
• Mac
□ Typing the following in your terminal will install Git if you don't have it already.
git --version
□ See the instruction here if it doesn't work.
• Windows
□ Download "64-bit Git for Windows Setup" (or 32-bit depending on your machine) here.
□ Install Git using default settings.
□ Restart your computer.
Find a partner to practice Git together.
• If you want me to find someone for you, send me an email at this address: jangsookim@skku.edu | {"url":"https://jangsookim.github.io/lectures/vscode/vscode_lecture0.html","timestamp":"2024-11-10T08:04:02Z","content_type":"application/xhtml+xml","content_length":"14858","record_id":"<urn:uuid:e0a0bd81-ddc6-44c0-9de8-812e4647ecdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00633.warc.gz"} |
The require extension defines the non-standard \require macro that allows you to load extensions from within a math expression in a web page. For example:
\(\require{enclose} \enclose{circle}{x}\)
would load the enclose extension, making the following \enclose command available for use.
An extension only needs to be loaded once, and then it is available for all subsequent typeset expressions.
This extension is already loaded in all the components that include the TeX input jax, other than input/tex-base. To load the require extension explicitly (when using input/tex-base for example), add
'[tex]/require' to the load array of the loader block of your MathJax configuration, and add 'require' to the packages array of the tex block.
window.MathJax = {
  loader: {load: ['[tex]/require']},
  tex: {packages: {'[+]': ['require']}}
};
Since the require extension is included in the combined components that contain the TeX input jax, it may already be in the package list. In that case, if you want to disable it, you can remove it:
window.MathJax = {
  tex: {packages: {'[-]': ['require']}}
};
require Options
Adding the require extension to the packages array defines a require sub-block of the tex configuration block with the following values:
MathJax = {
  tex: {
    require: {
      allow: {
        base: false,
        'all-packages': false
      },
      defaultAllow: true
    }
  }
};
allow: {...}
This sub-object indicates which extensions can be loaded by \require. The keys are the package names, and the value is true to allow the extension to be loaded, and false to disallow it. If an
extension is not in the list, the default value is given by defaultAllow, described below.
defaultAllow: true
This is the value used for any extensions that are requested, but are not in the allow object described above. If set to true, any extension not listed in allow will be allowed; if false, only
the ones listed in allow (with value true) will be allowed.
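For example, a configuration that blocks everything except a couple of named extensions (the two extension names here are just illustrative choices) might look like this:

window.MathJax = {
  loader: {load: ['[tex]/require']},
  tex: {
    packages: {'[+]': ['require']},
    require: {
      defaultAllow: false,
      allow: {
        enclose: true,
        cancel: true
      }
    }
  }
};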
require Commands
The require extension implements the following macros: \require | {"url":"https://docs.mathjax.org/en/v3.2-latest/input/tex/extensions/require.html","timestamp":"2024-11-03T00:24:08Z","content_type":"text/html","content_length":"24932","record_id":"<urn:uuid:0209d75d-cc19-49d4-bb8d-64706d34bb75>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00063.warc.gz"} |
An interactive applet and associated web page that shows that angle-angle-angle (AAA) is not enough to prove congruence. The applet shows two triangles, one of which can be dragged to resize it,
showing that although they have the same angles they are not the same size and thus not congruent. The web page describes all this and has links to other related pages. Applet can be enlarged to full
screen size for use with a classroom projector. This resource is a component of the Math Open Reference Interactive Geometry textbook project at http://www.mathopenref.com.
Material Type:
John Page
Date Added: | {"url":"https://resourcebank.ca/browse?f.keyword=congruent","timestamp":"2024-11-12T19:09:25Z","content_type":"text/html","content_length":"167528","record_id":"<urn:uuid:d624c6ad-7265-42af-bd40-ed7616cb9b7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00503.warc.gz"} |
Doppler echocardiography
Doppler echocardiography is a procedure that uses Doppler ultrasonography to examine the heart.[1] An echocardiogram uses high frequency sound waves to create an image of the heart while the use of
Doppler technology allows determination of the speed and direction of blood flow by utilizing the Doppler effect.
An echocardiogram can, within certain limits, produce accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect.
One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, any abnormal
communications between the left and right side of the heart, any leaking of blood through the valves (valvular regurgitation), calculation of the cardiac output and calculation of E/A ratio[2] (a
measure of diastolic dysfunction). Contrast-enhanced ultrasound-using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
An advantage of Doppler echocardiography is that it can be used to measure blood flow within the heart without invasive procedures such as cardiac catheterization.
In addition, with slightly different filter/gain settings, the method can measure tissue velocities by tissue Doppler echocardiography. The combination of flow and tissue velocities can be used for
estimating left ventricular filling pressure, although only under certain conditions.[3]
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the
phase shift (when the received signal arrives). However, the calculation result will end up identical.
This procedure is frequently used to examine children's hearts for heart disease because there is no age or size requirement.
2D Doppler imaging
Unlike 1D Doppler imaging, which can only provide one-dimensional velocity and has dependency on the beam to flow angle,[4] 2D velocity estimation using Doppler ultrasound is able to generate
velocity vectors with axial and lateral velocity components. 2D velocity is useful even if complex flow conditions such as stenosis and bifurcation exist. There are two major methods of 2D velocity
estimation using ultrasound: Speckle tracking and crossed beam Vector Doppler, which are based on measuring the time shifts and phase shifts respectively.[5]
Vector Doppler
Vector Doppler is a natural extension of the traditional 1D Doppler imaging based on phase shift. The phase shift is found by taking the autocorrelation between echoes from two consecutive firings.
[6] The main idea of Vector Doppler is to divide the transducer into three apertures: one at the center as the transmit aperture and two on each side as the receive apertures. The phase shifts
measured from left and right apertures are combined to give the axial and lateral velocity components. The positions and the relative angles between apertures need to be tuned according to the depth
of the vessel and the lateral position of the region of interest.[5]
Speckle tracking
Speckle tracking, which is a well-established method in video compression and other applications, can be used to estimate blood flow in ultrasound systems. The basic idea of speckle tracking is to
find the best match of a certain speckle from one frame within a search region in subsequent frames.[5] The decorrelation between frames is one of the major factors degrading its performance. The
decorrelation is mainly caused by the different velocity of pixels within a speckle, as they do not move as a block. This is less severe when measuring the flow at the center, where the changing rate
of the velocity is the lowest. The flow at the center usually has the largest velocity magnitude, called "peak velocity". It is the most needed information in some cases, such as diagnosing stenosis.
[7] There are mainly three methods of finding the best match: SAD (Sum of absolute difference), SSD (Sum of squared difference) and Cross correlation. Assume ${\displaystyle X_{0}(i,j)}$ is a pixel
in the kernel and ${\displaystyle X_{1}(i+\alpha ,j+\beta )}$ is the mapped pixel shifted by ${\displaystyle (\alpha ,\beta )}$ in the search region.[8]
SAD is calculated as: ${\displaystyle D(\alpha ,\beta )=\sum _{i=1}\sum _{j=1}|X_{0}(i,j)-X_{1}(i+\alpha ,j+\beta )|}$
SSD is calculated as: ${\displaystyle D(\alpha ,\beta )=\sum _{i=1}\sum _{j=1}(X_{0}(i,j)-X_{1}(i+\alpha ,j+\beta ))^{2}}$
Normalized cross correlation coefficient is calculated as: ${\displaystyle \rho (\alpha ,\beta )={\frac {\sum _{i=1}\sum _{j=1}(X_{0}(i,j)-{\bar {X_{0}}})(X_{1}(i+\alpha ,j+\beta )-{\bar {X_{1}}})}{\sqrt {(\sum _{i=1}\sum _{j=1}(X_{0}(i,j)-{\bar {X_{0}}})^{2})(\sum _{i=1}\sum _{j=1}(X_{1}(i+\alpha ,j+\beta )-{\bar {X_{1}}})^{2})}}}}$
where ${\displaystyle {\bar {X_{0}}}}$ and ${\displaystyle {\bar {X_{1}}}}$ are the average values of ${\displaystyle X_{0}(i,j)}$ and ${\displaystyle X_{1}(i,j)}$ respectively. The ${\displaystyle
(\alpha ,\beta )}$ pair that gives the lowest D for SAD and SSD, or the largest ρ for the cross correlation, is selected as the estimation of the movement. The velocity is then calculated as the
movement divided by the time difference between the frames. Usually the median or average of multiple estimations is taken to give a more accurate result.[8]
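As a rough illustration of the SAD search described above (the frame representation, kernel size and search range are arbitrary assumptions, and this is only a sketch rather than a reference implementation):

#include <cmath>
#include <limits>
#include <utility>
#include <vector>

// Sum of absolute differences between a K x K kernel in frame f0 at (ki, kj)
// and the same-sized block in frame f1 shifted by (a, b).
double sad(const std::vector<std::vector<double>>& f0,
           const std::vector<std::vector<double>>& f1,
           int ki, int kj, int K, int a, int b) {
    double d = 0.0;
    for (int i = 0; i < K; ++i)
        for (int j = 0; j < K; ++j)
            d += std::abs(f0[ki + i][kj + j] - f1[ki + i + a][kj + j + b]);
    return d;
}

// Exhaustive search over shifts in [-S, S]; returns the (a, b) that minimizes SAD.
// The caller must keep ki, kj, K and S within the frame bounds.
std::pair<int, int> bestShift(const std::vector<std::vector<double>>& f0,
                              const std::vector<std::vector<double>>& f1,
                              int ki, int kj, int K, int S) {
    double best = std::numeric_limits<double>::max();
    std::pair<int, int> shift{0, 0};
    for (int a = -S; a <= S; ++a)
        for (int b = -S; b <= S; ++b) {
            double d = sad(f0, f1, ki, kj, K, a, b);
            if (d < best) { best = d; shift = {a, b}; }
        }
    return shift;   // estimated displacement; velocity = shift / frame interval
}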
Sub pixel accuracy
In ultrasound systems, lateral resolution is usually much lower than the axial resolution. The poor lateral resolution in the B-mode image also results in poor lateral resolution in flow estimation.
Therefore, sub pixel resolution is needed to improve the accuracy of the estimation in the lateral dimension. In the meantime, we could reduce the sampling frequency along the axial dimension to save
computations and memories if the sub pixel movement is estimated accurately enough. There are generally two kinds of methods to obtain the sub pixel accuracy: interpolation methods, such as parabolic
fit, and phase based methods in which the peak lag is found when the phase of the analytic cross correlation function crosses zero.[9]
Interpolation method (parabolic fit)
As shown in the right figure, parabolic fit can help find the real peak of the cross correlation function. The equation for parabolic fit in 1D is:[4] ${\displaystyle k_{int}=k_{s}-{\frac {(R_{12}(k_{s}+1)-R_{12}(k_{s}-1))}{2(R_{12}(k_{s}+1)-2R_{12}(k_{s})+R_{12}(k_{s}-1))}}}$
where ${\displaystyle R_{12}}$ is the cross correlation function and ${\displaystyle k_{s}}$ is the originally found peak. ${\displaystyle k_{int}}$ is then used to find the displacement of
scatterers after interpolation. For the 2D scenario, this is done in both the axial and lateral dimensions. Some other techniques can be used to improve the accuracy and robustness of the
interpolation method, including parabolic fit with bias compensation and matched filter interpolation.[10]
Phase based method
The main idea of this method is to generate synthetic lateral phase and use it to find the phase that crosses zero at the peak lag.[9]
The right figure illustrates the procedure of creating the synthetic lateral phase, as a first step. Basically, the lateral spectrum is split in two to generate two spectra with nonzero center
frequencies. The cross correlation is done for both the up signal and down signal, creating ${\displaystyle R_{up}}$ and ${\displaystyle R_{down}}$ respectively.[9] The lateral correlation function
and axial correlation function are then calculated as follows: ${\displaystyle R_{lateral}=R_{up}*R_{down}^{*};R_{axial}=R_{up}*R_{down}}$
where ${\displaystyle R_{down}^{*}}$ is the complex conjugate of ${\displaystyle R_{down}}$.
They have the same magnitude, and the integer peak is found using traditional cross correlation methods. After the integer peak is located, a 3 by 3 region surrounding the peak is then extracted with
its phase information. For both the lateral and axial dimensions, the zero crossings of a one-dimensional correlation function at the other dimension’s lags are found, and a linear least squares
fitted line is created accordingly. The intersection of the two lines gives the estimate of the 2D displacement.[9]
Comparison between vector Doppler and speckle tracking
Both methods could be used for 2D Velocity Vector Imaging, but Speckle Tracking would be easier to extend to 3D. Also, in Vector Doppler, the depth and resolution of the region of interest are
limited by the aperture size and the maximum angle between the transmit and receive apertures, while Speckle Tracking has the flexibility of alternating the size of the kernel and search region to
adapt to different resolution requirement. However, vector Doppler is less computationally complex than speckle tracking.
Volumetric flow estimation
Velocity estimation from conventional Doppler requires knowledge of the beam-to-flow angle (inclination angle) to produce reasonable results for regular flows and does a poor job of estimating
complex flow patterns, such as those due to stenosis and/or bifurcation. Volumetric flow estimation requires integrating velocity across the vessel cross-section, with assumptions about the vessel
geometry, further complicating flow estimates. 2D Doppler data can be used to calculate the volumetric flow in certain integration planes.[11] The integration plane is chosen to be perpendicular to
the beam, and Doppler power (generated from power Doppler mode of Doppler ultrasound) can be used to differentiate between the components that are inside and outside the vessel. This method does not
require prior knowledge of the Doppler angle, flow profile and vessel geometry.[11]
Promise of 3D
Until recently, ultrasound images have been 2D views and have relied on highly trained specialists to properly orient the probe and select the position within the body to image with only few and
complex visual cues. The complete measurement of 3D velocity vectors makes many post processing techniques possible. Not only is the volumetric flow across any plane measurable, but also, other
physical information such as stress and pressure can be calculated based on the 3D velocity field. However, it is quite challenging to measure the complex blood flow to give velocity vectors, due to
the fast acquisition rate and the massive computations needed for it. Plane wave technique is thus promising as it can generate very high frame rate.[12]
See also
1. "Echocardiogram". MedlinePlus. Retrieved 2017-12-15.
2. Abdul Latif Mohamed, Jun Yong, Jamil Masiyati, Lee Lim, Sze Chec Tee. The Prevalence Of Diastolic Dysfunction In Patients With Hypertension Referred For Echocardiographic Assessment of Left
Ventricular Function. Malaysian Journal of Medical Sciences, Vol. 11, No. 1, January 2004, pp. 66-74
3. Ommen, S. R.; Nishimura, R. A.; Appleton, C. P.; Miller, F. A.; Oh, J. K.; Redfield, M. M.; Tajik, A. J. (10 October 2000). "Clinical Utility of Doppler Echocardiography and Tissue Doppler
Imaging in the Estimation of Left Ventricular Filling Pressures : A Comparative Simultaneous Doppler-Catheterization Study". Circulation. 102 (15): 1788–1794. doi:10.1161/01.CIR.102.15.1788. PMID
11023933. Retrieved 12 July 2012.
4. J. A. Jensen, Estimation of Blood Velocities Using Ultrasound, A Signal Processing Approach, New York: Cambridge University Press, 1996.
5. P. S. a. L. L. Abigail Swillens, "Two-Dimensional Blood Velocity Estimation With Ultrasound: Speckle Tracking Versus Crossed-Beam Vector Doppler Based on Flow Simulations in a Carotid Bifurcation
Model," IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, pp. 327-338, 2010.
6. R. S. C. Cobbold, Foundations of Biomedical Ultrasound, Oxford University Press, 2007.
7. G. Reutern, M. Goertler, N. Bornstein, M. Sette, D. Evans, A. Hetzel, M. Kaps, F. Perren, A. Razumovky, T. Shiogai, E. Titianova, P. Traubner, N. Venketasubramanian, L. Wong and M. Yasaka,
"Grading Carotid Stenosis Using Ultrasonic Methods," Stroke, Journal of the American Heart Association, vol. 43, pp. 916-921, 2012.
8. J. Luo and E. E. Konofagou, "A Fast Motion and Strain Estimation," in Ultrasound Symposium, 2010.
9. X. Chen, M. J. Zohdy, S. Y. Emelianov and M. O'Donnell, "Lateral Speckle Tracking Using Synthetic Lateral Phase," IEEE Transactions on Ultrasonics, Ferroelectrcs and Frequency Control, vol. 51,
no. 5, pp. 540-550, 2004.
10. X. Lai and H. Torp, "Interpolation Methods for Time-Delay Estimation Using Cross-Correlation Method for Blood Velocity Measurement," IEEE Transactions on Ultrasonics, Ferroelectrcs and Frequency
Control, vol. 46, no. 2, pp. 277-290, 1999.
11. M. Richards, O. Kripfgans, J. Rubin, A. Hall and J. Fowlkes, "Mean Volume Flow Estimation in Pulsatile Flow Conditions," Ultrasound in Med. & Biol., vol. 35, pp. 1880-1891, 2009.
12. J. Udesen, F. Gran, K. Hansen, J. Jensen, C. Thomsen and M. Nielsen, "High Frame Rate Blood Vector Velocity Imaging Using Plane Waves: Simulations and Preliminary Experiments," IEEE Transactions
on Ultrasonics, Ferroelectrics and Frequency Control, vol. 55, no. 8, pp. 1729-1743, 2008.
External links | {"url":"http://medbox.iiab.me/kiwix/wikipedia_en_medicine_2019-12/A/Doppler_echocardiography","timestamp":"2024-11-14T07:43:19Z","content_type":"text/html","content_length":"92559","record_id":"<urn:uuid:1b884c77-4b95-44db-9701-499e10ea21aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00486.warc.gz"} |
The Advanced Course requires a bit more mathematics, although nothing too difficult.
Most of the common prefexes were introduced in the Intermediate Course, the last few just need to be added in:
Giga (G) = * 1,000,000,000
Mega (M) = * 1,000,000
kilo (k) = * 1,000
milli (m) = 1/1,000th
micro (μ) = 1/1,000,000th
nano (n) = 1/1,000,000,000th
pico (p) = 1/1,000,000,000,000th
Power Calculations
You were introduced to ohms law (V=IR) and the power calculation (P=VI) at foundation level, using substitution the power formula can be written differently:
P=VI, substituting V=IR into the equation gives P = IR * I, this is normally written:
P = I^2R
P=VI, substituting I=V/R into the equation gives P = V * V/R, this is normally written:
P = V^2/R
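For example, if a current of 2A flows through a 10 ohm resistor:
P = I^2R = 2^2 * 10 = 40 Watts
The voltage across the resistor is V = IR = 2 * 10 = 20V, so the same answer comes from P = V^2/R = 20^2/10 = 40 Watts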
Series/Parallel Resistors
Resistor Symbol:
Old Resistor Symbol:
When constructing circuits, it is often the case that you don't have the exact value that is needed and so by connecting components in series or parallel you can get the value required.
Firstly dealing with resistors, if you connect two resistors in series you simply need to add the values together.
With parallel resistors you use the formula as follows:
1/R = 1/R1 + 1/R2 + 1/R3 .......
So for two resistors of 5600 ohms this gives 1/R = 1/5600 + 1/5600
Which then becomes 1/R = 2/5600
To get R you can transpose it to give:
R/1 = 5600/2 which can be written as R = 5600/2 OR 2800 OHMS
So if you have two resistors of the same value in parallel the total value becomes half of one of the original values.
As long as the values are the same this also works for 3 in parallel giving 1/3 of one of the original values etc.
The formula can be adapted to give the result for two resistors:
1/R = 1/R1 + 1/R2 = (R2 + R1)/(R1 * R2)
To get R this is than transposed to give:
R = (R1 * R2) / (R1 + R2)
Either this can be used or you can add the resistors using the standard formula:
1/R = 1/R1 + 1/R2 + 1/R3 .......
So for a 560 ohm resistor in parallel with a 360 ohm resistor in parallel with a 56 ohm resistor:
1/R = 1/560 + 1/360 + 1/56 = 1/560 + 1/360 + 10/560
1/R = 11/560 + 1/360 = ((11*360)+ (1*560)) / (560 * 360)
1/R = (3960 + 560) / 201600 = 4520 / 201600 = 452/20160 = 226/10080 = 113/5040
So R = 5040/113 = 44.6 Ohms
Potential Dividers
Using two resistors in series between the positive supply and ground it is possible to Divide the voltage using two resistors, this is called a potential divider.
The voltage at the centre point is proportional to the two resistor values and is given by:
V[out]=V[in] * R[2] / (R[1] + R[2])
So if you have a 9V supply and want to produce 3V using a potential divider, if R[2]=4K7 what does R[1] need to be.
Firstly we need to transpose the formula:
V[out]R[1] + V[out]R[2] = V[in] * R[2]
V[out]R[1] = V[in] * R[2] - V[out]R[2]
R[1] = (V[in] * R[2] - V[out]R[2]) / V[out]
R[1] = (9 * 4700 - 3 * 4700) / 3 = (42300 - 14100)/3 = 28200/3 = 9400 ohms
In the above example you could probably work out that R[1] needed double the value of R[2], but it won't always be so obvious and so you can then use the formula to do the calculation.
Capacitor Symbol:
Capacitors are made up of conductive plates separated by a dielectric (insulator), the formula to calculate the capacitance is as follows:
C = kA/d where k = ε[0]ε[r]
C is the capacitance in Farads
k is the dielectric constant
ε[0] is the electric constant (8.854*10^-12Fm^-1)
ε[r] is the permittivity of the material relative to a vacuum (dry air is 1.000536)
A is the area of the overlap of the two plates in square metres
d is the distance between the plates in metres
You need to be aware of and understand the above formula
Time Constant
The time taken to charge a capacitor up to 63% of the supply voltage is defined by the formula:
T = CR
This is also the time it takes to discharge to 37% of its initial voltage.
Series/Parallel Capacitors
Adding Capacitors together is the exact opposite of what you do with resistors:
Capacitors in Parallel are just added together to give a total value.
Capacitors in Series use the formula:
1/C = 1/C[1] + 1/C[2] + 1/C[3] .......
Inductor Symbol:
Series/Parallel Inductors
Series and Parallel Inductors can be added together the same way as resistors are added.
When capacitors and inductors are used in radio circuits the reactance can be calculated as follows
X[L] = 2πfL
X[C] = 1/(2πfC)
Impedance is the combination of reactance and resistance, and because the voltage and current are in phase for resistors and 90 degrees out of phase for reactance, adding the two is slightly more complicated.
For series connected RC or RL circuits:
|Z| = squareroot(R^2 + X^2)
The Impedance will also have a phase angle which can be calculated using:
tanθ = X / R
to get θ you use the tan^-1 button on your calculator.
The voltage across the Resistance and Reactance can be calculated in a similar manner:
|V[Z]| = squareroot(V[R]^2 + V[X]^2)
For resonance X[C] = X[L]
The formula for calculating resonant frequency is as follows:
f = 1 / [2* π * squareroot(L * C)]
Transposition of the resonant frequency formula
The resonant frequency of a circuit is given as:
f = 1 / (2π * squareroot(L * C))
you will need to be able to transpose this to calculate C or L:
squareroot(L * C) = 1/(2πf)
LC = (1/2πf)^2
C = [(1/2πf)^2]/L or L = [(1/2πf)^2]/C
you will only be given:
f = 1 / (2π * squareroot(L * C))
so if you struggle to transpose the formula you will need to memorise the transposed forms.
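If you would rather check the transposition numerically than trust your algebra, here is a small Python sketch (for illustration only; the 1 μH and 100 pF values are made-up examples, not from the course text):

import math

def resonant_frequency(l, c):
    return 1 / (2 * math.pi * math.sqrt(l * c))

def capacitance_for(f, l):
    # C = [(1/(2*pi*f))^2] / L, the transposed form above
    return (1 / (2 * math.pi * f)) ** 2 / l

l = 1e-6      # 1 microhenry (example value)
c = 100e-12   # 100 picofarads (example value)
f = resonant_frequency(l, c)
print(f)                      # about 15.9 MHz
print(capacitance_for(f, l))  # recovers about 100 pF, confirming the transposition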
The Q of a circuit
The Magnification factor Q, or Quality, of a series tuned circuit can be worked out by looking at the current (I) flowing through the circuit at resonance. This current is simply the source voltage,
e.g. 100mV, divided by the series resistance, e.g. 100R (because the reactances of the L and C cancel each other out), which in this case would be 100mV/100R = 1mA.
At resonance the reactance of the L (X[L]) or C (X[C]) can be calculated; e.g. if X[C] = X[L] = 1000R, multiplying this reactance by the current (I) gives 1mA * 1000R = 1V, which is 10 times
the source Voltage (100mV). So in this case the Magnification factor Q = 10.
The Q of a circuit can also be calculated by measuring the bandwidth at the half power points (0.707 of the voltage), then applying the formula:
Q = f[c]/(f[u] - f[l]) = centre frequency / bandwidth
Another method of calculating Q is to use the formula:
Q = 2πfL/R or 1/(2πfCR)
For a parallel tuned circuit the AC Resistance is at a maximum at Resonance, this resistance is called the Dynamic resistance, R[D] it can be calculated as follows:
R[D] = L/CR
The value of Q can also be worked out using R[D]: Q = R[D]/(2πfL) = 2πfCR[D] (this follows from combining the two formulas above).
Damping resistors can be used to reduce the likelihood of a tuned circuit oscillating; adding in damping resistors will however reduce the Q.
Quarter Wave Impedance Transformer
A quarter wave length of transmission line can be used as an Impedance Transformer, the impedance of the transmission line Z[o] required can be calculated using the following:
Z[o]^2 = Z[in] * Z[out]
So if you want to connect 50 ohm impedance (Z[in]) to a 100 ohm load (Z[out]) you will need to calculate the impedance of the quarter wave transmission line:
Z[o]^2 = 50 * 100 = 5000
Z[o] = 70.7 ohms
Therefore if you connect a quarter wave of 70.7 ohm transmission line between the 50 ohm feeder and the 100 ohm load you shouldn't get a mismatch.
Field Strength Calculation
To calculate the field strength around your antenna you first need to work out the erp:
erp = power * gain (linear)
Therefore if directly in front of your antenna you have a gain of 3dBd (relative to a dipole) you will have double the power in that direction. So in this situation if you are transmitting 8W the erp
would be 2 * 8W = 16W
To calculate the Field Strength E you can then use the formula:
E = [7 * squareroot(erp)]/d
So at a distance of 2 metres in front of the antenna mentioned above the 16W erp becomes an Electric Field Strength of:
E = [7 * squareroot(16)]/2
E = [7 * 4]/2
E = 28/2 = 14 Volts per Metre
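The same field strength sum can be done in a few lines of Python (an illustration only; note that the helper converts the dBd figure to a linear gain itself, so it gives roughly 14 V/m rather than exactly 14 because 3dB is not exactly a factor of two):

import math

def field_strength(power_w, gain_dbd, distance_m):
    erp = power_w * 10 ** (gain_dbd / 10)   # dB gain converted to a linear multiplier
    return 7 * math.sqrt(erp) / distance_m

print(field_strength(8, 3, 2))  # about 14 Volts per Metre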
Transformer Symbol:
An AC current in a coil produces an alternating magnetic field and if it is adjacent to another coil an AC current will be induced in the second coil; this effect is called mutual inductance.
Ignoring losses the ratio of the Primary to the Secondary of a transformer determines the Output voltage related to the input:
V[s]=V[P] * N[s]/N[p]
The current is inversely proportional:
I[P]=I[S] * N[s]/N[p]
and the impedance ratio for a transformer is as follows:
Z[P]=Z[S] * (N[p]/N[s])^2
Eddy currents are loops of induced current which you can get in the cores of transformers, producing heating and decreasing the efficiency. They can be reduced by using either a laminated or ferrite core.
PLL (Phase Lock Loops)
PLLs are used to produce discrete frequencies in radios, they use a phase comparator to compare a divided down (divided by A) reference clock (f[crystal])with the output of a Voltage Controlled
Oscillator (VCO) (f[out]) divided by a programmable divider (divide by N) and use the comparator output to adjust the VCO frequency.
When using a PLL you would apply the formula:
F[out] = (f[crystal] * N)/A
f[step] = f[crystal] / A
FM Modulation
FM Modulation is where the carrier frequency is altered by an amount proportional to the amplitude of the applied (AF) audio signal. The deviation (Δf) is how much the carrier varies when the
modulated signal is applied.
Carson's rule
Theoretically a carrier that is Frequency Modulated by a broad spectrum of frequencies (eg. Audio) has infinite bandwidth, however 98% of the bandwidth of the signal can be calculated by using the
formula below:
bw = 2(AF[max] + Δf)
VSWR (Voltage Standing Wave Ratio)
When you connect a radio to an antenna via some coax, if there is an impedance mismatch, then when you transmit some of the AC voltage will get reflected back down the coax. The interaction between
the forward standing wave Voltage (V[f]) and the reflected standing wave Voltage (V[r]) can produce unacceptable feed point impedances which can damage your radio and are therefore to be avoided.
The VSWR can be calculated using the formula below:
SWR = V[max]/V[min] = (V[f] + V[r]) / (V[f] - V[r])
Diode Symbol:
A conductor is a material where each atom has a surplus of electrons which can easily be encouraged to move from one atom to the next.
Using a battery connected to a conductor, the electrons flow from the cathode (-ve) to the anode (+ve), this is the opposite way to how conventional current is viewed, therefore what is normally
described is hole flow (the flow of the gaps left by the electrons flowing in the opposite direction).
By treating silicon it can be made to either have an excess of electrons, or a deficit.
silicon treated to have an excess of electrons is described as having been negatively doped (n) and silicon treated to have a deficit is described as having been positively doped (p).
If you join some p doped silicon to n doped silicon you will get a pn junction and the electrons will easily flow across that junction from the n doped material to the p doped material, but not very
easily in the opposite direction. This configuration is called a diode and even in the forward direction you will still need around 0.7V to get the current to flow.
The direction of the arrow in the diode symbol is the direction conventional current will flow easily.
BJTs (Bipolar Junction Transistors)
Transistor Symbol:
A Bipolar Junction Transistor consists of either doped silicon in an npn configuration or pnp configuration.
With the pnp transistor the electrons will flow across the base-emitter (np) junction [like a diode](in the opposite direction of the arrow on the transistor symbol), but not want to flow across the
collector-base (pn) junction [like a diode], by supplying extra electrons to the base (n) you can overcome the reverse biased collector-base (pn) diode junction and allow current to flow from the
collector to the emitter.
Using conventional current flow (hole flow), current flows into the base of an npn transistor and out of the emitter, allowing a proportionally greater current to flow from the collector to the emitter; this is
the transistor's current gain β.
Radios often have a PTT (Push To Talk) output which can be used to drive other circuits, the circuit which is often used is an open collector NPN transistor. The one below is from the FT897D
schematic diagram:
The transistor used is a 2SD2211 and by looking at the datasheet for that transistor, one can find the transistor's β, DC current transfer ratio, current gain or hfe (all the same thing), in this
case using test conditions of Vcc = 5V AND Ic=0.1A it is specified as having a minimum β of 120 and a maximum of 390.
When PTT is enabled the transistor is being fed 5V, through the 470 ohm resistor and then through what is effectively a diode (pn junction of the transistor), so to calculate the current into the
base of the transistor I[b] you need to subtract 0.7V (for the base-emitter voltage drop) from 5V to give 4.3V across the 470 ohm resistor.
As V=IR and I=V/R you can calculate the base current I[b]:
I[b] = 4.3V / 470 = 0.00915 Amps
This would normally be written as 9.15 mA
To work out that the gain of the transistor is enough to switch on an output relay you would use the minimum β value from the data sheet and multiply it by the base current I[b] to give the collector
current I[c]:
I[c] = β * I[b] = 120 * 9.15mA = 1.10 Amps
The output of the transistor (collector) is connected to a reverse biased diode to earth, the reason for this is so that if you connect the output to an inductive load (e.g. a relay) when you turn
the transistor off you can get a large negative voltage which would damage the transistor if the diode wasn't there to keep the negative voltage down to around -0.7V.
There is also a capacitor (0.01μF) and an EMI Suppression Filter inductor BLM21P300SPT (30 ohms impedance, 0.015 ohms DC resistance, max current = 3 Amps) to limit any rf being fed back into the
radio from external circuits.
Although the transistor has a maximum Ic of 1.5 Amps and we worked out Ic would be at least 1.10 Amps and the EMI Suppression Filter is rated at 3 Amps the FT897D manual specifies that the output is
only rated to drive a relay coil with a current of up to 400mA. This may be due to the width of the tracks on the printed circuit board.
The Ic we worked out was based on the minimum β and normally it isn't a good idea to rely on the gain of the transistor to ensure that you don't exceed the maximum collector current of the device
(make sure the load resistor at the collector is enough to keep the current down).
In this situation, if you are using a voltage supply of 13.8V and the maximum current allowed is 400mA, the saturation voltage (collector-emitter voltage) for a BJT is around 0.2V, so 13.8V - 0.2V = 13.6V is left across the load.
As V=IR, R = V/I = 13.6V/0.4A = 34 ohms.
Therefore the minimum load resistance the output can drive is 34 ohms.
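The whole PTT output calculation can be summarised in a short Python sketch (for illustration only, using the figures quoted above):

v_supply = 5.0    # volts feeding the 470 ohm base resistor
r_base = 470.0
v_be = 0.7        # base-emitter diode drop
beta_min = 120    # minimum current gain from the datasheet

i_b = (v_supply - v_be) / r_base   # about 0.00915 A, i.e. 9.15 mA
i_c = beta_min * i_b               # about 1.10 A

v_load_supply = 13.8
v_ce_sat = 0.2                     # typical BJT saturation voltage
i_max = 0.4                        # 400 mA limit from the manual
r_load_min = (v_load_supply - v_ce_sat) / i_max   # 34 ohms

print(i_b, i_c, r_load_min)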
Power Levels (dB)
RF power gain/loss is normally measured in decibels, and is defined by the following formula:
Gain (loss) = 10log[10](Power out/Power in) dB
So if your amplifier has an input of 1W and an output of 10W you can work out the gain in dB as follows:
Gain (loss) = 10log[10](10/1) dB = 10log[10](10) dB = 10 * 1 dB = 10dB
If you measured the voltage on the input and the output of your amplifier as you know the input and output impedance this gives:
Gain (loss) = 10log[10][(V[out]^2 / R[out]) / (V[in]^2 / R[in])] dB
If R[in] = R[out] you can cancel out the two resistors giving:
Gain (loss) = 10log[10](V[out]^2 / V[in]^2) dB
One of the standard properties of logarithms is that:
10log[10](a^2/b^2) = 20log[10](a/b)
Therefore the gain can be re-written as:
Gain (loss) = 20log[10](V[out] / V[in]) dB
When using Decibels for RF Power it is assumed that the input and output impedance are the same.
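Both versions of the decibel formula are easy to experiment with in Python (an illustration only, not exam material):

import math

def gain_db_power(p_out, p_in):
    return 10 * math.log10(p_out / p_in)

def gain_db_voltage(v_out, v_in):
    return 20 * math.log10(v_out / v_in)

print(gain_db_power(10, 1))   # 10.0 dB, the amplifier example above
print(gain_db_voltage(2, 1))  # about 6.02 dB for doubling the voltage
print(gain_db_power(4, 1))    # also about 6.02 dB - doubling V quadruples P in the same impedance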
Antenna Gain (dB)
When quoting the gain of a Yagi Antenna you would normally use dBd, which is gain relative to a dipole (in dB), the formula to work it out is as follows:
Gain = 10log[10][(power from Yagi)/(power from dipole)] dBd
Return Loss (dB)
The Return Loss is related to the SWR in that the lower (better) the SWR, the higher the Return Loss. Return Loss is calculated as follows:
Return Loss = 10log[10](Incident power / Reflected power) dB
Incident power is sometimes called Forward power
Velocity Factor
The velocity of light in a vacuum (c) is 299,792,458 m/s, and the speed of a radio wave in a vacuum is the same. However radio signals travel at a slower speed through conductors and the ratio between
the velocity of light and the speed radio signals travel through a specific conductor is called the Velocity factor (VF), this can be written as follows:
v = VF * c
To make calculations easier it is normal to round c up to 300,000,000 m/s
The wavelength of a radio wave is related to its Frequency by the formula:
v = fλ
where v is the velocity of the radio wave, usually between 0.5 and 1.0 times the speed of light, f is the frequency and λ is the wavelength.
Using v = 300,000,000 m/s, so at 145,375,000 Hz (145.375MHz) the wavelength λ can be calculated as follows:
λ = v/f = 300,000,000 / 145,375,000 = 2.064 metres
You may note that this is very close to 2 metres which is why 145.375MHz is described as being in the 2 metre frequency band.
The period of the waveform can also be calculated using the formula:
T = 1/f
Where T is in Seconds and f is in Hz.
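Putting the last two formulas together, a short Python sketch (illustration only) reproduces the 2 metre example and also gives the period of the waveform:

c = 300_000_000   # m/s, the rounded value used above
f = 145_375_000   # Hz

wavelength = c / f   # about 2.064 metres
period = 1 / f       # about 6.88e-9 seconds, i.e. 6.88 ns

print(wavelength, period)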
PSU Voltages
When designing power supplies you need to be aware that mains voltage is rms, and you need to use peak voltage when calculating maximum voltage levels and the DC output after rectification and
smoothing. The RMS (Root Mean Squared) Voltage is related to the peak voltage using the formula:
V[rms] = V[peak] / 1.414
The number 1.414 is actually the square root of 2, but for most calculations 1.414 will do.
So using a 230V in, 15V out transformer, your output (with no load) will be 15V RMS. You will need to calculate the peak DC voltage so that you can work out the voltage ratings needed in your power supply:
V[peak] = 15V * 1.414 = 21.21V
So after smoothing you will have 21.21V, although you will probably lose around 10% under full load conditions, which is the reason for using a regulated power supply rather than just a transformer,
rectifier and smoothing capacitor.
Last modified: July 07 2017 15:22:34. | {"url":"http://www.staffordengineeringsystems.co.uk/advanced.php","timestamp":"2024-11-03T23:25:39Z","content_type":"text/html","content_length":"23715","record_id":"<urn:uuid:6e5202be-01dd-4a85-a5cf-f1aad7678b53>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00775.warc.gz"} |
Worksheets for 8th Class
Recommended Topics for you
Number Sense and Operations
Number Sense Fluency Triads
Number Sense with Decimals
Number Sense Review Day 2
PA: Number Sense Strand (8.1 - 8.4)
8th Grade Unit 8 Number Sense
PRE-Algebra SOL Review, number sense
Statistics & Number Sense
3/10 Warm up Number Sense
Number and Number Sense (7/8)
Short Review on Number Sense
Explore printable Number Sense worksheets for 8th Class
Number Sense worksheets for Class 8 are an essential tool for teachers looking to enhance their students' understanding of mathematics. These worksheets provide a variety of engaging and challenging
exercises that focus on building a strong foundation in number sense, which is crucial for success in higher-level math courses. By incorporating these worksheets into their lesson plans, teachers
can ensure that their Class 8 students develop the necessary skills to tackle complex mathematical problems with confidence. Furthermore, these Number Sense worksheets for Class 8 are designed to
cater to different learning styles, making them an invaluable resource for any math classroom.
Quizizz is a fantastic platform that offers a wide range of educational resources, including Number Sense worksheets for Class 8, to support teachers in their quest to provide the best possible
learning experience for their students. In addition to worksheets, Quizizz also offers interactive quizzes, engaging games, and other multimedia content that can be easily integrated into lesson
plans to create a dynamic and interactive learning environment. By utilizing Quizizz's extensive library of resources, teachers can ensure that their Class 8 math students receive a well-rounded
education that not only focuses on number sense but also covers other important mathematical concepts. Moreover, Quizizz's user-friendly interface and customizable features make it an ideal tool for
teachers looking to create personalized learning experiences for their students. | {"url":"https://quizizz.com/en/number-sense-worksheets-class-8","timestamp":"2024-11-07T15:28:49Z","content_type":"text/html","content_length":"163637","record_id":"<urn:uuid:8da5e48a-f17f-4b19-8e49-f45fd2ef23f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00740.warc.gz"} |
On triple intersections of three families of unit circles
Let p[1], p[2], p[3] be three distinct points in the plane, and, for i = 1, 2, 3, let C[i] be a family of n unit circles that pass through p[i]. We address a conjecture made by Székely, and show that the
number of points incident to a circle of each family is O(n^{11/6}), improving an earlier bound for this problem due to Elekes, Simonovits, and Szabó [4]. The problem is a special instance of a more
general problem studied by Elekes and Szabó [5] (and by Elekes and Rónyai [3]).
Publication series
Name Proceedings of the Annual Symposium on Computational Geometry
Conference 30th Annual Symposium on Computational Geometry, SoCG 2014
Country/Territory Japan
City Kyoto
Period 8/06/14 → 11/06/14
• Combinatorial geometry
• Incidences
• Polynomials
• Unit circles
Dive into the research topics of 'On triple intersections of three families of unit circles'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/on-triple-intersections-of-three-families-of-unit-circles-14","timestamp":"2024-11-12T03:40:27Z","content_type":"text/html","content_length":"44002","record_id":"<urn:uuid:fd7d67e2-482c-4c35-82d9-1d331835c0d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00405.warc.gz"} |
OpenStax College Physics for AP® Courses, Chapter 30, Problem 33 (Problems & Exercises)
(a) What energy photons can pump chromium atoms in a ruby laser from the ground state to its second and third excited states? (b) What are the wavelengths of these photons? Verify that they are in
the visible part of the spectrum.
Question by
is licensed under
CC BY 4.0
Final Answer
a. $\textrm{E}_2 = 2.3 \textrm{ eV}$
$\textrm{E}_3 = 3.0 \textrm{ eV}$
b. $\lambda_2 = 540 \textrm{ nm}.$ This is green.
$\lambda_3 = 410 \textrm{ nm}$. This is violet.
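As a quick numerical check (not part of the OpenStax solution itself), the same answers follow from the shortcut hc ≈ 1240 eV·nm used in the video solution below:

hc = 1240  # eV * nm

for energy_ev in (2.3, 3.0):
    print(energy_ev, "eV ->", round(hc / energy_ev), "nm")
# about 539 nm (i.e. 540 nm to two significant figures, green) and about 413 nm (i.e. 410 nm, violet)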
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. Here's a picture illustrating the energy levels of electrons in ruby in a ruby-chromium laser and we are told to say what is the energy of the
second and third states beginning at ground state. So from ground state, it would take 2.3 electron volts to get to the second energy level whereas from the ground state, it would take 3.0 electron
volts to get to the third energy level. And what wavelength photon would correspond to these energy differences. Well, energy of a photon is Planck's constant times speed of light divided by
wavelength and so we can multiply both sides by λ over E to solve for λ. So the wavelength is hc over E. So the wavelength to get to the second energy level is gonna be 1240 electron volt nanometers;
that's what the product hc is divided by 2.3 electron volts and these electron volts cancel leaving us with units of nanometers that's 540 nanometers; that's the color green. And to get to the third
energy level, we expect an answer that is a shorter wavelength because shorter wavelengths have higher frequency and therefore higher energy since you could also express energy with the formula hf.
And so getting a higher energy to the third energy level should require a shorter wavelength and sure enough, it's 410 nanometers which is the color violet. | {"url":"https://collegephysicsanswers.com/openstax-solutions/what-energy-photons-can-pump-chromium-atoms-ruby-laser-ground-state-its-second-0","timestamp":"2024-11-08T17:36:17Z","content_type":"text/html","content_length":"197990","record_id":"<urn:uuid:af89bb7f-1879-44b1-a02b-7a043e42dfa4>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00138.warc.gz"} |
1. Repayment Calculator Excel: A Powerful Tool for Managing Your Loans
by Mark Soldy
In the realm of personal finance, meticulous planning and informed decision-making hold the key to achieving financial stability and success. Whether you’re an individual seeking to optimize your
loan repayment strategy or a business owner navigating the complexities of managing multiple loans, the [1. Repayment Calculator Excel: A Powerful Tool for Managing Your Loans] offers a comprehensive
Key Takeaways:
• A loan involves an amount, interest rate, number of periodic payments, and payment amount.
• PMT function in Excel calculates loan payment amounts.
• PMT formula: PMT ( rate, periods, - amount)
• Rate equals the annual interest rate divided by payment periods per year.
• Periods represent the total number of loan payments.
• Amount refers to the total loan amount.
• Optional arguments include payment type (0 for end of period, 1 for beginning) and future value (the balance you want left after the final payment; 0 if omitted).
Repayment Calculator Excel: A Tool to Tame Your Loans
Juggling multiple loans can be a financial tightrope walk, but with the repayment calculator Excel, you can transform loan management into a walk in the park. This powerful tool helps you understand
your loans, project payments, and make informed decisions about your finances.
Unveiling the Essence of Loan Repayment
To comprehend the inner workings of the repayment calculator Excel, we must first unravel the fundamental components of a loan:
1. Loan Amount: The total sum borrowed.
2. Interest Rate: The cost of borrowing money, usually expressed as an annual percentage.
3. Loan Term: The duration over which the loan is to be repaid.
4. Repayment Amount: The fixed sum paid periodically to settle the loan.
Harnessing the Power of PMT Function
The repayment calculator Excel leverages the PMT function to calculate the repayment amount for a loan. This function takes three mandatory arguments:
1. Interest Rate: The annual interest rate divided by the number of payment periods per year.
2. Number of Periods: The total number of payments to be made over the loan’s lifetime.
3. Loan Amount: The total amount borrowed, entered as a negative value.
Additional Arguments:
1. Future Value: The balance you want remaining after the last payment has been made (0 if omitted).
2. Payment Type: Specify whether payments are made at the beginning (1) or end (0, the default) of each period.
Navigating PMT Function with Examples
Let’s embark on a practical journey to illustrate the PMT function’s prowess. Consider a loan of $10,000 with an annual interest rate of 5% over five years. With monthly payments, the PMT function
would be:
=PMT(5%/12, 5*12, -10000)
Evaluating this formula yields a monthly repayment amount of approximately $188.71.
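If you would like to verify the spreadsheet result outside of Excel, the annuity formula behind PMT can be written in a few lines of Python (a sketch for checking only; the pmt name here is our own helper, not an Excel or library function):

def pmt(rate, nper, pv):
    # Payment for an ordinary annuity (payments at the end of each period),
    # mirroring Excel's sign convention where the loan amount is entered as a negative value.
    return -pv * rate / (1 - (1 + rate) ** -nper)

print(round(pmt(0.05 / 12, 5 * 12, -10000), 2))  # about 188.71 per month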
Benefits of Repayment Calculator Excel:
1. Accurate Calculations: Precisely computes loan payments, interest, and total repayment.
2. Scenario Analysis: Explore different interest rates, loan terms, and payment amounts to optimize your repayment strategy.
3. Budget Planning: Project future loan payments to create a realistic budget.
4. Debt Management: Keep track of multiple loans, ensuring timely payments and avoiding penalties.
In conclusion, the repayment calculator Excel is an invaluable tool that empowers you to take control of your loans. Its user-friendly interface and accurate calculations make it a must-have for
anyone seeking to manage their finances effectively.
Amortization Calculator Excel Formula
Navigating the complexities of loan repayment schedules can be daunting, especially if you’re juggling multiple loans with varying terms and interest rates. Fortunately, Excel’s amortization
calculator formula simplifies this process, providing a clear breakdown of your loan payments, interest, and principal balances over time.
To harness the power of Excel’s amortization calculator, follow these simple steps:
1. Set Up Your Loan Parameters:
• Open a new Excel spreadsheet and create a table with columns for "Month," "Beginning Balance," "Payment," "Interest Paid," "Principal Paid," and "Ending Balance."
• In the first row, enter the following information:
□ Cell A1: "Month 0" (this represents the start of the loan)
□ Cell B1: The total loan amount
□ Cell C1: The monthly payment amount
□ Cell D1: The annual interest rate divided by 12, i.e. the monthly rate (e.g., for a 6% annual interest rate, enter 0.06 / 12)
• Leave the remaining cells empty for now.
2. Calculate the First Row of the Amortization Schedule:
• In cell B2, enter the formula =B1. For the first month, the beginning balance is simply the loan amount.
• In cell C2, enter the formula =C1. This copies the monthly payment amount.
• In cell D2, enter the formula =B2*$D$1. This calculates the interest paid for the month; the dollar signs keep the reference to the rate in D1 fixed when the formula is copied down.
• In cell E2, enter the formula =C2-D2. This calculates the principal paid for the month.
• In cell F2, enter the formula =B2-E2. This calculates the ending balance for the month.
3. Complete the Amortization Schedule:
• In cell B3, enter the formula =F2, so that each month's beginning balance is the previous month's ending balance, then copy cells C2:F2 into C3:F3.
• Drag the formulas in cells B3:F3 down to the remaining rows in the table. This will automatically populate the amortization schedule for the entire loan term.
4. Analyze Your Loan:
• Review the amortization schedule to understand how your payments are allocated towards interest and principal over time.
• Identify the month when you reach the "break-even" point, where the majority of each payment is applied towards the principal.
• Calculate the total interest paid over the life of the loan by summing up the values in the "Interest Paid" column.
The amortization calculator Excel formula provides a comprehensive breakdown of your loan repayment schedule, empowering you to make informed decisions about your finances. Whether you’re planning
for the future or refinancing an existing loan, this tool offers valuable insights into your loan’s behavior.
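If you prefer to prototype the schedule in code before building it in Excel, the following Python sketch (an illustration only, using the same column logic described above) prints the first few rows of an amortization table:

def amortization_schedule(principal, annual_rate, months):
    monthly_rate = annual_rate / 12
    payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)
    balance = principal
    rows = []
    for month in range(1, months + 1):
        interest = balance * monthly_rate
        principal_paid = payment - interest
        balance -= principal_paid
        rows.append((month, round(payment, 2), round(interest, 2),
                     round(principal_paid, 2), round(max(balance, 0.0), 2)))
    return rows

for row in amortization_schedule(10000, 0.05, 60)[:3]:
    print(row)
# month 1: payment about 188.71, interest about 41.67, principal about 147.05, balance about 9852.95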
Key Takeaways:
• The amortization calculator excel formula generates a detailed repayment schedule, tracking loan payments, interest, and principal balances over time.
• It helps you understand how your payments are allocated, allowing you to assess the impact of different loan terms and interest rates.
• The formula is easy to use, requiring only basic Excel skills and readily available loan information.
• This tool is valuable for financial planning, budgeting, and making informed decisions about loan repayment strategies.
Q1: How can I create a loan repayment calculator in Excel?
A1: You can easily create a loan repayment calculator in Excel using the PMT function. The formula for PMT is:
PMT(rate, periods, amount)
• rate is the annual interest rate divided by the number of periods per year.
• periods is the total number of payments to be made over the life of the loan.
• amount is the total amount of the loan.
Q2: Can I use a prepayment option in the loan calculator?
A2: Yes, you can include a prepayment option in your loan calculator. To do this, you will need to add a column for the prepayment amount and adjust the formula to include the prepayment.
Q3: How do I create an amortization calculator in Excel?
A3: To create an amortization calculator in Excel, you can use the PMT function along with other functions such as CUMIPMT and CUMPRINC. This will allow you to create a table that shows the breakdown
of each payment, including the amount of interest and principal paid.
Q4: Can I use Excel to calculate the total interest paid on a loan?
A4: Yes, you can use Excel to calculate the total interest paid on a loan. To do this, you can use the SUMIF function to sum up the interest portion of each payment.
Q5: How can I compare different loan options in Excel?
A5: You can use Excel to compare different loan options by creating a table that shows the key details of each loan, such as the interest rate, loan amount, and monthly payment. You can then use
conditional formatting to highlight the best option.
(see all) | {"url":"https://www.wavesold.com/repayment-calculator-excel/","timestamp":"2024-11-11T11:25:16Z","content_type":"text/html","content_length":"167849","record_id":"<urn:uuid:332d048b-a6fe-4485-86ce-e44093cd8bbb>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00885.warc.gz"} |
Rates of change
There are 8 NRICH Mathematical resources connected to Rates of change
What is the quickest route across a ploughed field when your speed around the edge is greater?
Match the descriptions of physical processes to these differential equations.
Various solids are lowered into a beaker of water. How does the water level rise in each case?
Can you find the lap times of the two cyclists travelling at constant speeds?
My average speed for a journey was 50 mph, and my return average speed was 70 mph. Why wasn't my average speed for the round trip 60 mph?
A conveyor belt, with tins placed at regular intervals, is moving at a steady rate towards a labelling machine. A gerbil starts from the beginning of the belt and jumps from tin to tin.
Explore the rates of growth of the sorts of simple polynomials often used in mathematical modelling.
An article introducing the ideas of differentiation. | {"url":"https://nrich.maths.org/tags/rates-change","timestamp":"2024-11-14T18:06:28Z","content_type":"text/html","content_length":"51797","record_id":"<urn:uuid:e5ad4e86-857a-4879-aaa9-c9f74e5f5eee>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00114.warc.gz"} |
Node350 - CFD SUPPORT
This is an automatically generated documentation by LaTeX2HTML utility. In case of any issue, please, contact us at info@cfdsupport.com.
A direct numerical simulation (DNS) is a simulation in computational fluid dynamics in which the Navier–Stokes equations are numerically solved without any turbulence model. This means that the
whole range of spatial and temporal scales of the turbulence must be resolved. All the spatial scales of the turbulence must be resolved in the computational mesh, from the smallest dissipative
scales (Kolmogorov microscales), up to the integral scale L, associated with the motions containing most of the kinetic energy. The Kolmogorov scale, | {"url":"https://www.cfdsupport.com/OpenFOAM-Training-by-CFD-Support/node350.html","timestamp":"2024-11-11T21:22:35Z","content_type":"text/html","content_length":"56951","record_id":"<urn:uuid:b78accd6-4307-4a28-980d-2bbb9920b7f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00143.warc.gz"} |
scipy.integrate.quad(func, a, b, args=(), full_output=0, epsabs=1.49e-08, epsrel=1.49e-08, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50, complex_func=False)[source]#
Compute a definite integral.
Integrate func from a to b (possibly infinite interval) using a technique from the Fortran library QUADPACK.
func{function, scipy.LowLevelCallable}
A Python function or method to integrate. If func takes many arguments, it is integrated along the axis corresponding to the first argument.
If the user desires improved integration performance, then f may be a scipy.LowLevelCallable with one of the signatures:
double func(double x)
double func(double x, void *user_data)
double func(int n, double *xx)
double func(int n, double *xx, void *user_data)
The user_data is the data contained in the scipy.LowLevelCallable. In the call forms with xx, n is the length of the xx array which contains xx[0] == x and the rest of the items are
numbers contained in the args argument of quad.
In addition, certain ctypes call signatures are supported for backward compatibility, but those should not be used in new code.
Lower limit of integration (use -numpy.inf for -infinity).
Upper limit of integration (use numpy.inf for +infinity).
argstuple, optional
Extra arguments to pass to func.
full_outputint, optional
Non-zero to return a dictionary of integration information. If non-zero, warning messages are also suppressed and the message is appended to the output tuple.
complex_funcbool, optional
Indicate if the function’s (func) return type is real (complex_func=False: default) or complex (complex_func=True). In both cases, the function’s argument is real. If full_output is also
non-zero, the infodict, message, and explain for the real and complex components are returned in a dictionary with keys “real output” and “imag output”.
The integral of func from a to b.
An estimate of the absolute error in the result.
A dictionary containing additional information.
A convergence message.
Appended only with ‘cos’ or ‘sin’ weighting and infinite integration limits, it contains an explanation of the codes in infodict[‘ierlst’]
Other Parameters:
epsabsfloat or int, optional
Absolute error tolerance. Default is 1.49e-8. quad tries to obtain an accuracy of abs(i-result) <= max(epsabs, epsrel*abs(i)) where i = integral of func from a to b, and result is the
numerical approximation. See epsrel below.
epsrelfloat or int, optional
Relative error tolerance. Default is 1.49e-8. If epsabs <= 0, epsrel must be greater than both 5e-29 and 50 * (machine epsilon). See epsabs above.
limitfloat or int, optional
An upper bound on the number of subintervals used in the adaptive algorithm.
points(sequence of floats,ints), optional
A sequence of break points in the bounded integration interval where local difficulties of the integrand may occur (e.g., singularities, discontinuities). The sequence does not have to be
sorted. Note that this option cannot be used in conjunction with weight.
weightfloat or int, optional
String indicating weighting function. Full explanation for this and the remaining arguments can be found below.
Variables for use with weighting functions.
Optional input for reusing Chebyshev moments.
maxp1float or int, optional
An upper bound on the number of Chebyshev moments.
limlstint, optional
Upper bound on the number of cycles (>=3) for use with a sinusoidal weighting and an infinite end-point.
See also
double integral
triple integral
n-dimensional integrals (uses quad recursively)
fixed-order Gaussian quadrature
integrator for sampled data
integrator for sampled data
for coefficients and roots of orthogonal polynomials
For valid results, the integral must converge; behavior for divergent integrals is not guaranteed.
Extra information for quad() inputs and outputs
If full_output is non-zero, then the third output argument (infodict) is a dictionary with entries as tabulated below. For infinite limits, the range is transformed to (0,1) and the optional
outputs are given with respect to this transformed range. Let M be the input argument limit and let K be infodict[‘last’]. The entries are:
The number of function evaluations.
The number, K, of subintervals produced in the subdivision process.
A rank-1 array of length M, the first K elements of which are the left end points of the subintervals in the partition of the integration range.
A rank-1 array of length M, the first K elements of which are the right end points of the subintervals.
A rank-1 array of length M, the first K elements of which are the integral approximations on the subintervals.
A rank-1 array of length M, the first K elements of which are the moduli of the absolute error estimates on the subintervals.
A rank-1 integer array of length M, the first L elements of which are pointers to the error estimates over the subintervals with L=K if K<=M/2+2 or L=M+1-K otherwise. Let I be the sequence
infodict['iord'] and let E be the sequence infodict['elist']. Then E[I[1]], ..., E[I[L]] forms a decreasing sequence.
If the input argument points is provided (i.e., it is not None), the following additional outputs are placed in the output dictionary. Assume the points sequence is of length P.
A rank-1 array of length P+2 containing the integration limits and the break points of the intervals in ascending order. This is an array giving the subintervals over which integration will
A rank-1 integer array of length M (=limit), containing the subdivision levels of the subintervals, i.e., if (aa,bb) is a subinterval of (pts[1], pts[2]) where pts[0] and pts[2] are adjacent
elements of infodict['pts'], then (aa,bb) has level l if |bb-aa| = |pts[2]-pts[1]| * 2**(-l).
A rank-1 integer array of length P+2. After the first integration over the intervals (pts[1], pts[2]), the error estimates over some of the intervals may have been increased artificially in
order to put their subdivision forward. This array has ones in slots corresponding to the subintervals for which this happens.
Weighting the integrand
The input variables, weight and wvar, are used to weight the integrand by a select list of functions. Different integration methods are used to compute the integral with these weighting
functions, and these do not support specifying break points. The possible values of weight and the corresponding weighting functions are.
weight Weight function used wvar
‘cos’ cos(w*x) wvar = w
‘sin’ sin(w*x) wvar = w
‘alg’ g(x) = ((x-a)**alpha)*((b-x)**beta) wvar = (alpha, beta)
‘alg-loga’ g(x)*log(x-a) wvar = (alpha, beta)
‘alg-logb’ g(x)*log(b-x) wvar = (alpha, beta)
‘alg-log’ g(x)*log(x-a)*log(b-x) wvar = (alpha, beta)
‘cauchy’ 1/(x-c) wvar = c
wvar holds the parameter w, (alpha, beta), or c depending on the weight selected. In these expressions, a and b are the integration limits.
For the ‘cos’ and ‘sin’ weighting, additional inputs and outputs are available.
For finite integration limits, the integration is performed using a Clenshaw-Curtis method which uses Chebyshev moments. For repeated calculations, these moments are saved in the output
The maximum level of Chebyshev moments that have been computed, i.e., if M_c is infodict['momcom'] then the moments have been computed for intervals of length |b-a| * 2**(-l), l=0,1,...,M_c.
A rank-1 integer array of length M(=limit), containing the subdivision levels of the subintervals, i.e., an element of this array is equal to l if the corresponding subinterval is |b-a|* 2**
A rank-2 array of shape (25, maxp1) containing the computed Chebyshev moments. These can be passed on to an integration over the same interval by passing this array as the second element of
the sequence wopts and passing infodict[‘momcom’] as the first element.
If one of the integration limits is infinite, then a Fourier integral is computed (assuming w neq 0). If full_output is 1 and a numerical error is encountered, besides the error message attached
to the output tuple, a dictionary is also appended to the output tuple which translates the error codes in the array info['ierlst'] to English messages. The output information dictionary contains
the following entries instead of ‘last’, ‘alist’, ‘blist’, ‘rlist’, and ‘elist’:
The number of subintervals needed for the integration (call it K_f).
A rank-1 array of length M_f=limlst, whose first K_f elements contain the integral contribution over the interval (a+(k-1)c, a+kc) where c = (2*floor(|w|) + 1) * pi / |w| and k=1,2,...,K_f.
A rank-1 array of length M_f containing the error estimate corresponding to the interval in the same position in infodict['rslist'].
A rank-1 integer array of length M_f containing an error flag corresponding to the interval in the same position in infodict['rslist']. See the explanation dictionary (last entry in the
output tuple) for the meaning of the codes.
Details of QUADPACK level routines
quad calls routines from the FORTRAN library QUADPACK. This section provides details on the conditions for each routine to be called and a short description of each routine. The routine called
depends on weight, points and the integration limits a and b.
QUADPACK routine weight points infinite bounds
qagse None No No
qagie None No Yes
qagpe None Yes No
qawoe ‘sin’, ‘cos’ No No
qawfe ‘sin’, ‘cos’ No either a or b
qawse ‘alg*’ No No
qawce ‘cauchy’ No No
The following provides a short description from [1] for each routine.
is an integrator based on globally adaptive interval subdivision in connection with extrapolation, which will eliminate the effects of integrand singularities of several types.
handles integration over infinite intervals. The infinite range is mapped onto a finite interval and subsequently the same strategy as in QAGS is applied.
serves the same purposes as QAGS, but also allows the user to provide explicit information about the location and type of trouble-spots i.e. the abscissae of internal singularities,
discontinuities and other difficulties of the integrand function.
is an integrator for the evaluation of \(\int^b_a \cos(\omega x)f(x)dx\) or \(\int^b_a \sin(\omega x)f(x)dx\) over a finite interval [a,b], where \(\omega\) and \(f\) are specified by the
user. The rule evaluation component is based on the modified Clenshaw-Curtis technique
An adaptive subdivision scheme is used in connection with an extrapolation procedure, which is a modification of that in QAGS and allows the algorithm to deal with singularities in \(f(x)\).
calculates the Fourier transform \(\int^\infty_a \cos(\omega x)f(x)dx\) or \(\int^\infty_a \sin(\omega x)f(x)dx\) for user-provided \(\omega\) and \(f\). The procedure of QAWO is applied on
successive finite intervals, and convergence acceleration by means of the \(\varepsilon\)-algorithm is applied to the series of integral approximations.
approximate \(\int^b_a w(x)f(x)dx\), with \(a < b\) where \(w(x) = (x-a)^{\alpha}(b-x)^{\beta}v(x)\) with \(\alpha,\beta > -1\), where \(v(x)\) may be one of the following functions: \(1\), \
(\log(x-a)\), \(\log(b-x)\), \(\log(x-a)\log(b-x)\).
The user specifies \(\alpha\), \(\beta\) and the type of the function \(v\). A globally adaptive subdivision strategy is applied, with modified Clenshaw-Curtis integration on those
subintervals which contain a or b.
compute \(\int^b_a f(x) / (x-c)dx\) where the integral must be interpreted as a Cauchy principal value integral, for user specified \(c\) and \(f\). The strategy is globally adaptive.
Modified Clenshaw-Curtis integration is used on those intervals containing the point \(x = c\).
Integration of Complex Function of a Real Variable
A complex valued function, \(f\), of a real variable can be written as \(f = g + ih\). Similarly, the integral of \(f\) can be written as
\[\int_a^b f(x) dx = \int_a^b g(x) dx + i\int_a^b h(x) dx\]
assuming that the integrals of \(g\) and \(h\) exist over the interval \([a,b]\) [2]. Therefore, quad integrates complex-valued functions by integrating the real and imaginary components separately.
Piessens, Robert; de Doncker-Kapenga, Elise; Überhuber, Christoph W.; Kahaner, David (1983). QUADPACK: A subroutine package for automatic integration. Springer-Verlag. ISBN 978-3-540-12553-2.
McCullough, Thomas; Phillips, Keith (1973). Foundations of Analysis in the Complex Plane. Holt Rinehart Winston. ISBN 0-03-086370-8
Calculate \(\int^4_0 x^2 dx\) and compare with an analytic result
>>> from scipy import integrate
>>> import numpy as np
>>> x2 = lambda x: x**2
>>> integrate.quad(x2, 0, 4)
(21.333333333333332, 2.3684757858670003e-13)
>>> print(4**3 / 3.) # analytical result
Calculate \(\int^\infty_0 e^{-x} dx\)
>>> invexp = lambda x: np.exp(-x)
>>> integrate.quad(invexp, 0, np.inf)
(1.0, 5.842605999138044e-11)
Calculate \(\int^1_0 a x \,dx\) for \(a = 1, 3\)
>>> f = lambda x, a: a*x
>>> y, err = integrate.quad(f, 0, 1, args=(1,))
>>> y
>>> y, err = integrate.quad(f, 0, 1, args=(3,))
>>> y
Calculate \(\int^1_0 x^2 + y^2 dx\) with ctypes, holding y parameter as 1:
testlib.c =>
double func(int n, double args[n]){
return args[0]*args[0] + args[1]*args[1];}
compile to library testlib.*
from scipy import integrate
import ctypes
lib = ctypes.CDLL('/home/.../testlib.*') #use absolute path
lib.func.restype = ctypes.c_double
lib.func.argtypes = (ctypes.c_int,ctypes.c_double)
integrate.quad(lib.func,0,1,(1))
#(1.3333333333333333, 1.4802973661668752e-14)
print((1.0**3/3.0 + 1.0) - (0.0**3/3.0 + 0.0)) #Analytic result
# 1.3333333333333333
Be aware that pulse shapes and other sharp features as compared to the size of the integration interval may not be integrated correctly using this method. A simplified example of this limitation
is integrating a y-axis reflected step function with many zero values within the integrals bounds.
>>> y = lambda x: 1 if x<=0 else 0
>>> integrate.quad(y, -1, 1)
(1.0, 1.1102230246251565e-14)
>>> integrate.quad(y, -1, 100)
(1.0000000002199108, 1.0189464580163188e-08)
>>> integrate.quad(y, -1, 10000)
(0.0, 0.0) | {"url":"https://scipy.github.io/devdocs/reference/generated/scipy.integrate.quad.html","timestamp":"2024-11-14T08:43:41Z","content_type":"text/html","content_length":"63641","record_id":"<urn:uuid:bb5087c6-559c-42f9-9813-f3f47492b77c>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00304.warc.gz"} |
Week 2 ANS - Numerical Descriptive Measures
Numerical Descriptive Measures
This worksheet relates to chapter three of the text
book (Statistics for Managers 4th Edition).
Past exam questions are very important to give you
an idea of what to expect in an exam and how
prepared you are. You may need to practice doing
questions quickly so that when you are in the exam
you don’t panic.
1. Using the following data to calculate the mean, median, mode, 1st quartile
and 3rd quartile. Are there any outliers?
Mean: 11.63
Median: position = 6, value = 9
Mode: 9
1st quartile: position = 3, value = 8
3rd quartile: position = 9, value = 10
Outliers: 40
2. The price of renting a car for a week, with manual transmission but
declining collision damage waiver in 12 European countries is presented
in the table. Calculate the mean and interquartile range.
Country Rental Price
Austria 239
Britain 179
France 229
Netherlands 181
Sweden 216
Germany 194
192 Mid Semester, April 2005
In order:
Mean: 216.33
IQ range:
Q1 position: 3.25
Q1 value: 181 (ie. 3rd score)
Q3 position: 9.75
Q3 value: 241 (ie. 10th score)
∴ IQ range = 241 – 181
= 60
At the end of each chapter of the textbook
there is a summary flow chart. They can be
really useful to work out exactly how all of
the concepts fit together.
3. Calculate the coefficient of variation for the following two sets of data
using the given information. Which has greater variation? Why do we
use the coefficient of variation here, and not the standard deviation or variance?
Grams of cereal 400 392 415 387 407
Kilograms of rock 4000 4365 3625 4184 3748
Cereal: Mean = 400.2, SD = 11.256, CV = 2.81%
Rock: Mean = 3984.4, SD = 304.135, CV = 7.633%
The rock data shows the greater relative variation. We use the coefficient of variation here, rather than the standard deviation or variance, because the two data sets are measured in different units and on very different scales, so only a relative (unit-free) measure of spread gives a fair comparison.
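As a quick check (an addition, not part of the original worksheet answer), the same coefficients of variation can be computed in Python using the sample standard deviation:

import statistics

cereal = [400, 392, 415, 387, 407]
rock = [4000, 4365, 3625, 4184, 3748]

for name, data in (("cereal", cereal), ("rock", rock)):
    cv = statistics.stdev(data) / statistics.mean(data) * 100
    print(name, round(cv, 2), "%")   # cereal about 2.81 %, rock about 7.63 %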
1. An error has been made in the scaling of exam marks, and five extra
marks are given to every student. The result is that:
(a) The mean score increases by 5 but the standard deviation does
not change
(b) Both the mean and the standard deviation increase by 5
(c) The mean increases by 5 and the standard deviation increases by
(d) The mean and variance increase by 5
Final Exam, June 2003
Final Exam, June 1997
2. The correlation coefficient between the price of broccoli and the amount
of rain that fell during the growing season is calculated to be -0.878. This
indicates that
(a) a large amount of rain causes high prices
(b) prices tend to be low when rainfall is high
(c) prices tend to high when rainfall is high
(d) a lack of rain causes prices to rise
Final Exam, June 2004
3. A student scored 70% on the mid-semester exam, 83% on the CML’s
and 76% on the final exam. Find the average score if the mid-semester
exam was worth 25% and the final exam was worth 55%.
(a) 76.3%
(b) 75.9%
(c) 78.35%
(d) 71.75%
Mid Semester, April 2000; Mid Semester, May 2003
4. The geometric mean
(a) is a better measure of dispersion than the arithmetic mean
(b) is preferable to an arithmetic mean when the data fluctuates
between positive to negative
(c) indicates the multiplicative effects over time in compound
interest situations
(d) both (b) and (c) are true Mid Semester, April 2005
5. The following is the descriptive statistics printout for a set of data, from
Mean 473.4615
Median 451
Mode n/a
Standard Deviation 210.7663
Minimum 264
Maximum 1049
Sum 6155
Count 13
Which of the following is true?
(a) the distribution is right-skewed
(b) the best measure of central tendency is the median
(c) both (a) and (b) are correct
(d) none of the above are correct
Mid Semester, April 2005 | {"url":"https://anyflip.com/wmyov/wvcc/basic","timestamp":"2024-11-05T20:33:36Z","content_type":"text/html","content_length":"35682","record_id":"<urn:uuid:94c0351a-ccfc-4a61-a24c-08152af460fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00882.warc.gz"} |
Gross Profit Margin Calculator
Gross Profit Margin Calculator is used to find out the selling price, the cost or the margin percentage itself
Gross Profit:
Gross Profit Margin:
What does Gross Profit Margin mean?
Gross profit margin is used to compare business models with competitors, and is a financial metric used to assess a company's financial health and business model. It reveals the proportion of
money left over from revenues after accounting for the cost of goods sold. More efficient companies, or those that command a higher premium, see higher profit margins.
Gross profit margin = Gross profit / Total revenue
□ Gross profit = Total revenue - COGS (Cost of goods sold)
A shop's COGS (cost of goods sold) is $30 and revenue is $50, so the gross profit equals ($50 - $30) = $20
Applying the formula, $20 / $50 = 0.4 or 40% | {"url":"https://toolslick.com/finance/company/gross-profit-margin","timestamp":"2024-11-13T12:00:40Z","content_type":"text/html","content_length":"50014","record_id":"<urn:uuid:92aeaba3-c2ff-497a-bbdb-1e5eb2fd2df3>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00103.warc.gz"} |
Analysis of Variance - CFA, FRM, and Actuarial Exams Study Notes
Sometimes the simple linear regression model does not describe the relationship between two variables. To use regression analysis effectively, we must be able to differentiate the two cases.
Breaking down the sum of squares total into its components.
The sum of squares total is made up of the sum of squares regression (SSR) and the sum of squares error (SSE). The sum of squares regression is the sum of the squared differences between the predicted
value of the dependent variable (based on the estimated regression line) and the mean of the dependent variable. Hence, SST = SSR + SSE. Let us use an example to explain this.
$${\text{Sum of Squares Total (SST)}}= \sum_{i=1}^n(Y_i-\bar{Y})^2 $$
$${\text{Sum of Squares Regression (SSR) }}= \sum_{i=1}^n(\hat{Y_i}-\bar{Y})^2 $$
$${\text{Sum of Squares Error (SSE) }}= \sum_{i=1}^n(Y_i-\hat{Y_i})^2 $$
Exhibit 1: Breakdown of Sum of Squares Total for ROA Model.
$$\small{\begin{array}{l|l|l|l|l|l|l}
\textbf{Company} & {\textbf{ROA}\\ (\textbf{Y}_{\textbf{i}})} & {\textbf{CAPEX}\\ (\textbf{X}_{\textbf{i}})} & {\textbf{Predicted}\\ \textbf{ROA } (\widehat{\textbf{Y}})} & {\textbf{Variation}\\ \textbf{to be}\\ \textbf{Explained}\\ (\textbf{Y}_{\textbf{i}}-\bar{\textbf{Y}})^{2}} & {\textbf{Variation}\\ \textbf{Unexplained}\\ (\textbf{Y}_{\textbf{i}}-\widehat{\textbf{Y}_{\textbf{i}}})^{2}} & {\textbf{Variation}\\ \textbf{Explained}\\ (\widehat{\textbf{Y}_{\textbf{i}}}-\bar{\textbf{Y}})^{2}}\\ \hline
\text{A} & 15 & 5 & 10.132 & 39.0625 & 23.698 & 1.909 \\ \hline
\text{B} & 6 & 0.7 & 6.103 & 7.5625 & 0.0107 & 7.005 \\ \hline
\text{C} & 10 & 8 & 12.942 & 1.5625 & 8.658 & 17.58 \\ \hline
\text{D} & 4.0 & 0.4 & 5.822 & 22.5625 & 3.321 & 8.57 \\ \hline
\textbf{Total} & & & & 70.75 & 35.687 & 35.064 \\ \hline
\text{Mean} & 8.75 & & & & & \\ \end{array}}$$
From Exhibit 1 above, we see that
Sum of squares error= 35.687
Sum of squares regression= 35.064
Sum of squares total = 35.687+35.064= 70.75
This sum of squares will be an important input when we come to measure the fit of the regression line.
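The decomposition in Exhibit 1 can be reproduced with a short Python sketch (added for illustration; the ROA values and fitted values are taken from the exhibit):

roa = [15, 6, 10, 4]                      # Y_i
fitted = [10.132, 6.103, 12.942, 5.822]   # predicted ROA from the estimated line
mean_roa = sum(roa) / len(roa)            # 8.75

sst = sum((y - mean_roa) ** 2 for y in roa)            # about 70.75
sse = sum((y - f) ** 2 for y, f in zip(roa, fitted))   # about 35.69
ssr = sum((f - mean_roa) ** 2 for f in fitted)         # about 35.06

print(sst, sse, ssr)   # SST equals SSR + SSE (up to rounding)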
Measures of Goodness of Fit
The standard error of the regression, the F-statistic, and the coefficient of determination are all measures used to evaluate how well the regression model fits the data (goodness
of fit). The coefficient of determination, or R^2, measures the proportion of the total variability of the dependent variable explained by the independent variable. R^2 is calculated using the formula:
$${\text{Coefficient of Determination}}=\frac{\text{Sum of Squares Regression}}{\text{Sum of Squares Total}}$$
$${\text{Coefficient of Determination}} =\frac{{\sum_{i=1}^n(\hat{Y_i}-\bar{Y})^2}}{{\sum_{i=1}^n(Y_i-\bar{Y})2}}$$
The coefficient of determination will range from 0% to 100%. From Exhibit 1 on the ROA regression model, our R^2 would be 35.064÷ 70.75=0.4956= 49.56% which means that CAPEX explains 49.56% of the
variation in ROA. The coefficient of determination is not a statistical test. It is descriptive. To show the statistical significance of a regression model, we use the F-distributed statistic, which
is used to compare two variances. For simple regression analysis, the F-distributed test statistic is used to determine whether the slope coefficient is equal to zero against the alternative hypothesis
that it is not equal to zero.
The F- distributed statistic is formed by using the sum of squares error and the sum of squares regression, with each being adjusted for degrees of freedom. The sum of square regression is divided by
the number of independent variables to arrive at the mean square regression (MSR). In simple linear regression, the independent variables are represented by k, which is equal to 1.
$${\text{MSR}}=\frac{\text{Sum of Squares Regression}}{\text{k}}$$
Next, we go ahead and divide the sum of square errors by the degrees of freedom to calculate the mean square error (MSE). In simple linear regression, the degrees of freedom \(n-k-1\) becomes \(n-2\).
$${\text{MSE}}=\frac{\text{Sum of Squares Error}}{\text{n-k-1}}$$
Therefore, the F-distributed test statistic is:
F = MSR / MSE
The F- statistic in regression analysis is one-sided. The right side contains the rejection region because we want to determine if the variation in the numerator (Y explained) is larger than the
variation in the denominator (Y unexplained).
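Continuing the ROA example, the goodness-of-fit measures follow directly from the sums of squares above; the Python sketch below is only an illustration (with four observations the numbers are purely pedagogical):

ssr, sse = 35.064, 35.687
sst = ssr + sse
n, k = 4, 1

r_squared = ssr / sst           # about 0.496, i.e. 49.6%
msr = ssr / k
mse = sse / (n - k - 1)
f_stat = msr / mse              # about 1.97

print(round(r_squared, 3), round(f_stat, 2))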
James, an analyst at QPC LTD, has estimated a model that regresses return on equity ROE against its growth opportunities (GO), which is its three-year compounded annual growth rate in sales over
the past 15 years. He was able to estimate the sum of squares error and sum of squares regression as follows:
Sum of Squares Error= 48.99
Sum of Squares Regression= 192.3
The Coefficient of Determination is closest to:
1. 241.29
2. 0.797
3. 0.8927
The correct answer is B.
$${\text{The Coefficient of Determination}} = \frac{\text{Sum of Squares Regression}}{\text{Sum of Squares Total}}$$
First, we calculate the sum of squares total by adding sum of squares regression to sum of squares error. 192.3+48.99= 241.29
R^2=192.3÷241.29= 0.797 or 79.7%
A is incorrect. 241.29 is the sum of squares total
C is incorrect. 0.8927 is R, which is the square root of the coefficient of determination.
Reading 0: Introduction to Linear Regression
LOS 0 (d) Calculate and interpret the coefficient of determination and the F-statistic in a simple linear regression
Jul 09, 2021
Backward Induction
Backward induction involves working backward from maturity to time 0 to determine a... Read More | {"url":"https://analystprep.com/study-notes/cfa-level-2/quantitative-method/analysis-of-variance/","timestamp":"2024-11-09T11:19:25Z","content_type":"text/html","content_length":"159023","record_id":"<urn:uuid:29561476-dd87-4c1d-9c49-dc037c693ff6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00077.warc.gz"} |
This tutorial depends on step-49.
This program was contributed by Wolfgang Bangerth and Luca Heltai, using data provided by D. Sarah Stamps.
This program elaborates on concepts of geometry and the classes that implement it. These classes are grouped into the documentation topic on Manifold description for triangulations. See there for
additional information.
This tutorial is also available as a Jupyter Python notebook that uses the deal.II Python interface. The notebook is available in the same directory as the original C++ program. The rendered notebook can also be viewed on GitHub.
Partial differential equations for realistic problems are often posed on domains with complicated geometries. To provide just a few examples, consider these cases:
• Among the two arguably most important industrial applications for the finite element method, aerodynamics and more generally fluid dynamics is one. Computer simulations today are used in the
design of every airplane, car, train and ship. The domain in which the partial differential equation is posed is, in these cases, the air surrounding the plane with its wings, flaps and engines;
the air surrounding the car with its wheel, wheel wells, mirrors and, in the case of race cars, all sorts of aerodynamic equipment; the air surrounding the train with its wheels and gaps between
cars. In the case of ships, the domain is the water surrounding the ship with its rudders and propellers.
• The other of the two big applications of the finite element method is structural engineering in which the domains are bridges, airplane nacelles and wings, and other solid bodies of often
complicated shapes.
• Finite element modeling is also often used to describe the generation and propagation of earthquake waves. In these cases, one needs to accurately represent the geometry of faults in the Earth's crust. Since faults intersect, dip at angles, and are often not completely straight, domains are frequently very complex.
One could cite many more examples of complicated geometries in which one wants to pose and solve a partial differential equation. What this shows is that the "real" world is much more complicated than what we have shown in almost all of the tutorial programs preceding this one.
This program is therefore devoted to showing how one deals with complex geometries using a concrete application. In particular, what it shows is how we make a mesh fit the domain we want to solve on.
On the other hand, what the program does not show is how to create a coarse mesh for a domain. The process to arrive at a coarse mesh is called "mesh generation" and there are a number of
high-quality programs that do this much better than we could ever implement. However, deal.II does have the ability to read in meshes in many formats generated by mesh generators and then make them
fit a given shape, either by deforming a mesh or refining it a number of times until it fits. The deal.II Frequently Asked Questions page referenced from http://www.dealii.org/ provides resources to
mesh generators.
Where geometry and meshes intersect
Let us assume that you have a complex domain and that you already have a coarse mesh that somehow represents the general features of the domain. Then there are two situations in which it is necessary
to describe to a deal.II program the details of your geometry:
• Mesh refinement: Whenever a cell is refined, it is necessary to introduce new vertices in the Triangulation. In the simplest case, one assumes that the objects that make up the Triangulation are
straight line segments, a bi-linear surface or a tri-linear volume. The next vertex is then simply put into the middle of the old ones. However, for curved boundaries or if we want to solve a PDE
on a curved, lower-dimensional manifold embedded in a higher-dimensional space, this is insufficient since it will not respect the actual geometry. We will therefore have to tell Triangulation
where to put new points.
• Integration: When using higher order finite element methods, it is often necessary to compute integrals using curved approximations of the boundary, i.e., describe each edge or face of cells as
curves, instead of straight line segments or bilinear patches. The same is, of course, true when integrating boundary terms (e.g., inhomogeneous Neumann boundary conditions). For the purpose of
integration, the various Mapping classes then provide the transformation from the reference cell to the actual cell.
In both cases, we need a way to provide information about the geometry of the domain at the level of an individual cell, its faces and edges. This is where the Manifold class comes into play.
Manifold is an abstract base class that only defines an interface by which the Triangulation and Mapping classes can query geometric information about the domain. Conceptually, Manifold sees the
world in a way not dissimilar to how the mathematical subdiscipline geometry sees it: a domain is essentially just a collection of points that is somehow equipped with the notion of a distance
between points so that we can obtain a point "in the middle" of some other points.
deal.II provides a number of classes that implement the interface provided by Manifold for a variety of common geometries. On the other hand, in this program we will consider only a very common and
much simpler case, namely the situation where (a part of) the domain we want to solve on can be described by transforming a much simpler domain (we will call this the "reference domain"). In the
language of mathematics, this means that the (part of the) domain is a chart. Charts are described by a smooth function that maps from the simpler domain to the chart (the "push-forward" function)
and its inverse (the "pull-back" function). If the domain as a whole is not a chart (e.g., the surface of a sphere), then it can often be described as a collection of charts (e.g., the northern
hemisphere and the southern hemisphere are each charts) and the domain can then be described by an atlas.
If a domain can be decomposed into an atlas, all we need to do is provide the pull-back and push-forward functions for each of the charts. In deal.II, this means providing a class derived from
ChartManifold, and this is precisely what we will do in this program.
The example case
To illustrate how one describes geometries using charts in deal.II, we will consider a case that originates in an application of the ASPECT mantle convection code, using a data set provided by D.
Sarah Stamps. In the concrete application, we were interested in describing flow in the Earth mantle under the East African Rift, a zone where two continental plates drift apart. Not to beat around
the bush, the geometry we want to describe looks like this:
In particular, though you cannot see this here, the top surface is not just colored by the elevation but is, in fact, deformed to follow the correct topography. While the actual application is not
relevant here, the geometry is. The domain we are interested in is a part of the Earth that ranges from the surface to a depth of 500km, from 26 to 35 degrees East of the Greenwich meridian, and from
5 degrees North of the equator to 10 degrees South.
This description of the geometry suggests to start with a box \(\hat U=[26,35]\times[-10,5]\times[-500000,0]\) (measured in degrees, degrees, and meters) and to provide a map \(\varphi\) so that \(\varphi^{-1}(\hat U)=\Omega\) where \(\Omega\) is the domain we seek. \((\Omega,\varphi)\) is then a chart, \(\varphi\) the pull-back operator, and \(\varphi^{-1}\) the push-forward operator. If we need a point \(q\) that is the "average" of other points \(q_i\in\Omega\), the ChartManifold class then first applies the pull-back to obtain \(\hat q_i=\varphi(q_i)\), averages these to a point \(\hat p\) and then computes \(p=\varphi^{-1}(\hat p)\).
Our goal here is therefore to implement a class that describes \(\varphi\) and \(\varphi^{-1}\). If Earth was a sphere, then this would not be difficult: if we denote by \((\hat\phi,\hat\theta,\hat d)\) the points of \(\hat U\) (i.e., longitude counted eastward, latitude counted northward, and elevation relative to zero depth), then
\[ \mathbf x = \varphi^{-1}(\hat \phi,\hat \theta,\hat d) = (R+\hat d) (\cos\hat \phi\cos\hat \theta, \sin\hat \phi\cos\hat \theta, \sin\hat \theta)^T \]
provides coordinates in a Cartesian coordinate system, where \(R\) is the radius of the sphere. However, the Earth is not a sphere:
1. It is flattened at the poles and larger at the equator: the semi-major axis is approximately 22km longer than the semi-minor axis. We will account for this using the WGS 84 reference standard for
the Earth shape. The formula used in WGS 84 to obtain a position in Cartesian coordinates from longitude, latitude, and elevation is
\[ \mathbf x = \varphi_\text{WGS84}^{-1}(\phi,\theta,d) = \left( \begin{array}{c} (\bar R(\theta)+d) \cos\phi\cos\theta, \\ (\bar R(\theta)+d) \sin\phi\cos\theta, \\ ((1-e^2)\bar R(\theta)+d) \sin\theta \end{array} \right), \]
where \(\bar R(\theta)=\frac{R}{\sqrt{1-(e \sin\theta)^2}}\), and radius and ellipticity are given by \(R=6378137\text{m}, e=0.081819190842622\). In this formula, we assume that the arguments to sines and cosines are evaluated in degrees, not radians (though we will have to change this assumption in the code).
2. It has topography in the form of mountains and valleys. We will account for this using real topography data (see below for a description of where this data comes from). Using this data set, we
can look up elevations on a latitude-longitude mesh laid over the surface of the Earth. Starting with the box \(\hat U=[26,35]\times[-10,5]\times[-500000,0]\), we will therefore first stretch it
in vertical direction before handing it off to the WGS 84 function: if \(h(\hat\phi,\hat\theta)\) is the height at longitude \(\hat\phi\) and latitude \(\hat\theta\), then we define
\[ (\phi,\theta,d) = \varphi_\text{topo}^{-1}(\hat\phi,\hat\theta,\hat d) = \left( \hat\phi, \hat\theta, \hat d + \frac{\hat d+500000}{500000}h(\hat\phi,\hat\theta) \right). \]
Using this function, the top surface of the box \(\hat U\) is displaced to the correct topography, the bottom surface remains where it was, and points in between are linearly interpolated.
Using these two functions, we can then define the entire push-forward function \(\varphi^{-1}: \hat U \rightarrow \Omega\) as
\[ \mathbf x = \varphi^{-1}(\hat\phi,\hat\theta,\hat d) = \varphi_\text{WGS84}^{-1}(\varphi_\text{topo}^{-1}(\hat\phi,\hat\theta,\hat d)). \]
In addition, we will have to define the inverse of this function, the pull-back operation, which we can write as
\[ (\hat\phi,\hat\theta,\hat d) = \varphi(\mathbf x) = \varphi_\text{topo}(\varphi_\text{WGS84}(\mathbf x)). \]
We can obtain one of the components of this function by inverting the formula above:
\[ (\hat\phi,\hat\theta,\hat d) = \varphi_\text{topo}(\phi,\theta,d) = \left( \phi, \theta, 500000\frac{d-h(\phi,\theta)}{500000+h(\phi,\theta)} \right). \]
Computing \(\varphi_\text{WGS84}(\mathbf x)\) is also possible though a lot more awkward. We won't show the formula here but instead only provide the implementation in the program.
There are a number of issues we need to address in the program. At the largest scale, we need to write a class that implements the interface of ChartManifold. This involves a function push_forward() that takes a point in the reference domain \(\hat U\) and transforms it into real space using the function \(\varphi^{-1}\) outlined above, and its inverse function pull_back() implementing \(\varphi\). We will do so in the AfricaGeometry class below that looks, in essence, like this:
class AfricaGeometry : public ChartManifold<3, 3>
{
public:
  virtual Point<3> pull_back(const Point<3> &space_point) const override;

  virtual Point<3> push_forward(const Point<3> &chart_point) const override;

  ... some member variables and other member functions ...;
};
The transformations above have two parts: the WGS 84 transformations and the topography transformation. Consequently, the AfricaGeometry class will have additional (non-virtual) member functions
AfricaGeometry::push_forward_wgs84() and AfricaGeometry::push_forward_topo() that implement these two pieces, and corresponding pull back functions.
The WGS 84 transformation functions are not particularly interesting (even though the formulas they implement are impressive). The more interesting part is the topography transformation. Recall that
for this, we needed to evaluate the elevation function \(h(\hat\phi,\hat\theta)\). There is of course no formula for this: Earth is what it is, and the best one can do is look up the altitude in some table. This is, in fact, what we will do.
The data we use was originally created by the Shuttle Radar Topography Mission, was downloaded from the US Geologic Survey (USGS) and processed by D. Sarah Stamps who also wrote the initial version
of the WGS 84 transformation functions. The topography data so processed is stored in a file topography.txt.gz that, when unpacked looks like this:
6.983333 25.000000 700
6.983333 25.016667 692
6.983333 25.033333 701
6.983333 25.050000 695
6.983333 25.066667 710
6.983333 25.083333 702
-11.983333 35.950000 707
-11.983333 35.966667 687
-11.983333 35.983333 659
The data is formatted as latitude longitude elevation where the first two columns are provided in degrees North of the equator and degrees East of the Greenwich meridian. The final column is given in
meters above the WGS 84 zero elevation.
In the transformation functions, we need to evaluate \(h(\hat\phi,\hat\theta)\) for a given longitude \(\hat\phi\) and latitude \(\hat\theta\). In general, this data point will not be available and
we will have to interpolate between adjacent data points. Writing such an interpolation routine is not particularly difficult, but it is a bit tedious and error prone. Fortunately, we can somehow
shoehorn this data set into an existing class: Functions::InterpolatedUniformGridData . Unfortunately, the class does not fit the bill quite exactly and so we need to work around it a bit. The
problem comes from the way we initialize this class: in its simplest form, it takes a stream of values that it assumes form an equispaced mesh in the \(x-y\) plane (or, here, the \(\phi-\theta\)
plane). Which is what they do here, sort of: they are ordered latitude first, longitude second; and more awkwardly, the first column starts at the largest values and counts down, rather than the
usual other way around.
Now, while tutorial programs are meant to illustrate how to code with deal.II, they do not necessarily have to satisfy the same quality standards as one would have to do with production codes. In a
production code, we would write a function that reads the data and (i) automatically determines the extents of the first and second column, (ii) automatically determines the number of data points in
each direction, (iii) does the interpolation regardless of the order in which data is arranged, if necessary by switching the order between reading and presenting it to the
Functions::InterpolatedUniformGridData class.
On the other hand, tutorial programs are best if they are short and demonstrate key points rather than dwell on unimportant aspects and, thereby, obscure what we really want to show. Consequently, we
will allow ourselves a bit of leeway:
• since this program is intended solely for a particular geometry around the area of the East-African rift and since this is precisely the area described by the data file, we will hardcode in the
program that there are \(1139\times 660\) pieces of data;
• we will hardcode the boundaries of the data \([-11.98333^\circ,6.983333^\circ]\times[25^\circ,35.98333^\circ]\);
• we will lie to the Functions::InterpolatedUniformGridData class: the class will only see the data in the last column of this data file, and we will pretend that the data is arranged in a way that
there are 1139 data points in the first coordinate direction that are arranged in ascending order but in an interval \([-6.983333^\circ,11.98333^\circ]\) (not the negated bounds). Then, when we
need to look something up for a latitude \(\hat\theta\), we can ask the interpolating table class for a value at \(-\hat\theta\). With this little trick, we can avoid having to switch around the
order of data as read from file.
All of this then calls for a class that essentially looks like this:
class AfricaTopography
AfricaTopography ()
topography_data (...initialize somehow...)
double value (const double lon, const double lat) const
static constexpr double PI
Note how the value() function negates the latitude. It also switches from the format \(\phi,\theta\) that we use everywhere else to the latitude-longitude format used in the table. Finally, it takes
its arguments in radians as that is what we do everywhere else in the program, but then converts them to the degree-based system used for table lookup. As you will see in the implementation below,
the function has a few more (static) member functions that we will call in the initialization of the topography_data member variable: the class type of this variable has a constructor that allows us
to set everything right at construction time, rather than having to fill data later on, but this constructor takes a number of objects that can't be constructed in-place (at least not in C++98).
Consequently, the construction of each of the objects we want to pass in the initialization happens in a number of static member functions.
Having discussed the general outline of how we want to implement things, let us go to the program and show how it is done in practice.
The commented program
Let us start with the include files we need here. Obviously, we need the ones that describe the triangulation (tria.h), and that allow us to create and output triangulations (grid_generator.h and
grid_out.h). Furthermore, we need the header file that declares the Manifold and ChartManifold classes that we will need to describe the geometry (manifold.h). We will then also need the
GridTools::transform() function from the last of the following header files; the purpose of this function will be discussed at the point where we use it.
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/manifold.h>
#include <deal.II/grid/grid_tools.h>
The remainder of the include files relate to reading the topography data. As explained in the introduction, we will read it from a file and then use the Functions::InterpolatedUniformGridData class
that is declared in the first of the following header files. Because the data is large, the file we read from is stored as gzip compressed data and we make use of some BOOST-provided functionality to
read directly from gzipped data.
#include <deal.II/base/function_lib.h>

#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/device/file.hpp>
#include <fstream>
#include <iostream>
#include <memory>
The final part of the top matter is to open a namespace into which to put everything, and then to import the dealii namespace into it.
Describing topography: AfricaTopography
The first significant part of this program is the class that describes the topography \(h(\hat\phi,\hat\theta)\) as a function of longitude and latitude. As discussed in the introduction, we will
make our life a bit easier here by not writing the class in the most general way possible but by only writing it for the particular purpose we are interested in here: interpolating data obtained from
one very specific data file that contains information about a particular area of the world for which we know the extents.
The general layout of the class has been discussed already above. Following is its (abbreviated) declaration, including the static member function we will need in initializing the topography_data member variable:

class AfricaTopography
{
public:
  AfricaTopography();

  double value(const double lon, const double lat) const;

private:
  const Functions::InterpolatedUniformGridData<2> topography_data;

  static std::vector<double> get_data();
};
Let us move to the implementation of the class. The interesting parts of the class are the constructor and the value() function. The former initializes the Functions::InterpolatedUniformGridData
member variable and we will use the constructor that requires us to pass in the end points of the 2-dimensional data set we want to interpolate (which are here given by the intervals \([-6.983333,
11.98333]\), using the trick of switching end points discussed in the introduction, and \([25, 35.983333]\), both given in degrees), the number of intervals into which the data is split (379 in
latitude direction and 219 in longitude direction, for a total of \(380\times 220\) data points), and a Table object that contains the data. The data then of course has size \(380\times 220\) and we
initialize it by providing an iterator to the first of the 83,600 elements of a std::vector object returned by the get_data() function below. Note that all of the member functions we call here are
static because (i) they do not access any member variables of the class, and (ii) because they are called at a time when the object is not initialized fully anyway.
AfricaTopography::AfricaTopography()
  : topography_data({{std::make_pair(-6.983333, 11.966667),
                      std::make_pair(25, 35.95)}},
                    {{379, 219}},
                    Table<2, double>(380, 220, get_data().begin()))
{}

double AfricaTopography::value(const double lon, const double lat) const
{
  return topography_data.value(
    Point<2>(-lat * 180 / numbers::PI, lon * 180 / numbers::PI));
}
The only other function of greater interest is the get_data() function. It returns a temporary vector that contains all 83,600 data points describing the altitude and is read from the file
topography.txt.gz. Because the file is compressed by gzip, we cannot just read it through an object of type std::ifstream, but there are convenient methods in the BOOST library (see http://
www.boost.org) that allows us to read from compressed files without first having to uncompress it on disk. The result is, basically, just another input stream that, for all practical purposes, looks
just like the ones we always use.
When reading the data, we read the three columns but ignore the first two. The datum in the last column is appended to an array that we then return and that will be copied into the table from which
topography_data is initialized. Since the BOOST.iostreams library does not provide a very useful exception when the input file does not exist, is not readable, or does not contain the correct number
of data lines, we catch all exceptions it may produce and create our own one. To this end, in the catch clause, we let the program run into an AssertThrow(false, ...) statement. Since the condition
is always false, this always triggers an exception. In other words, this is equivalent to writing throw ExcMessage("...") but it also fills certain fields in the exception object that will later be
printed on the screen identifying the function, file and line where the exception happened.
std::vector<double> AfricaTopography::get_data()
{
  std::vector<double> data;

  // Create a stream that decompresses the gzipped file on the fly:
  boost::iostreams::filtering_istream in;
  in.push(boost::iostreams::basic_gzip_decompressor<>());
  in.push(boost::iostreams::file_source("topography.txt.gz"));

  for (unsigned int line = 0; line < 83600; ++line)
    {
      try
        {
          double lat, lon, elevation;
          in >> lat >> lon >> elevation;

          data.push_back(elevation);
        }
      catch (...)
        {
          AssertThrow(false,
                      ExcMessage("Could not read all 83,600 data points "
                                 "from the file <topography.txt.gz>!"));
        }
    }

  return data;
}
Describing the geometry: AfricaGeometry
The following class is then the main one of this program. Its structure has been described in much detail in the introduction and does not need much introduction any more.
class AfricaGeometry : public ChartManifold<3, 3>
{
public:
  virtual Point<3> pull_back(const Point<3> &space_point) const override;

  virtual Point<3> push_forward(const Point<3> &chart_point) const override;

  virtual std::unique_ptr<Manifold<3, 3>> clone() const override;

private:
  static const double R;
  static const double ellipticity;

  const AfricaTopography topography;

  Point<3> push_forward_wgs84(const Point<3> &phi_theta_d) const;
  Point<3> pull_back_wgs84(const Point<3> &x) const;

  Point<3> push_forward_topo(const Point<3> &phi_theta_d_hat) const;
  Point<3> pull_back_topo(const Point<3> &phi_theta_d) const;
};

const double AfricaGeometry::R           = 6378137;
const double AfricaGeometry::ellipticity = 8.1819190842622e-2;
The implementation, as well, is pretty straightforward if you have read the introduction. In particular, both of the pull back and push forward functions are just concatenations of the respective
functions of the WGS 84 and topography mappings:
Point<3> AfricaGeometry::pull_back(const Point<3> &space_point) const
{
  return pull_back_topo(pull_back_wgs84(space_point));
}

Point<3> AfricaGeometry::push_forward(const Point<3> &chart_point) const
{
  return push_forward_wgs84(push_forward_topo(chart_point));
}
The next function is required by the interface of the Manifold base class, and allows cloning the AfricaGeometry class. Notice that, while the function returns a std::unique_ptr<Manifold<3,3>>, we
internally create a unique_ptr<AfricaGeometry>. In other words, the library requires a pointer-to-base-class, which we provide by creating a pointer-to-derived-class.
std::unique_ptr<Manifold<3, 3>> AfricaGeometry::clone() const
{
  return std::make_unique<AfricaGeometry>();
}
The following two functions then define the forward and inverse transformations that correspond to the WGS 84 reference shape of Earth. The forward transform follows the formula shown in the
introduction. The inverse transform is significantly more complicated and is, at the very least, not intuitive. It also suffers from the fact that it returns an angle that at the end of the function
we need to clip back into the interval \([0,2\pi]\) if it should have escaped from there.
Point<3> AfricaGeometry::push_forward_wgs84(const Point<3> &phi_theta_d) const
{
  const double phi   = phi_theta_d[0];
  const double theta = phi_theta_d[1];
  const double d     = phi_theta_d[2];

  const double R_bar = R / std::sqrt(1 - (ellipticity * ellipticity *
                                          std::sin(theta) * std::sin(theta)));

  return {(R_bar + d) * std::cos(phi) * std::cos(theta),
          (R_bar + d) * std::sin(phi) * std::cos(theta),
          ((1 - ellipticity * ellipticity) * R_bar + d) * std::sin(theta)};
}

Point<3> AfricaGeometry::pull_back_wgs84(const Point<3> &x) const
{
  const double b   = std::sqrt(R * R * (1 - ellipticity * ellipticity));
  const double ep  = std::sqrt((R * R - b * b) / (b * b));
  const double p   = std::sqrt(x[0] * x[0] + x[1] * x[1]);
  const double th  = std::atan2(R * x[2], b * p);
  const double phi = std::atan2(x[1], x[0]);
  const double theta =
    std::atan2(x[2] + ep * ep * b * Utilities::fixed_power<3>(std::sin(th)),
               (p - (ellipticity * ellipticity * R *
                     Utilities::fixed_power<3>(std::cos(th)))));
  const double R_bar = R / std::sqrt(1 - (ellipticity * ellipticity *
                                          std::sin(theta) * std::sin(theta)));
  const double R_plus_d = p / std::cos(theta);

  Point<3> phi_theta_d;
  if (phi < 0)
    phi_theta_d[0] = phi + 2 * numbers::PI;
  else
    phi_theta_d[0] = phi;
  phi_theta_d[1] = theta;
  phi_theta_d[2] = R_plus_d - R_bar;

  return phi_theta_d;
}
In contrast, the topography transformations follow exactly the description in the introduction. There is consequently not much to add:
Point<3> AfricaGeometry::push_forward_topo(const Point<3> &phi_theta_d_hat) const
{
  const double d_hat = phi_theta_d_hat[2];
  const double h = topography.value(phi_theta_d_hat[0], phi_theta_d_hat[1]);
  const double d = d_hat + (d_hat + 500000) / 500000 * h;
  return {phi_theta_d_hat[0], phi_theta_d_hat[1], d};
}

Point<3> AfricaGeometry::pull_back_topo(const Point<3> &phi_theta_d) const
{
  const double d = phi_theta_d[2];
  const double h = topography.value(phi_theta_d[0], phi_theta_d[1]);
  const double d_hat = 500000 * (d - h) / (500000 + h);
  return {phi_theta_d[0], phi_theta_d[1], d_hat};
}
Creating the mesh
Having thus described the properties of the geometry, it is now time to deal with the mesh used to discretize it. To this end, we create objects for the geometry and triangulation, and then proceed to
create a \(1\times 2\times 1\) rectangular mesh that corresponds to the reference domain \(\hat U=[26,35]\times[-10,5]\times[-500000,0]\). We choose this number of subdivisions because it leads to
cells that are roughly like cubes instead of stretched in one direction or another.
Of course, we are not actually interested in meshing the reference domain. We are interested in meshing the real domain. Consequently, we will use the GridTools::transform() function that simply
moves every point of a triangulation according to a given transformation. The transformation function it wants is a function that takes as its single argument a point in the reference domain and
returns the corresponding location in the domain that we want to map to. This is, of course, exactly the push forward function of the geometry we use. We wrap it by a lambda function to obtain the
kind of function object required for the transformation.
void run()
{
  AfricaGeometry   geometry;
  Triangulation<3> triangulation;

  const Point<3> corner_points[2] = {
    Point<3>(26 * numbers::PI / 180, -10 * numbers::PI / 180, -500000),
    Point<3>(35 * numbers::PI / 180, 5 * numbers::PI / 180, 0)};

  std::vector<unsigned int> subdivisions(3);
  subdivisions[0] = 1;
  subdivisions[1] = 2;
  subdivisions[2] = 1;

  GridGenerator::subdivided_hyper_rectangle(
    triangulation, subdivisions, corner_points[0], corner_points[1], true);

  GridTools::transform(
    [&geometry](const Point<3> &chart_point) {
      return geometry.push_forward(chart_point);
    },
    triangulation);
The next step is to explain to the triangulation to use our geometry object whenever a new point is needed upon refining the mesh. We do this by telling the triangulation to use our geometry for
everything that has manifold indicator zero, and then proceed to mark all cells and their bounding faces and edges with manifold indicator zero. This ensures that the triangulation consults our
geometry object every time a new vertex is needed. Since manifold indicators are inherited from mother to children, this also happens after several recursive refinement steps.
The last step is to refine the mesh beyond its initial \(1\times 2\times 1\) coarse mesh. We could just refine globally a number of times, but since for the purpose of this tutorial program we're
really only interested in what is happening close to the surface, we just refine 6 times all of the cells that have a face at a boundary with indicator 5. Looking this up in the documentation of the
GridGenerator::subdivided_hyper_rectangle() function we have used above reveals that boundary indicator 5 corresponds to the top surface of the domain (and this is what the last true argument in the
call to GridGenerator::subdivided_hyper_rectangle() above meant: to "color" the boundaries by assigning each boundary a unique boundary indicator).
  triangulation.set_manifold(0, geometry);
  for (const auto &cell : triangulation.active_cell_iterators())
    cell->set_all_manifold_ids(0);

  for (unsigned int i = 0; i < 6; ++i)
    {
      for (const auto &cell : triangulation.active_cell_iterators())
        for (const auto &face : cell->face_iterators())
          if (face->boundary_id() == 5)
            {
              cell->set_refine_flag();
              break;
            }
      triangulation.execute_coarsening_and_refinement();

      std::cout << "Refinement step " << i + 1 << ": "
                << triangulation.n_active_cells() << " cells, "
                << GridTools::minimal_cell_diameter(triangulation) / 1000
                << "km minimal cell diameter" << std::endl;
    }
Having done all this, we can now output the mesh into a file of its own:

  const std::string filename = "mesh.vtu";
  std::ofstream     out(filename);
  GridOut           grid_out;
  grid_out.write_vtu(triangulation, out);
}

} // namespace Step53
The main function
Finally, the main function, which follows the same scheme used in all tutorial programs starting with step-6. There isn't much to do here, only to call the single run() function.
int main()
{
  try
    {
      Step53::run();
    }
  catch (std::exception &exc)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Exception on processing: " << std::endl
                << exc.what() << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;

      return 1;
    }
  catch (...)
    {
      std::cerr << std::endl
                << std::endl
                << "----------------------------------------------------"
                << std::endl;
      std::cerr << "Unknown exception!" << std::endl
                << "Aborting!" << std::endl
                << "----------------------------------------------------"
                << std::endl;
      return 1;
    }

  return 0;
}
Running the program produces a mesh file mesh.vtu that we can visualize with any of the usual visualization programs that can read the VTU file format. If one just looks at the mesh itself, it is
actually very difficult to see anything that doesn't just look like a perfectly round piece of a sphere (though if one modified the program so that it does produce a sphere and looked at them at the
same time, the difference between the overall sphere and WGS 84 shape is quite apparent). Apparently, Earth is actually quite a flat place. Of course we already know this from satellite pictures.
However, we can tease out something more by coloring cells by their volume. This both produces slight variations in hue along the top surface and something for the visualization programs to apply
their shading algorithms to (because the top surfaces of the cells are now no longer just tangential to a sphere but tilted):
Yet, at least as far as visualizations are concerned, this is still not too impressive. Rather, let us visualize things in a way so that we show the actual elevation along the top surface. In other
words, we want a picture like this, with an incredible amount of detail:
A zoom-in of this picture shows the vertical displacement quite clearly (here, looking from the West-Northwest over the rift valley, the triple peaks of Mount Stanley, Mount Speke, and Mount Baker in
the Rwenzori Range, Lake George and toward the great flatness of Lake Victoria):
These images were produced with three small modifications:
1. An additional seventh mesh refinement towards the top surface for the first of these two pictures, and a total of nine for the second. In the second image, the horizontal mesh size is
approximately 1.5km, and just under 1km in vertical direction. (The picture was also created using a more resolved data set; however, it is too big to distribute as part of the tutorial.)
2. The addition of the following function that, given a point x computes the elevation by converting the point to reference WGS 84 coordinates and only keeping the depth variable (the function is,
consequently, a simplified version of the AfricaGeometry::pull_back_wgs84() function):
double get_elevation(const Point<3> &x)
{
  const double R           = 6378137;
  const double ellipticity = 8.1819190842622e-2;

  const double b  = std::sqrt(R * R * (1 - ellipticity * ellipticity));
  const double ep = std::sqrt((R * R - b * b) / (b * b));
  const double p  = std::sqrt(x(0) * x(0) + x(1) * x(1));
  const double th = std::atan2(R * x(2), b * p);
  const double theta =
    std::atan2(x(2) + ep * ep * b * std::pow(std::sin(th), 3),
               (p - (ellipticity * ellipticity * R *
                     std::pow(std::cos(th), 3))));
  const double R_bar = R / std::sqrt(1 - (ellipticity * ellipticity *
                                          std::sin(theta) * std::sin(theta)));
  const double R_plus_d = p / std::cos(theta);

  return R_plus_d - R_bar;
}
3. Adding the following piece to the bottom of the run() function:
  FE_Q<3>       fe(1);
  DoFHandler<3> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  Vector<double>                 elevation(dof_handler.n_dofs());
  std::map<unsigned int, double> boundary_values;
  VectorTools::interpolate_boundary_values(
    dof_handler,
    5,
    ScalarFunctionFromFunctionObject<3>(get_elevation),
    boundary_values);
  for (std::map<unsigned int, double>::const_iterator p =
         boundary_values.begin();
       p != boundary_values.end();
       ++p)
    elevation[p->first] = p->second;

  DataOut<3> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(elevation, "elevation");
  data_out.build_patches();

  std::ofstream out("data.vtu");
  data_out.write_vtu(out);
This last piece of code first creates a \(Q_1\) finite element space on the mesh. It then (ab)uses VectorTools::interpolate_boundary_values() to evaluate the elevation function for every node at the
top boundary (the one with boundary indicator 5). We here wrap the call to get_elevation() with the ScalarFunctionFromFunctionObject class to make a regular C++ function look like an object of a
class derived from the Function class that we want to use in VectorTools::interpolate_boundary_values(). Having so gotten a list of degrees of freedom located at the top boundary and corresponding
elevation values, we just go down this list and set these elevations in the elevation vector (leaving all interior degrees of freedom at their original zero value). This vector is then output using
DataOut as usual and can be visualized as shown above.
Issues with adaptively refined meshes generated this way
If you zoomed in on the mesh shown above and looked closely enough, you would find that at hanging nodes, the two small edges connecting to the hanging nodes are not in exactly the same location as
the large edge of the neighboring cell. This can be shown more clearly by using a different surface description in which we enlarge the vertical topography to enhance the effect (courtesy of
Alexander Grayver):
So what is happening here? Partly, this is only a result of visualization, but there is an underlying real cause as well:
• When you visualize a mesh using any of the common visualization programs, what they really show you is just a set of edges that are plotted as straight lines in three-dimensional space. This is so because almost all data file formats for visualizing data only describe hexahedral cells as a collection of eight vertices in 3d space, and do not allow any more complicated descriptions. (This is the main reason why DataOut::build_patches() takes an argument that can be set to something larger than one.) These linear edges may be the edges of the cell you do actual computations on, or they may not, depending on what kind of mapping you use when you do your integrations using FEValues. By default, of course, FEValues uses a linear mapping (i.e., an object of class MappingQ1) and in that case a 3d cell is indeed described exclusively by its 8 vertices, and the volume it fills is a trilinear interpolation between these points, resulting in linear edges. But you could also have used tri-quadratic, tri-cubic, or even higher order mappings, and in these cases the volume of each cell will be bounded by quadratic, cubic or higher order polynomial curves. Yet, you only get to see these with linear edges in the visualization program because, as mentioned, file formats do not allow one to describe the real geometry of cells.
• That said, let us for simplicity assume that you are indeed using a trilinear mapping; then the image shown above is a faithful representation of the cells on which you form your integrals. In this case, the small cells at hanging nodes do not, in general, snugly fit against the large cell but leave a gap or may intersect the larger cell. Why is this? Because when the
triangulation needs a new vertex on an edge it wants to refine, it asks the manifold description where this new vertex is supposed to be, and the manifold description duly returns such a point by
(in the case of a geometry derived from ChartManifold) pulling the adjacent points of the line back to the reference domain, averaging their locations, and pushing forward this new location to
the real domain. But this new location is not usually along a straight line (in real space) between the adjacent vertices and consequently the two small straight lines forming the refined edge do
not lie exactly on the one large straight line forming the unrefined side of the hanging node.
The situation is slightly more complicated if you use a higher order mapping using the MappingQ class, but not fundamentally different. Let's take a quadratic mapping for the moment (nothing
fundamental changes with even higher order mappings). Then you need to imagine each edge of the cells you integrate on as a quadratic curve despite the fact that you will never actually see it
plotted that way by a visualization program. But imagine it that way for a second. So which quadratic curve does MappingQ take? It is the quadratic curve that goes through the two vertices at the end
of the edge as well as a point in the middle that it queries from the manifold. In the case of the long edge on the unrefined side, that's of course exactly the location of the hanging node, so the
quadratic curve describing the long edge does go through the hanging node, unlike in the case of the linear mapping. But the two small edges are also quadratic curves; for example, the left small
edge will go through the left vertex of the long edge and the hanging node, plus a point it queries halfway in between from the manifold. Because, as before, the point the manifold returns halfway
along the left small edge is rarely exactly on the quadratic curve describing the long edge, the quadratic short edge will typically not coincide with the left half of the quadratic long edge, and
the same is true for the right short edge. In other words, again, the geometries of the large cell and its smaller neighbors at hanging nodes do not touch snugly.
This all begs two questions: first, does it matter, and second, could this be fixed. Let us discuss these in the following:
• Does it matter? It is almost certainly true that this depends on the equation you are solving. For example, it is known that solving the Euler equations of gas dynamics on complex geometries requires highly accurate boundary descriptions to ensure convergence of quantities that measure the flow close to the boundary. On the other hand, equations with elliptic components (e.g.,
the Laplace or Stokes equations) are typically rather forgiving of these issues: one does quadrature anyway to approximate integrals, and further approximating the geometry may not do as much
harm as one could fear given that the volume of the overlaps or gaps at every hanging node is only \({\cal O}(h^d)\) even with a linear mapping and \({\cal O}(h^{d+p-1})\) for a mapping of degree
\(p\). (You can see this by considering that in 2d the gap/overlap is a triangle with base \(h\) and height \({\cal O}(h)\); in 3d, it is a pyramid-like structure with base area \(h^2\) and
height \({\cal O}(h)\). Similar considerations apply for higher order mappings where the height of the gaps/overlaps is \({\cal O}(h^p)\).) In other words, if you use a linear mapping with linear
elements, the error in the volume you integrate over is already at the same level as the integration error using the usual Gauss quadrature. Of course, for higher order elements one would have to
choose matching mapping objects.
Another point of view on why it is probably not worth worrying too much about the issue is that there is certainly no narrative in the community of numerical analysts that these issues are a major concern one needs to watch out for when using complex geometries. The issue does not seem to be discussed often among practitioners, if at all; at the very least, it is not something people have identified as a common problem.
This issue is not dissimilar to having hanging nodes at curved boundaries where the geometry description of the boundary typically pulls a hanging node onto the boundary whereas the large edge
remains straight, making the adjacent small and large cells not match each other. Although this behavior existed in deal.II since its beginning, 15 years before manifold descriptions became
available, it did not ever come up in mailing list discussions or conversations with colleagues.
• Could it be fixed? In principle, yes, but it's a complicated issue. Let's assume for the moment that we would only ever use the MappingQ1 class, i.e., linear mappings. In that case, whenever the
triangulation class requires a new vertex along an edge that would become a hanging node, it would just take the mean value of the adjacent vertices in real space, i.e., without asking the
manifold description. This way, the point lies on the long straight edge and the two short straight edges would match the one long edge. Only when all adjacent cells have been refined and the
point is no longer a hanging node would we replace its coordinates by coordinates we get by a manifold. This may be awkward to implement, but it would certainly be possible.
The more complicated issue arises because people may want to use a higher order MappingQ object. In that case, the Triangulation class may freely choose the location of the hanging node (because
the quadratic curve for the long edge can be chosen in such a way that it goes through the hanging node) but the MappingQ class, when determining the location of mid-edge points must make sure
that if the edge is one half of a long edge of a neighboring coarser cell, then the midpoint cannot be obtained from the manifold but must be chosen along the long quadratic edge. For cubic (and
all other odd) mappings, the matter is again a bit complicated because one typically arranges the cubic edge to go through points 1/3 and 2/3 along the edge, and thus necessarily through the
hanging node, but this could probably be worked out. In any case, even then, there are two problems with this:
□ When refining the triangulation, the Triangulation class cannot know what mapping will be used. In fact, it is not uncommon for a triangulation to be used differently in different contexts within the same program. If the mapping used determines whether we can freely choose a point or not, how, then, should the triangulation locate new vertices?
□ Mappings are purely local constructs: they only work on a cell in isolation, and this is one of the important features of the finite element method. Having to ask whether one of the vertices
of an edge is a hanging node requires querying the neighborhood of a cell; furthermore, such a query does not just involve the 6 face neighbors of a cell in 3d, but may require traversing a
possibly very large number of other cells that connect to an edge. Even if it can be done, one still needs to do different things depending on what the neighborhood looks like, producing code
that is likely very complex, hard to maintain, and possibly slow.
Consequently, at least for the moment, none of these ideas are implemented. This leads to the undesirable consequence of discontinuous geometries, but, as discussed above, the effects of this do not appear to pose a problem in actual practice.
RUVSeq empirical negative controls? how many to take to find the span set
So I am using RUVg(eset, k=1, ...), determining the in-silico negative controls as any gene or transcript with a p-value greater than 0.55 after the Benjamini-Hochberg FDR procedure (many, many highly insignificant entries came up).
My question is: how many insignificant entries should I include as RUVg empirical negative controls?
My first thought was to take the bottom 10% most insignificant genes/transcripts, with p-values near 0.8 (these were a few hundred, far fewer than under the flat threshold of p-value 0.55).
Then, after reading the RUVSeq manual, I saw that they grabbed anything that is not in the top 5000 genes returned by edgeR.
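For reference, here is a rough sketch of that vignette-style selection (assuming a two-level factor called condition and that eset carries the raw counts; these names are placeholders for my actual objects):

library(RUVSeq)
library(edgeR)

# Rank genes with an edgeR likelihood-ratio test, then treat everything
# *outside* the top 5000 as empirical negative controls.
design <- model.matrix(~condition)
y <- DGEList(counts = counts(eset), group = condition)
y <- calcNormFactors(y, method = "upperquartile")
y <- estimateGLMCommonDisp(y, design)
y <- estimateGLMTagwiseDisp(y, design)
fit <- glmFit(y, design)
lrt <- glmLRT(fit, coef = 2)

top <- topTags(lrt, n = nrow(counts(eset)))$table
empirical <- rownames(counts(eset))[
  which(!(rownames(counts(eset)) %in% rownames(top)[1:5000]))]

set2 <- RUVg(eset, empirical, k = 1)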
I do notice a difference in the calculated weights depending on the negative-control selection process, but I am not sure which elements (and how many of them) help the factor-analysis algorithm best estimate the space spanned by the unwanted variation.
Any suggestions are greatly appreciated.
Anthony C. | {"url":"https://support.bioconductor.org/p/84980/","timestamp":"2024-11-04T04:24:27Z","content_type":"text/html","content_length":"18410","record_id":"<urn:uuid:21d54b2b-66f6-4d25-b760-a5c2d3321fb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00248.warc.gz"} |
ARIMA | Jason Siu
Applied forecasting
ARIMA models are based on the autocorrelation in the data. The name is composed of 3 parts: AR-I-MA.
• AR: Autoregressive models
• I: Stationarity and unit root
• MA: Moving average models
(Integrated part) Stationarity and differencing
A stationary time series is one whose properties do not depend on the time at which the series is observed.
• If a time series has a trend or seasonality, it is nonstationary.
• A white noise series is stationary.
To apply ARIMA models, our time series should be stationary to begin with.
Here is what it looks like:
• Constant variance
□ (roughly the same mean and spread between any two stretches of the series)
• No patterns predictable in the long term
□ (We can use a histogram to check the distribution of different stretches; if the distributions are the same, the series is stationary.)
This raises 2 important questions:
• How to test if a time series is stationary?
• What to do if a time series is nonstationary?
(Integrated part) Q1: How to test if a time series is stationary?
Here are some examples:
Here is a tricky one:
This seems like there is seasonality, but it is rather cyclic.
It's cyclic in the sense that there is not a fixed period, and the time between the peaks or the troughs is not determined by the calendar; it's determined by the ecology of the lynx and its population cycle.
If I took a section of the graph of some length s, and another section of the same length starting at a completely different, randomly chosen point in time, then the distribution would be the same.
Or we can look at the ACF plot to determine whether the series is stationary: for a stationary series, the autocorrelations drop to 0 quickly.
(Integrated part) Q2: What to do if a time series is nonstationary?
We will transform the data if it is nonstationary:
1. Apply a log, Box-Cox, or whatever transformation is suitable (taught previously).
2. Then difference the data (with lag 12 if the seasonality is yearly on monthly data).
3. If needed, apply a second differencing. In practice, we never go beyond the second-order difference.
Seasonal difference:
It is the difference between an observation at time t and the previous observation from the same season:
$$y'_t = y_t - y_{t-m}$$
where m is the number of seasons per year (the formula above assumes m = 12, i.e., monthly data with yearly seasonality).
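In R, both kinds of differencing are one-liners (a sketch, assuming x is a monthly ts object):

dx  <- diff(x)           # regular (lag-1) difference
dsx <- diff(x, lag = 12) # seasonal (lag-12) difference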
Unit root test - objectively determines the need for differencing
There are multiple tests we can use to check whether a time series is stationary (see the R sketch after this list):
1. ACF
The fact that the ACF spikes drop to zero quickly suggests that the series is stationary. (It is not white noise, though.)
2. The Ljung-Box test → small p-value suggests non-stationarity
A small p-value implies the series is not white noise (it is autocorrelated), and hence non-stationary.
Box.test(x, lag = 10, type = "Ljung-Box")
3. The augmented Dickey-Fuller (ADF) test → small p-value suggests stationarity
Null hypothesis: the data are non-stationary and non-seasonal.
The test examines whether \(\phi = 1\) (a unit root) in \(y_t = \phi y_{t-1} + \varepsilon_t\); a small p-value rejects the unit root, implying the series is stationary.
4. The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test → small p-value suggests non-stationarity
Null hypothesis: the data are stationary; we look for evidence that the null hypothesis is false.
Small p-values (e.g., less than 0.05) suggest that differencing is required.
5. STL decomposition strength → strength greater than 0.64 suggests the series is non-stationary and needs (seasonal) differencing
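A minimal R sketch of tests 2-4 (base R plus the tseries package, which is an assumption about your setup; x is assumed to be a ts object):

Box.test(x, lag = 10, type = "Ljung-Box") # 2. Ljung-Box
tseries::adf.test(x)                      # 3. augmented Dickey-Fuller
tseries::kpss.test(x)                     # 4. KPSS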
Non-seasonal ARIMA models
(AR part) Autoregressive model
An autoregressive model of order p, AR(p), forecasts the variable using a linear combination of its own past values:
$$y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \varepsilon_t$$
You might think that this looks like linear regression, and indeed it is an extension of it. The difference is that an ordinary regression has a bunch of explanatory variables; in the AR(p) model, we regress the series on its own lagged values.
Why use a univariate model?
• Other explanatory variables are not available.
• Other explanatory variables are not directly observable.
• Examples: inflation rate, unemployment rate, exchange rate, firm's sales, gold prices, interest rate, etc.
• Changing the parameters \(\phi_1, \dots, \phi_p\) changes the time series patterns.
• \(\varepsilon_t\) is white noise. Changing its variance \(\sigma_\varepsilon^2\) only changes the scale of the series, not the patterns.
• If we add a constant \(c\), then we assume the trend continues in the long term.
Stationarity condition
Here p is the order of the model. We normally restrict autoregressive models to stationary data, in which case some constraints on the values of the parameters are required:
• For an AR(1) model: \(-1 < \phi_1 < 1\).
• For an AR(2) model: \(-1 < \phi_2 < 1\), \(\phi_1 + \phi_2 < 1\), and \(\phi_2 - \phi_1 < 1\).
When \(p \ge 3\), the restrictions are much more complicated. The fable package takes care of these restrictions when estimating a model.
(MA part) Moving Average (MA) models
• Moving Average (MA) models ≠ moving average smoothing!
• An MA(q) model is a multiple regression with past forecast errors as predictors:
$$y_t = c + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \cdots + \theta_q \varepsilon_{t-q}$$
Non-seasonal ARIMA models
Combining differencing with autoregression and a moving average model, we obtain a non-seasonal ARIMA(p, d, q) model:
$$y'_t = c + \phi_1 y'_{t-1} + \cdots + \phi_p y'_{t-p} + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} + \varepsilon_t$$
• \(y'_t\) is the differenced series (it may have been differenced more than once).
• The "predictors" on the right-hand side include both lagged values of \(y'_t\) and lagged errors.
Q: How do you choose p and q?
• Before answering this question, we need to know how c, p and q affect the model.
• Changing d affects the prediction intervals: the higher the value of d, the more rapidly the prediction intervals increase in size.
ACF & PACF
The above shows the ACF. The problem with the ACF is that if \(y_t\) and \(y_{t-1}\) are correlated, then \(y_{t-1}\) and \(y_{t-2}\) must also be correlated. But then \(y_t\) and \(y_{t-2}\) might be correlated, simply because they are both connected to \(y_{t-1}\), rather than because of any new information contained in \(y_{t-2}\) that could be used in forecasting \(y_t\).
In short, there is an interaction effect.
Partial autocorrelations (PACF) measure these relationships after removing the effects of the intervening lags.
Q: How to pick the order of AR(p) using ACF vs. PACF?
For an AR(p) process, the ACF decays gradually while the PACF cuts off (drops to zero) after lag p.
Q: How to pick the order of MA(q) using ACF vs. PACF?
For an MA(q) process, the ACF cuts off after lag q while the PACF decays gradually.
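In base R, these plots are one call each (assuming x is a ts object):

acf(x)  # autocorrelation function
pacf(x) # partial autocorrelation function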
Seasonal ARIMA models
It works similarly to non-seasonal ARIMA, but adds the seasonal order terms, written as (P, D, Q)m, where m is the seasonal period.
Q: How do you choose the order using ACF/PACF?
For the seasonal part, look at the spikes at the seasonal lags (e.g., lags 12, 24, 36 for monthly data) in the ACF and PACF.
Estimation and order selection
ARIMA modelling in R
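A minimal sketch of fitting an ARIMA model with the fable package (assuming data is a tsibble with a measured column named value; ARIMA() selects the orders automatically when none are specified):

library(fable)
library(tsibble)

fit <- model(data, arima = ARIMA(value))
report(fit)                        # show the selected model and AICc
fc <- forecast(fit, h = "2 years")
autoplot(fc, data)                 # forecasts with prediction intervals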
Seasonal ARIMA models
ARIMA vs ETS
Why is the mean positive in this case? (Tute 10 1:06:15) Ex15
A good model has unbiased residuals. How do we know if our residuals are unbiased?
Residuals are unbiased when their mean is 0. If the residual plot floats around 0, the residuals are unbiased; if they also show no remaining autocorrelation, the residual series is white noise.
What is the process to make data stationary?
1. Transform the data to remove changing variance
2. Seasonally difference the data to remove seasonality
3. Regular difference if the data is still non-stationary
“ARIMA-based prediction intervals tend to be too narrow.” True or False?
True, because only the variation in the errors has been accounted for.
• There is also variation in the parameter estimates, and in the model order, that has not been included in the calculation.
• The calculation assumes that the historical patterns that have been modelled will continue into the forecast period.
• Author:Jason Siu
• Copyright:All articles in this blog, except for special statements, adopt BY-NC-SA agreement. Please indicate the source! | {"url":"https://www.jason-siu.com/article/50f18eb6-2caa-4d89-acb8-cf897323aac6","timestamp":"2024-11-10T03:05:39Z","content_type":"text/html","content_length":"412145","record_id":"<urn:uuid:d154affa-c490-4503-9bda-cb5a5e78d613>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00134.warc.gz"} |
Euclid’s Algorithm for finding GCD of two numbers
Finding the Greatest Common Divisor (GCD) of two numbers is a simple mathematical problem; the algorithm for it (Euclid's Algorithm) was devised over 2300 years ago. It is a common interview question. It is not tricky, and is usually asked just to check one of the Algorithmic Design Techniques, Recursion, since the algorithm can be implemented recursively.
Euclid’s Algorithm
To compute the greatest common divisor of two non-negative integers "p" and "q":
1. If q is 0, the answer is p.
2. If not, divide "p" by "q" and take the remainder "r". The answer is the greatest common divisor of "q" and "r".
3. The highlighted step above is a subproblem of GCD. Carry out the same steps for the subproblem until it reduces to step 1.
Euclid’s Algorithm implementation in Java
Here is the implementation for Euclid’s Algorithm for finding GCD of two numbers.
package com.jminded.algorithmicpuzzles;

public class EuclidAlgorithm {

    /**
     * @author Umashankar
     * @param args command-line arguments (unused)
     * {@link https://jminded.com}
     */
    public static void main(String[] args) {
        System.out.println(gcd(16, 32)); // prints 16
        System.out.println(gcd(3, 66));  // prints 3
    }

    /**
     * GCD of the given two numbers.
     *
     * @param p first non-negative integer
     * @param q second non-negative integer
     * @return the greatest common divisor of p and q
     */
    public static int gcd(int p, int q) {
        if (q == 0) {
            return p;
        }
        int r = p % q;
        return gcd(q, r);
    }
}
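For comparison, here is an iterative version of the same algorithm (not part of the original post; a common alternative if you want to avoid recursion):

public static int gcdIterative(int p, int q) {
    // repeatedly replace (p, q) with (q, p % q) until q becomes 0
    while (q != 0) {
        int r = p % q;
        p = q;
        q = r;
    }
    return p;
}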
How to Use Python Programming for Artificial Intelligence
If you're just starting out in the artificial intelligence (AI) world, then Python is a great language to learn since most of the tools are built using it.
In a production setting, you would use a deep learning framework like TensorFlow or PyTorch instead of building your own neural network.
• Python is a popular language for AI development.
• Machine learning and deep learning are essential components of AI.
• Neural networks are used to build AI models.
• Python has prebuilt libraries like TensorFlow and PyTorch for AI development.
• Learning Python is a stepping stone to AI implementation.
Artificial Intelligence Overview
Artificial intelligence (AI) aims to replicate human thinking in computers. It encompasses various approaches, such as machine learning (ML) and deep learning (DL), which are used to solve AI problems.
Machine Learning involves training a system to solve a problem by learning from data, instead of explicitly programming the rules.
In DL, neural networks learn to identify important features without the need for traditional feature engineering techniques.
ML is particularly useful in supervised learning, where a model is trained on a dataset containing inputs and known outputs.
The goal is to make predictions for new, unseen data based on the patterns learned from the training set. DL, on the other hand, excels in handling complex datasets like images or text data, where it
can automatically extract relevant features.
By leveraging Python programming, developers can harness the power of AI.
Python is a versatile language that offers simplicity, prebuilt libraries, and a supportive community.
It is widely used in AI development due to its ease of learning and platform independence.
Python's extensive collection of prebuilt libraries, including TensorFlow, Scikit-Learn, and NumPy, further accelerates the implementation of AI algorithms.
To dive deeper into the world of AI and understand its intricacies, let's explore the key concepts of machine learning, feature engineering, deep learning, neural networks, and the process of
training a neural network.
Key Aspects of Artificial Intelligence Overview
Aspect Description
Artificial Intelligence Replicates human thinking in computers
Machine Learning (ML) Trains systems to solve problems by learning from data
Deep Learning (DL) Neural networks learn to identify important features automatically
Python Programming Simplifies AI development with prebuilt libraries and community support
Machine Learning Basics
Machine learning is a fundamental concept in the field of artificial intelligence (AI).
It involves training a model using a dataset with inputs and known outputs, and then using the model to make predictions for new, unseen data.
The goal is to create a model that can accurately predict the correct outputs based on the inputs it is given.
One common approach to machine learning is supervised learning, where the model is trained using a dataset that contains both the inputs and their corresponding correct outputs.
The model learns patterns and relationships in the data during the training process and then uses this knowledge to make predictions for new data.
This type of machine learning is widely used in various applications, such as image recognition, natural language processing, and recommendation systems.
Supervised learning is just one example of the many different algorithms and techniques available in the field of machine learning.
Other types of machine learning include unsupervised learning, where the model learns patterns and relationships in the data without any labeled outputs, and reinforcement learning, where the model
learns through trial and error based on feedback from its environment.
These different approaches to machine learning allow for a wide range of applications and enable the development of more advanced AI systems.
Types of Machine Learning
Machine learning can be broadly categorized into three main types: supervised learning, unsupervised learning, and reinforcement learning.
• Supervised learning: In supervised learning, the model is trained using a dataset that contains both the inputs and their corresponding correct outputs. The model learns patterns and
relationships in the data during the training process and then uses this knowledge to make predictions for new data.
• Unsupervised learning: In unsupervised learning, the model is trained using a dataset that contains only the inputs, with no corresponding correct outputs. The model learns patterns and
relationships in the data without any predefined labels. This type of learning is useful for tasks such as clustering and dimensionality reduction.
• Reinforcement learning: In reinforcement learning, the model learns through trial and error based on feedback from its environment. The model takes actions in its environment and receives rewards
or penalties based on the outcomes of those actions. Over time, the model learns to take actions that maximize its rewards and minimize its penalties.
Type Description
Supervised Learning The model is trained using a dataset with inputs and known outputs.
Unsupervised Learning The model is trained using a dataset with inputs only, without any labeled outputs.
Reinforcement Learning The model learns through trial and error based on feedback from its environment.
Feature Engineering
Feature engineering plays a crucial role in the field of artificial intelligence. It involves the process of extracting meaningful features from raw data, enabling effective representation and
utilization of the data in machine learning and deep learning models.
By transforming and manipulating the data, feature engineering enhances the performance and accuracy of these models.
There are various techniques used in feature engineering, one of which is lemmatization.
Lemmatization reduces inflected forms of words to their base form, allowing for better analysis and interpretation of text data.
This technique is particularly useful in natural language processing tasks, such as sentiment analysis or text classification.
Moreover, feature engineering is essential when dealing with different types of data, including numerical, categorical, and textual data.
Each data type requires specific preprocessing techniques to capture the relevant information accurately.
By carefully engineering features, AI models can better understand the underlying patterns and relationships within the data, leading to more accurate predictions and insights.
Types of Feature Engineering Techniques
Technique Description
One-Hot Encoding Converts categorical variables into binary vectors, representing each category as a separate feature.
Scaling and Normalization Rescales numerical features to a common scale, preventing any particular feature from dominating the model.
Text Tokenization Splits text into individual tokens, enabling the model to understand and analyze the textual content.
Feature Extraction Extracts relevant information from raw data, reducing dimensionality and improving model efficiency.
These are just a few examples of feature engineering techniques used in AI.
The choice of techniques depends on the specific problem and data at hand.
It requires a combination of domain knowledge, data understanding, and experimentation to identify the most effective feature engineering strategies for a given task.
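To make two of the techniques in the table above concrete, here is a minimal sketch in plain Python with NumPy; the column values are invented for the example, and real projects would normally use library helpers such as those in Scikit-Learn instead.

import numpy as np

colors = ["red", "green", "red", "blue"]      # categorical feature (example data)
sizes = np.array([12.0, 45.0, 7.0, 30.0])     # numerical feature (example data)

# One-hot encoding: each category becomes its own binary column
categories = sorted(set(colors))
one_hot = np.array([[1 if c == cat else 0 for cat in categories] for c in colors])

# Min-max scaling: rescale values to the [0, 1] range
scaled = (sizes - sizes.min()) / (sizes.max() - sizes.min())

print(categories)   # ['blue', 'green', 'red']
print(one_hot)
print(scaled)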
Deep Learning
Deep learning is a powerful technique in artificial intelligence (AI) that enables neural networks to learn and make predictions without relying on feature engineering techniques.
It is particularly effective for handling complex datasets, such as images or text data.
Deep learning algorithms are implemented using popular libraries like TensorFlow and PyTorch, which provide a wide range of pre-built functions and tools for developing deep learning models.
One of the main advantages of deep learning is its ability to automatically learn relevant features from data, eliminating the need for manual feature extraction.
This is achieved through the use of deep neural networks, which consist of multiple layers of interconnected nodes called neurons.
Each neuron applies mathematical operations to the input data and passes the result to the next layer.
The hidden layers in deep neural networks allow for the learning of hierarchical representations, enabling the network to capture complex patterns and relationships in the data.
To train a deep learning model, a large amount of labeled data is typically required.
The model iteratively adjusts its internal weights and biases to minimize the difference between its predictions and the true labels of the training data.
This process, known as backpropagation, involves calculating the gradients of a loss function with respect to the model's parameters and using them to update the weights and biases through gradient
descent optimization.
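As a hedged illustration of that update rule, the sketch below runs gradient descent on a single weight and bias with a mean-squared-error loss; the data and learning rate are made up for the example.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                 # target relationship the model should discover

w, b, lr = 0.0, 0.0, 0.05         # initial parameters and learning rate (assumed)
for step in range(200):
    pred = w * x + b
    error = pred - y
    # Gradients of the MSE loss with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w              # gradient-descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # should approach 2.0 and 1.0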
Benefits of Deep Learning
There are several key benefits of deep learning:
• Automatic feature extraction: Deep learning models can automatically learn relevant features from raw data, reducing the need for manual feature engineering.
• Ability to handle complex data: Deep learning excels at handling complex data types, such as images, videos, and text, making it suitable for a wide range of AI applications.
• High accuracy: Deep learning models have achieved state-of-the-art performance on various tasks, including image recognition, natural language processing, and speech recognition.
• Scalability: Deep learning models can scale to handle large datasets and can be trained on powerful hardware, such as graphics processing units (GPUs) or specialized hardware like tensor
processing units (TPUs).
Overall, deep learning is a powerful approach to AI that has revolutionized many fields and continues to drive advancements in areas such as computer vision and natural language processing.
Deep Learning Benefits
Automatic feature extraction Reduces the need for manual feature engineering
Ability to handle complex data Suitable for various data types, such as images and text
High accuracy Achieves state-of-the-art performance on various tasks
Scalability Can scale to handle large datasets and leverage powerful hardware
Neural Networks: Main Concepts
In the field of artificial intelligence, neural networks play a critical role in making predictions and solving complex problems.
Vectors: Storing and Processing Data
Neural networks rely on vectors to store and process data. A vector is a mathematical construct that represents a collection of values.
In the context of neural networks, vectors are used to store input data, such as images, text, or numerical features.
They can also represent the weights and biases that impact the functioning of the network.
Layers: Transforming Data
Neural networks consist of layers that transform the input data.
Each layer performs a specific mathematical operation on the data and passes the transformed data to the next layer.
The layers are interconnected, allowing the network to learn complex representations and patterns from the input data.
Linear Regression: Estimating Relationships
Linear regression is a fundamental concept in neural networks.
It involves estimating the relationship between variables through a linear approximation.
This technique is used to model the relationship between the input data and the expected output.
The weights and bias vectors in linear regression play a crucial role in determining the quality of the predictions made by the network.
In this section, we explored the main concepts behind neural networks.
Vectors are used to store and process data, while layers transform the data to learn complex representations.
Linear regression helps estimate the relationships between variables, enabling the network to make accurate predictions.
Understanding these concepts is essential for building and training neural networks effectively.
The Process to Train a Neural Network
To effectively train a neural network, you need to follow a systematic process that involves making predictions, comparing them to the desired output, and adjusting the network's internal state.
Let's dive into the steps involved in training a neural network:
Step 1: Data Preparation
The first step in training a neural network is to prepare your data.
This involves collecting and organizing a dataset that contains both input features and corresponding target outputs.
The quality and relevance of your data will greatly impact the performance of your neural network.
Step 2: Model Initialization
Once your data is ready, you can initialize your neural network model.
This involves defining the architecture of your network, including the number of layers, the number of nodes in each layer, and the activation functions to be used.
The model initialization sets the initial weights and biases for the network.
Step 3: Forward Propagation
In the forward propagation step, the input data is fed into the neural network, and the network computes the output predictions.
Each layer in the network performs a set of mathematical operations to transform the input data and generate the output for the next layer.
This process continues until the final layer, which produces the output predictions.
Step 4: Loss Calculation
After obtaining the output predictions, the next step is to calculate the loss.
The loss represents the difference between the predicted output and the actual target output.
There are different loss functions available depending on the nature of the problem you are trying to solve.
The choice of loss function will affect the learning behavior of the neural network.
Step 5: Backpropagation and Parameter Update
The backpropagation algorithm is used to calculate the gradient of the loss function with respect to the network's parameters.
This gradient information is then used to update the weights and biases of the network.
By iteratively adjusting the parameters based on the gradient, the network gradually improves its predictions and reduces the loss.
Step 6: Training Evaluation
During the training process, it is important to monitor the performance of the neural network.
This can be done by evaluating the network's predictions on a separate validation dataset.
The evaluation metrics will depend on the specific problem you are solving, but common metrics include accuracy, precision, recall, and F1 score.
Step 7: Stopping Criteria
Knowing when to stop training is crucial to prevent overfitting or underfitting of the neural network.
This can be determined by monitoring the training loss and the validation loss.
If the training loss continues to decrease while the validation loss starts to increase, it is an indication that the network is overfitting the training data.
Stopping the training at this point can help prevent further deterioration in performance.
By following these steps, you can effectively train a neural network to make accurate predictions on your dataset.
Training a neural network requires careful consideration of data preparation, model initialization, forward propagation, loss calculation, backpropagation, training evaluation, and stopping criteria.
With practice and experimentation, you can develop neural networks that excel in solving complex problems and achieve high levels of accuracy.
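The steps above can be compressed into a small NumPy sketch. It is only an illustration, not a production setup (a real project would use TensorFlow or PyTorch, as noted earlier); the toy XOR dataset, layer sizes and learning rate are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))            # model initialization
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)                            # loss calculation (MSE)
    # Backpropagation: gradients of the loss w.r.t. each parameter
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Parameter update by gradient descent
    lr = 0.5
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"final loss: {loss:.4f}")
print(out.round(2))   # predictions should approach [0, 1, 1, 0]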
Vectors and Weights
In the context of neural networks, vectors play a crucial role in representing data.
A vector is a mathematical entity that consists of a collection of numbers or values.
These values can represent various features or attributes of the data being processed.
In the case of neural networks, vectors are used to store input data, intermediate activations, and output predictions.
One important type of vector in neural networks is the weight vector. Weights represent the relationship between the inputs and the output of a network.
They determine the strength of the connections between the neurons in different layers of the network.
The values of the weights are learned during the training process, allowing the network to optimize its predictions based on the given data.
Another vector that is often used in neural networks is the bias vector.
The bias is an additional input to each neuron in a network that allows for more flexibility in modeling complex relationships.
It sets the output of a neuron when all other inputs are equal to zero.
The bias vector helps to introduce non-linearity to the network and improves its capability to learn and generalize from the data.
Vector Description
Input Vector A vector that represents the input data
Weight Vector A vector that represents the relationship between inputs
Bias Vector A vector that sets the output when all other inputs are zero
The relationship between vectors and weights is crucial in determining how a neural network makes predictions and learns from data.
By adjusting the weights and biases, the network can optimize its predictions to minimize error and improve its performance.
Understanding the role of vectors and weights in neural networks is fundamental to effectively building and training AI models.
Linear Regression Model
Linear regression is a commonly used method for estimating the relationship between a dependent variable and independent variables.
It assumes that the relationship between the variables is linear, meaning that the dependent variable can be expressed as a weighted sum of the independent variables.
In this section, we will explore the main concepts of the linear regression model and how it is used in artificial intelligence (AI) applications.
Understanding the Linear Regression Model
The linear regression model is based on the principle of fitting a line to the data points that best represents the relationship between the variables. The model aims to minimize the difference
between the predicted values and the actual values of the dependent variable.
This is done by adjusting the weights and bias vectors that are used in the linear regression equation.
Let's take an example to illustrate how the linear regression model works.
Suppose we have a dataset with two variables: the independent variable X and the dependent variable Y.
The linear regression model estimates the relationship between X and Y by calculating the slope and intercept of the line that best fits the data points.
The equation for a simple linear regression model can be written as:
Y = b0 + b1*X
Here, b0 is the intercept (the value of Y when X is zero) and b1 is the slope (the change in Y for every unit change in X).
The model finds the values of b0 and b1 that minimize the difference between the predicted values of Y and the actual values.
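As a small, hedged illustration of this equation, the sketch below estimates b0 and b1 by ordinary least squares with NumPy; the five data points are invented for the example.

import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # independent variable (example data)
Y = np.array([2.1, 4.3, 6.2, 7.9, 10.1])         # dependent variable (example data)

# Ordinary least squares: solve for [b0, b1] in Y = b0 + b1*X
A = np.column_stack([np.ones_like(X), X])
b0, b1 = np.linalg.lstsq(A, Y, rcond=None)[0]

print(f"intercept b0 = {b0:.3f}, slope b1 = {b1:.3f}")
print("prediction at X = 6:", b0 + b1 * 6)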
Applications of Linear Regression in AI
The linear regression model is widely used in AI applications, particularly in areas such as predictive analytics, forecasting, and recommendation systems.
It can be applied to various domains, including finance, healthcare, marketing, and more.
By analyzing historical data and using the linear regression model, AI systems can make predictions and generate insights that help businesses make informed decisions.
Applications Description
Predictive Analytics Using historical data to predict future outcomes or trends.
Forecasting Estimating future values based on past data patterns.
Recommendation Systems Suggesting relevant items or actions based on user preferences.
As AI continues to advance, the linear regression model remains a fundamental tool for understanding and analyzing relationships between variables.
Its simplicity and interpretability make it a valuable technique for extracting insights and making predictions in various industries.
Why Python is Best for AI
When it comes to artificial intelligence (AI), Python is undoubtedly the go-to programming language.
Its simplicity, prebuilt libraries, ease of learning, platform independence, and massive community support make it the preferred choice for AI development.
One of the key advantages of Python is its simplicity.
Compared to other programming languages, Python requires less code to implement AI algorithms, allowing developers to focus more on solving AI problems rather than dealing with complex syntax.
Python also offers a wide range of prebuilt libraries specifically designed for machine learning and deep learning, such as TensorFlow, Scikit-Learn, and NumPy.
These libraries provide powerful tools and algorithms that simplify the development process and enable developers to build robust AI models efficiently.
Advantages of Python for AI -
1. Simplicity Python's clean and readable syntax makes it easy to understand and write code, reducing development time and effort.
2. Prebuilt Libraries Python offers a vast collection of libraries specifically designed for AI, providing developers with ready-to-use tools and algorithms.
3. Platform Independence Python is a platform-independent language, allowing developers to write code once and run it on different platforms without modifications.
4. Massive Community Support Python has a large and active community of developers who contribute to its growth, providing support, resources, and guidance.
In conclusion, Python's simplicity, prebuilt libraries, ease of learning, platform independence, and community support make it the best programming language for AI development.
Whether you are a beginner or an experienced developer, Python offers the tools and resources necessary to build advanced AI models and push the boundaries of artificial intelligence.
To further enhance your chatbot's capabilities, consider exploring AI libraries and projects available in Python.
Libraries like TensorFlow and Scikit-Learn provide powerful tools for machine learning, while NumPy offers efficient numerical computations.
Take your chatbot to the next level by incorporating advanced techniques such as sentiment analysis and natural language understanding.
With Python's extensive range of libraries and the support of a thriving community, the possibilities for creating smarter and more sophisticated chatbots are endless.
What is artificial intelligence (AI)?
Artificial intelligence is the field of computer science that focuses on creating machines that can think and perform tasks that would normally require human intelligence.
What is the role of machine learning in AI?
Machine learning is an approach to solving AI problems by training a system to learn from data instead of explicitly programming rules. It involves training a model using a dataset with inputs and
known outputs.
What is deep learning?
Deep learning is a technique within machine learning where a neural network learns to extract important features from complex datasets like images or text without the need for manual feature engineering.
What are neural networks?
Neural networks are systems that learn to make predictions by taking input data, making a prediction, comparing it to the desired output, and adjusting their internal state. They consist of
interconnected layers of artificial neurons.
How do you train a neural network?
Training a neural network involves an iterative process of making predictions, comparing them to the desired output, and adjusting the network's internal state. The goal is to minimize the difference
between the predicted and correct outputs.
What are vectors and weights in neural networks?
Vectors are used to represent data in neural networks. In the context of neural networks, weights represent the relationship between inputs, while bias sets the result when all other inputs are equal
to zero.
What is linear regression?
Linear regression is a method used in machine learning when estimating the relationship between a dependent variable and independent variables. It approximates the relationship as linear, expressing
the dependent variable as a weighted sum of the independent variables.
Why is Python the preferred language for AI?
Python is widely used in AI due to its simplicity, prebuilt libraries for machine learning and deep learning, ease of learning, platform independence, and strong community support.
What can Python be used for in AI?
Python can be used to develop various AI applications, including chatbots using natural language processing. Its prebuilt libraries like TensorFlow, Scikit-Learn, and NumPy facilitate implementing AI algorithms. | {"url":"https://www.articlesfactory.com/articles/programming/how-to-use-python-programming-for-artificial-intelligence.html","timestamp":"2024-11-11T02:05:56Z","content_type":"application/xhtml+xml","content_length":"79466","record_id":"<urn:uuid:c7f3453b-888b-43f1-90fe-65617f16b2cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00763.warc.gz"}
600 followers problem!
If a circle passes through the points of intersection of the coordinate axes with the lines $\lambda x-y+1=0$ and $x-2y+3=0$, then let the possible values of $\lambda$ be $\lambda_1,\lambda_2,\cdots$. Calculate $\left\lceil \displaystyle\prod_{i=1}^n \lambda_i\right\rceil+\left\lfloor \displaystyle\sum_{i=1}^n \lambda_i\right\rfloor$.
{"url":"https://solve.club/problems/600-followers-problem-2/600-followers-problem-2.html","timestamp":"2024-11-10T12:11:30Z","content_type":"text/html","content_length":"91237","record_id":"<urn:uuid:62528223-0a02-4904-a5ee-c2f0605e396c>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00723.warc.gz"}
Learning To Rank algorithm and evaluation index - Moment For Technology
Ranking learning is the core approach to recommendations, search, and advertising, and LTR is a supervised machine learning algorithm for ranking tasks. So LTR is still the traditional machine
learning processing paradigm, construction features, learning objectives, training models, predictions. LTR generally falls into three types: PointWise, PairWise, and ListWise. These three algorithms
are not specific algorithms, but three design ideas, and the main differences are reflected in the loss function, label labeling and optimization methods.
1. PointWise
For search tasks, PointWise only considers the absolute relevance of each document to the current query, and does not consider the relevance of the other candidate documents. PointWise methods usually encode documents into feature vectors, train a classification or regression model on the training data, and score documents directly in the prediction stage. Ranking the documents by this score gives the search result.
The processing logic is shown as follows:
The format of the training data is triples $(q_i, d_j, y_{ij})$, where the label $y_{ij}$ takes two values, indicating relevant or irrelevant. A binary classification model or a regression model is trained to fit $y_{ij}$ directly. Loss function: a classification model can use cross entropy, and a regression model can use mean squared error (MSE). Prediction stage: the score is used directly for sorting.
1. PointWise only considers the correlation between the query and a single document, $sim_{q,d}$, and does not consider the relationships among the candidate documents. Since our goal is to rank the candidates, what we really want are relative scores, and using $sim_{q,d}$ directly is often not accurate enough. In fact, $sim_{q,d}$ only estimates how likely a single document is to be relevant, not the true relative order.
2. PointWise does not take into account internal dependencies between documents corresponding to the same query. This leads to the following problems: first, the samples in the input space are not independent and identically distributed (i.i.d.), which violates a basic assumption of machine learning; second, when different queries have different numbers of documents, the overall loss is easily dominated by the query groups that have more documents (training data).
3. Ranking cares about top-k accuracy, so the loss function needs to incorporate information about relative position.
2. PairWise
The basic idea of PairWise is to compare samples in pairs, build partial-order document pairs, and learn order from comparison. As analyzed in PointWise, what we need for a query is the correct order
of the search results, not the correlation score of the search results with the query. PairWise hopes to get the correct order of the whole by correctly estimating the order of a pair of documents.
For example, if the correct order is "A>B>C", PairWise learns "A>B>C" by learning the pairwise relationships "A>B", "B>C" and "A>C".
The processing logic is as follows:
The training data format is $(q_i, d_i^+, d_i^-)$: a positive and a negative example for the query, also commonly called (anchor, positive, negative). PairWise is essentially a metric learning approach that learns relative distances directly, regardless of the absolute values. There are two common kinds of loss functions (a rough code sketch of both follows below):
1. Pair input, using a ranking loss: $L(r_0, r_1, y) = y\,d(r_0, r_1) + (1-y)\max(0, \mathrm{margin} - d(r_0, r_1))$, where $y$ takes the value 0 or 1.
2. Triplet input, using a triplet loss (or contrastive loss): $L(r_a, r_p, r_n) = \max(0, \mathrm{margin} + d(r_a, r_p) - d(r_a, r_n))$.
Prediction stage: like PointWise, the scores are used directly for sorting.
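Here is the rough code sketch referred to above. It is not taken from any particular learning-to-rank library; it simply evaluates the two losses with a Euclidean distance between document representations, and the vectors and margin are invented for the example.

import numpy as np

def dist(a, b):
    return np.linalg.norm(a - b)                 # Euclidean distance between representations

def ranking_loss(r0, r1, y, margin=1.0):
    # y = 1: the pair should be close; y = 0: push them at least `margin` apart
    return y * dist(r0, r1) + (1 - y) * max(0.0, margin - dist(r0, r1))

def triplet_loss(r_a, r_p, r_n, margin=1.0):
    # the anchor should be closer to the positive than to the negative by `margin`
    return max(0.0, margin + dist(r_a, r_p) - dist(r_a, r_n))

anchor, pos, neg = np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])
print(ranking_loss(anchor, pos, y=1), triplet_loss(anchor, pos, neg))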
1. Because the data set needs to be constructed in pair format, the number of training samples can be many times the number of documents (depending on the construction strategy). So the PointWise problem that "when different queries have different numbers of documents, the overall loss is easily dominated by query groups with more documents (training data)" still exists, and can even be worse.
2. PairWise is more sensitive to noisy data than PointWise: a single mislabelled document can invalidate multiple pairs.
3. PairWise still only considers the relative position of a pair of documents, and the loss function still does not consider the relationships among all candidate documents. It can be seen as an optimized version of PointWise; the basic idea has not changed.
4. Similarly, PairWise does not consider internal dependencies between documents corresponding to the same query. As a result, the samples in the input space are not independent and identically distributed (i.i.d.), which violates a basic assumption of machine learning.
3. ListWise
PointWise directly learns whether each sample is relevant, and PairWise learns the relative order of positive and negative pairs in a metric-learning style; both try to recover the global ranking from local, sampled comparisons. This way of thinking has fundamental drawbacks. The basic idea of ListWise is to try to optimize ranking metrics such as NDCG directly, in order to learn the best overall ordering.
One input sample is a query together with all of its candidate documents. Given $q_i$, its candidate documents and labels are $C = (d_{i1}, \dots, d_{im})$ and $Y = (y_{i1}, \dots, y_{im})$. The value of the label $Y$ represents the order of all candidate documents. For example, for the candidate set $\{a, d, c, b, e\}$, if the correct order is the natural one, the corresponding labels are $\{5, 2, 3, 4, 1\}$. The model is trained with one of the various ListWise algorithms. Prediction stage: sort by score.
• ListWise has three basic ideas:
1. The first is measure-specific.
This approach directly optimizes metrics such as NDCG. That is easier said than done, because ranking metrics such as NDCG, MAP and AUC are mathematically non-continuous and non-differentiable. Given this, there are usually three solutions. The first is to find a continuous, differentiable surrogate function that approximates NDCG and optimize NDCG by optimizing this surrogate; representative algorithms are SoftRank and AppRank. The second is to derive a mathematical bound on NDCG-type indicators and then optimize that bound, for example deriving an upper bound on the ranking loss and minimizing it; representative algorithms are SVM-MAP and SVM-NDCG. The third is to use optimization algorithms that can handle discontinuous, non-differentiable objectives directly; representative algorithms are AdaRank and RankGP.
2. The second is non-measure-specific.
This method tries to reconstruct the ordering from a known optimal order and then measures the gap between the two, optimizing the model to reduce that gap, for example by using KL divergence as the loss.
3. The third keeps the core goal of optimizing NDCG-type ranking metrics but designs a surrogate objective function; with the surrogate in place, optimization and computation are handled in a PairWise fashion. Representative algorithms are LambdaRank and LambdaMART.
• ListWise’s pros and cons
1. Constructing training data is difficult in many scenarios.
2. Since sorting loss needs to be calculated, the computational complexity is usually higher.
3. With plenty of good quality data, ListWise tends to perform better than PairWise and PointWise in learning and optimizing directly at the target task, which is sorting.
4. Common evaluation indicators
Explanation of nDCG
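The nDCG explanation itself is not reproduced in this post, so as a stand-in here is a small sketch of the standard formulation, DCG with an exponential gain normalized by the ideal DCG; the relevance grades in the example are invented.

import numpy as np

def dcg(rels):
    rels = np.asarray(rels, dtype=float)
    ranks = np.arange(1, len(rels) + 1)
    return np.sum((2 ** rels - 1) / np.log2(ranks + 1))   # graded-gain DCG

def ndcg(rels):
    ideal = dcg(sorted(rels, reverse=True))               # DCG of the ideal ordering
    return dcg(rels) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1, 2]))                           # relevance grades in ranked order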
Mean Average Precision(MAP)
In a sort task, each Query has a sorted list. As the name implies, MAP is the average AP of all queries on the test set. Let’s look at AP first:
$AP(\pi,l)=\frac{\sum^m_{k=1}{P@k*I_{\{ l_{\pi^{-1}(k)}=1\}}}}{m_1}$
$\pi$ represents the item list, i.e. the ranked list of returned results. $m$ is the number of items in the ranked list, and $m_1$ is the number of query-relevant items in the list. $I_{\{l_{\pi^{-1}(k)}=1\}}$ indicates whether the item at position $k$ is relevant (1 for relevant, 0 otherwise). $P@k$ is the top-$k$ precision:
$$P@k(\pi,l)=\frac{\sum_{t\le k} I_{\{l_{\pi^{-1}(t)}=1\}}}{k}$$
The attached chart makes it very clear:
Code implementation:
def _ap(ranked_list, ground_truth):
    # ranked_list: ranked results, e.g. ['a', 'b', 'd', 'c', 'e']
    # ground_truth: list of relevant items, e.g. ['a', 'd']
    hits = 0
    sum_precs = 0.0
    for n, item in enumerate(ranked_list):
        if item in ground_truth:
            hits += 1
            sum_precs += hits / (n + 1.0)
    return sum_precs / max(1.0, len(ground_truth))
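A short usage sketch (with made-up queries) showing that MAP is simply the mean of the per-query AP values returned by the function above:

queries = [
    (['a', 'b', 'd', 'c', 'e'], ['a', 'd']),   # (ranked_list, ground_truth) per query
    (['x', 'y', 'z'], ['z']),
]
ap_values = [_ap(ranked, truth) for ranked, truth in queries]
print("MAP =", sum(ap_values) / len(ap_values))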
1. Search evaluation index — NDCG | {"url":"https://www.mo4tech.com/learning-to-rank-algorithm-and-evaluation-index.html","timestamp":"2024-11-11T07:20:14Z","content_type":"text/html","content_length":"83650","record_id":"<urn:uuid:a294a1c2-2e92-4140-95c9-9e7b947f3925>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00649.warc.gz"} |
Efficient Algorithm for the Linear Complexity of Sequences and Some Related Consequences
The linear complexity of a sequence s is one of the measures of its predictability. It represents the smallest degree of a linear recursion which the sequence satisfies. There are several algorithms
to find the linear complexity of a periodic sequence s of length N (where N is of some given form) over a finite field F_q in O(N) symbol field operations. The first such algorithm is The Games-Chan
Algorithm which considers binary sequences of period 2^n, and is known for its extreme simplicity. We generalize this algorithm and apply it efficiently for several families of binary sequences. Our
algorithm is very simple, it requires β N bit operations for a small constant β, where N is the period of the sequence. We make an analysis on the number of bit operations required by the algorithm
and compare it with previous algorithms. In the process, the algorithm also finds the recursion for the shortest linear feedback shift-register which generates the sequence. Some other interesting
properties related to shift-register sequences, which might not be too surprising but generally unnoted, are also consequences of our exposition. | {"url":"https://api.deepai.org/publication/efficient-algorithm-for-the-linear-complexity-of-sequences-and-some-related-consequences","timestamp":"2024-11-06T07:03:04Z","content_type":"text/html","content_length":"154586","record_id":"<urn:uuid:55e8c1f8-8508-4a72-ae27-8e506d8951e8>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00665.warc.gz"} |
Flat belt conveyor design calculations with practical application
Engineering / Mechanical Engineering
Flat belt conveyor design calculations with practical application
Flat belt conveyor design calculations consist 1.Conveyor belt speed 2. Roller diameter 3.Conveyor capacity 4. Conveyor power calculations 5.Conveyor live load kg per meter 6. belt width.
What Is Material Handling?
Conveyors are just one subset of the much larger group of material handling equipment. Through the proper application and use of material handling equipment, we try to minimize or even eliminate the
manual handling of material. Material handling is all about movement; raw materials, parts, boxes, crates, pallets, and luggage must be moved from one place to another, from point A to point B,
ideally in the most efficient manner. The material being handled is virtually limitless in size, shape, weight, or form.
Material can be moved directly by people lifting and carrying the items or using hand carts, slings, and other handling accessories. Material can also be moved by people using machines such as
cranes, forklift trucks, and other lifting devices. Finally, material can be moved using automated equipment specifically designed for mechanically handling the items such as robots and conveyors.
“Material handling is the art and science associated with providing the right materials to the right place in the right quantities, in the right condition, in the right sequence, in the right
orientation, at the right time, at the right cost, using the right methods.”
What Are the Major Objectives of Conveyor Application?
Reduce actual manual handling to a minimum.
Perform all handling operations at the lowest reasonable cost.
Eliminate as many manual operations as possible.
Ease the workload of all operators.
Improve ergonomic considerations for each operator.
Improve workflow between operations.
Provide routing options for intelligent workflow.
Increase throughput.
Carry product where it would be unsafe to do so manually.
Components of belt conveyor:
1.Conveyor frame- It is structure which support and maintains the alignments of idlers, pulleys and drives. There are several prominent frame design types.
a) Most common is a welded steel frame.
b) Aluminum extrusion frame which is popular for flexibility.
2.Drive unit- Which imparts power through one or more pulleys or rollers to move belt and its loads. Generally, drive unit is electric motor with gearbox.
3.Belt- The belt forms the moving and supporting surface on which the conveyed material is placed. The belt not only carries the material but also transmits the pull; it is the tractive element.
Types of various belts 1. Flat belt, 2.Modular belt, 3. Cleated belt, 4.V belt, 5.Timing belt.
4.Drive pulley or Drive roller
5.Driven pulley or Driven roller
6. Take up unit- The take-up device in a conveyor belt system has three major functions:
a)To establish and maintain a predetermined tension in the belt.
b) Remove the accumulation of slack in the belt at start-up or during momentary overloads, in addition to maintaining the correct operating tension.
c) To provide sufficient reserve belt length to enable re-splicing, if necessary
7.Electric control panel- Electrical control panel has all electrical components like MCB’s, on off switch, and VFD.
1.Drive roller Diameter (mm)
2.Belt length (mm)
3.Maximum loading Capacity (Tonnes /Hr.)
4. Belt width (mm)
5.Conveyor speed(m/s)
6.Material per unit length(kg/m)
7.Motor (Kw)
Input data or Considerations or Assumptions:
While any conveyor design you have some input data like center distance, required belt speed, type of material and size of material, conveyor capacity i.e. kg/s or Tonnes per hour.
If there is no any data given you have to consider it.
1.Roller Diameter and speed of conveyor
The belt speed in m/s is
$$V=\frac{\mathrm\pi\times D\times N}{60\times1000}$$
Rearranging this speed equation, the diameter of the drive pulley is,
$$D=\frac{V\times60\times1000}{\mathrm\pi\times\mathrm N}$$
V – Belt speed in m/s
D – Roller diameter (mm)
N – No of revolutions per minute(rpm).
2.Belt Length
a) When one pulley is larger than other.
$$L=2C+\frac{\mathrm\pi(\mathrm D+\mathrm d)}2+\frac{{(D-d)}^2}{4C}$$
b) Similarly, When both pulleys having same diameter.
$$L=2C+\frac{\mathrm\pi(\mathrm D+\mathrm d)}2$$
L – Belt Length(mm)
D – Large pulley diameter ( mm)
d – Small pulley diameter (mm)
C – Center distance (mm)
3.Belt width
Minimum belt width may be influenced by loading or transfer point requirements, or by material lump sizes. Standard belt widths are 400, 450, 500, 600, 650, 750, 800, 900, 1000, 1050, 1200, 1350,
1400, 1500, 1600, 1800, 2000 and 2200.
Also, you can calculate belt width as,
$$B_{min}=1.11\left[\left(\frac{Q}{c\times v}\right)^{\frac{1}{2}}+0.05\right]$$
B min – Minimum belt width required.
Q-Conveyor capacity tonnes/hr
v- Belt speed meter/ second
c-Factor for the type of idler
= 240, for flat belt
= 460, for trough angle of 20°
= 510, for trough angle of 25°
= 540, for trough angle of 30°
4.Capacity of conveyor (Q ) Tonnes/hr.
The rate at which material is being carried out by the conveyor.
Capacity Q = 3.6 X Load cross sectional area perpendicular to belt (m^2) X Belt speed (m/s) X Material density (kg/m^3)
= 3.6 x A x V x ρ Tonnes/hr.
5.Mass of material per unit length M (kg/m)
$$M=\frac{\mathrm Q}{3.6\times V}$$
Q – Capacity of conveyor in Tonnes/hour
V – Conveyor belt speed in m/s
6.Belt power calculations in conveyor design
Required Power = Belt pull x Belt speed
$$Power=\frac{F\times V}{1000}\;Kw$$
P – Power rating in (Kw)
V – Velocity of belt (m/s).
F – Total tangential for at the periphery of drive pulley(N).
7. Tangential force on periphery of drive pulley F (N)
$$F=\mu\times g\times(M_b+M_r+M_m)$$
Mb – Total mass of belt (kg)
Mr – Total mass of rollers (kg)
Mm – Mass of material over entire length of conveyor (kg)
g – Acceleration due to gravity(m/s^2)
μ – Coefficient of friction generally in between 0.1 to 0.4
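Putting the formulas above together, here is a hedged Python sketch of the calculation chain; every numerical input is an assumed, illustrative value rather than data from the article.

import math

# Illustrative input values (assumptions, not from the article)
V = 1.5           # belt speed, m/s
N = 50            # drive pulley speed, rpm
A = 0.012         # load cross-sectional area, m^2
rho = 1600        # material bulk density, kg/m^3
mu = 0.3          # coefficient of friction (typically 0.1 to 0.4)
g = 9.81          # acceleration due to gravity, m/s^2
Mb, Mr = 120, 80  # total mass of belt and rollers, kg (assumed)
L = 25            # conveyor length, m (assumed)

D = V * 60 * 1000 / (math.pi * N)   # drive pulley diameter, mm
Q = 3.6 * A * V * rho               # capacity, tonnes/hr
M = Q / (3.6 * V)                   # material mass per unit length, kg/m
Mm = M * L                          # material mass over the entire conveyor, kg
F = mu * g * (Mb + Mr + Mm)         # tangential force at the drive pulley, N
P = F * V / 1000                    # required drive power, kW

print(f"Pulley diameter: {D:.0f} mm, capacity: {Q:.1f} t/h, power: {P:.2f} kW")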
{"url":"https://skyhighelearn.com/flat-belt-conveyor-design-calculations/","timestamp":"2024-11-07T21:48:29Z","content_type":"text/html","content_length":"116321","record_id":"<urn:uuid:d1d435f8-9776-426c-818f-6c8bde35a522>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00732.warc.gz"}
Base Ten Block Testimonials
Math Tutoring Testimonials
These are actual Math Tutoring Testimonials, from actual students or parents. Turns out getting testimonials is like pulling teeth. One parent told me it will take about six months...I have quite a
few hand written, antique testimonials, written on paper that I need to find and scan...
Also you can check my blog and webpage for posts with screenshots of some of the testimonials I have received from people who use Face Book.
Here is another collection of testimonials from people who have found that this really is an excellent way to teach math.
"We are able to use the Mortensen math blocks for all operational math concepts and skills practice plus so much more. My son is 8 and we are unschoolers. Any concept that comes up can be
demonstrated and practiced w the blocks. But the really great thing is that we have the CRHoM videos to watch for inspiration and support. Math is fun and we are not dependent on a linear curriculum.
We can move from practicing addends one day to multiples the next, algebra, fractions, square roots - anything! So the practice is self-directed and learning belongs to my son and it means
something." ~CW, 2013
"I wanted to let you know that I have passed my algebra class. My final grade was a C. I could not have done it without your help, so I thank you very much." ~TK [...doesn't sound great until you
realize he was getting 40's and 50's when I met him. Took several 100s to get him to a C.]
Irene Newhouse has endorsed your work as Proprietor at Crewton Ramone's House Of Math.
Dear Crewton, I've written this recommendation of your work to share with other LinkedIn users.
"I've known no one who is more successful at tutoring students with difficulties learning math. Before I took our daughter to him for the summer of 2003, I thought she was going to be one of those
kids who need to have access to a calculator at all times. In 6 lessons, she learned the multiplication facts & firmed up addition and subtraction. She went to him again in summer 2010, and her
ability to understand algebraic reasoning improved dramatically." Service Category: Math tutor
Year first hired: 2003 (hired more than once)
Top Qualities: Great Results, Personable, Expert
"The math tutoring was pretty innovative. It's a great idea and the visualization using the blocks really did simplify the type of math we were learning. Crewton Ramone was very enthusiastic and
persistent, which really helped especially when I first started. I think if I had stuck with the program, it would have helped with upper-level math courses." ~B.Hinaga. College Student. 2009 [She
took some classes YEARS ago.]
"Math has always intimidated me...until I met Crewton Ramone. After graduating from high school I needed to brush up on my more than rusty math skills for my college placement exams. I have always
excelled in literature, writing and the arts, but I was afraid I wouldn't even qualify for a credit bearing class in math. Crewton Ramone put me at ease in the math world and had me doing advanced
mathematics through simple, intuitive techniques. Thanks to Crewton Ramone 's help I now look forward to my mathematic studies!" ~A. Eastling, Artist, some time, part time student, gourmand. 2009
When I was stuck in school my mom suggested a tutor to help me with math. I was resistant at first not knowing what to expect, thinking why would a tutor be any more helpful than my teacher. But
Crewton Ramone's way of teaching math makes you look at the problems from a different angle and it's actually kind of fun! ~Chris A. High School Algebra Student. Has since graduated and moved on to
the Merchant Marines...
Here is an old one I just recently came across:
Dear Crewton,
Thank you so much for your visit to our Camp Forum program at the IBI Seminar in Southern California today.
The children enrolled in our program this week are 3,5,6, and 7 year olds. They were all spellbound by your program, as were we adults. Within the span of 30-45 minutes, you had us all doing
algebraic equations typically found on SAT tests with no fear of math at all, and to our amazement it was FUN!
I think perhaps even more impressive is the way that you walk your talk. You set a great example for us in how to keep the children engaged. It is so important, especially at this crucial moment in
our history, to find ways to keep children interested in learning and in life, and to be able to keep the motivation for it based on positive reinforcement.
Thanks once again for the work you do, the way you do that work, and for sharing it all with us today.
Lisa Wayne, Director
Camp Forum
Justin Frodsham
Software Engineer,
“Excellent trainer for teachers as well as a fantastic tutor for students.”
I get a lot of email with these sentiments but they don't really count as testimonials:
"Thanks for the great work you are doing!"~TP, 2012
"Have just discovered your website. I love it.
You are changing the world for the better.
Thank you."~B, 2013
"Thank you for passing along the new password...I don't remember hearing about Mortensen Math until stumbling upon your videos - what a SHAME this isn't the approach schools take with teaching math
to our kids. I know that I would have had a much easier time with the subject if it had been shown to me this way. :-/
Anyway, I love what I've learned on your site and I love watching the videos - DBoyz are SO darn cute! My son has mentioned to me on multiple occasions that this is much easier to understand for him
(his dyslexia has always made math a scary and difficult subject for him - and I've never been particularly strong with numbers either, so approaching it the way you show on the website has made it
less intimidating, and actually pretty neat)."~CD-V, 2012
Yet another kid goes from F to A: http://crewtonramoneshouseofmath.blogspot.com/2014/01/more-hundreds-perfect-scores.html
"We did our first session with Crewton Ramone's House of Math this evening. Wow! I'm amazed. Totally was NOT expecting my six yr old to start learning algebra day 1. I thought we were going to do
addends or build walls, but Nope...CR starts with algebra. When I was growing up, I remember hearing my dad say to others who thought his goals for my brother and I were too lofty, that "children
rise to our [adults] expectations". Well, CR is brilliant and gutsy enough to expect that a 6 yr old can, in fact, learn and do algebra. And not only did my son learn it, but he enjoyed it and his
last comment to me as I was tucking him in, was that he didn't want to watch TV on Saturday but instead he wanted to work on his science project and do math...!" ~Staci Hill Okine, Maryland. March
"Had a tutoring session yesterday, old dude of 32 here-played with blocks for two hours, and had a blast-who knew factoring could be so fun, at any age! " ~Nate Lee
"Crewton Ramone shows you the simple beauty behind math, rather than talking circles around it. And he makes a mean sweet n' sour chicken." ~S.O.L [By far my favorite intitals.] Massage Therapist,
extremely part time party animal. 2009
"I work at XYZ Academy and the principle asked if I knew any math teachers.... They need a math teacher from 8:45 am to 10:15 am. If interested call them, you have the job, they will pay you for 2
hours daily, pay negotiable with experience. You are good at what you do. Wishing you well." ~J.H. Teachers Aide. 2009 [Edited for your reading enjoyment. I have done several teacher trainings there.
Names changed to protect the innocent. Demand for math teachers is reaching condition critical. Unfortunately, they haven't printed enough money yet to get me to work there. It's a quasi-public
school. My mother was a public school teacher for 30 years+. Nuff said. No offense to the school.]
We would like to say that Master Mathematician Crewton Ramone is truly that … a Master at Math. Master Crewton Ramone is able to take Math and teach it to us in a way that is simple and logical but
most of all enjoyable. Through a series of songs or playing blocks. You are not too young or too old to enjoy math. Math is just a game and Master Crewton Ramone has the secrets to winning this game
called Math with easy to learn games. We highly recommend Master Crewton Ramone. Our daughter who is now 16 says “...he is the best math teacher ever!”. Come play Math with Crewton Ramone he truly
makes it fun and that is half the battle. ~LeRoy, Adrienne & Krysta Fries
Check out the trig page for more.
More from parents and teachers etc as they arrive.
“You are rewarding a teacher poorly if you remain always a pupil.” ~Friedrich Nietzsche
“If a child can't learn the way we teach, maybe we should teach the way they learn.” ~Ignacio Estrada
“Be an opener of doors for such as come after thee.” ~Ralph Waldo
“Teaching should be such that what is offered is perceived as a valuable gift and not as a hard duty.” ~Albert Einstein
Note: Mortensen Product Ordering Buttons Have Been Removed Due To Shipping/Inventory Issues. i basically DO NOT sell product for them anymore. Use eBay or other sources for base ten blocks. | {"url":"https://www.crewtonramoneshouseofmath.com/math-tutoring-testimonials.html","timestamp":"2024-11-05T05:27:53Z","content_type":"text/html","content_length":"36591","record_id":"<urn:uuid:e7be020a-e003-463f-b44f-f8e7eb20eb23>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00304.warc.gz"} |
Curve Construction Guide
The term structure of interest rates, also known as yield curve, is defined as the relationship between the yield-to-maturity on a zero coupon bond and the bond’s maturity. Zero yield curves play
an essential role in the valuation of all financial products.
The current methodology in capital markets for marking to market securities and derivatives is to estimate and discount future cash flows using rates derived from the appropriate term structure. The
yield term structure is increasingly used as the foundation for deriving relative term structures and as a benchmark for pricing and hedging.
We adopt a hybrid of the bootstrapping method and an enhanced method to generate the zero curve. Assuming cash Libor rates, Libor futures and Libor swaps as the underlying instruments, the curve generation process is described as follows:
Separate the underlying instruments into two groups based upon the longest Libor maturity of the corresponding Libor future. Those with shorter maturities are classified as short to medium-term
instruments; typical of cash Libor rates, Libor futures and short-term Libor swaps. Those with longer maturities are long-term instruments; typical of medium to long term Libor swaps.
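As a hedged illustration of the long-term bootstrapping step (a simplified sketch, not the methodology described here), the Python snippet below strips discount factors from par swap rates, assuming annual fixed payments and unit year fractions; the input rates are invented.

# Par swap rates by maturity in years (illustrative numbers, not market data)
swap_rates = {1: 0.030, 2: 0.032, 3: 0.034, 4: 0.035, 5: 0.036}

discount_factors = {}
for n in sorted(swap_rates):
    s = swap_rates[n]
    annuity = sum(discount_factors[i] for i in range(1, n))   # sum of earlier discount factors
    # Par condition: s * (annuity + df_n) + df_n = 1  =>  df_n = (1 - s*annuity) / (1 + s)
    df = (1.0 - s * annuity) / (1.0 + s)
    discount_factors[n] = df
    zero_rate = df ** (-1.0 / n) - 1.0                        # annually compounded zero rate
    print(f"{n}y: df={df:.4f}, zero={zero_rate:.4%}")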
Yield curves can be derived from government bonds or LIBOR/swap instruments. The LIBOR/swap term structure offers several advantages over government curves, and is a robust tool for pricing and
hedging financial products. Correlations among governments and other fixed-income products have declined, making the swap term structure a more efficient hedging and pricing vehicle. | {"url":"https://cfrm17.github.io/irCurve.html","timestamp":"2024-11-08T02:13:14Z","content_type":"text/html","content_length":"5325","record_id":"<urn:uuid:ba7381db-3955-49f3-ae61-7ed358765cb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00211.warc.gz"} |
Gateway to Research (GtR) - Explore publicly funded research
Circle rotations and their generalisations in Diophantine approximation
Diophantine approximation is the study of how well real numbers can be approximated by rational numbers. Throughout the history of mathematics this has been one of the most important fields in
applications to real world problems. Today Diophantine approximation is used in numerical algorithms and computer programs which model scientific experiments and other natural behaviour. It also
plays a significant role as a supporting structure for results in many other mathematical and scientific settings.
There are several long standing open problems in Diophantine approximation which have attracted recent attention in the wider mathematical community. One of these is the Littlewood Conjecture, which
predicts how well pairs of real numbers can be simultaneously approximated by rationals with the same denominator. The goal of this project is to investigate the Littlewood Conjecture and related
problems by using information about the distribution of circle rotations and their generalisations.
Suppose you take a circle of circumference one and single out a point somewhere along the boundary. If you rotate the whole circle through a fixed angle your point will move to a new position on the
circle. If you think about repeating this rotation infinitely many times then the collection of all possible positions of the point is called its orbit. Understanding the orbits of points under a
given rotation is a basic problem which is directly related to understanding how well a real number can be approximated by fractions.
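As an informal illustration of this idea (not part of the project description), the short Python sketch below follows the orbit of the point 0 under rotation by an irrational angle and prints the times at which the orbit returns unusually close to its start; for a typical irrational angle these record times are essentially the denominators of its continued-fraction convergents.

```python
# Illustrative sketch: orbit of 0 under the circle rotation x -> x + alpha (mod 1).
import math

alpha = math.sqrt(2) - 1       # an irrational rotation angle (chosen for illustration)
best = 1.0                     # smallest return distance seen so far
x = 0.0
for n in range(1, 200):
    x = (x + alpha) % 1.0
    dist = min(x, 1.0 - x)     # distance around the circle from the orbit point back to 0
    if dist < best:
        best = dist
        # each record means n*alpha is unusually close to an integer,
        # i.e. some p/n is an unusually good rational approximation of alpha
        print(f"n={n:3d}  |n*alpha mod 1| = {dist:.6f}")
```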
I have recently shown how a technique called Ostrowski expansion can be used to prove substantial new results about the Littlewood Conjecture. Ostrowski expansion basically allows us to reorganize
the orbits of points into an infinite array of blocks, each of which can then be understood by using number theoretic techniques. In this way the Ostrowski expansion can be used to isolate one of the
variables in the Littlewood Conjecture and thereby recast the problem in a one-dimensional setting.
This understanding of circle rotations may well lead to the proof of the entire Littlewood Conjecture. However, there are also several other interesting problems which are open to attack via this approach.
One such problem which I will investigate is known as the "shrinking targets" problem. Here you consider a circle rotation and to each element in the orbit of a point you attach a small ball of a
certain radius. The radii of the balls should shrink as the rotation progresses, and the problem is to determine which points on the circle are captured in infinitely many of the balls. In the form
presented here the answer to this problem is known. However it is still a wide open problem to prove a quantitative result, which would tell us something about the proportion of balls which capture a
given point on the circle. These types of problems have consequences in dynamical systems and particle physics.
Another problem is to replace the circle rotation by a different transformation of the circle. The so-called "interval exchange transformations" are generalisations of circle rotations which are
relevant to problems in Diophantine approximation and dynamical systems. It is possible to associate to each of these transformations an Ostrowski expansion that encodes information about the orbits
of points. In this way the framework which we are developing to study the Littlewood Conjecture should also allow us to prove new and interesting results in many settings.
Planned Impact
1) Increasing security of internet transactions: The accomplishment of milestone (D2i) in my Case for Support will increase global knowledge of the security of the RSA encryption algorithm. This is
currently the most widely used algorithm for public encryption of data, and most internet transactions and signatures rely on this algorithm to ensure their validity. When we share information with a
secure web site or disclose our credit card and personal information online we rely on the security of the RSA algorithm to protect this information from unauthorized parties.
On the negative side there is a global proliferation of online phishing attacks, in which internet criminals fraudulently disguise themselves as banks and other legitimate organizations in order to
steal personal data from unsuspecting victims. This has resulted in massive losses, with losses from online bank fraud alone amounting to tens of millions of pounds each year (according to recent
conservative estimates by Microsoft). RSA encryption is one of the safeguards that banks and other institutions currently use in order to establish and verify secure connections to minimize the
damage caused by phishing. This research proposal will help us to understand more of the possible weaknesses of the RSA algorithm, which is crucial to closing the door on the illegal compromise of
personal data.
RSA encryption is always a topic of major interest in computer science. As such publishing our results online and in a major mathematical journal will ensure that researchers and people who are
applying these algorithms to daily life will be aware of our advances. Furthermore the University of Bristol is closely connected with the Heilbronn Institute, which will also aid in quickly
disseminating our findings to the correct parties.
2) Strengthening the people pipeline in the UK: I attended a recent International Review of Mathematics meeting in Durham, where it was pointed out strongly that one of the key problems facing
mathematics in the UK is the insufficiency of post-doctoral positions for recent PhDs. Related to this need, one of the provisions of this research proposal is for the appointment of an RA for 3
years. This will help to strengthen the people pipeline in the UK and it will provide a solid foundation for the RA to establish a successful career in mathematics.
This project will have a considerable academic impact on the RA, who is to join me for three years beginning in the second year of the grant. My goal is to select and train a promising recent PhD
student coming from either number theory or dynamical systems, in order both to extend the RA's mathematical knowledge base and to help them to establish international connections with other
mathematicians working in related fields. Ideally this would be a person from the UK, and I have several people in mind. Being involved in these widely respected and far reaching problems at the
forefront of current research will open many doors and prepare the RA for a successful career in mathematics.
3) Helping to maintain the diversity and international standing of the UK as a whole: Part of the proposal includes plans to travel worldwide and to invite people to the UK, which will help to
maintain the diversity and international links that the UK has with other parts of the world. Also if certain objectives of the proposal are successful, for example if we succeed in proving the
Littlewood Conjecture, it is likely to draw international media attention because of the growing interest in mathematics among the general public.
Description We have discovered a lot of interesting new connections between number theory, dynamical systems, and tiling theory. Our research has led to a better understanding of deformation
properties and statistics of patterns in materials known as quasicrystals.
Exploitation Route Hard to know right now. In the academic world people are already taking our results and building on them. It is a fast moving field and there is still time on the grant, so rather than speculate I would prefer to keep working and just see how things develop.
Sectors Chemicals, Healthcare, Manufacturing including Industrial Biotechnology, Security and Diplomacy
Description Math event (STEM centre, York)
Form Of Engagement Participation in an activity, workshop or similar
Part Of Official No
Geographic Reach Regional
Primary Audience Schools
Results and Impact After a public lecture by a popular maths book writer, I led an interactive presentation about probability theory at weekend math event for primary and secondary school
students. One of my postdocs (Sara Munday) led an interactive presentation about fractal geometry.
Year(s) Of Engagement 2015
Description Problem solving course (Univ York)
Form Of Engagement Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Schools
Results and Impact One of my postdocs (Henna Koivusalo) and I have been organizing and running a bi-weekly problem solving course at the University of York. The course is aimed at Y12 and Y13 students. The goals are: 1) To help them develop their mathematical reasoning and problem solving skills, 2) To give them the flavor of what it is to do mathematics, at the same time introducing them to more advanced and abstract material than they will learn in their A-levels, 3) To help the Y13 students prepare to take the STEP exam, to qualify for entry to and scholarships in the mathematics programs at top schools.
Year(s) Of Engagement 2014, 2015, 2016
Description RI Masterclass (York)
Form Of Engagement Participation in an activity, workshop or similar
Part Of Official No
Geographic Reach Regional
Primary Audience Schools
Results and Impact I ran an RI Masterclass in York on the topics of large numbers and infinity. The audience were mostly Y9 students, and the goal was to give them a flavor of more advanced
mathematical thinking, and to inspire some of them to want to purse further mathematical education.
Year(s) Of Engagement Activity 2016
Description Summer school in tiling theory
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I was an invited speaker at a summer school on tiling theory, funded by CIRM, in Oleron, France. I gave a lecture series on repetitivity of patterns in mathematical models
for quasicrystals.
Year(s) Of Engagement 2016
URL https://oleron.sciencesconf.org/
Description Summer school on fractal geometry and complex dynamics
Form Of Engagement Participation in an activity, workshop or similar
Part Of Official No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I gave a course at the Summer School on Fractal Geometry and Complex Dynamics, at Cal Poly San Luis Obispo. The course consisted of several lectures that described the basic
theory of quasicrystals and their diffraction properties.
Year(s) Of Engagement 2016
URL http://www.calpoly.edu/~epearse/Fractals2016/ | {"url":"https://gtr.ukri.org/projects?ref=EP%2FJ00149X%2F2&pn=0&fetchSize=10&selectedSortableField=date&selectedSortOrder=ASC","timestamp":"2024-11-04T17:13:39Z","content_type":"application/xhtml+xml","content_length":"67887","record_id":"<urn:uuid:5db29c45-8729-4d1e-822f-fd1d97dadb95>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00866.warc.gz"} |
Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:35404-35469, 2024.
Modern deep learning models are usually highly over-parameterized so that they can overfit the training data. Surprisingly, such overfitting neural networks can usually still achieve high prediction
accuracy. To study this “benign overfitting” phenomenon, a line of recent works has theoretically studied the learning of linear models and two-layer neural networks. However, most of these analyses
are still limited to the very simple learning problems where the Bayes-optimal classifier is linear. In this work, we investigate a class of XOR-type classification tasks with label-flipping noises.
We show that, under a certain condition on the sample complexity and signal-to-noise ratio, an over-parameterized ReLU CNN trained by gradient descent can achieve near Bayes-optimal accuracy.
Moreover, we also establish a matching lower bound result showing that when the previous condition is not satisfied, the prediction accuracy of the obtained CNN is an absolute constant away from the
Bayes-optimal rate. Our result demonstrates that CNNs have a remarkable capacity to efficiently learn XOR problems, even in the presence of highly correlated features.
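For readers who want a concrete picture of the setting, the snippet below generates a toy XOR-type dataset with label-flipping noise. It is only an illustrative sketch: the dimensions, noise levels and patch structure are arbitrary assumptions and not the exact data model analysed in the paper.

```python
# Toy XOR-type data with label-flipping noise (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50                       # samples, ambient dimension per patch
flip_prob = 0.1                      # label-flipping noise rate
mu_a = np.zeros(d); mu_a[0] = 1.0    # two orthogonal signal directions
mu_b = np.zeros(d); mu_b[1] = 1.0

# Clean XOR labels: y = +1 if the two signal components have the same sign, else -1.
signs_1 = rng.choice([-1, 1], size=n)
signs_2 = rng.choice([-1, 1], size=n)
y_clean = signs_1 * signs_2          # not linearly separable: the Bayes classifier is non-linear

# Each example has two patches: one carrying the signal, one pure noise.
patch_signal = signs_1[:, None] * mu_a + signs_2[:, None] * mu_b
patch_noise = 0.5 * rng.standard_normal((n, d))
X = np.stack([patch_signal + 0.1 * rng.standard_normal((n, d)), patch_noise], axis=1)

# Apply label-flipping noise.
flips = rng.random(n) < flip_prob
y = np.where(flips, -y_clean, y_clean)
print(X.shape, y[:10])
```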
Related Material | {"url":"https://proceedings.mlr.press/v235/meng24c.html","timestamp":"2024-11-15T03:42:09Z","content_type":"text/html","content_length":"16480","record_id":"<urn:uuid:d907e8d8-e2e2-4042-9e80-de605812e7b8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00556.warc.gz"} |
Nine-point circle
The nine-point circle (also known as Euler's circle or Feuerbach's circle) of a given triangle is a circle which passes through 9 "significant" points:
• The three feet of the altitudes of the triangle.
• The three midpoints of the edges of the triangle.
• The three midpoints of the segments joining the vertices of the triangle to its orthocenter. (These points are sometimes known as the Euler points of the triangle.)
"The nine-point circle is tangent to the incircle, has a radius equal to half the circumradius, and its center is the midpoint of the segment connecting the orthocenter and the circumcenter."
That such a circle exists is a non-trivial theorem of Euclidean geometry.
The center of the nine-point circle is the nine-point center and is usually denoted $N$.
The nine-point circle is tangent to the incircle, has a radius equal to half the circumradius, and its center is the midpoint of the segment connecting the orthocenter and the circumcenter, upon
which the centroid also falls.
The nine-point center is also denoted as Kimberling center $X_5$.
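As an informal numerical check of these properties (not part of the original article), the short Python sketch below verifies, for one sample triangle, that the three side midpoints, the three altitude feet and the three Euler points all lie at distance $R/2$ from the midpoint of $OH$.

```python
# Numerical sanity check for one sample triangle (illustrative only).
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def circumcenter(P, Q, R):
    # Solve |X - P|^2 = |X - Q|^2 = |X - R|^2 as a 2x2 linear system
    M = 2 * np.array([Q - P, R - P])
    b = np.array([Q @ Q - P @ P, R @ R - P @ P])
    return np.linalg.solve(M, b)

def foot(P, Q, R):
    # Foot of the perpendicular from P onto line QR
    t = (P - Q) @ (R - Q) / ((R - Q) @ (R - Q))
    return Q + t * (R - Q)

O = circumcenter(A, B, C)
H = A + B + C - 2 * O                      # orthocenter (vector identity with circumcenter O)
N = (O + H) / 2                            # claimed nine-point center
R_circ = np.linalg.norm(A - O)             # circumradius

nine_points = [(A + B) / 2, (B + C) / 2, (C + A) / 2,          # side midpoints
               foot(A, B, C), foot(B, C, A), foot(C, A, B),    # altitude feet
               (A + H) / 2, (B + H) / 2, (C + H) / 2]          # Euler points
dists = [np.linalg.norm(P - N) for P in nine_points]
print(np.allclose(dists, R_circ / 2))      # True: all nine points lie on one circle of radius R/2
```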
First Proof of Existence
Since $O_c$ is the midpoint of $AB$ and $E_b$ is the midpoint of $BH$, $O_cE_b$ is parallel to $AH$. Using similar logic, we see that $O_bE_c$ is also parallel to $AH$. Since $E_b$ is the midpoint of
$HB$ and $E_c$ is the midpoint of $HC$, $E_bE_c$ is parallel to $BC$, which is perpendicular to $AH$. Similar logic gives us that $O_bO_c$ is perpendicular to $AH$ as well. Therefore $O_bO_cE_bE_c$
is a rectangle, which is a cyclic figure. The diagonals $O_bE_b$ and $O_cE_c$ are diameters of the circle circumscribing this rectangle. Similar logic to the above gives us that $O_aO_cE_aE_c$ is a rectangle with a common diagonal to $O_bO_cE_bE_c$. Therefore the circumcircles of the two rectangles are identical. We can also deduce that the rectangle $O_aO_bE_aE_b$ lies on the same circle.
We now have a circle with the points $O_a$, $O_b$, $O_c$, $E_a$, $E_b$, and $E_c$ on it, with diameters $O_aE_a$, $O_bE_b$, and $O_cE_c$. We now note that $\angle E_aH_aO_a=\angle E_bH_bO_b=\angle
E_cH_cO_c=90^{\circ}$. Therefore $H_a$, $H_b$, and $H_c$ are also on the circle. We now have a circle with the midpoints of the sides on it, the three midpoints of the segments joining the vertices
of the triangle to its orthocenter on it, and the three feet of the altitudes of the triangle on it. Therefore, the nine points are on the circle, and the nine-point circle exists.
Second Proof of Existence
We know that the reflection of the orthocenter about the sides and about the midpoints of the triangle's sides lie on the circumcircle. Thus, consider the homothety centered at $H$ with ratio ${1}/
{2}$. It maps the circumcircle of $\triangle ABC$ to the nine-point circle, and the vertices of the triangle to its Euler points. Hence proved.
Common Euler circle
Let an acute-angled triangle $ABC$ with orthocenter $H$ be given.
Let $\Omega = \odot ABC$ and let $Z$ be the point on $\Omega$ diametrically opposite $A.$
Points $E \in AB$ and $F \in AC$ such that $AEHF$ is a parallelogram. The line $EF$ intersects $\Omega$ at the points $X$ and $Y.$
Prove that the triangles $\triangle ABC$ and $\triangle XYZ$ have a common Euler (nine-point) circle.
Proof
$BH \perp AC,\ EH \parallel AF \implies BH \perp EH,\ \angle BEH = \angle BAC.$
$CH \perp AB,\ FH \parallel AE \implies CH \perp FH,\ \angle CFH = \angle BAC.$
$\triangle BHE \sim \triangle CHF \implies \frac{AF}{BE} = \frac{EH}{BE} = \frac{FH}{FC} = \frac{AE}{FC}.$
$XE \cdot EY = AE \cdot BE = AF \cdot FC = XF \cdot FY \implies XE \cdot (EF + FY) = (XE + EF) \cdot FY \implies XE = FY.$
Denote by $D$ the midpoint of $AH \implies DX = DE + XE = DF + FY = DY.$
Consider $\triangle AHZ.$ The circumcenter $O$ of $\triangle ABC$ is the midpoint of $AZ,$ and point $D$ is the midpoint of $AH.$
Denote by $G$ the centroid of $\triangle AHZ$, so $G = ZD \cap HO \implies \frac {HG} {GO} = 2 \implies$
$G$ is the centroid of $\triangle ABC.$
Denote by $M$ the midpoint of $BC.$ Since $\frac {AG} {GM} = 2,$ $M$ is the midpoint of $HZ.$
Since $D$ is the midpoint of $XY$ and $\frac {ZG} {GD} = 2,$ $G$ is the centroid of $\triangle XYZ.$
Point $O$ is the circumcenter of $\triangle XYZ$ (as $X, Y, Z \in \Omega$); since $\triangle XYZ$ and $\triangle ABC$ share the circumcenter $O$ and the centroid $G,$ they share the orthocenter, so $H$ is the orthocenter of $\triangle XYZ.$
The triangles $\triangle ABC$ and $\triangle XYZ$ have a common circumcircle and a common center of the Euler circle (the midpoint of $OH$); therefore these triangles have a common Euler circle.
vladimir.shelomovskii@gmail.com, vvsss
This article is a stub. Help us out by expanding it. | {"url":"https://artofproblemsolving.com/wiki/index.php/Nine_point_circle","timestamp":"2024-11-06T18:50:02Z","content_type":"text/html","content_length":"56986","record_id":"<urn:uuid:8c8a685a-0f29-463b-a9c1-d3dfcdf25c3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00783.warc.gz"} |
Modelling Fine Sediment Dynamics: Towards a Common Erosion Law for Fine Sand, Mud and Mixtures
IFREMER/DYNECO/DHYSED, centre de Bretagne, ZI de la pointe du Diable CS 10070, 29280 Plouzané, France
AAMP (Agence des Aires Marines Protégées), 16 quai de la Douane, 29200 Brest, France
SHOM/DOPS/HOM/Sédimentologie, 13 rue du Châtellier CS 92803, 29228 Brest, France
Author to whom correspondence should be addressed.
Submission received: 7 June 2017 / Revised: 17 July 2017 / Accepted: 18 July 2017 / Published: 27 July 2017
This study describes the building of a common erosion law for fine sand and mud, mixed or not, in the case of a typical continental shelf environment, the Bay of Biscay shelf, characterized by
slightly energetic conditions and a seabed mainly composed of fine sand and muddy sediments. A 3D realistic hydro-sedimentary model was used to assess the influence of the erosion law setting on
sediment dynamics (turbidity, seabed evolution). A pure sand erosion law was applied when the mud fraction in the surficial sediment was lower than a first critical value, and a pure mud erosion law
above a second critical value. Both sand and mud erosion laws are formulated similarly, with different parameters (erodibility parameter, critical shear stress and power of the excess shear stress).
Several transition trends (linear or exponential) describing variations in these erosion-related parameters between the two critical mud fractions were tested. Suspended sediment concentrations
obtained from simulations were compared to measurements taken on the Bay of Biscay shelf with an acoustic profiler over the entire water column. On the one hand, results show that defining an abrupt
exponential transition improves model results regarding measurements. On the other hand, they underline the need to define a first critical mud fraction of 10 to 20%, corresponding to a critical clay
content of 3–6%, below which pure sand erosion should be prescribed. Both conclusions agree with results of experimental studies reported in the literature mentioning a drastic change in erosion mode
above a critical clay content of 2–10% in the mixture. Results also provide evidence for the importance of considering advection in this kind of validation with in situ observations, which is likely
to considerably influence both water column and seabed sediment dynamics.
1. Introduction
The transport of fine sediments can be assumed to mainly occur in suspension. Suspended transport is generally simulated by solving an advection/diffusion equation, assuming that sediment particles
have the same velocity as water masses, except the vertical settling component (e.g., [
]). Such an equation involves sink and source terms at the bed boundary, which are deposition and erosion fluxes under conditions defined by the hydraulic forcing and the behaviour and composition of
both the suspended and deposited sediments. This means that dealing with fine sediment dynamics, whether or not cohesive, requires the formulation of an erosion law. Such an erosion law should be
applicable for fine sands (and even medium sands under strong shear stresses) as well as for mud, and naturally for mixtures of sand and mud. Despite these similarities between the suspended
transport of fine sand and mud, their erosion processes have generally been investigated separately due to their contrasting behaviours.
An abundant literature is available on cohesive sediments, based on the existence of a critical shear stress for erosion ($\tau_e$, in N·m^−2) and several empirical relationships between the erosion flux $E$ (in kg·m^−2·s^−1) and the excess shear stress (i.e., the difference between the actual shear stress $\tau$ (in N·m^−2) and the critical value $\tau_e$, either normalized by the latter or not). The parameters of such an erosion law involve bed characteristics, which may concern electrochemical forces, mineral composition, and organic matter content [], or pore water characteristics []. They also depend on the consolidation state [], and may be altered by biota effects []. Bulk density has often been proposed as a proxy for characterizing the bed, but the plasticity index has also been suggested [], as well as the undrained cohesion and the sodium adsorption ratio [].
Literature on sand erosion laws is scarce, partly because of experimental difficulties linked to simultaneous settling and resuspension processes, and partly because sand transport has mainly been considered through the formulation of transport capacity, even in the case of suspension. The need to simultaneously simulate the transport of mud and (fine) sand, and their mixtures, renews the need for an erosion law for fine sand, often named a pick up function []. Pick up functions link the erosion rate with the particle characteristics (size and density) and the shear stress (or, equivalently, the Shields parameter) []. As for cohesive sediment erosion laws, the dependence on the forcing stress is either absolute (~$\tau - \tau_e$) or relative (~$\tau/\tau_e - 1$), involving (or not) a threshold value $\tau_e$.
The analogy between erosion formulations for sand and mud can be highlighted, especially considering the Partheniades-Ariathurai law [] for cohesive sediments, the most usual in numerical models. The erosion law can then be written as
$E = E_0 \cdot (\tau/\tau_e - 1)^n, \ \mathrm{if}\ \tau \ge \tau_e$
where $E_0$ is an erodibility parameter in kg·m^−2·s^−1 and $n$ a power function of the sediment composition. In Equation (1), these parameters are functions of the sediment composition and its consolidation state in the case of cohesive sediments, or functions of the particle diameter and density in the case of non-cohesive sediments.
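As a minimal illustration, Equation (1) can be coded as follows; the critical shear stress and exponent in the example are the pure-sand reference values quoted in Section 3.2, while the $E_0$ value is only a placeholder.

```python
# Minimal sketch of the Partheniades-Ariathurai-type erosion law of Equation (1).
# tau_e and n are the pure-sand reference values of Section 3.2; E0 is a placeholder.
def erosion_flux(tau, tau_e=0.15, E0=1e-4, n=1.5):
    """Erosion flux E (kg.m-2.s-1) for a bed shear stress tau (N.m-2)."""
    if tau < tau_e:
        return 0.0
    return E0 * (tau / tau_e - 1.0) ** n

print(erosion_flux(0.4))   # example: tau = 0.4 N.m-2
```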
Assuming a similar erosion law for the whole range of mixed sediments, the problem becomes the assessment of the critical shear stress for erosion on the one hand, and the erosion factor $E_0$ on the other hand, over the full transition range between cohesive and non-cohesive materials. Following Van Ledden et al. [], the proxy for such a transition could be the clay content. An alternative proxy could be the mud content (hereafter referred to as $f_m$), considering that the clay to silt ratio is often uniform in a given study area [].
Experiments have provided evidence that the resistance to erosion (i.e., the critical shear stress $\tau_e$) increases when mud is added to sand, either because of electrochemical bonds which take effect in binding the sand grains or because a cohesive matrix takes place between and around the sand grains (e.g., []). Literature on erosion rates for mixed sediments is much less abundant, but a significant decrease (several orders of magnitude) in the erosion rate with mud content is most often reported (e.g., []). For instance, Smith et al. [] presented laboratory measurements showing a decrease of about two orders of magnitude when the clay fraction in the mixture increased from 0% to 5–10%, and up to one order of magnitude more when the clay fraction increased from 10% to 30%.
From the point of view of modelling, Le Hir et al. [] reviewed the different approaches developed in the last 20 years to manage sand/mud erosion in numerical process-based models. A non-cohesive and a cohesive regime separated by one or two critical mud fractions were commonly introduced and simulated by specific erosion laws. Le Hir et al. [] and Waeles et al. [], partly followed by Bi and Toorman [], used a three-stage erosion law. Below a first critical mud fraction $f_{mcr1}$, they considered a non-cohesive regime where the erosion flux of any class of sandy and muddy sediments remains proportional to its respective concentration in the mixture, but is computed according to a pure sand erosion law (with a potential modulation of the power applied to the excess shear stress). Starting from a value characteristic of a pure sand bed, the critical shear stress in this first regime is either kept constant [], or linearly increases with the mud fraction []. Above a second critical mud fraction $f_{mcr2}$, these authors defined a cohesive regime: Waeles et al. [] and Le Hir et al. [] formulate the erosion law using the relative mud concentration (the concentration of mud in the space between sand grains), considered as more relevant than the mud density in the case of sand/mud mixtures, which was in agreement with observations from Migniot [] or Dickhudt et al. []. Between $f_{mcr1}$ and $f_{mcr2}$, they ensured the continuity by prescribing a linear variation of the erosion-related parameters between non-cohesive and cohesive erosion settings. Carniello et al. [] used a two-stage erosion law built by Van Ledden []: below a critical mud fraction, the erosion factor of the sand fraction is steady and the one for the mud fraction varies slightly according to the factor 1/(1 − $f_m$), while above this critical fraction the erosion rate is the same for the sand and mud fractions and logarithmically decreases according to a power law. Regarding the critical shear stress for erosion, it first slightly increases with $f_m$ and then varies linearly to reach the mud shear strength. Dealing only with the critical shear stress, Ahmad et al. [] proposed an alternative to the Van Ledden [] expression: without any critical mud fraction, $\tau_e$ varies linearly with $f_m$ for low values of $f_m$ and more strongly for high $f_m$ values, using a parameter representing the packing of the sand sediment in the mixture. Generally speaking, the transitional erosion rate between the two regimes is poorly documented.
The aim of this paper is to fit an erosion law for mixed sediments to be applied in environments dominated by fine sediment, such as continental shelves, including our area of interest, the Bay of
Biscay continental shelf (hereafter referred to as BoBCS). In these environments that develop outside the coastal fringe where wave impact is higher and where stronger tidal currents may take place,
a common sedimentological feature is a mixture of fine sands and mud. In addition, bathymetric gradients are often gentle so that the shear stress gradients are likely to be small: as a consequence,
the critical shear stress for erosion of surficial material in equilibrium with such environments is also likely to be low contrasted. Last, suspended sediment concentrations (hereafter referred to
as SSC) are rather low in such “deep” coastal waters (at least they are much lower than in shallow waters), and sediment exchanges remain very small: as a consequence, freshly deposited sediment does
not consolidate under its own weight, and erosion is so small that the surficial sediment can hardly become over-consolidated. This means that in this context, consolidation appears as a second order
process, and that when the surficial sediment is muddy, its shear strength never reaches high values. On the contrary, the surface shear strength is likely to remain close to typical values for fine
sand in the order of 0.1–0.2 Pa (for sands, this range corresponds to the decreasing trend of the Shields parameter with diameter, when the latter is below 0.6 mm).
Due to difficulties in measuring erosion fluxes (and not only shear strengths) for sand and mud mixtures, the assessment of the erosion law for fine sediment of the BoBCS is achieved by means of
continuous measurements in the field, completed by a realistic hydro-sedimentary model of the specific area for testing the erosion law and its parameterization. In this way, the paper also proposes
a methodology for fitting the erosion law in a poorly known natural environment. The paper is organized as follows.
Section 2 gives the strategy for assessing the erosion law and describes the measurement acquisition on the one hand, and the modelling framework on the other. The description of the model includes the main features of the hydrodynamic and sediment models, and the validation of the hydrodynamics computation. Section 3 details the successive steps which led to the building of a new erosion law for sand/mud mixtures. Section 4 presents model results obtained with different erosion settings, along with an assessment of their relevance with respect to observations. Section 5 discusses the main results in light of previous studies, followed by a short conclusion in Section 6.
2. Strategy and Modelling Background
2.1. Strategy for Assessing an Erosion Law, and Its Application to the BoBCS
Most often erosion laws have been investigated by comparing the critical erosion shear stress and erosion fluxes deduced from the tested law with measurements in a laboratory flume. However, this
process has some difficulties. When the sediment is placed or settled in a flume, its behaviour is likely to differ from natural sediment one, especially when cohesive properties begin to develop. In
addition, while critical shear stresses can be determined straightforwardly, the erosion flux of the sandy fraction is questionable, mainly because computations generally apply to rough erosion,
while measurements often concern a net erosion flux, resulting from this rough erosion and possible simultaneous deposition, which are not easy to characterize. This concern does not apply to the mud
Here, an alternative methodology is proposed. Selecting an environment where local sediment is representative of the study area, a continuous measurement of suspended sediment concentration and local
forcing is used to test the erosion law by simulating observed features by means of modelling. In the likely case of non-negligible horizontal gradients, a full 3D modelling is preferred, as it
accounts for all processes, including erosion, deposition and horizontal advection. Deploying field measurements over several months increases the probability of investigating very different forcing
conditions, especially if a winter period is selected. The use of an acoustic current profiler enables a simultaneous measurement of local forcing (both current patterns and waves) and resulting
resuspension over the whole water column, by analysing the acoustic backscatter. The former observation is used for validating hydrodynamics simulations, while the latter can be compared to predicted
SSC according to tested erosion parameters.
2.2. Measurements Used for Erosion Law Assessment and Model Validation
As in many continental shelves (e.g., [
]), the surficial sediment of the BoBCS is mainly composed of fine sand (~200 µm) and mud, since water depths exceed the depth of wave-induced frequent reworking, typically about 20–30 m, for the
wave regime of the Bay of Biscay. The erosion law for fine sediment mixtures has been fitted by selecting measurements in a rather representative environment of the shelf, both in terms of sediment
cover, hydrodynamic forcing and near bottom turbidity. A station located near the coast of southern Brittany, close to
Le Croisic
(hereafter referred to as
Figure 1
) meets these criteria: the local water depth is 23 m on average and the exposure to waves is attenuated by a rocky bank localised in the southwest nearby. Despite a tidal range of 4–5 m, tidal
currents remain low, and flow is likely to be controlled by wind-induced currents [
]. The seabed is constituted by muddy sands (
of 7.5 µm,
of 163 µm, 5.1% clay (% < 4 µm) and 25% mud (% < 63 µm) contents), and exhibits some gradients around the station, with muddy facies to the north, and more sandy ones to the south (
Figure 1
For measurements, a mooring line was deployed over the whole water column for two months between 25 November 2007 and 31 January 2008. This period corresponded to typical winter hydrodynamic and
meteorological conditions: mainly south-westerly winds, high rates of river discharge (e.g., Loire river: average flow rate of 1161 m^3·s^−1, and maximum flow rate of 2240 m^3·s^−1), and rather large
swells (H[s] (significant wave height) peaks > 3 m, T[p] (peak period) 10–18 s).
Several instruments were placed along the mooring line. A
sensor (OTT Hydromet, Kempten, Germany) providing temperature and salinity measurements (at 60 min intervals) was fixed 1.50 m below the sea surface. An upward looking acoustic Doppler profiler
(Acoustic Wave And Current profiler, 1000 KHz, Nortek AS, Vangkroken 2, Norway; hereafter referred to as
) was fixed on a structure placed near the seabed to record the backscattered acoustic intensity (at 30 min intervals), the intensity and direction of the currents (1 min of time-integrated data at
30 min intervals), water elevation (at 30 min intervals), as well as wave height, period, and direction (at 60 min intervals). The return echo was sampled over the entire water column with 25
cm-thick cells. In this configuration, the first sampling cell of the
profiler was located 1.67 m above the seabed (including the frame height and the
blank distance). At the same elevation, a turbidity sensor (
; Sea-Bird Scientific, Bellevue, Washington, DC, USA) provided hourly measurements for calibration. This turbidity signal was further transformed to
in mg·L
using water samples. The backscatter index (
) from the
profiler was evaluated from the sonar equation [
], following the procedure described by Tessier et al. [
], in particular by considering the geometrical attenuation for spherical spreading, the signal attenuation induced by the water, and the geometric correction linked to the expansion of the
backscattering volume with increasing distance from the source. Given that
derived from the turbidity sensor did not exceed 100 mg·L
, the signal attenuation caused by the particles was disregarded when estimating backscatter [
]. Then, an empirical relationship was established between the $BI$ of the first $AWAC$ cell and the $SSC$ measurements of the turbidity sensor, following Tessier et al. []:
$10\,\log_{10}(SSC) = c_1 \cdot BI + c_2$
We finally obtained a determination coefficient $R^2$ of 0.78, with $c_1$ = 0.42794 and $c_2$ = 32.8907. Changes in $SSC$ concentrations in the water column could thus be quantified (Section 4).
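A minimal sketch of this conversion, using the fitted coefficients above, is given below; the function name and the example backscatter value are hypothetical.

```python
# Convert a backscatter index BI (dB) to SSC (mg/L) with the fitted relation
# 10*log10(SSC) = c1*BI + c2 and the coefficients reported in the text.
def bi_to_ssc(bi_db, c1=0.42794, c2=32.8907):
    """Return SSC in mg/L from a backscatter index in dB."""
    return 10.0 ** ((c1 * bi_db + c2) / 10.0)

print(round(bi_to_ssc(-60.0), 1), "mg/L")   # example with a hypothetical BI of -60 dB
```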
2.3. Hydrodynamics Models (Waves, Currents)
2.3.1. Brief Description
Current patterns and advection of suspended sediments were computed with the three-dimensional MARS3D model (3D hydrodynamic Model for Applications at Regional Scale, IFREMER), described in detail by
Lazure and Dumas [
]. This code solves the primitive equations under classical Boussinesq and hydrostatic pressure assumptions. The model configuration spreads from 40°58′39′′ N to 55° N in latitude, and from 18°2′1′′
W to 9°30′ E in longitude (
Figure 1
) with a uniform horizontal resolution of 2500 m (822 × 624 horizontal grid points). The vertical discretization is based on 40 generalized sigma layers. For water depths lower than 15 m, sigma
levels are uniformly distributed through the water column. Above this depth, a resolution increase is prescribed in the lower and upper parts of the water column, following the formulation of Song
and Haidvogel [
]. For instance, this setting leads to a bottom cell of 40 cm for a total water depth of 25 m. The numerical scheme uses a mode splitting technique with an iterative and semi-implicit method which
allows simultaneous calculation of internal and external modes at the same time step. In our case, the time step is fixed at 150 s. Vertical viscosity and diffusivity for temperature, salinity and
momentum were obtained using the
turbulence closure of Rodi [
]. Simulations were performed with the realistic forcings detailed in
Table 1
Regarding wave forcing, a wave hindcast database built with the WaveWatchIII (WWIII) model (realistic and validated configuration of Boudière et al. []) enabled the computation of bottom wave-induced shear stresses ($\tau_w$, in N·m^−2). The wave-induced shear stress was computed according to the formulation of Jonsson [], with a wave-induced friction factor determined following Soulsby et al. []. Then, the total bottom shear stress $\tau$ was computed from the estimated $\tau_w$ and from the current-induced shear stress ($\tau_c$, in N·m^−2) provided by the hydrodynamic model, according to the formulation of Soulsby [], i.e., accounting for a non-linear interaction between waves and currents. Both wave and current shear stresses were computed by considering a skin roughness length $z_0$ linked to a 200 µm sand, representative of the sandy facies widely encountered on the BoBCS ($z_0 = k_s/30 = 2 \times 10^{-5}$ m, with $k_s$ the Nikuradse roughness coefficient).
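The sketch below illustrates one common way of carrying out this computation. The exact friction-factor and wave–current interaction variants used in the study are not detailed here, so the formulas in the sketch (a Soulsby-type wave friction factor and the "DATA2" mean-stress fit) are assumptions for illustration only.

```python
# Hedged sketch of a wave-current bottom shear stress computation.
# f_w = 1.39*(A/z0)**-0.52 and the "DATA2" mean-stress fit are common forms
# attributed to Soulsby (1997); whether these exact variants were used in the
# study is an assumption of this sketch.
import math

RHO = 1027.0          # seawater density (kg/m3), assumed
Z0 = 2e-5             # skin roughness length (m), as in the text

def wave_shear_stress(u_orb, period, z0=Z0, rho=RHO):
    """Wave-induced bed shear stress from the orbital velocity amplitude u_orb (m/s)."""
    a_orb = u_orb * period / (2.0 * math.pi)     # near-bed orbital excursion amplitude
    f_w = 1.39 * (a_orb / z0) ** (-0.52)         # wave friction factor
    return 0.5 * rho * f_w * u_orb ** 2

def combined_shear_stress(tau_c, tau_w, phi=0.0):
    """Mean and maximum bed shear stress for combined waves and currents (angle phi)."""
    if tau_c + tau_w == 0.0:
        return 0.0, 0.0
    tau_m = tau_c * (1.0 + 1.2 * (tau_w / (tau_c + tau_w)) ** 3.2)
    tau_max = math.hypot(tau_m + tau_w * abs(math.cos(phi)), tau_w * math.sin(phi))
    return tau_m, tau_max

tau_w = wave_shear_stress(u_orb=0.2, period=10.0)
print(combined_shear_stress(tau_c=0.1, tau_w=tau_w))
```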
2.3.2. Hydrodynamic Validation of the Model
Figure 2
shows model validation regarding hydrodynamics at the
station from 1 December 2007 to 18 January 2008 in terms of significant wave height (
), surface temperature and salinity (
, respectively), and current intensity and direction (
, respectively).
Measurements highlight typical late autumn/winter energetic conditions. Average
were around 1.5 m and occasionally exceeded 4 m in stormy conditions. Peak periods (not illustrated here) ranged from 8 to 18 s (around 10 s on average). Even during “calm” periods,
values are generally no lower than 0.8 m.
Figure 2
a illustrates the ability of the model of Boudière et al. [
] to describe
over the period, with a root mean square error (
) of 0.24 cm and a
of 0.95. Measured and modelled
are illustrated in
Figure 2
b,c respectively, and demonstrate the correct response of the model with respect to observations with
values of 0.5 °C/0.86 for
and 2 PSU/0.7 for
. The weaker correlation obtained for
is mainly due to model underestimation around 2 January 2008 (i.e., about 5 PSU). It should be underlined that the model accurately reproduces the abrupt change (decrease) in surface temperature and
salinity on 11 December 2007, linked to the veering of the Loire river plume caused by easterly winds. This plume advection led to stratification which in turn influenced the vertical profiles of
currents in terms of direction and intensity (
Figure 2
e,g, respectively), which are well represented by the model. More generally, the model provides an appropriate response regarding the direction and intensity of the current (
Figure 2
d,f) over the entire water column throughout the period. For instance, measured bottom current velocities (0.11 m·s
on average, max of 0.44 m·s
) are correctly reproduced by the model with a
of 0.05 m·s
2.4. Sediment Transport Model
The sediment model used in the present 3D modelling system is described in Le Hir et al. [
]. The model makes it possible to simulate the transport and changes in all kinds of sediment mixtures under the action of hydrodynamic forcing, including that of waves and currents. According to the
model concept, the consolidation state of the bottom sediment is linked to the relative mud concentration, i.e., the concentration of mud in the space between the sand grains (
C[rel mud]
). Given that the surficial sediment in our study area is weakly consolidated (erosion actually occurs in surficial layers only, typically a few mm or cm thick) and mainly composed of fine sand,
consolidation was disregarded and
C[rel mud]
was set at a constant value of 550 kg·m
(representative of pre-consolidated sediment according to Grasso et al. [
]). In addition, bed load was not taken into account in the present application, assuming that surficial sediment in our study area is mainly composed of mixtures of mud and fine sand. In our case,
three sediment classes (for which the mass concentrations are the model state variables) are considered: a fine sand (
), a non-cohesive material with a representative size of 200 µm, and two muddy classes (
1 and
2), which can be distinguished by their settling velocity, in order to be able to schematically represent the vertical dispersion of cohesive material over the shelf. The sediment dynamics was
computed with an advection/dispersion equation for each sediment class, representing transport in the water column, as well as exchanges at the water/sediment interface linked to erosion and
deposition processes. Consequently, the concentrations of suspended sediment, the related horizontal and vertical fluxes, and the corresponding changes in the seabed (composition and thickness) are
simulated. This section details the way of managing sediment deposition, seabed initialization, and technical aspects linked to vertical discretization within the seabed compartment. The erosion law
establishment and the numerical modelling experiment aiming to fit an optimal setting will be addressed independently in
Section 3
2.4.1. Managing Sediment Deposition
The deposition flux $D_i$ for each sediment class $i$ is computed according to the Krone law:
$D_i = W_{s,i} \cdot SSC_i \cdot \left(1 - \frac{\tau}{\tau_{d,i}}\right)$
In Equation (3), $\tau$ is the total bottom shear stress (waves and currents; see Section 2.3.1), $W_{s,i}$ is the settling velocity, $SSC_i$ is the suspended sediment concentration, and $\tau_{d,i}$ is the critical shear stress for deposition. In the present study, the latter was set to a very high value (1000 N·m^−2) so that it is ineffective: considering that consolidation processes are negligible near the interface and that, as a result, the critical shear stress for erosion remains low (Section 3), deposited sediments can be quickly resuspended in the water column if the hydrodynamic conditions are sufficiently intense, which replaces the role played by the term between parentheses in Equation (3) [].
The settling velocity of the non-cohesive sediment ($W_{s,S1}$) is computed according to the formulation of Soulsby []. The one related to the mud $M1$ ($W_{s,M1}$) is assumed to vary as a function of its concentration in the water column ($SSC_{M1}$) and the ambient turbulence, according to the formulation of Van Leussen []:
$W_{s,M1} = \max\{W_{s,min};\ \min\{W_{s,max};\ k \cdot SSC_{M1}^{m} \cdot \frac{1 + a \cdot G}{1 + b \cdot G^{2}}\}\},\ \mathrm{with}\ G = \sqrt{\varepsilon / \nu}$
where $a$ and $b$ are empirical constants, $G$ is the absolute velocity gradient, $\varepsilon$ is the turbulent dissipation rate, and $\nu$ is the water viscosity. For $a$ and $b$, respective values of 0.3 and 0.09, set by Van Leussen [] from experiments, were used. In agreement with the settling velocity setting used by Tessier et al. [] in a modelling study of turbidity over the southern Brittany continental shelf and recent experiments by Verney et al. [] in similar environments in the Gulf of Lions, $k$ and $m$ were respectively set to 0.005 and 0.7. Following Tessier et al. [], $W_{s,M1}$ is limited by minimum ($W_{s,min}$) and maximum ($W_{s,max}$) values, respectively set to 0.1 mm·s^−1 and 4 mm·s^−1, the latter being reached for $SSC_{M1}$ ≥ 700 mg·L^−1 (thus ignoring the hindered settling process which actually does not occur in the range of $SSC$ over the shelf). Lastly, the mud $M2$
linked to ambient turbidity in the water column is assumed to have a constant very low settling velocity set to 2.5 × 10
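A minimal sketch of the Van Leussen-type formulation of Equation (4), with the parameter values quoted above, is given below; expressing $SSC_{M1}$ in kg·m^−3 (i.e., g·L^−1) is an assumption of the sketch, but it reproduces the stated behaviour that $W_{s,max}$ = 4 mm·s^−1 is reached near 700 mg·L^−1.

```python
# Sketch of Equation (4) with the quoted parameters (k = 0.005, m = 0.7,
# a = 0.3, b = 0.09; Ws bounded between 0.1 and 4 mm/s). SSC is taken in
# kg.m-3 (g/L), an assumption consistent with Ws,max being reached near 700 mg/L.
def settling_velocity_mud(ssc, G, k=0.005, m=0.7, a=0.3, b=0.09,
                          ws_min=1e-4, ws_max=4e-3):
    """Flocculation-dependent settling velocity (m/s) of the mud class M1."""
    ws = k * ssc ** m * (1.0 + a * G) / (1.0 + b * G ** 2)
    return max(ws_min, min(ws_max, ws))

print(settling_velocity_mud(0.05, 2.0))   # e.g., 50 mg/L and a shear rate G = 2 s-1
```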
After resolving the sediment transport equation, the actual deposition on the bottom can be computed. In agreement with Le Hir et al. [
], sand and mud particles are deposited successively, by filling the pores or by creating new layers. A volume concentration of well-sorted sediment (
C[vol sort] =
0.58) was attributed to sediment when only one class of non-cohesive sediment is present. However, the bed concentration can reach a higher value if several classes are mixed (
C[vol mix] =
0.67). These typical volume concentrations [
] led to mass concentrations assuming a fixed sediment density ρs (2600 kg·m
) for all classes. In the case of simultaneous deposition of sand and mud, the sand is first deposited at a concentration which depends on surficial sediment composition: in the case of mixed
sediment, the sand is first mixed with the initial mixture until
C[vol mix].ρ[s]
= 1742 kg·m
) is reached. The remaining sand is deposited with the concentration
C[vol sort].ρ[s]
= 1508 kg·m
), from which an increase in the thickness of the layer can be deduced. The same kind of deposition occurs when the surficial sediment is only comprised of sand. In addition, the thickness of any
layer is limited to
dz[sed, max]
, a numerical parameter of the model. Any deposition of excess sand leads to the creation of a new layer (see
Section 2.4.2
). Next, mud is deposited: it progressively fills up pores between the sand grains until either
C[rel mud]
is reached. Considering these criteria, mud is mixed within the initial and new deposits starting from the water/sediment interface. If any excess mud remains after the mixing step, it is added to
the upper sediment layer, contributing to its thickening.
2.4.2. Sediment Discretization within the Seabed
In the model, an initial sediment height (
h[sed, ini]
) is introduced, and the seabed is vertically discretized in a given number of layers of equivalent thicknesses (
dz[sed, ini]
). An optimal vertical discretization of the sediment was assessed by Mengual [
]. By means of sensitivity analyses, it was shown that beyond a 1/3 mm resolution within the seabed compartment (i.e., thickness of each layer
dz[sed, ini]
), the
response of the model in the water column did not change anymore. According to the conclusions drawn from this sensitivity analysis, the initial sediment thickness
h[sed, ini]
was set at 0.03 m, corresponding to 90 sediment layers of thickness
dz[sed, ini]
(1/3 mm).
As previously mentioned in
Section 2.4.1
, a new layer is created when the actual deposition leads to a thickness of the surficial sediment layer higher than a maximum value
dz[sed, max]
, corresponding to a parameter of the model. Nevertheless, a maximum number of layers in the seabed compartment (
) needs to be defined in order to make computational costs acceptable. While
is reached, a fusion of the two sediment layers located at the base of the sedimentary column occurs. By this way, the creation of new deposited layers becomes once again possible. The parameters
dz[sed, max]
control changes of the “sediment vertical discretization” during the simulation, and are likely to influence the sediment dynamics. To prevent any variations in the
response of the model linked to changes in the seabed vertical resolution, the maximum thickness of sediment layers,
dz[sed, max]
, was set at the same value than the initial one
dz[sed, ini]
This seabed management constitutes a compromise between the likely maximum erosion depth in most of the shelf and the possibility of new layers being deposited in other places.
2.4.3. Sediment Facies Initialization for the Application to the BoBCS
The seabed was initialized using the distribution of the three sediment classes according to existing surficial sediment maps (e.g., [
]) and for the sand and mud fractions, new sediment samples taken in the BoBCS (
Figure 1
). No sediment initialization was undertaken in areas indicated as rocky on sediment maps. The mud
2, characterized by a very low settling velocity, is linked to river inputs and was also uniformly initialized (20 kg·m
) on the seabed over the entire shelf. This particle class enables the representation of a non-negligible part of the ambient turbidity of a few mg·L
near the coast.
Given the sand fraction $f_s$ of the mixed sediment, a relationship could be derived between $C_{rel\,mud}$ (the relative mud concentration, set at a constant value of 550 kg·m^−3) and the bulk sediment density ($C_{bulk}$). The latter can be written as:
$C_{bulk} = \frac{C_{rel\,mud}}{1 + f_s \cdot \left(\frac{C_{rel\,mud}}{\rho_s} - 1\right)}$
For large sand fractions (above 90%), the given relative concentration of 550 kg·m^−3 becomes unlikely, so that $C_{bulk}$ is maximised by $C_{max}$, which corresponds to a dense sediment ($C_{max}$ depending on the properties of the sediment mixture, i.e., well-sorted or mixed sediments). This formulation could be validated by comparing with recent data in two sectors of the BoBCS []: bulk densities of 1540 kg·m^−3 and 1380 kg·m^−3 were obtained for 85% and 75% of sand, respectively, while the application of Equation (5) with sand fractions $f_s$ of 0.85 and 0.75 led to $C_{bulk}$ values of 1668 kg·m^−3 and 1346 kg·m^−3, in fair agreement. Sediment concentrations linked to each particle class were then deduced according to their respective fraction.
Lastly, the sediment distribution has been prescribed uniformly along the vertical dimension (i.e., in each layer of the seabed compartment) in the absence of 3D data on grain size distributions.
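Equation (5) can be checked quickly with the values quoted above, as in the following sketch.

```python
# Quick check of Equation (5), using rho_s = 2600 kg/m3 and C_rel_mud = 550 kg/m3
# as stated in the text.
def bulk_concentration(f_sand, c_rel_mud=550.0, rho_s=2600.0):
    """Bulk sediment concentration (kg/m3) of a sand/mud mixture."""
    return c_rel_mud / (1.0 + f_sand * (c_rel_mud / rho_s - 1.0))

for fs in (0.75, 0.85):
    print(fs, round(bulk_concentration(fs)))   # ~1346 and ~1668 kg/m3, as in the text
```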
3. Erosion Law Setting: Building and Numerical Experiment
3.1. General Formulation
As previously evoked in the introduction, the Partheniades-Ariathurai law [], the most commonly used for cohesive sediments, can also represent the pick up process of fine sands. Such an erosion law is then assumed whatever the sediment composition. In the present study, the erosion flux $E$ is thus expressed following Equation (1). A simultaneous erosion of the sandy and muddy fractions is assumed, according to their respective concentrations in the mixture.
This erosion law involves three parameters, $E_0$ (erodibility parameter, kg·m^−2·s^−1), $\tau_e$ (critical shear stress, N·m^−2), and $n$ (hereafter referred to as "erosion-related parameters"), which are set at different values depending on the mud content $f_m$ (<63 µm) of the surficial sediment. The concept of critical mud fraction is retained, with the definition of a first critical fraction $f_{mcr1}$ below which a non-cohesive behaviour is prescribed, and of a second one, $f_{mcr2}$, above which the sediment is assumed to behave as pure mud. Two sets of erosion-related parameters will be defined below and above these mud fractions (see Section 3.2 and Section 3.3 for sand and mud, respectively). As already mentioned in Section 1, a transition in erosion-related parameters has to be prescribed between $f_{mcr1}$ and $f_{mcr2}$ (between pure sand and pure mud parameters) to manage the erosion of "transitional" sand/mud mixtures. Such a transition is investigated in Section 3.4.
3.2. Pure Sand Erosion
Below the first critical mud fraction $f_{mcr1}$, the non-cohesive erosion regime is prescribed by defining a first set of erosion-related parameters linked to pure sand erosion in Equation (1): $E_{0,sand}$, $\tau_{e,sand}$, and $n_{sand}$. Le Hir et al. [] suggested to compute $f_{mcr1}$ as a function of the sand mean diameter $D$ ($f_{mcr1} = \alpha_0 \cdot D$, with $\alpha_0 = 10^3$ m^−1), leading to a value of 20% considering the fine sand $D$ of 200 µm (considered as a reference value hereafter). However, the model erosion dynamics is likely to vary significantly when the surficial mud fraction $f_m$ is close to $f_{mcr1}$. Such a sensitivity is addressed in Section 4.2.
In Equation (1), the parameters linked to pure sand erosion, $E_{0,sand}$, $\tau_{e,sand}$, and $n_{sand}$, were deduced from numerical simulations and empirical formulations. According to Van Rijn [] and many other numerical models (e.g., []), the best fit for $n_{sand}$ is 1.5. The critical shear stress $\tau_{e,sand}$ was determined from the Shields critical mobility parameter computed according to the formulation of Soulsby [], leading to a value of 0.15 N·m^−2 for a sand of 200 µm.
Considering that equilibrium conditions are usually met for non-cohesive sediment transport, and that such an equilibrium requires compensation between the erosion rate and the deposition flux, many authors formulate the erosion rate (or pick up function) as:
$E = W_s \cdot C_{ref}$
where $W_s$ is the settling velocity and $C_{ref}$ is a reference concentration which characterizes the equilibrium. The description of $C_{ref}$ is generally associated with the reference height, $h_{ref}$, the distance from the bed where the concentration is considered. In point of fact, this location has to be the one where the equilibrium between deposition and erosion is considered, with respective values that largely depend on the reference height, because of large concentration gradients near the bed. From the point of view of sediment modelling, this means that the deposition flux at the base of the water column has to be expressed at the exact location where the erosion flux is considered, that is, at the reference height where the equilibrium concentration is given. Van Rijn [] used the concept of equilibrium concentration as a boundary condition of the computation of the suspended sediment profile and fitted the expression:
$C_{ref} = 0.015\, \rho_s D / (h_{ref} \cdot D_*^{0.3}) \cdot (\tau/\tau_e - 1)^{1.5}$
where $D_*$ is the non-dimensional median sand diameter ($D_* = D\, (\sigma g / \nu^2)^{1/3}$ with $\sigma = (\rho_s/\rho - 1)$, $\rho$ being the water density, $g$ the gravitational acceleration, and $\nu$ the water viscosity). Using the $C_{ref}$ formulation of Van Rijn [] in Equation (6) enables us to express the $E_0$ constant in the "Partheniades" form of the erosion law as:
$E_{0,sand} = 0.015\, \rho_s D\, W_{s,S1} / (h_{ref}\, D_*^{0.3})$
With $D$ = 200 µm, $h_{ref}$ = 0.02 m, and $W_{s,S1}$ = 2.5 cm·s^−1, Equation (8) leads to $E_{0,sand}$ = 5.94 × 10. The relevance of this $E_0$ value was assessed by simulating an equilibrium state under a steady current and by comparing the depth-integrated horizontal sediment flux with some standard transport capacity formulations. Using a 1DV version of the code, several computations of fine sand resuspension were performed under different flow intensities ($Vel_{INT}$), and once the equilibrium was reached (deposition = resuspension) the total transport ($Q_{sand}$) was computed as:
$Q_{sand} = \frac{1}{\rho_s} \sum_{k=1}^{n} Vel_{INT}(k) \cdot SSC_{S1}(k) \cdot dz(k)$
where $dz(k)$ refers to the thickness of the cell $k$ (in m) in the water column (discretized in $n$ layers along the vertical dimension) and $SSC_{S1}(k)$ to the suspended sediment concentration of sand ($S1$) in the cell $k$. Sand transport rates
were then compared to the rates deduced from the formulations of Van Rijn [
], Engelund and Hansen [
], and Yang [
] for similar flow velocities (hereafter
, and
respectively). Results are illustrated in
Figure 3
. The results obtained by Dufois and Le Hir [
], who also used an advection/diffusion model to predict sand transport rates for a wide range of current conditions and numerous sand diameters, have been added in
Figure 3
Figure 3
shows that the sand transport rates obtained from our computations are in a consistent range regarding those obtained with other formulations or studies cited in the literature, demonstrating the
suitability of our
The erosion-related parameters $E_{0,sand}$, $\tau_{e,sand}$, and $n_{sand}$ for fine sand are summarized in Table 2 and constitute the reference parameters characterizing pure sand erosion.
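The sketch below evaluates Equation (8) for the quoted sand parameters; the water density and viscosity used here are assumed values, since they are not listed in the text, so the printed number should be read as an order-of-magnitude illustration rather than a reproduction of the study's exact figure.

```python
# Sketch of the E0,sand estimate of Equation (8). rho_s, D, h_ref and Ws come
# from the text; the water density and viscosity are assumptions.
RHO_S, RHO, NU, G = 2600.0, 1027.0, 1.36e-6, 9.81

def e0_sand(D, ws, h_ref=0.02, rho_s=RHO_S, rho=RHO, nu=NU, g=G):
    sigma = rho_s / rho - 1.0
    d_star = D * (sigma * g / nu ** 2) ** (1.0 / 3.0)   # non-dimensional diameter
    return 0.015 * rho_s * D * ws / (h_ref * d_star ** 0.3)

print(e0_sand(D=200e-6, ws=0.025))   # erodibility parameter, kg.m-2.s-1
```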
3.3. Pure Mud Erosion
Above the second critical mud fraction $f_{mcr2}$ (reference value of 70%, according to the default value used by Le Hir et al. []), the cohesive erosion regime is prescribed by defining a second set of erosion-related parameters linked to pure mud erosion in Equation (1): $E_{0,mud}$, $\tau_{e,mud}$, and $n_{mud}$.
As frequently specified in the Partheniades-Ariathurai formulation, the $n_{mud}$ exponent was set to 1. Given the lack of an established formulation of the erosion factor for pure mud, experimental approaches are often used to calibrate it for specific materials, preferably in situ when possible. For this purpose, a specific device had been designed: the "erodimeter" is described in Le Hir et al. []. It consists of a small recirculating flume where a unidirectional flow is increased step by step and interacts with a sediment sample carefully placed at the bottom after transfer from a cylindrical core. When measurements are made on board an oceanographic vessel, the test can be considered as in situ. On the BoBCS, erosion tests had been conducted on board the Thalassa N/O: a few of them were performed on muddy sediment samples (mud content higher than 80%). Figure 4 illustrates the critical shear stress for erosion $\tau_{e,mud}$, estimated to be 0.1 N·m^−2, suggesting a barely-consolidated, easily erodible sediment.
Concerning the erosion coefficient
, the range of values cited in the literature extends from 10
to 10
for natural mud beds in open water (e.g., [
]). Simulating fine sediment transport along the BoBCS, Tessier et al. [
] applied the Partheniades erosion law but used an even lower erosion constant (
= 1.3 × 10
). As a first attempt in the present study, a low value
= 10
was used, and its appropriateness was demonstrated by comparing the computed erosion fluxes (
) from Equation (1) (with
τ[e] = τ[e,mud]
otherwise) with measurements from erodimetry experiments (
) conducted on three muddy sediment samples from the BoBCS (mud fraction higher than 70%) (
Figure 5
The above-mentioned $E_{0,mud}$, $\tau_{e,mud}$, and $n_{mud}$ values are summarized in Table 2 and constitute the reference parameters characterizing pure mud erosion.
3.4. Erosion of Transitional Sand/Mud Mixtures: Selection of Transition Formulations to be Tested
Between critical mud fractions (f[mcr][1] and f[mcr][2]), E[0], τ[e], and n ranged between pure sand (E[0,sand], τ[e,sand], and n[sand]) and pure mud (E[0,mud], τ[e,mud], and n[mud]) parameters,
following a transition trend which had to be specified. We defined several expressions of the erosion law for the transition between non-cohesive and cohesive behaviours as a function of the
surficial sediment mud fraction (f[m]).
First, we considered a linear transition type in which erosion-related parameters are linearly interpolated from the respective sand and mud parameters according to their respective concentrations in
the mixture (default solution in the original paper by Le Hir et al. [
Several experimental studies in the literature revealed that the transition from non-cohesive to cohesive could be more abrupt. For instance, Smith et al. [
] performed erodibility experiments on natural and artificial sand/mud mixtures, and showed a rapid decrease in erosion rates with increasing mud/clay contents. Nevertheless, the transition between
the two regimes remains poorly documented. Here, we propose an exponential formulation that specifically enables adjustment of the sharpness of the transition, which is not possible with the few
existing expressions (e.g., [
]). The exponential transition trend (Equation (10)) was applied to all erosion-related parameters (E[0], τ[e], and n) as a function of mud content, with a coefficient C[exp] allowing the adjustment of the sharpness of the transition, which becomes more abrupt with an increase in C[exp]:
$X_{exp} = (X_{sand} - X_{mud})\, e^{-C_{exp} \cdot P_{exp}} + X_{mud}$ (10)
where P[exp] = (f[m] − f[mcr][1])/(f[mcr][2] − f[mcr][1]); X[sand] = {E[0,sand], τ[e,sand], n[sand]}; and X[mud] = {E[0,mud], τ[e,mud], n[mud]}.
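As a minimal illustration of how such a transition can be evaluated, the Python sketch below interpolates the erosion-related parameters between their sand and mud reference values (Table 2), using either a linear trend or the exponential trend of Equation (10) as reconstructed above. The normalization of P[exp] between the two critical mud fractions is our reading of the garbled original and should be checked against the published equation; all names are ours.

```python
import math

# Reference erosion parameters (Table 2): X = (E0, tau_e, n)
X_SAND = {"E0": 5.94e-3, "tau_e": 0.15, "n": 1.5}
X_MUD = {"E0": 1.0e-5, "tau_e": 0.10, "n": 1.0}

def transition(f_m, f_mcr1=0.20, f_mcr2=0.70, c_exp=40.0, kind="exp"):
    """Erosion parameters for a surficial mud fraction f_m (0-1).

    Below f_mcr1 the pure sand values apply, above f_mcr2 the pure mud values;
    in between, either a linear interpolation or the exponential trend of
    Equation (10), X = (X_sand - X_mud) * exp(-c_exp * P_exp) + X_mud, is used
    (P_exp assumed normalized between the two critical mud fractions).
    """
    if f_m <= f_mcr1:
        return dict(X_SAND)
    if f_m >= f_mcr2:
        return dict(X_MUD)
    p = (f_m - f_mcr1) / (f_mcr2 - f_mcr1)
    out = {}
    for key in X_SAND:
        if kind == "lin":
            out[key] = X_SAND[key] + p * (X_MUD[key] - X_SAND[key])
        else:
            out[key] = (X_SAND[key] - X_MUD[key]) * math.exp(-c_exp * p) + X_MUD[key]
    return out

# Example: erodibility E0 at 30% mud content for the linear trend and for
# exponential trends of increasing sharpness (c_exp = 10 and c_exp = 40)
print(transition(0.30, kind="lin")["E0"])
print(transition(0.30, kind="exp", c_exp=10.0)["E0"])
print(transition(0.30, kind="exp", c_exp=40.0)["E0"])
```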
Different settings of the erosion law were tested in 3D simulations to assess the correctness of the model response in terms of SSC and changes in the seabed. All settings are illustrated in Figure 6. Three kinds of transition, one linear and two exponential (with C[exp] = 10 and 40 in Equation (10)), were defined to evaluate the effect of the transition trend only, using the reference critical mud fractions (f[mcr][1] = 20% and f[mcr][2] = 70%). The corresponding settings are named S1[LIN], S1[EXP][1], and S1[EXP][2], respectively.
A second series of simulations was run to evaluate the sensitivity of the results to critical mud fraction values. Thus, the f[mcr][1] value was successively reduced to 10%, 5%, and ~0% (with the corresponding f[mcr][2] = 60%, 55%, and 50%), but only the exponential transition regime (with C[exp] = 40) was considered (simulations S2[EXP][2], S3[EXP][2], and S4[EXP][2], respectively), as it produced better results than the other transitions (see results in Section 4.1). The six settings are summarized in the sketch below.
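For readability, the six erosion settings described above can be gathered in a small configuration table. This is only a sketch: the field names are ours, the values follow the text of this section, and the setting names for the reduced critical fractions follow the figure captions and our reconstruction of the garbled text.

```python
# Summary of the tested erosion settings (transition type, C_exp, critical mud
# fractions). Only these three ingredients change between settings.
EROSION_SETTINGS = {
    "S1_LIN":  {"transition": "linear",      "c_exp": None, "f_mcr1": 0.20, "f_mcr2": 0.70},
    "S1_EXP1": {"transition": "exponential", "c_exp": 10.0, "f_mcr1": 0.20, "f_mcr2": 0.70},
    "S1_EXP2": {"transition": "exponential", "c_exp": 40.0, "f_mcr1": 0.20, "f_mcr2": 0.70},
    "S2_EXP2": {"transition": "exponential", "c_exp": 40.0, "f_mcr1": 0.10, "f_mcr2": 0.60},
    "S3_EXP2": {"transition": "exponential", "c_exp": 40.0, "f_mcr1": 0.05, "f_mcr2": 0.55},
    "S4_EXP2": {"transition": "exponential", "c_exp": 40.0, "f_mcr1": 0.00, "f_mcr2": 0.50},  # f_mcr1 ~ 0%
}
```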
4. Results
4.1. Influence of the Transition Trend between Non-Cohesive and Cohesive Erosion Modes in the Erosion Law
The first step consisted in assessing the 3D model response in the model cell located closest to the LC station in terms of SSC and changes in the seabed (i.e., composition, thickness), by considering two transitions of the erosion law, one linear and one exponential (with C[exp] = 40 in Equation (10)), between the non-cohesive and cohesive regimes (Figure 7). This first comparison was performed using the “reference” f[mcr][1] and f[mcr][2] values of 20% and 70%, respectively. The two erosion settings, S1[LIN] and S1[EXP][2], are illustrated in Figure 6 (Section 3.4).
Total bottom shear stresses (i.e., those caused by waves and currents) and barotropic currents in Figure 7a illustrate changes in forcing throughout the period. In the water column, we successively represented SSC over the entire water column (Figure 7b–d) and at 1.67 m above the seabed (i.e., at the level of the first AWAC cell; Figure 7e) for the two simulations and for the AWAC measurements. Changes in the seabed in the two simulations are presented in Figure 7f,g. Lastly, a global sediment budget (in kg) was applied to the model cell used for the comparison, and the contribution of advection (hereafter referred to as F[OBC, mud] for mud and F[OBC, sand] for sand, representing the total amount (integrated) of sediment that crosses the borders of the cell along the water column, as net inflow if it increases or as net outflow if it decreases) is illustrated in Figure 7h.
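As a schematic illustration of how such a cell-scale advection term can be diagnosed (our own simplified reading, not the actual model diagnostics), the cumulative advective contribution can be obtained by integrating the normal component of the SSC flux over the lateral faces of the cell and over depth, then accumulating it in time:

```python
import numpy as np

def cumulative_advective_input(u_normal, ssc_faces, face_areas, dt):
    """Cumulative mass (kg) advected into a model cell through its lateral faces.

    u_normal   : array (ntime, nface, nz) of velocities normal to each face,
                 positive toward the cell interior (m/s)
    ssc_faces  : array (ntime, nface, nz) of SSC at the faces (kg/m3)
    face_areas : array (nface, nz) of face areas per vertical level (m2)
    dt         : time step (s)

    Returns the time series of the cumulative net inflow: an increase means a
    net import of sediment into the cell, a decrease a net export by advection.
    """
    instant_flux = (u_normal * ssc_faces * face_areas).sum(axis=(1, 2))  # kg/s
    return np.cumsum(instant_flux) * dt
```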
Regarding the dynamics of sediments in suspension, SSC from the AWAC profiler ranged between 10 and 80 mg·L^−1 over the study period. Four main resuspension events can be identified: the first event lasted from 1 to 5 December 2007 (Event 1), the second from 8 to 11 December 2007 (Event 2), the third from 11 to 12 January 2008 (Event 3), and the last from 15 to 17 January 2008 (Event 4). During these events, SSC values ranged between 20 and 80 mg·L^−1 near the seabed, and did not exceed 40 mg·L^−1 close to the surface (Figure 7b). The rest of the time, a higher frequency turbidity signal linked to the semi-diurnal tidal resuspension was recorded near the seabed, with SSC in the range of 10–15 mg·L^−1. Turbidity peaks were regularly detected in the surface signal whereas there was no significant increase near the seabed: these peaks are probably due to the signal diffraction caused by wave-induced air bubbles (wave mixing; see Tessier [ ]). Such a phenomenon also occurs during energetic events (significant wave heights > 2 m), with an SSC signal at the surface higher than in the rest of the water column.
Despite the fact that the model response generally agreed with observations, computing SSC with the S1[LIN] erosion setting highlighted some periods during which turbidity was overestimated, for instance in the upper half of the water column during Event 2, and several times between 30 December and 9 January (Figure 7c,e). Overestimations were particularly noticeable in the SSC series at 1.67 m above the seabed (Figure 7e): modelled SSC regularly exceeded observed SSC by 20 to 40 mg·L^−1 during Events 2 and 4, and even during calmer periods (e.g., between 30 December and 9 January). In contrast, modelled SSC were underestimated by a factor of 2 during Event 3. The S1[LIN] erosion setting led to a representation of observed SSC with a RMSE of 14 mg·L^−1 over the study period.
Using the S1[EXP][2] erosion setting enabled a general improvement in modelled SSC, with a RMSE of 10.5 mg·L^−1 over the period, and a correct response during the four energetic periods (Figure 7d,e). Differences in SSC between simulations during Events 1, 2, and 4 were mainly due to different erosion rates (especially due to E[0] in Equation (1)) prescribed for the same intermediate mud fraction in the surficial layer (in Figure 6a, this rate is clearly higher in the S1[LIN] setting). However, other differences, especially between 30 December and 9 January and during Event 3, are linked to the contrasted changes in the seabed between the two simulations.
One major difference in these changes occurs after Event 2, with more significant mud deposition in simulation S1[EXP][2] (Figure 7f,g). This difference can also be seen in Figure 7h, which highlights a significant decrease in F[OBC, mud] (i.e., a flow of mud out of the cell) in simulation S1[LIN], but not in S1[EXP][2]. The decrease in F[OBC, mud] in simulation S1[LIN] is probably due to less mud input from adjacent cells, resulting in relatively more mud exported by advection and thus less mud deposition during the decrease in shear stress following Event 2. Note that this difference in seabed changes influences the sediment dynamics in both simulations throughout the period. Following Event 2, a transition in surficial sediment from muddy to sandy occurs in both simulations but at different times. In the S1[LIN] simulation, the transition occurs half way through the period, around 30 December, and manifests itself as an SSC peak linked to mud resuspension near the bottom (Figure 7e), and by a decrease in F[OBC, mud] (relative loss by advection), which does not occur in the S1[EXP][2] simulation. Following the transition in the nature of the seabed, the S1[LIN] simulation regularly gives incorrect SSC responses (e.g., overestimation around 6 January, underestimation during Event 3). These results underline the potential role of advection processes in the contrasted results of the two simulations, and the need for full 3D modelling to obtain a final fit of the erosion law. In the S1[EXP][2] simulation, the transition occurs later in the period, around 11 January, and enables a correct SSC response regarding Event 3, associated with a decrease in F[OBC, mud]. It may mean that, on the one hand, the S1[EXP][2] setting allows a more accurate representation of resuspension dynamics in response to a given forcing, and on the other hand, it induces a more correct change in the nature of the seabed with respect to the variations in forcing over time. Note that despite contrasted sediment dynamics in the different simulations, the mud budget at the scale of the cell summed over the whole period led to similar trends in both cases, corresponding to a comparable net relative loss of mud by advection.
Sand contributes to turbidity over shorter periods than mud, mainly during Events 1, 2, and 4. Results provide evidence that sand is not subject to the same dynamics as mud. Until the transition in the nature of the surficial sediments in the middle of the period (30 December), sand dynamics appear to be quite similar in the two simulations (e.g., F[OBC, sand] in Figure 7h). Beyond this date, the contrasted nature of the seabed results in more regular sand resuspension in S1[LIN], with an advection component leading to a relative local sand loss (in the cell). Starting from Event 3, F[OBC, sand] increases (i.e., relative sand inflow into the cell) in both simulations, while the advection flux of mud decreases (in S1[EXP][2]) or does not change (in S1[LIN]). This highlights the fact that sand and mud dynamics are likely to differ depending on the nature of the seabed in adjacent cells.
The erosion setting S1[EXP][1], which is characterized by a less abrupt exponential transition in the erosion law (C[exp] = 10 in Equation (10)), appears to be less accurate in terms of SSC (not illustrated here), with a RMSE of 12 mg·L^−1 over the study period (versus 10.5 mg·L^−1 in S1[EXP][2]). The turbidity response provided by the model would be expected to degrade further as the sharpness of the transition is progressively reduced (until a linear decrease is reached).
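For reference, the RMSE values quoted in this section compare modelled and observed SSC paired in time at the same level; the metric itself is simply the following (a sketch with hypothetical input arrays):

```python
import numpy as np

def rmse(ssc_model, ssc_obs):
    """Root-mean-square error between modelled and observed SSC (same units)."""
    ssc_model = np.asarray(ssc_model, dtype=float)
    ssc_obs = np.asarray(ssc_obs, dtype=float)
    return float(np.sqrt(np.mean((ssc_model - ssc_obs) ** 2)))

# e.g. rmse(ssc_s1exp2_at_1p67m, ssc_awac_at_1p67m) -> about 10.5 mg/L per the text
# (the array names above are hypothetical placeholders, not model outputs)
```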
Considering the accurate representation of resuspension events and the lower RMSE obtained with the erosion setting S1[EXP][2], the latter was considered as an “optimum” setting, suggesting that the
definition of an exponential transition to describe sand/mud mixture erosion between non-cohesive and cohesive erosion modes may be appropriate in hydro-sedimentary numerical models.
4.2. Influence of Critical Mud Fractions
The sensitivity of the model results to the critical mud fractions, f[mcr][1] and f[mcr][2], was assessed, starting from the “optimum” erosion setting deduced in Section 4.1 and characterized by an exponential transition between f[mcr][1] = 20% and f[mcr][2] = 70% with C[exp] = 40 in Equation (10) (i.e., the S1[EXP][2] setting). Both critical mud fractions were progressively reduced by 10% (f[mcr][1] = 10%, f[mcr][2] = 60%), 15% (f[mcr][1] = 5%, f[mcr][2] = 55%), and 20% (f[mcr][1] ≈ 0%, f[mcr][2] = 50%). The corresponding settings, S2[EXP][2], S3[EXP][2], and S4[EXP][2], are illustrated in Figure 6. Results linked to the application of these different settings are illustrated in Figure 8. Note that the second critical mud fraction f[mcr][2] appears in the extension of the exponential decay (Equation (10)), but, due to the shape of the exponential trend, it does not constitute a real critical mud fraction but rather an adjustment parameter for the transition. We can thus consider that this sensitivity analysis mainly deals with the setting of the first critical mud fraction f[mcr][1].
First, it can be seen that the SSC linked to the four resuspension events progressively decrease with the decrease in f[mcr][1] (Figure 8b), which results in an underestimation of turbidity with respect to observed values. While no clear differences in SSC appear between S1[EXP][2] and S2[EXP][2] (the latter is not illustrated in Figure 8), i.e., with a reduction of f[mcr][1] from 20% to 10%, significant SSC underestimations occur for f[mcr][1] < 10%. The average turbidity during resuspension events is underestimated by 15–20% (respectively, 30%) in simulation S3[EXP][2] (respectively, S4[EXP][2]). Regarding maximum SSC, underestimations of SSC peaks are around 15–30% (respectively, 40–50%) during Events 1 and 2 in simulation S3[EXP][2] (respectively, S4[EXP][2]). In addition, SSC peaks during Events 3 and 4 are completely absent in these two simulations, with an underestimation of about 60%. Other simulations with a linear trend but low f[mcr][1] were tested and showed no improvement compared with the settings illustrated in Figure 7 and Figure 8.
Changes in the seabed linked to the optimum erosion setting S1[EXP][2] and the simulation S3[EXP][2] (f[mcr][1] = 5%) are illustrated in Figure 8c,d. Following Event 2, contrary to results in S1[EXP][2], no drastic change in the nature of the seabed occurs in S3[EXP][2] in the rest of the period, with a surficial sediment containing at least 30–40% of mud. This less dynamic change in the seabed is consistent with the lower SSC obtained in the water column. A reduction of f[mcr][1] led to the application of a pure mud erosion law starting from a lower mud content in the surficial sediment. This mostly resulted in less erosion (lower E[0]) with weaker SSC and slower changes in the seabed. This is also visible in the variations in F[OBC, mud] (Figure 8e), which highlight the fact that the reduced sediment dynamics obtained by reducing f[mcr][1] result in weaker gradients (SSC, seabed nature) with adjacent cells, and a less dynamic advection term over the study period.
5. Discussion
5.1. Setting Describing Erosion of a Sand/Mud Mixture
All experimental studies in the literature on the erosion of sand/mud mixtures mentioned a transition in the erosion mode when fine particles progressively fill the spaces between non-cohesive
particles. Panagiotopoulos et al. [
] proposed a conceptual model showing the mechanism for the initiation of sediment motion for sand-mud mixtures, based on the forces acting on an individual grain and the associated angle of internal
friction. When the mud content increases, clay minerals progressively fill the spaces between the sand particles, which slightly alters the pivoting characteristics and consequently the internal
friction angle, and thus causes a slight change in erosion resistance. As soon as the sand particles are no longer in contact with one another, pivoting stops being the main mechanism behind the initiation
of motion, and the resistance of the clay fraction mainly controls erosion. Depending on the authors, this transition is expressed by reasoning in terms of mud or clay content in the mixture. For
instance, Mitchener and Torfs [
] proposed a transition between 3% and 15% mud content, and suggested using cohesive-type sediment transport equations above this transition value and sand transport theories below it. Other authors
suggested that the transition occurs at much higher mud content, i.e., between 20% and 30% (e.g., [
]). More generally, previous investigations emphasized that only 2% to 10% of clay minerals (dry weight) added to a non-cohesive sediment matrix were sufficient to control the soil properties and increase the resistance of the bed to erosion (e.g., [ ]).
Modelling results in the present study highlighted the fact that the critical mud fraction (f[mcr][1]), above which a transition toward a cohesive erosion mode would start, is at least 10% mud content. Grain size analyses of numerous sediment samples (from several locations, and at different depths in the sediment) from the BoBCS revealed that the clay content (per cent < 4 µm) corresponds to 30% (±3%, R² = 0.96) of the mud content (per cent < 63 µm). Such a constant ratio between the clay and mud fractions (or between the clay and silt fractions) in a given area has been observed in many sites worldwide (e.g., [ ]). Thus, the critical clay content deduced from our model fitting would be around 3%. Therefore, our results are in agreement with experimental results of previous studies regarding the existence and the value of a critical mud/clay fraction indicating a transition in the mode of erosion.
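The conversion between critical mud and clay fractions used in this paragraph is a simple proportionality based on the ~30% clay-to-mud ratio reported above (illustrative sketch only):

```python
# Clay content is reported to be about 30% of the mud content on the BoBCS,
# so a critical mud fraction converts to a critical clay fraction as follows.
CLAY_TO_MUD_RATIO = 0.30

def critical_clay_fraction(critical_mud_fraction):
    return CLAY_TO_MUD_RATIO * critical_mud_fraction

print(critical_clay_fraction(0.10))  # 0.03, i.e. about 3% clay for a 10% mud threshold
```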
Multiple transition trends of the erosion law (linear, exponential) were tested to describe the erosion behaviour of transitional sand/mud mixtures, i.e., when the mud fraction exceeds the critical value of f[mcr][1] in the mixture. The quality of the model response was evaluated by comparing SSC results with turbidity measurements provided by the AWAC profiler over the entire water column. Based on RMSE and on average or maximum SSC values reached during resuspension events, the results provided a more accurate representation of observations when considering an abrupt exponential transition of the erosion parameters (i.e., E[0], τ[e], and n in the Partheniades form of the erosion law, see Equation (1)). Actually, the changes in SSC produced by this transition formulation mainly lie in the contrasted E[0] values prescribed in the erosion law depending on the seabed mud fraction (f[m]; see Table 2). This result agrees with results recently obtained by Smith et al. [ ], who performed erosion experiments on mixed sediment beds prepared in the laboratory (250–500 µm sands mixed with different clayey sediments corresponding to kaolinite or kaolinite/bentonite). In particular, they observed a rapid decrease in erosion rates, from 1.5 to 2.5 orders of magnitude, over a range of 2% to 10% clay content. In the present study, the exponential transitions prescribed in S1[EXP][1] and S1[EXP][2] (C[exp] = 10 and C[exp] = 40 in Equation (10), Figure 6) led to a variation in the erodibility parameter E[0] of about 2.5 orders of magnitude over a mud content range of 10% to 40%, i.e., over a clay content ranging from 3% to 12%, respectively. The best model results, obtained from erosion setting S1[EXP][2], are thus consistent with the findings of Smith et al. [ ], and suggest that a rapid exponential transition may be appropriate to describe the erosion of a sand/mud mixture between non-cohesive and cohesive erosion modes in numerical hydro-sedimentary models.
5.2. Limitations of the Approach and Remaining Uncertainties
Despite successive assessments of model quality, some limitations and uncertainties concerning our modelling approach remain and have to be addressed.
5.2.1. Mud Erosion Law
A pure mud erosion law was set up based on erodimetry experiments performed on muddy sediment samples from the BoBCS. A critical shear stress for mud erosion (τ[e,mud]) of 0.1 N·m^−2 was deduced. By combining this τ[e,mud] value with the minimum erodibility parameter E[0] recommended by e.g., Winterwerp [ ], i.e., 10^−5 kg·m^−2·s^−1, the application of the mud erosion law from the model (Section 3.3) led to good agreement between modelled erosion fluxes and those obtained in erodimetry experiments for comparable applied shear stresses (Figure 5). Such a lower critical stress for erosion when the mixture is muddier is opposite to the trends most often published, characterized by an increase of the resistance to erosion when mud is added to sand (e.g., [ ]). Other simulations were performed with higher τ[e,mud] values: 0.15, 0.2, and 0.4 N·m^−2. As expected, modelled SSC was underestimated compared with observed SSC as τ[e,mud] increased (even for 0.15 N·m^−2). Another assessment of E[0,mud] could have produced similar results, but we preferred to keep the shear strength provided by our experiments, the low value being justified by the fact that in our environment (erosion on a continental shelf with low bottom friction) the sediment is never remobilized at depth, and the surficial sediment remains unconsolidated.
5.2.2. Initial Condition of the Sediment and Time Variation of the Seabed
Seabed initialization was prescribed from the synthesis of sediment facies applied at the beginning of each simulation. To evaluate the influence of the sediment initialization on model results (SSC,
seabed variations), the “optimum” model setting from the present study (S1[EXP][2]) was used again in a new simulation using the surficial sediment cover computed at the end of a one-year simulation
used as spin up. We obtained similar SSC results with a RMSE of 11.3 mg·L^−1 over the study period (versus 10.5 mg·L^−1 in the original S1[EXP][2] simulation). Similarities in seabed variations
(thickness and composition) in the two simulations were likewise remarkable (not illustrated here). Thus, the seabed initialization prescribed at the beginning of each simulation appears to be
appropriate and does not correspond to a transitional state regarding the sediment dynamics.
Model results concerning changes in the seabed highlighted pronounced gradients in the nature of the surficial sediment in most simulations, with an alternation of muddy and sandy facies depending on
the intensity of forcing (e.g., shear stress, advection). Such variations in the nature of the sediment are not unrealistic, since grain size analyses of a sediment core sampled at the LC station revealed a layered bed, with alternating muddy and sandy layers at different depths in the sediment. These variations are also consistent with the geographical location of the LC station (Figure 1), in a zone with horizontal gradients in sediment facies.
A further validation of the model in terms of bed thickness or elevation would require other measurements, such as in situ altimetry data (e.g., [ ]), which were not available in the present study.
5.2.3. Applicability of the Sand/Mud Mixture Erosion Law
Assessment of the effect of erosion settings on the quality of model results with respect to observations would require further comparisons in other study sites where the seabed consists of both sand
and mud. This would make it possible to know if an abrupt transition between non-cohesive and cohesive erosion modes systematically improves model accuracy in terms of SSC.
The use of the sand/mud mixture erosion law derived from this study requires site-specific information beforehand, in particular grain size analyses for the assessment of the mud or the clay content.
Since the critical mud fraction f[mcr][1], above which erosion behaviour starts to change, mainly depends on clay content, the ratio between clay and mud fractions can be used. In future works, it
would be interesting to explore other mud properties than grain size and sediment fractions, such as mineralogy, to represent more accurately the key role played by cohesive sediments in erosion
process, especially for transitional sand/mud mixtures between the contrasted non-cohesive/cohesive regimes.
Lastly, the formulation of erosion was based on the Partheniades-Ariathurai law, with an erosion flux proportional to the normalized excess shear stress (τ/τ[e] − 1). Such a formulation is very sensitive to the value of the critical shear stress for erosion, which can be difficult to estimate and highly variable in the case of sand/mud mixtures. Alternatively, a formulation of the erosion flux proportional to the excess shear stress (τ − τ[e]) would reduce the sensitivity of erosion to τ[e]. It would also be in agreement with the Van Kesteren-Winterwerp-Jacobs erosion law [ ], and deserves further investigation following the pioneering work of Jacobs et al. [ ].
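To make the contrast between the two formulations explicit, the sketch below compares an erosion flux proportional to the normalized excess shear stress with one proportional to the absolute excess shear stress. The coefficients are illustrative only, and the constant M of the second form (with different units) is our own notation, not a value from the study.

```python
def erosion_normalized(tau, E0, tau_e, n=1.0):
    """Partheniades-Ariathurai type: E = E0 * (tau/tau_e - 1)^n, in kg.m-2.s-1."""
    return E0 * max(tau / tau_e - 1.0, 0.0) ** n

def erosion_absolute(tau, M, tau_e):
    """Alternative form: E = M * (tau - tau_e), with M in kg.m-2.s-1.Pa-1."""
    return M * max(tau - tau_e, 0.0)

# Varying tau_e by +/- 20% around 0.1 N.m-2 changes the normalized-form flux
# comparatively more than the absolute-form flux, illustrating the weaker
# sensitivity to tau_e discussed in Section 5.2.3.
for tau_e in (0.08, 0.10, 0.12):
    print(tau_e,
          erosion_normalized(0.3, E0=1e-5, tau_e=tau_e),
          erosion_absolute(0.3, M=1e-4, tau_e=tau_e))
```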
6. Conclusions
The aim of this study was to assess the influence of the erosion law prescribed in a 3D realistic hydro-sedimentary model on sediment dynamics in the case of a seabed composed of both fine sand and
mud in a slightly energetic environment, representative of continental shelves. According to the sediment model described by Le Hir et al. [
], the sediment was eroded as a mixture and was assumed to behave as pure sand below a first critical mud fraction in the surficial sediment, and as pure mud above a second one. Following
hydrodynamic validation of the model and rigorous assessment of pure sand and pure mud erosion dynamics, several transition trends of erosion-related parameters (erodibility parameter, critical shear
stress, and exponent in the Partheniades erosion law) were tested to describe the erosion of transitional sand/mud mixtures between the two critical fractions. Different simulations were run using
linear or exponential transitions, and different critical mud fractions. The accuracy of model results regarding suspended matter dynamics was evaluated at a single point, located on the Bay of
Biscay shelf, by performing comparisons with turbidity observations provided by an acoustic profiler during two typical winter months. The main conclusions of this work are:
• Using an abrupt exponential transition, e.g., an erodibility parameter decrease of 2.5 orders of magnitude over a 10% (respectively, 3%) mud (respectively, clay) content range, improves SSC model
results regarding measurements, compared to results obtained with linear or less abrupt exponential transitions. This conclusion agrees with recent experimental studies in the literature on the
erosion of sand/mud mixtures, which mention a drastic change in erosion mode for only a small percentage of clay added in the mixture.
• A first critical mud fraction (above which the erosion mode begins to change) of 10–20% is required to ensure a relevant model response in turbidity. By reasoning in terms of the clay fraction,
the corresponding critical clay fraction ranges between 3% and 6%. Once again, this conclusion agrees with experimental studies in the literature reporting that 2% to 10% of clay minerals in a
sediment mixture are sufficient to control the soil properties.
• The erosion flux of mixed sediments appears to be very sensitive to the clay fraction of the surficial sediment, and is therefore likely to change considerably at a given location, according to erosion and deposition events.
• 3D simulations are needed to account for advection, which considerably influences sediment dynamics in terms of export of resuspended sediments, sediment inflows from adjacent cells, and consequent changes in the surficial seabed (nature and thickness of deposits).
Therefore, the optimal erosion law derived from this study to describe sand/mud mixture erosion led to model results consistent with measurements and with most of the conclusions deduced from
experimental studies already published. This should encourage further similar comparisons and suggests that the application of this kind of erosion setting is appropriate for hydro-sedimentary models.
This study was supported by the SHOM (Service Hydrographique et Océanographique de la Marine) and IFREMER (Institut Français de Recherche pour l’Exploitation de la Mer). The authors would like to
thank the SHOM for surficial mud content data. Lastly, the two anonymous reviewers are deeply thanked for their comments and suggestions that greatly improved the manuscript.
Author Contributions
All authors conceived the study; B.M., P.L.H., and F.C. designed the numerical experiments; T.G. performed the seabed initialization of the model describing the horizontal distribution of sediment
facies; B.M. performed the simulations; B.M., P.L.H. and F.C. analysed the simulations; and B.M. and P.L.H. wrote the paper.
Conflicts of Interest
The authors declare no conflict of interest.
1. Le Hir, P.; Cayocca, F.; Waeles, B. Dynamics of sand and mud mixtures: A multiprocess-based modelling strategy. Cont. Shelf Res. 2011, 31, S135–S149. [Google Scholar] [CrossRef]
2. Righetti, M.; Lucarelli, C. May the Shields theory be extended to cohesive and adhesive benthic sediments? J. Geophys. Res. 2007, 112, C05039. [Google Scholar] [CrossRef]
3. Kimiaghalam, N.; Clark, S.P.; Ahmari, H. An experimental study on the effects of physical, mechanical, and electrochemical properties of natural cohesive soils on critical shear stress and
erosion rate. Int. J. Sediment Res. 2016, 31, 1–15. [Google Scholar] [CrossRef]
4. Williamson, H.J.; Ockenden, M.C. Laboratory and field investigations of mud and sand mixtures. In Proceedings of the First International Conference on Hydro-Science and Engineering, Advances in
Hydro-Science and Engineering, Washington, DC, USA, 7–11 June 1993; Wang, S.S.Y., Ed.; Volume 1, pp. 622–629. [Google Scholar]
5. Le Hir, P.; Monbet, Y.; Orvain, F. Sediment erodability in sediment transport modelling: Can we account for biota effects? Cont. Shelf Res. 2007, 27, 1116–1142. [Google Scholar] [CrossRef]
6. Jacobs, W.; Le Hir, P.; Van Kesteren, W.; Cann, P. Erosion threshold of sand–mud mixtures. Cont. Shelf Res. 2011, 31, S14–S25. [Google Scholar] [CrossRef]
7. Le Hir, P.; Cann, P.; Waeles, B.; Jestin, H.; Bassoullet, P. Erodibility of natural sediments: Experiments on sand/mud mixtures from laboratory and field erosion tests. In Sediment and
Ecohydraulics: INTERCOH 2005 (Proceedings in Marine Science); Kusuda, T., Yamanishi, H., Spearman, J., Gailani, J.Z., Eds.; Elsevier: Amsterdam, The Netherlands, 2008; Volume 9, pp. 137–153. [
Google Scholar]
8. Van Rijn, L.C. Sediment pick-up functions. J. Hydraul. Eng. 1984, 110, 1494–1502. [Google Scholar] [CrossRef]
9. Emadzadeh, A.; Cheng, N.S. Sediment pickup rate in uniform open channel flows. In Proceedings of the River Flow 2016, Iowa City, IA, USA, 11–14 July 2016; Constantinescu, G., Garcia, M., Hanes,
D., Eds.; Taylor & Francis Group: London, UK, 2016; Volume 1, pp. 450–457. [Google Scholar]
10. Partheniades, E. A Study of Erosion and Deposition of Cohesive Soils in Salt Water. Ph.D. Thesis, University of California, Berkeley, CA, USA, 1962. [Google Scholar]
11. Ariathurai, C.R. A Finite Element Model of Cohesive Sediment Transportation. Ph.D. Thesis, University of California, Davis, CA, USA, 1974. [Google Scholar]
12. Van Ledden, M.; Van Kesteren, W.G.M.; Winterwerp, J.C. A conceptual framework for the erosion behaviour of sand–mud mixtures. Cont. Shelf Res. 2004, 24, 1–11. [Google Scholar] [CrossRef]
13. Flemming, B.W. A revised textural classification of gravel-free muddy sediments on the basis of ternary diagrams. Cont. Shelf Res. 2000, 20, 1125–1137. [Google Scholar]
14. Mitchener, H.; Torfs, H. Erosion of mud/sand mixtures. Coast. Eng. 1996, 29, 1–25. [Google Scholar] [CrossRef]
15. Panagiotopoulos, I.; Voulgaris, G.; Collins, M.B. The influence of clay on the threshold of movement of fine sandy beds. Coast. Eng. 1997, 32, 19–43. [Google Scholar] [CrossRef]
16. Ye, Z.; Cheng, L.; Zang, Z. Experimental study of erosion threshold of reconstituted sediments. In Proceedings of the ASME 2011 30th International Conference on Ocean, Offshore and Arctic
Engineering, Rotterdam, The Netherlands, 19–24 June 2011; American Society of Mechanical Engineers: New York, NY, USA; Volume 7, pp. 973–983. [Google Scholar] [CrossRef]
17. Smith, S.J.; Perkey, D.; Priestas, A. Erosion thresholds and rates for sand-mud mixtures. In Proceedings of the 13th International Conference on Cohesive Sediment Transport Processes (INTERCOH),
Leuven, Belgium, 7–11 September 2015; Toorman, E., Mertens, T., Fettweis, M., Vanlede, J., Eds.; [Google Scholar]
18. Gailani, J.Z.; Jin, L.; McNeil, J.; Lick, W. Effects of Bentonite Clay on Sediment Erosion Rates. DOER Technical Notes Collection. Available online: http://www.dtic.mil/docs/citations/ADA390214
(accessed on 21 July 2017).
19. Waeles, B.; Le Hir, P.; Lesueur, P. A 3D morphodynamic process-based modelling of a mixed sand/mud coastal environment: The Seine estuary, France. In Sediment and Ecohydraulics: INTERCOH 2005,
Proceedings in Marine Science; Kusuda, T., Yamanishi, H., Spearman, J., Gailani, J.Z., Eds.; Elsevier: Amsterdam, The Netherlands, 2008; Volume 9, pp. 477–498. [Google Scholar]
20. Bi, Q.; Toorman, E.A. Mixed-sediment transport modelling in Scheldt estuary with a physics-based bottom friction law. Ocean Dyn. 2015, 65, 555–587. [Google Scholar] [CrossRef]
21. Migniot, C. Tassement et rhéologie des vases—Première partie. La Houille Blanche 1989, 1, 11–29. (In French) [Google Scholar] [CrossRef]
22. Dickhudt, P.J.; Friedrichs, C.T.; Sanford, L.P. Mud matrix solids fraction and bed erodibility in the York River estuary, USA, and other muddy environments. Cont. Shelf Res. 2011, 31, S3–S13. [
Google Scholar] [CrossRef]
23. Carniello, L.; Defina, A.; D’Alpaos, L. Modeling sand-mud transport induced by tidal currents and wind waves in shallow microtidal basins: Application to the Venice Lagoon (Italy). Estuar. Coast.
Shelf Sci. 2012, 102, 105–115. [Google Scholar] [CrossRef]
24. Van Ledden, M. Sand-Mud Segregation in Estuaries and Tidal Basins. Ph.D. Thesis, Delft University of Civil Engineering, Delft, The Netherlands, 2003. [Google Scholar]
25. Ahmad, M.F.; Dong, P.; Mamat, M.; Wan Nik, W.B.; Mohd, M.H. The critical shear stresses for sand and mud mixture. Appl. Math. Sci. 2011, 5, 53–71. [Google Scholar]
26. Wiberg, P.L.; Drake, D.E.; Harris, C.K.; Noble, M. Sediment transport on the Palos Verdes shelf over seasonal to decadal time scales. Cont. Shelf Res. 2002, 22, 987–1004. [Google Scholar]
27. Ulses, C.; Estournel, C.; Durrieu de Madron, X.; Palanques, A. Suspended sediment transport in the Gulf of Lions (NW Mediterranean): Impact of extreme storms and floods. Cont. Shelf Res. 2008, 28
, 2048–2070. [Google Scholar] [CrossRef]
28. Fard, I.K.P. Modélisation des Échanges Dissous Entre L’estuaire de la Loire et les Baies Côtières Adjacentes. Ph.D. Thesis, University of Bordeaux, Bordeaux, France, 2015. [Google Scholar]
29. Lurton, X. An Introduction to Underwater Acoustics: Principles and Applications; Springer: Berlin, Germany, 2002. [Google Scholar]
30. Tessier, C.; Le Hir, P.; Lurton, X.; Castaing, P. Estimation de la matière en suspension à partir de l’intensité rétrodiffusée des courantomètres acoustiques à effet Doppler (ADCP). C. R. Geosci.
2008, 340, 57–67. (In French) [Google Scholar] [CrossRef]
31. Lazure, P.; Dumas, F. An external–internal mode coupling for a 3D hydrodynamical model for applications at regional scale (MARS). Adv. Water Resour. 2008, 31, 233–250. [Google Scholar] [CrossRef]
32. Song, Y.; Haidvogel, D. A semi-implicit ocean circulation model using a generalized topography-following coordinate system. J. Comput. Phys. 1994, 115, 228–244. [Google Scholar] [CrossRef]
33. Rodi, W. Turbulence Models and Their Application in Hydraulics, 3rd ed.; IAHR Monograph: Delft, The Netherlands, 1993. [Google Scholar]
34. Ferry, N.; Parent, L.; Garric, G.; Barnier, B.; Jourdain, N.C. Mercator global Eddy permitting ocean reanalysis GLORYS1V1: Description and results. Mercator-Ocean Q. Newsl. 2010, 36, 15–27. [
Google Scholar]
35. Boudière, E.; Maisondieu, C.; Ardhuin, F.; Accensi, M.; Pineau-Guillou, L.; Lepesqueur, J. A suitable metocean hindcast database for the design of Marine energy converters. Int. J. Mar. Energy
2013, 3–4, e40–e52. [Google Scholar] [CrossRef]
36. Déqué, M.; Dreveton, C.; Braun, A.; Cariolle, D. The ARPEGE/IFS atmosphere model: A contribution to the French community climate modelling. Clim. Dyn. 1994, 10, 249–266. [Google Scholar]
37. Lyard, F.; Lefèvre, F.; Letellier, T.; Francis, O. Modelling the global ocean tides: Modern insights from FES2004. Ocean Dyn. 2006, 56, 394–415. [Google Scholar] [CrossRef]
38. Jonsson, I.G. Wave boundary layers and friction factors. In Proceedings of the 10th International Conference on Coastal Engineering, Tokyo, Japan, September 1966; American Society of Civil
Engineers: New York, NY, USA, 1966; pp. 127–148. [Google Scholar]
39. Soulsby, R.L.; Hamm, L.; Klopman, G.; Myrhaug, D.; Simons, R.R.; Thomas, G.P. Wave-current interaction within and outside the bottom boundary layer. Coast. Eng. 1993, 21, 41–69. [Google Scholar]
40. Soulsby, R. Dynamics of Marine Sands: A Manual for Practical Applications; Thomas Telford: London, UK, 1997. [Google Scholar]
41. Grasso, F.; Le Hir, P.; Bassoullet, P. Numerical modelling of mixed-sediment consolidation. Ocean Dyn. 2015, 65, 607–616. [Google Scholar] [CrossRef]
42. Van Leussen, W. Estuarine Macroflocs and Their Role in Fine-Grained Sediment Transport. Ph.D. Thesis, University of Utrecht, Utrecht, The Netherlands, 1994. [Google Scholar]
43. Tessier, C.; Le Hir, P.; Dumas, F.; Jourdin, F. Modélisation des turbidités en Bretagne sud et validation par des mesures in situ. Eur. J. Environ. Civ. Eng. 2008, 12, 179–190. [Google Scholar]
44. Verney, R.; Gangloff, A.; Chapalain, M.; Le Berre, D.; Jacquet, M. Floc features in estuaries and coastal seas. In Proceedings of the 5th Particles in Europe Conference, Budapest, Hungary, 3–5
October 2016. [Google Scholar]
45. Dyer, K.R. Coastal and Estuarine Sediment Dynamics; John Wiley & Sons: New York, NY, USA, 1986. [Google Scholar]
46. Mengual, B. Variabilité Spatio-Temporelle des Flux Sédimentaires Dans le Golfe de Gascogne: Contributions Relatives des Forçages Climatiques et des Activités De Chalutage. Ph.D. Thesis,
University of Western Brittany, Brest, France, 2016. [Google Scholar]
47. Bouysse, P.; Lesueur, P.; Klingebiel, A. Carte Des Sédiments Superficiels du Plateau Continental du Golfe de Gascogne: Partie Septentrionale au 1/500 000. Co-Éditée par le BRGM Et l’IFREMER,
1986. Available online: http://sextant.ifremer.fr/record/ea0b61b0-71c6-11dc-b1e4-000086f6a62e/ (accessed on 21 July 2017).
48. Mengual, B.; Cayocca, F.; Le Hir, P.; Draye, R.; Laffargue, P.; Vincent, B.; Garlan, T. Influence of bottom trawling on sediment resuspension in the “Grande-Vasière” area (Bay of Biscay, France).
Ocean Dyn. 2016, 66, 1181–1207. [Google Scholar] [CrossRef]
49. Van Rijn, L.C. Unified view of sediment transport by currents and waves. II: Suspended transport. J. Hydraul. Eng. 2007, 133, 668–689. [Google Scholar] [CrossRef]
50. Van Rijn, L.C. Sediment transport, part II: Suspended load transport. J. Hydraul. Eng. 1984, 110, 1613–1641. [Google Scholar] [CrossRef]
51. Engelund, F.; Hansen, E. A Monograph on Sediment Transport in Alluvial Streams; Teknish Forlag, Technical Press: Copenhagen, Denmark, 1967. [Google Scholar]
52. Yang, C.T. Incipient motion and sediment transport. J. Hydraul. Div. 1973, 99, 1679–1704. [Google Scholar]
53. Dufois, F.; Le Hir, P. Formulating Fine to Medium Sand Erosion for Suspended Sediment Transport Models. J. Mar. Sci. Eng. 2015, 3, 906–934. [Google Scholar] [CrossRef]
54. Winterwerp, J.C. Flow-Induced Erosion of Cohesive Beds; A Literature Survey. Rijkswaterstaat—Delft Hydraulics, Cohesive Sediments Report 25, February 1989. Available online: http://
publicaties.minienm.nl/download-bijlage/45703/164198.pdf (accessed on 21 July 2017).
55. Tessier, C. Caractérisation et Dynamique des Turbidités en Zone Côtière: L’exemple de la Région Marine Bretagne Sud. Ph.D. Thesis, University of Bordeaux 1, Bordeaux, France, 2006. [Google Scholar]
56. Raudkivi, A.J. Loose Boundary Hydraulics, 3rd ed.; Pergamon Press: Oxford, UK, 1990. [Google Scholar]
57. Bassoullet, P.; Le Hir, P.; Gouleau, D.; Robert, S. Sediment transport over an intertidal mudflat: Field investigations and estimation of fluxes within the “Baie de Marennes-Oléron” (France).
Cont. Shelf Res. 2000, 20, 1635–1653. [Google Scholar] [CrossRef]
58. Winterwerp, J.C.; Van Kesteren, W.G.M. Introduction to the Physics of Cohesive Sediment Dynamics in the Marine Environment; Developments in Sedimentology; Elsevier: Amsterdam, The Netherlands,
2004. [Google Scholar]
Figure 1. Geographic extent of the 3D model configuration with its bathymetry (in meters with respect to mean sea level) (a). Initial condition for the seabed compartment (at the resolution of the
model) (b), over the zone surrounding Le Croisic station (indicated by a black dot). In (a), the thickest white line represents the 180 m isobath, which can be considered as the external boundary of
the continental shelf. In both subplots, black lines refer to the 40-m, 70-m, 100-m, and 130-m isobaths over the shelf.
Figure 2. Validation of the hydrodynamic model over most of the simulated period at LC station. Model results are compared with AWAC measurements in terms of (a) significant wave height, (b) surface
temperature, (c) surface salinity, current direction ((d,e), respectively), and current intensity ((f,g), respectively).
Figure 3.
Sand (200 µm diameter) transport rates computed with a 1DV model using the pure sand erosion law, and obtained for different flow intensities (solid red curve). For identical flow intensities and
sand diameter, sand transport rates deduced from empirical formulations of Van Rijn [
] (black curve), Engelund and Hansen [
] (blue curve), and Yang [
] (green curve) are illustrated. Empty red circles refer to the modelling results of Dufois and Le Hir [
] representing transport rates of a 200 µm sand under different flow intensities from an advection/diffusion model.
Figure 4.
Erodimetry experiment conducted on a muddy sample of the BoBCS using the “erodimeter” device [
]. On the graph, the blue and pink curves represent time evolutions of bottom shear stress (N·m^−2) and suspended sediment concentration (SSC, in mg·L^−1) during the few minutes of the experiment. The applied shear stress at which erosion begins (around 0.1 N·m^−2) is illustrated by a grey band.
Figure 5. Comparisons of erosion fluxes deduced from erodimetry experiments conducted on muddy samples from the BoBCS and those computed from pure mud erosion law (Equation (1)) using similar shear
stresses. Different symbols depict different sediment samples and labels refer to the applied shear stresses (τ in N·m^−2). The solid black line represents perfect agreement between modelled and
measured fluxes, and the dotted lines delimit the range linked to model overestimation or underestimation by a factor 2.
Figure 6. Variations in erosion-related parameters (erodibility parameter E[0] (a); critical shear stress τ[e] (b); and exponent n (c); used in the erosion law (Equation (1)) as a function of the
surficial sediment mud content in the different erosion settings tested.
Figure 7. Comparisons of the results of the 3D model obtained from erosion settings S1[LIN] and S1[EXP][2], and measurements made by the AWAC acoustic profiler. (a) Shear stresses τ and
depth-integrated currents VEL[INT]; (b) measured SSC over the entire water column; (c,d) computed SSC for S1[LIN] and S1[EXP][2] simulations; (e) time series of measured and modelled SSC variations
at the level of the AWAC first cell (1.67 m above the bottom); (f,g) changes in the seabed (mud fraction, thickness of the sediment layers) in the two simulations (white dotted lines represent the
boundaries of the sediment layers); and (h) integrated amount of SSC advected through the water column (solid lines represent mud and dotted lines represent sand).
Figure 8. Comparisons of the results of the 3D model obtained from erosion settings S1[EXP][2], S3[EXP][2] and S4[EXP][2], and measurements made with the AWAC acoustic profiler. (a) Shear stresses τ
and depth-integrated currents VEL[INT]; (b) time series of measured and modelled SSC at the level of the first AWAC cell (1.67 m above the bottom); (c,d) changes in the seabed (mud fraction,
thickness of the sediment layers) in S1[EXP][2] and S3[EXP][2] simulations (the white dotted lines represent the boundaries of the sediment layers); and (e) integrated amount of mud advected through
the water column in the S1[EXP][2] and S3[EXP][2] simulations.
Forcing Source
Initial & boundary conditions (3D velocities, temperature, salinity) GLORYS global ocean reanalysis [34]
Wave (Significant height, peak period, bottom excursion and orbital velocities) WaveWatch III hindcast [35]
Meteorological conditions (Atmospheric pressure, wind, temperature, relative humidity, cloud cover) ARPEGE model [36]
Tide (14 components) FES2004 solution [37]
River discharge (flow and SSC) Daily runoff data (French freshwater office)
Erosion Regime E[0] (kg·m^−2·s^−1) τ[e] (N·m^−2) n
Non-cohesive (pure sand) E[0,sand] = 5.94 × 10^−3 τ[e,sand] = 0.15 n[sand] = 1.5
Cohesive (pure mud) E[0,mud] = 10^−5 τ[e,mud] = 0.1 n[mud] = 1
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Mengual, B.; Hir, P.L.; Cayocca, F.; Garlan, T. Modelling Fine Sediment Dynamics: Towards a Common Erosion Law for Fine Sand, Mud and Mixtures. Water 2017, 9, 564. https://doi.org/10.3390/w9080564
AMA Style
Mengual B, Hir PL, Cayocca F, Garlan T. Modelling Fine Sediment Dynamics: Towards a Common Erosion Law for Fine Sand, Mud and Mixtures. Water. 2017; 9(8):564. https://doi.org/10.3390/w9080564
Chicago/Turabian Style
Mengual, Baptiste, Pierre Le Hir, Florence Cayocca, and Thierry Garlan. 2017. "Modelling Fine Sediment Dynamics: Towards a Common Erosion Law for Fine Sand, Mud and Mixtures" Water 9, no. 8: 564.
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/9/8/564","timestamp":"2024-11-06T02:04:14Z","content_type":"text/html","content_length":"530815","record_id":"<urn:uuid:911bee02-e4c1-4004-a034-a5235125e7de>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00630.warc.gz"} |
Melanocyte - Wikiwand
Melanocytes are melanin-producing neural crest-derived^[3] cells located in the bottom layer (the stratum basale) of the skin's epidermis, the middle layer of the eye (the uvea),^[4] the inner ear,^
[5] vaginal epithelium,^[6] meninges,^[7] bones,^[8] and heart found in many mammals and birds.^[9] Melanin is a dark pigment primarily responsible for skin color. Once synthesized, melanin is
contained in special organelles called melanosomes which can be transported to nearby keratinocytes to induce pigmentation. Thus darker skin tones have more melanosomes present than lighter skin
tones. Functionally, melanin serves as protection against UV radiation. Melanocytes also have a role in the immune system.
Illustration of a melanocyte
Micrograph of melanocytes in the epidermis
Through a process called melanogenesis, melanocytes produce melanin, which is a pigment found in the skin, eyes, hair, nasal cavity, and inner ear. This melanogenesis leads to a long-lasting
pigmentation, which is in contrast to the pigmentation that originates from oxidation of already-existing melanin.
There are both basal and activated levels of melanogenesis; in general, lighter-skinned people have low basal levels of melanogenesis. Exposure to UV-B radiation causes increased melanogenesis. The
purpose of melanogenesis is to protect the hypodermis, the layer under the skin, from damage by UV radiation. The color of the melanin is black, allowing it to absorb a majority of the UV light and
block it from passing through the epidermis.^[10]
Since the action spectrum of sunburn and melanogenesis are virtually identical, they are assumed to be induced by the same mechanism.^[11] The agreement of the action spectrum with the absorption
spectrum of DNA points towards the formation of cyclobutane pyrimidine dimers (CPDs) - direct DNA damage.
Typically, between 1000 and 2000 melanocytes are found per square millimeter of skin or approximately 5% to 10% of the cells in the basal layer of epidermis. Although their size can vary, melanocytes
are typically 7 μm in length.
Both lightly and darkly pigmented skin contain similar numbers of melanocytes,^[12] with differences in skin color due to differences in the packing of eumelanin into the melanosomes of keratinocytes: those in dark-toned skin are "packaged into peri-nuclear distributed, ellipsoid" melanosomes while those in light-toned skin are "assembled into clustered small, circular melanosomes".^[13] There are
also differences in the quantity and relative amounts of eumelanin and pheomelanin.^[13] Pigmentation including tanning is under hormonal control, including the MSH and ACTH peptides that are
produced from the precursor proopiomelanocortin.
Vitiligo is a skin disease where people lack melanin in certain areas in the skin.
People with oculocutaneous albinism typically have a very low level of melanin production. Albinism is often but not always related to the TYR gene coding the tyrosinase enzyme. Tyrosinase is
required for melanocytes to produce melanin from the amino acid tyrosine.^[14] Albinism may be caused by a number of other genes as well, like OCA2,^[15] SLC45A2,^[16] TYRP1,^[17] and HPS1^[18] to
name some. In all, 17 types of oculocutaneous albinism have already been recognized.^[19] Each gene is related to a different protein having a role in pigment production.
People with Chédiak–Higashi syndrome have a buildup of melanin granules due to abnormal function of microtubules.
In addition to their role as UV radical scavengers, melanocytes are also part of the immune system, and are considered to be immune cells.^[20] Although the full role of melanocytes in immune
response is not fully understood, melanocytes share many characteristics with dendritic cells: branched morphology; phagocytic capabilities; presentation of antigens to T-cells; and production and
release of cytokines.^[20]^[21]^[22] Although melanocytes are dendritic in form and share many characteristics with dendritic cells, they derive from different cell lineages. Dendritic cells are
derived from hematopoietic stem cells in the bone marrow. Melanocytes on the other hand originate from neural crest cells. As such, although morphologically and functionally similar, melanocytes and
dendritic cells are not the same.
Melanocytes are capable of expressing MHC Class II,^[21] a type of MHC expressed only by certain antigen presenting cells of the immune system, when stimulated by interactions with antigen or
cytokines. All cells in any given vertebrate express MHC, but most cells only express MHC class I. The other class of MHC, Class II, is found only on "professional" antigen presenting cells such as
dendritic cells, macrophages, B cells, and melanocytes. Importantly, melanocytes stimulated by cytokines express surface proteins such as CD40 and ICAM1 in addition to MHC class II, allowing for
co-stimulation of T cells.^[20]
In addition to presenting antigen, one of the roles of melanocytes in the immune response is cytokine production.^[23] Melanocytes express many proinflammatory cytokines including IL-1, IL-3, IL-6,
IL-8, TNF-α, and TGF-β.^[20]^[21] Like other immune cells, melanocytes secrete these cytokines in response to activation of Pattern Recognition Receptors (PRRs) such as Toll Like Receptor 4 (TLR4)
which recognize MAMPs. MAMPs, also known as PAMPs, are microbial associated molecular patterns, small molecular elements such as proteins, carbohydrates, and lipids present on or in a given pathogen.
In addition, cytokine production by melanocytes can be triggered by cytokines secreted by other nearby immune cells.^[20]
Melanocytes are ideally positioned in the epidermis to be sentinels against harmful pathogens. They reside in the stratum basale,^[23] the lowest layer of the epidermis, but they use their dendrites
to interact with cells in other layers,^[24] and to capture pathogens that enter the epidermis.^[21] They likely work in concert with both keratinocytes and Langerhans cells,^[20]^[21] both of which
are also actively phagocytic,^[23] to contribute to the immune response.
Tyrosine is the non-essential amino acid precursor of melanin. Tyrosine is converted to dihydroxyphenylalanine (DOPA) via the enzyme tyrosinase. Then DOPA is polymerized into melanin. The copper-ion
based enzyme-catalyzed oxidative transformation of the catechol derivative DOPA to light-absorbing dopaquinone and then to indole-5,6-quinone is clearly seen; following the polymerization to melanin, the color of the pigment ranges from red to dark brown.
Numerous stimuli are able to alter melanogenesis, or the production of melanin by cultured melanocytes, although the method by which it works is not fully understood. Increased melanin production is
seen in conditions where adrenocorticotropic hormone (ACTH) is elevated, such as Addison's and Cushing's disease. This is mainly a consequence of alpha-MSH being secreted along with the hormone
associated with reproductive tendencies in primates. Alpha-MSH is a cleavage product of ACTH that has an equal affinity for the MC1 receptor on melanocytes as ACTH.^[25]
Melanosomes are vesicles that package the chemical inside a plasma membrane. The melanosomes are organized as a cap protecting the nucleus of the keratinocyte. When ultraviolet rays penetrate the
skin and damage DNA, thymidine dinucleotide (pTpT) fragments from damaged DNA will trigger melanogenesis^[26] and cause the melanocyte to produce melanosomes, which are then transferred by dendrites
to the top layer of keratinocytes.
Stem cells
The precursor of the melanocyte is the melanoblast. In adults, stem cells are contained in the bulge area of the outer root sheath of hair follicles. When a hair is lost and the hair follicle
regenerates, the stem cells are activated. These stem cells develop into both keratinocyte precursors and melanoblasts - and these melanoblasts supply both hair and skin (moving into the basal layer
of the epidermis). There is additionally evidence that melanocyte stem cells are present in cutaneous nerves, with nerve signals causing these cells to differentiate into melanocytes for the skin.^ | {"url":"https://www.wikiwand.com/en/articles/Melanocyte","timestamp":"2024-11-09T13:12:15Z","content_type":"text/html","content_length":"362682","record_id":"<urn:uuid:0dd38554-69bd-4d26-ac5f-fde197fff272>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00362.warc.gz"} |