EuDML | Weak compactness in the dual of a C*-algebra is determined commutatively.
Pfitzner, H. "Weak compactness in the dual of a C*-algebra is determined commutatively." Mathematische Annalen 298.2 (1994): 349-372. <http://eudml.org/doc/165175>.
@article{Pfitzner1994,
author = {Pfitzner, H.},
keywords = {Pełczyński’s property (V) for C*-algebras; weak compactness in the dual of a C*-algebra is determined commutatively; Grothendieck spaces},
title = {Weak compactness in the dual of a C*-algebra is determined commutatively.},
AU - Pfitzner, H.
TI - Weak compactness in the dual of a C*-algebra is determined commutatively.
KW - Pełczyński’s property (V) for C*-algebras; weak compactness in the dual of a C*-algebra is determined commutatively; Grothendieck spaces
Hana Krulišová, C*-algebras have a quantitative version of Pełczyński's property (V)
Narcisse Randrianantoanina, Pełczyński's Property (V) on spaces of vector-valued functions
Manuel González, Eero Saksman, Hans-Olav Tylli, Representing non-weakly compact operators
Keywords: Pełczyński’s property (V) for C*-algebras; weak compactness in the dual of a C*-algebra is determined commutatively; Grothendieck spaces
|
EuDML | Submanifolds of an even-dimensional manifold structured by a T-parallel connection.
Matsumoto, Koji; Mihai, Adela; Naitza, Dorotea
Matsumoto, Koji, Mihai, Adela, and Naitza, Dorotea. "Submanifolds of an even-dimensional manifold structured by a T-parallel connection." Lobachevskii Journal of Mathematics 13 (2003): 81-85. <http://eudml.org/doc/224377>.
author = {Matsumoto, Koji, Mihai, Adela, Naitza, Dorotea},
keywords = {immersed submanifold},
title = {Submanifolds of an even-dimensional manifold structured by a T-parallel connection.},
AU - Naitza, Dorotea
TI - Submanifolds of an even-dimensional manifold structured by a T-parallel connection.
KW - immersed submanifold
|
Variance ratio test for random walk - MATLAB vratiotest - MathWorks Australia
Conduct Variance Ratio Test on Vector of Data
Conduct Variance Ratio Test on Table Variable
Adjust Number of Overlapping Periods for Test
Variance ratio test for random walk
h = vratiotest(y)
[h,pValue,stat,cValue] = vratiotest(y)
StatTbl = vratiotest(Tbl)
[___] = vratiotest(___,Name=Value)
[___,ratio] = vratiotest(___)
h = vratiotest(y) returns the rejection decision h from conducting the variance ratio test for assessing whether the univariate time series y is a random walk.
[h,pValue,stat,cValue] = vratiotest(y) also returns the p-value pValue, test statistic stat, and critical value cValue of the test.
StatTbl = vratiotest(Tbl) returns the table StatTbl containing variables for the test results, statistics, and settings from conducting the variance ratio test on the last variable of the input table or timetable Tbl. To select a different variable in Tbl to test, use the DataVariable name-value argument.
[___] = vratiotest(___,Name=Value) uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes. vratiotest returns the output-argument combination for the corresponding input arguments.
Some options control the number of tests to conduct. The following conditions apply when vratiotest conducts multiple tests:
vratiotest treats each test as separate from all other tests.
For example, vratiotest(Tbl,DataVariable="GDP",Alpha=0.025,IID=[false true]) conducts two tests, at a level of significance of 0.025, on the variable GDP of the table Tbl. The first test does not assume that the innovations series is iid and the second test assumes that the innovations series is iid.
[___,ratio] = vratiotest(___) additionally returns the variance ratios of each test ratio.
Test whether a time series is a random walk using the default options of vratiotest. Input the time series data as a numeric vector.
Load the global large-cap equity indices data set, which contains daily closing prices during 1993–2003. Extract the closing prices of the S&P 500 series.
sp = DataTable.SP;
plot(dt,sp)
title("S&P 500 Price Series")
The first half of the series exhibits exponential growth.
Scale the series by applying the log transformation.
logsp = log(DataTable.SP);
The logsp series is in levels.
Assess the null hypothesis that the log series is a random walk by applying the variance ratio test. Use default options.
h = vratiotest(logsp)
h = 0 indicates that, at a 5% level of significance, the test fails to reject the null hypothesis that the series is a random walk.
Load the global large-cap equity indices data set, extract the closing prices of the S&P 500 price series, and apply the log transform to the series.
Assess the null hypothesis that the log series is a random walk by applying the variance ratio test. Return the test decision, p-value, test statistic, and critical value.
[h,pValue,stat,cValue] = vratiotest(logsp)
Test whether a time series, which is one variable in a table, is a random walk using the default options.
Load the global large-cap equity indices data set. Convert the table DataTable to a timetable.
Apply the log transform to all series.
LTT = varfun(@log,TT(:,2:end));
LTT.Properties.VariableNames{end}
The last variable in the table is the log of the S&P 500 price series log_SP.
Assess the null hypothesis of the variance ratio test that the log of the S&P 500 price series is a random walk.
StatTbl = vratiotest(LTT)
h pValue stat cValue Alpha Period IID
_____ _______ _______ ______ _____ ______ _____
Test 1 false 0.70045 0.38471 1.96 0.05 2 false
vratiotest returns test results and settings in the table StatTbl, where variables correspond to test results (h, pValue, stat, and cValue) and settings (Alpha, Period, and IID), and rows correspond to individual tests (in this case, vratiotest conducts one test).
By default, vratiotest tests the last variable in the table. To select a variable from an input table to test, set the DataVariable option.
Test whether the S&P 500 series is a random walk using various step sizes, with and without the iid innovations assumption.
Load the global large-cap equity indices data set. Convert the table DataTable to a timetable and apply the log transform to all timetable variables.
Plot the S&P 500 returns.
plot(diff(LTT.log_SP))
The plot indicates that the returns have possible conditional heteroscedasticity.
Conduct separate tests for whether the series is a random walk using periods 2, 4, and 8. For each period, conduct separate tests assuming the innovations are iid and without the assumption. Return the variance ratios of each test, and compute the first-order autocorrelation of the returns from the first ratio.
q = [2 4 8 2 4 8];
iid = logical([1 1 1 0 0 0]);
[StatTbl,ratio] = vratiotest(LTT,Period=q,IID=iid,DataVariable="log_SP")
h pValue stat cValue Alpha Period IID
_____ ________ ________ ______ _____ ______ _____
Test 1 false 0.56704 0.57242 1.96 0.05 2 true
Test 2 false 0.33073 -0.97265 1.96 0.05 4 true
Test 3 true 0.030933 -2.1579 1.96 0.05 8 true
Test 4 false 0.70045 0.38471 1.96 0.05 2 false
Test 5 false 0.50788 -0.66214 1.96 0.05 4 false
Test 6 false 0.13034 -1.5128 1.96 0.05 8 false
ratio = 6×1
rho1 = ratio(1) - 1 % First-order autocorrelation of returns
rho1 = 0.0111
StatTbl.h indicates that the test fails to reject that the series is a random walk at the 5% level, except in the case where Period = 8 and IID = true. This rejection is likely due to the test not accounting for heteroscedasticity.
y — Univariate time series data in levels
Univariate time series data in levels, specified as a numeric vector. Each element of y represents an observation.
vratiotest removes missing observations, represented by NaN values, from the input series.
vratiotest assumes that the specified input data is in levels. To convert a return series r to levels, choose an initial level y0 appropriately and let y = cumsum([y0 r]).
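As a cross-check of the conversion above, here is a minimal Python sketch of the same cumulative-sum construction (the return values and initial level are hypothetical, and itertools.accumulate stands in for MATLAB's cumsum):

```python
import itertools

# Hypothetical one-period returns and an assumed initial level y0.
r = [1.0, -2.0, 1.5]
y0 = 100.0

# Levels series: y(1) = y0, then running sums of the returns,
# mirroring MATLAB's y = cumsum([y0 r]).
y = list(itertools.accumulate([y0] + r))
print(y)  # [100.0, 101.0, 99.0, 100.5]
```

Note that the returns here are differences of levels (rt = yt − yt−1), matching the definition used by the test, so a plain cumulative sum, not compounding, recovers the levels.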
Period — Period q used to create overlapping return horizons
Period q used to create overlapping return horizons for the variance ratio, specified as an integer that is greater than 1 and less than T/2, or a vector of such integers, where T is the effective sample size of yt.
When Period = 2 (the default), the first-order autocorrelation of the returns is asymptotically equal to ratio − 1.
vratiotest conducts a separate test for each value in Period.
Example: Period=2:4 runs three tests; the first test sets Period to 2, the second test sets Period to 3, and the third test sets Period to 4.
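The Period = 2 relationship can be illustrated on simulated data. The following Python sketch (independent of vratiotest, using made-up AR(1) returns) shows that the 2-period variance ratio minus 1 approximately equals the sample first-order autocorrelation of the returns:

```python
import random

random.seed(1)

# Simulated AR(1) returns with positive autocorrelation (phi = 0.3).
r, prev = [], 0.0
for _ in range(5000):
    prev = 0.3 * prev + random.gauss(0, 1)
    r.append(prev)

def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

# Overlapping 2-period return horizons and the q = 2 variance ratio.
r2 = [r[t] + r[t - 1] for t in range(1, len(r))]
vr2 = var(r2) / (2 * var(r))

# Sample first-order autocorrelation of the returns.
m = sum(r) / len(r)
rho1 = sum((r[t] - m) * (r[t - 1] - m) for t in range(1, len(r))) / \
       sum((v - m) ** 2 for v in r)

print(round(vr2 - 1, 2), round(rho1, 2))  # the two values should nearly agree
```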
IID — Flag for whether to assume innovations are iid (independent and identically distributed)
Flag for whether to assume the innovations are iid, specified as a value in this table or a vector of such values.
false — Do not assume the innovations are iid; the alternative hypothesis is that the innovations series εt is a correlated series.
Set false when the input series is a long-term macroeconomic or financial price series because, for such series, the iid assumption is often unreasonable and rejection of the random-walk null hypothesis due to heteroscedasticity is uninteresting.
true — Assume the innovations are iid; the alternative hypothesis is that εt is a dependent or not identically distributed series (for example, heteroscedastic).
Set true to strengthen the random-walk null hypothesis.
vratiotest conducts a separate test for each value in IID.
Example: IID=true
vratiotest conducts a separate test for each value in Alpha.
If you perform sequential testing using multiple values of q (Period), small-sample size distortions, beyond those that result from the asymptotic approximation of critical values, can result [2].
When vratiotest conducts multiple tests, the function applies all single settings (scalars or character vectors) to each test.
Test rejection decisions, returned as a logical scalar or vector with length equal to the number of tests. vratiotest returns h when you supply the input y.
Values of 1 indicate rejection of the random-walk null hypothesis in favor of the alternative.
Values of 0 indicate failure to reject the random-walk null hypothesis.
Test statistic p-values, returned as a numeric scalar or vector with length equal to the number of tests. vratiotest returns pValue when you supply the input y.
p-values are standard normal probabilities.
Test statistics, returned as a numeric scalar or vector with length equal to the number of tests. vratiotest returns stat when you supply the input y.
Test statistics are asymptotically standard normal.
Critical values, returned as a numeric scalar or vector with length equal to the number of tests. vratiotest returns cValue when you supply the input y.
Critical values are for two-tail probabilities.
Test summary, returned as a table with variables for the outputs h, pValue, stat, and cValue, and with a row for each test. vratiotest returns StatTbl when you supply the input Tbl.
StatTbl contains variables for the test settings specified by Period, Alpha, and IID.
ratio — Variance ratios
Variance ratios, returned as a numeric vector with length equal to the number of tests. Each ratio is the ratio of the following quantities:
The variance of the q-fold overlapping return horizon
q times the variance of the return series
For a random walk, the ratios are asymptotically equal to one. For a mean-reverting series, the ratios are less than one. For a mean-averting series, the ratios are greater than one.
The variance ratio test assesses the null hypothesis that a univariate time series yt is a random walk. The null model is
yt = c + yt–1 + εt,
where c is a drift constant and εt is an uncorrelated innovations series with zero mean.
When the innovations are not iid (the IID argument is false), the alternative is εt is a correlated series.
When innovations are iid (IID is true), the alternative is εt is either a dependent or not identically distributed series (for example, heteroscedastic).
The test statistic is based on a ratio of variance estimates of the returns rt = yt − yt−1 and of the period q (Period argument) return horizons rt + … + rt−q+1.
The variance ratio (ratio output) for a test is
\frac{\text{VAR}\left({r}_{t}+...+{r}_{t-q+1}\right)}{q\text{VAR}\left({r}_{t}\right)}.
The horizon overlaps to increase the efficiency of the estimator and power of the test. Under either null hypothesis, an uncorrelated εt series implies that the period q variance is asymptotically equal to q times the period 1 variance. However, the variance of the ratio depends on the degree of heteroscedasticity, and, therefore, the variance of the ratio depends on the null hypothesis.
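To make the ratio concrete, here is a plain-Python sketch (not the vratiotest implementation — it uses simple population variances and omits the bias corrections and standard errors an actual test requires):

```python
import itertools
import random

def variance_ratio(y, q=2):
    """Variance of overlapping q-period returns divided by q times
    the variance of one-period returns, for a levels series y."""
    r = [y[t] - y[t - 1] for t in range(1, len(y))]   # one-period returns
    rq = [y[t] - y[t - q] for t in range(q, len(y))]  # overlapping q-period horizons

    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)

    return var(rq) / (q * var(r))

# For a simulated random walk the ratio should be close to 1.
random.seed(0)
steps = [random.gauss(0, 1) for _ in range(500)]
y = list(itertools.accumulate([0.0] + steps))
vr = variance_ratio(y, q=2)
print(vr)
```

For an iid levels series the same function returns a ratio near 1/2 at q = 2, consistent with the mean-reversion interpretation above (ratios below one).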
Rejection of the null hypothesis due to dependence of the innovations does not imply that the εt are correlated. If the innovations are dependent, nonlinear functions of εt can be correlated even if the εt themselves are uncorrelated. For example, if Cov(εt,εt−k) = 0 for all k ≠ 0, a k ≠ 0 can still exist such that Cov(εt2,εt−k2) ≠ 0.
The test is two-tailed; therefore, the test rejects the random-walk null hypothesis when the test statistic is outside of the critical interval [−cValue,cValue]. Each tail outside of the critical interval has probability Alpha/2.
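The decision rule can be reproduced with standard-normal quantiles. A Python sketch, assuming the default Alpha of 0.05 and reusing the Test 1 statistic from the example table above:

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed critical value: each tail holds alpha/2 probability.
cvalue = NormalDist().inv_cdf(1 - alpha / 2)    # about 1.96

stat = 0.38471                                  # Test 1 statistic from the example above
pvalue = 2 * (1 - NormalDist().cdf(abs(stat)))  # about 0.700, matching the table
h = abs(stat) > cvalue                          # reject the random-walk null?
print(round(cvalue, 2), round(pvalue, 3), h)
```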
The test finds the largest integer n such that nq ≤ T – 1, where q is the value of the Period argument and T is the sample size. Then, the test discards the final (T–1) – nq observations. To include these final observations, remove the initial (T–1) – nq observations from the input series before you run the test.
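The trimming rule above can be sketched in a few lines of Python (the sample series here is hypothetical):

```python
def trim_for_period(y, q):
    """Keep the first n*q + 1 observations, where n is the largest
    integer with n*q <= T - 1; the test discards the rest."""
    T = len(y)
    n = (T - 1) // q
    return y[: n * q + 1]

# T = 12 and q = 5 give n = 2, so 11 observations are kept and
# the final (T-1) - n*q = 1 observation is discarded.
y = list(range(12))
print(len(trim_for_period(y, q=5)))  # 11
```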
[1] Campbell, J. Y., A. W. Lo, and A. C. MacKinlay. “Nonlinearities in Financial Data.” Chapter 12 in The Econometrics of Financial Markets. Princeton, NJ: Princeton University Press, 1997.
[2] Cecchetti, S. G., and P. S. Lam. “Variance-Ratio Tests: Small-Sample Properties with an Application to International Output Data.” Journal of Business and Economic Statistics. Vol. 12, 1994, pp. 177–186.
[3] Cochrane, J. “How Big is the Random Walk in GNP?” Journal of Political Economy. Vol. 96, 1988, pp. 893–920.
[4] Faust, J. “When Are Variance Ratio Tests for Serial Dependence Optimal?” Econometrica. Vol. 60, 1992, pp. 1215–1226.
[5] Lo, A. W., and A. C. MacKinlay. “Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test.” Review of Financial Studies. Vol. 1, 1988, pp. 41–66.
[6] Lo, A. W., and A. C. MacKinlay. “The Size and Power of the Variance Ratio Test.” Journal of Econometrics. Vol. 40, 1989, pp. 203–238.
[7] Lo, A. W., and A. C. MacKinlay. A Non-Random Walk Down Wall St. Princeton, NJ: Princeton University Press, 2001.
|
FischerGroup - Maple Help
FischerGroup( n )
n : {22, 23, 24} : integer parameter indicating the Fischer group
The Fischer groups are three among the sporadic finite simple groups. They were discovered by Bernd Fischer in the 1970s, and are generated by a conjugacy class of involutions, the product of any two of which has order either 2 or 3. The group Fi24' is the derived subgroup (of index 2) of a non-simple group Fi24 of order 2510411418381323442585600.
The FischerGroup( n ) command returns a permutation group isomorphic to the Fischer group Fi22, Fi23, or Fi24', for n = 22, 23, 24, respectively.
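The index-2 relation can be checked arithmetically. In this Python sketch, the order of Fi24' is the standard published value (it does not appear on this help page), while the Fi24 order is the one quoted above:

```python
# Order of the non-simple group Fi24 (quoted above) and of its
# derived subgroup Fi24' (standard published value).
fi24_order = 2510411418381323442585600
fi24_prime_order = 1255205709190661721292800

# The index of a subgroup H in G is |G| / |H|; here it is 2.
assert fi24_order % fi24_prime_order == 0
print(fi24_order // fi24_prime_order)  # 2
```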
with(GroupTheory):
G := FischerGroup(23);
    G := Fi23
Degree(G);
    31671
GroupOrder(G);
    4089470473293004800
IsSimple(G);
    true
The GroupTheory[FischerGroup] command was introduced in Maple 17.
|
Earth - and your family and friends - would be millions of kilometres away. How does that make you feel?
A mind map is simply a way to visually organise your thoughts about a subject. It can include written ideas, diagrams, keywords and it can be colour coded.
On the Mind map: Mars worksheet, you will see that the word ‘Mars’ is in the centre circle. It is surrounded by four circles with questions in them.
Now it’s your turn. And remember, a mind map is just a way to visually organise what you think you already know about Mars so include anything you think of because there is no right or wrong at this point.
You do a little bit of creative thinking every day even if you don’t realise it. All we are doing in this activity is going through the creative thinking process step-by-step...
Completing column 1 should have made you realise that there is a lot more you need to know before you can answer the question, ”How Could Humans Live on Mars?” For example, could we grow our own food on Mars?
Find out which organisations are considered experts, e.g. NASA for space travel.
Once you cut out the pieces and stick them together in place, you need to complete the phrase “Ask a _______” by writing the name of the expert who could answer the question on each piece. You can then write ‘Mars’ in the circle in the middle of the jigsaw and colour in the circle to look like the planet Mars.
1.1.5 Jigsaw puzzle - Mars example
If you are part of a class, your teacher will lead a class discussion where you and your classmates will share your responses to the claims, analyse which claim testers you used and decide whether you have since changed your minds about any of the claims.
|
4.3 Mission Control Report (PBL Part C) - Big History School
To work through '4.3 Mission Control Report (PBL Part C)' you need to complete the Activities in order. So first complete ‘Learning Plan’ then move to Activity '4.3.1' followed by Activity '4.3.2' through to '4.3.4', and then finish with ‘Learning Summary’.
MISSION CONTROL REPORT C: Mars human habitat design
In Mission Control Report C you will answer the question, ‘How could humans live on Mars?’ by researching and brainstorming solutions for meeting human needs and creating a thriving community in your Mars human habitat.
To get started, read carefully through the Mission Control Report C learning goals below. Make sure you tick each of the check boxes to show that you have read all your Mission Control Report C learning goals.
As you read through the learning goals you may come across some words that you haven’t heard before. Please don’t worry. By the time you finish Mission Control Report C you will become very familiar with them!
You will come back to these learning goals at the end of Mission Control Report C to see if you have confidently achieved them.
Research: Human needs and solutions
Identify the four basic human needs
Research and brainstorm solutions for meeting human needs on Mars
Research: Thriving human communities
Explore features of a thriving human community
Research and brainstorm ideas for creating a thriving community on Mars
Welcome to Mission Control Report C!
In Mission Phase 4 you’ve been exploring how humans have learned to survive and thrive on almost every continent of planet Earth - and even orbiting our planet on the International Space Station. But how could humans survive and thrive on another planet?
In Mission Control Report C you will answer the question, ‘How could humans live on Mars?’ by:
1. Identifying human needs and researching/brainstorming solutions for meeting those needs in your Mars human habitat.
2. Identifying the features of a thriving human community and brainstorming how to create a thriving community in your Mars human habitat.
In Mission video 24: Human needs & solutions, the Mission Control Teams explore some of the options that scientists and engineers believe could successfully meet our basic human needs on Mars.
While you watch Mission video 24: Human needs & solutions look out for the answers to the following questions:
1. What are the four basic human needs?
2. What are two main ways we could try to meet those needs on Mars?
3. How could we provide breathable air on Mars?
4. How could we provide water on Mars?
5. How could we provide food on Mars?
6. How could we provide shelter on Mars?
Now that you’ve learned a little about what scientists and engineers are considering for meeting human needs on Mars, take a moment to think of other possible options that you may have heard of, or that you could invent, that could help to meet each of the four human needs on Mars.
In Mission video 24: Human needs & solutions, one of the Human Team members asked the following question: 'How could you meet the four basic human needs on a planet without water, tiny amounts of oxygen, no plants and animals for food and an atmosphere which is toxic for humans?’
To help answer that question, in this activity you will conduct some research and brainstorm solutions for meeting human needs in your Mars human habitat design.
Before you begin though, you will revisit some of the important information you heard in Mission video 24: Human needs & solutions. The Readings below summarise some of the solutions scientists and engineers have been considering for meeting human needs on Mars.
If you are working individually, you will read the Reading: Meeting our four basic human needs on Mars which provides an overview of all the Readings.
If you are working as part of a Mission Team, each one of your team members will read only one of the other four Meeting our needs... readings. This person will become the team expert on that human need. For example, the ‘Water Expert,’ the ‘Food Expert,’ the ‘Air Expert’ or the ‘Shelter Expert.’
This Reading provides an overview of how scientists and engineers believe we may be able to meet our four basic human needs on Mars. If you are working individually, take 5 minutes to read through it carefully and highlight the main points.
Once you have completed the Reading, if you have access to the internet, watch the suggested videos listed in Helpful Resources and conduct your own further independent research.
Creating air on Mars - https://www.youtube.com/watch?time_continue=47&v=DS3hcInSGyU
Extracting water on Mars - https://www.youtube.com/watch?v=7M9_p7FooE8
Water recycling in space - https://www.youtube.com/watch?v=BCjH3k5gODI
Growing food on Mars - https://www.youtube.com/watch?v=LMKl-KAg07U
Building shelter on Mars - https://www.youtube.com/watch?v=A3crw903HU0
If you are working in a Mission Team, at least one member of your team should read the Reading: Meeting our need for air on Mars and watch the suggested video in Helpful Resources before conducting further independent research.
If you are working in a Mission Team, at least one member of your team should read the Reading: Meeting our need for water on Mars, and watch the suggested videos in Helpful Resources before conducting further independent research.
If you are working in a Mission Team, at least one member of your team should read the Reading: Meeting our need for food on Mars, and watch the suggested video in Helpful Resources before conducting further independent research.
Space kitchen - https://www.youtube.com/watch?v=AZx0RIV0wss
If you are working in a Mission Team, at least one member of your team should read about ‘Meeting our need for shelter on Mars,’ and watch the suggested video in Helpful Resources before conducting further independent research.
Now it’s time to use what you have learned from Mission video 24: Human needs & solutions and the Readings to brainstorm ideas for your Mars human habitat on the Research table: human needs and solutions worksheet. If you are working in a Mission Team, you will complete this worksheet together with your team members.
Read through the instructions on the Research table: human needs and solutions worksheet:
Write in the ‘Need’ column which basic human need matches the purpose written in the column beside it
Write down which resources you think you can take to Mars from Earth in the ‘Take and Store’ column
Write or draw what you think you could ‘Make or Adapt’ on Mars in the final column
Once you have completed your Research table: human needs and solutions worksheet, your teacher will ask you to review the solutions you brainstormed and think about how self-sustaining they are. Remember, your human habitat will need to be able to exist for a long time without depending on outside help and by using natural resources responsibly.
Don’t be afraid to make changes to your Research table if, once you review your solutions, you think of better self-sustaining alternatives!
Hopefully in the last activity you found that brainstorming was a really useful way to generate lots of interesting and useful ideas for meeting human needs on Mars.
In the second part of Mission Control Report C, you’ll get a chance to use your brainstorming skills again when you begin to consider how to create a human habitat on Mars where humans will not only survive but also thrive.
In Mission video 25: Thriving human communities, the Human Team identifies the features of a thriving human community and explores ideas for creating a community on Mars where humans can thrive. They also discuss the sorts of energy sources you could consider to power your human habitat on Mars.
While you watch Mission video 25: Thriving human communities look out for the answers to the following questions:
1. What are three different energy sources you could use on Mars?
2. What are some ideas for creating a thriving human community on Mars?
Now that you’ve heard some ideas about which energy sources you could use on Mars and how to create a thriving human community, take a moment to think of other possible ideas, even ideas you’ve heard of elsewhere, that could help you to create a thriving human community on Mars.
Commander Ripley pointed out in Mission video 25: Thriving human communities that when you are designing your Mars human habitat, you will need to be very clever with how you use your space and the limited building materials you will have.
You will need to keep this in mind during this activity where you will brainstorm ideas for creating a thriving community in your Mars human habitat.
Before you begin though, you will revisit some of the important information you heard in Mission video 25: Thriving human communities. The Reading below summarises ideas for energy sources you could use and how you can make your Mars human habitat a real community.
Read the Reading: Thriving human communities, highlighting the most important points with a highlighter. If you have access to the Internet, watch the suggested videos listed in Helpful Resources for more information and undertake some of your own further independent research.
Generating power on Mars - https://www.youtube.com/watch?v=ysLHApdznic
Exercise on the International Space Station - https://www.youtube.com/watch?v=Wam7poPzG1w
Once you have finished your research, you can use the Brainstorm: thriving human communities worksheet to brainstorm ideas for how you will ensure your human habitat is a welcoming community for the first human inhabitants. If you are working in Mission Teams you will complete this worksheet with your team members.
On the worksheet you will need to write or draw your ideas for how you will cater for:
Socialising and relaxing
Exercising and health
You will then need to describe which energy source/s you will use to power this community.
To acknowledge that students have completed the third step in their Mars Mission you may like to print out and hand each student a Mission Control Report C progress pass from Mission Control.
There are four passes per page. Simply write the student names in the spaces provided. You may wish to print them on card or laminate the passes before handing them out to students.
Now that you’ve completed your research and brainstorming sessions, refer back to the Chart: KWHLAQ which you began during your pre-mission skills training sessions. You will find a copy of the Chart: KWHLAQ in Helpful Resources.
Remember that the BIG question of your Mars Mission is: ‘How Could Humans Live on Mars?’
By now, you should have some really good ideas about how you could help humans to live on Mars. Turn to the ‘A - What Action Will You Take?’ column on your Chart: KWHLAQ and consider the following questions:
What are you creating to help humans live on Mars?
How can you apply what you have learned so far throughout your Mission to create the best possible human habitat on Mars? For example, you have learned that humans need to breathe oxygen and Mars has less than 1% oxygen so you may consider extracting oxygen from the water on Mars.
How will you share what you have learned? Your teacher will have already advised you how you will present your Mars human habitat model.
You are now ready to move on to the final part of your Mars Mission - Mission Control Report D!
In Mission Control Report C you answered the question, ‘How could humans live on Mars?’ by researching and brainstorming solutions for meeting human needs and creating a thriving community in your Mars human habitat.
Now it’s time to revisit your Mission Control Report C learning goals and read through them again carefully.
Well done on completing your learning summary. Click here to go to 4.4 Mission Control Report (PBL Part D)
Once you have checked the boxes to confirm you have achieved your learning goals for Sequence '4.3 Mission Control Report (PBL Part C)' click on the 'I have achieved my learning goals' button below.
Go to 4.4 Mission Control Report (PBL Part D) »
|
ServiceNow app for integration with the Release plugin
Since Release 8.6, a ServiceNow certified app is optional for integration. You can obtain the app from the ServiceNow Store.
To use the app, install it in your ServiceNow console; installation creates a new menu item with all the required configuration areas. For more information on the installation process, see Install an application.
The table below shows which ServiceNow app versions each version of the Release plugin supports:
| Release plugin version | No ServiceNow app | ServiceNow app 1.0.2 | ServiceNow app 1.2.x |
|---|---|---|---|
| pre-8.5 | ☑ | ☒ | ☒ |
| 8.5 | ☒ | ☑ | ☑ |
| 8.6 and above | ☑ | ☒ | ☑ |
Using design templates from Release
All design templates can be retrieved by ServiceNow and can be used to initiate a release from ServiceNow. If you want to use the record information from the ServiceNow record that was used to initiate the release, you will need to create two template variables:
${id}: the sys_id of the record the release was initiated from
${number}: the number of the record the release was initiated from
ServiceNow populates the variables listed above automatically when it creates the release.
Connect the Release server from ServiceNow
To set up a connection to a Release server from within the ServiceNow app:
Navigate to Release > Properties.
In the endpoint field, enter the URL of the Release server (with api/v1/ appended to the URL).
In the Username field, enter the username to connect with the Release server.
In the Password field, enter a password to connect with the Release server.
Optionally, set the Language ISO code if your instance of Release uses a different language than the one used in ServiceNow.
Optionally, set the “Autostart on true/false” value to specify if the release should immediately start in Release.
Optionally, set the “Fetch the release templates from XLR on a nightly basis” value to true/false depending on your requirements.
The next step is to retrieve the possible templates from Release. Follow these steps:
In the ServiceNow console, navigate to Release > Release templates
Press the Get Templates button.
This will retrieve all design templates from Release and display them in a list.
Tip: This is also a quick way to test your connection to the Release server.
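The endpoint value from the Properties form can also be sanity-checked outside ServiceNow. The sketch below only reproduces the URL shape described above (the server URL with api/v1/ appended); the host name is a made-up placeholder, not a value from this guide.

```python
# Build the Release REST endpoint the ServiceNow app expects: the server URL
# with "api/v1/" appended. The example host below is hypothetical.

def release_endpoint(base_url: str) -> str:
    """Return the endpoint value for the Properties form."""
    if not base_url.endswith("/"):
        base_url += "/"
    return base_url + "api/v1/"

print(release_endpoint("https://release.example.com"))
# https://release.example.com/api/v1/
```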
Set up trigger rules in ServiceNow
Trigger rules in ServiceNow need to be set up to initiate actions on change events in communications between ServiceNow and Release. There are three communication options that can be used from ServiceNow:
Create Release: initiate a release in Release
Comment: leave a comment in Release
Complete Task: complete a task in Release
To set up a trigger rule, in the ServiceNow app:
Navigate to Release > Trigger Rules and click New.
A few samples have been set up for you to review. You can either reuse these or create your own.
Add a Name for the rule and select a Type from the list.
In the Table list on the right side, select a table to apply the rule to. This will populate the available fields.
The Previous State and Current State tabs are used to compare the state of the release prior to and after the change event, and run the trigger rules if the conditions are met.
Field mappings for the available fields exist next to the trigger rules. If you want to send different field information, such as variables, from ServiceNow to Release, you must configure these mappings.
Creating variables in XLR and connection in ServiceNow
On the menu bar, select Design -> Folders -> Add Folder. Type the folder name and click Create.
Click Add template, then either Create new template or Import template. Type the template name and click Create.
In the Show dropdown menu, select Variables. To add a variable, click New Variable. To change the variable type, select the Type in the dropdown menu. Continue the process to add all the variables you need.
Connect XLR by searching in ServiceNow for Digitalai Release -> Properties.
Input the XLR URL, username and password, then click Save. The connection is now complete.
Passing release variables parameters to XLR
Click on Change -> Create new
Fill in the description next to the Short description field
Click the search button next to Assignment group -> Select an assignment group
Scroll down and click the search button next to the Digitalai Release template
Select the name of your template
Enter the values next to the Digitalai variables
Save the Change event by right-clicking and selecting Save. The form will be reloaded upon saving and it will auto-populate the Digitalai Identifier and the Release status.
Variables format:
Text String of any type
List Box List that allows you to choose a default value
Password Must be longer than 8 characters and must include an uppercase character, a lowercase character, a number, and a special character (e.g. !@#$%^&*?{}/). Once the form is saved and reloaded, the password will be masked (e.g. ExamplePassword12!)
Checkbox This is a boolean value. This can be set by either typing false or true
Number This is a number value which can be set to any value
List This is a list of values. Unlike the list box, it does not need to have a default value set
Date This date value can be set using the format (YYYY-MM-DD HH:MM:SS)
Key-value Map This allows you to set Key and value. Using the format: KeyOne:ValOne, KeyThree:ValThree, KeyTwo:ValTwo
Set This allows you to store data without repeating values. Using the format: ValTwo,ValOne,Val_Three
Placeholder This allows you to refer to the values of labels in the change request. Use the $ sign to access the value of a label, e.g. use $requestedby to access the value of the Requested By label. Once saved, the placeholder will be converted to the associated value, e.g. the value System Administrator is referenced by using $requestedby
Important note about editing variables
These values must be created in XLR. They cannot be created or edited by adjusting the variable names or using the plus button. You can edit the variable values in ServiceNow.
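The Key-value Map and Set string formats described above can be sketched as small parsers. This is an illustration of the documented syntax only, not code from the ServiceNow app; the function names are made up.

```python
# Parse the documented "KeyOne:ValOne, KeyTwo:ValTwo" map format and the
# "ValTwo,ValOne,Val_Three" set format into Python structures.

def parse_key_value_map(text: str) -> dict:
    """Split comma-separated 'Key:Value' pairs into a dict."""
    result = {}
    for pair in text.split(","):
        key, _, value = pair.strip().partition(":")
        result[key] = value
    return result

def parse_set(text: str) -> set:
    """Split comma-separated values into a set, dropping repeats."""
    return {item.strip() for item in text.split(",")}

print(parse_key_value_map("KeyOne:ValOne, KeyThree:ValThree, KeyTwo:ValTwo"))
print(parse_set("ValTwo,ValOne,Val_Three"))
```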
Complete a gate task when change request is approved.
In XLR, find the Release with the label In Progress. Click on the release to edit it. (If you do not see your Release, follow the steps in Passing release variables parameters to XLR.)
To create a Gate task, navigate to Core -> Gate. To get this task to run automatically, use the Tag “snow_approved”.
Gates can either run automatically, using the “snow_approved” tag, or be completed manually by clicking Skip.
In ServiceNow, click the Request approval button on the top bar.
Scroll to the bottom and select the Approvers tab. Click the box next to at least two of the approvers' names. In the Actions on selected rows menu, select Approve.
Click the Button Get XLR info on the top bar. This will change the Digitalai Release Status to IN_PROGRESS in ServiceNow.
At the bottom of the page in ServiceNow, once again go to the Approvers tab. More approvers should be visible. Select the approvers and, in the Actions on selected rows menu, select Approve. When the change request gets approved in ServiceNow, this triggers the completion of all gate tasks that contain the tag ‘snow_approved’ in the associated XLR release.
Information on tasks in ServiceNow
Four fields are used in ServiceNow for communicating with Release:
Release Template: the template to use when creating a release from ServiceNow
XLR Identifier: the identifier of a release in Release. This field is created from ServiceNow.
XLR State: the state of the release in Release
Correlation id: the Release task ID for which the last communication was done.
Next to these fields:
A Get XLR info button is available to retrieve the latest status of the release in Release
A Navigate to Release related link navigates to the release in the Release user interface.
|
Medium dungeoneering token box - The RuneScape Wiki
A box filled with a medium quantity of Dungeoneering tokens. Dungeoneering tokens can be used to purchase a wide range of items that are helpful in combat.
A medium dungeoneering token box is an item that could be obtained from Treasure Hunter on 8 December 2015, or later from various sources (e.g. during the Christmas Advent Calendar promotion, from loot piñatas, or if the player has received the skilling backpack from the Port Sarim Invasion event they may receive it as a reward when consuming a charge of the backpack).
The number of tokens found in each box is given by the equation
{\displaystyle {\frac {17}{20}}x^{2}+25}
tokens, where
{\displaystyle x}
is your level in Dungeoneering. At level 120 Dungeoneering it awards 12,265 Dungeoneering tokens when opened.
The dungeoneering token box may be kept in the bank, and players can open them later for more tokens when they reach a higher Dungeoneering level.
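The formula above can be checked against the level table below. The round-to-nearest step in this sketch is inferred from the table values, not stated explicitly on the page.

```python
# Tokens from a medium dungeoneering token box: (17/20) * level^2 + 25,
# rounded to the nearest whole token (integer arithmetic avoids float error).

def tokens(level: int) -> int:
    return (17 * level * level + 10) // 20 + 25

print(tokens(120))  # 12265, matching the value quoted above
```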
Level Tokens Level Tokens Level Tokens Level Tokens
4 39 34 1008 64 3507 94 7536
19 332 49 2066 79 5330 109 10124
Source Level Quantity Rarity
Big event mystery box (Birth by Fire) N/A 1 Uncommon
Big event mystery box (Deathbeard's Demise) N/A 1 Common
Gemfall 1 Unknown
Large sack of loot N/A 1 Uncommon
Small sack of loot N/A 1 Uncommon
Prior to the 15 November 2021 update, the examine text was "A medium sized box filled with who knows how many Dungeoneering tokens."
Although it is shown as non-bankable in the Treasure Hunter interface, the item can be banked.
The inventory image and examine text have been changed.
Players can now auto-redeem Dungeoneering token boxes won from Treasure Hunter.
|
2.3 Mission Control Report (PBL Part A) - Big History School
To work through '2.3 Mission Control Report (PBL Part A)' you need to complete the Activities in order. So first complete ‘Learning Plan’ then move to Activity '2.3.1' followed by Activity '2.3.2' through to '2.3.4', and then finish with ‘Learning Summary’.
MISSION CONTROL REPORT A: Mars background research
In Mission Control Report A you will need to demonstrate to Mission Control that you have a thorough knowledge of Mars by producing an information report.
To get started, read carefully through the Mission Control Report A learning goals below. Make sure you tick each of the check boxes to show that you have read all your Mission Control Report A learning goals.
As you read through the learning goals you may come across some words that you haven’t heard before. Please don’t worry. By the time you finish Mission Control Report A you will have become very familiar with them!
You will come back to these learning goals at the end of Mission Control Report A to see if you have confidently achieved them.
Mars research and notetaking
Research what scientists know about Mars
Take research notes and include sources
Organise research notes into an information report structure
Report writing: drafting, editing, publishing
Write a Mars information report draft
Use a rubric to help edit a Mars information report
Publish a Mars information report
Now that you have completed 'Learning Plan' you need to continue working through the activities in order. So now complete Activity '2.3.1' followed by Activity '2.3.2', Activity '2.3.3', Activity '2.3.4', and then finish with 'Learning Summary'.
Welcome to your first Mission Control Report!
Before you can proceed any further with your Mission, you need to show Mission Control that you have a thorough understanding of Mars by producing a detailed Information Report.
Your teacher will advise you whether you will be presenting your Information Report as a written text, poster, a PowerPoint presentation or any other digital format.
Mission video 12: Mars background research gives you background information about what scientists know about Mars and will cover a lot of the details you’ll need to include in your Information Report.
While you watch Mission video 12: Mars background research look out for the answers to the following questions:
1. Where is Mars located?
2. How big is Mars? How does this compare to Earth?
3. What type of planet is Mars? What is its surface like?
5. What do we know about Mars’ atmosphere?
6. Is there gravity on Mars?
7. What is the temperature range on Mars?
8. How has Mars been explored by humans?
When you read about Mars, it’s easy to sometimes forget that it is a real place with changing weather conditions just like Earth. To find out what the weather and temperature are like on Mars right now, take a look at NASA’s Mars Dashboard. You’ll find a link in Helpful Resources.
And now that you have a bit more background information about Mars, in the next activity you will practice taking notes while you conduct your own further independent research for your Mars Information Report.
https://mars.jpl.nasa.gov/#red_planet/0
The first step in preparing any information report is doing your research and taking notes. It is really important when you’re doing your research to write down where you’ve found your information (these are called your ‘sources’).
In this second Mission Control Report A activity you will continue preparing to write your Mars Information Report by learning how to take notes while you are undertaking your research.
To begin your research, your teacher will print out a copy of the Reading: Exploring Mars for you. It’s always a good idea, when you read something as part of your research, to first go through it carefully with a highlighter. Once you’ve highlighted the most important points, use your own words to write those points on your Notetaking: Mars information report worksheet.
In Helpful Resources you’ll find some suggested websites for further research. Before you begin searching online, remember these research tips that were covered in your pre-mission critical thinking skills training:
Use specific keywords when doing a search, e.g. Mars oxygen, Mars temperature etc.
Add “for kids” at the end of your search phrase for more appropriate results
Don’t just click on the first website - read through the results list before choosing the most relevant one
Look for websites which end in .edu or .gov as they are usually more reliable
Check the last time the website was updated
Go to more than one website to double-check facts.
Once you’ve finished taking notes from the Reading: Exploring Mars, you may like to watch Mission video 12: Mars background research from the previous activity again, pausing the video at important points so that you can take notes on your worksheet.
https://mars.nasa.gov/allaboutmars/facts/#infographic
On the Notetaking: Mars information report worksheet you’ll find a series of questions that you’ll need to answer in order to complete your Mars Information Report. Use this worksheet whenever you are conducting your Mars research so that you can jot down bullet points when you find good, relevant information.
On the right-hand side of your worksheet you’ll find a column with the heading ‘Sources.’ This is where you write the name of the video/book/magazine/website that you got your information from.
Once you have completed your Mars research, you are ready to begin planning the structure of your Mars Information Report.
Organise research notes into an information report structure
Hopefully you found out lots of interesting things about Mars during your research that you didn’t know before!
In this third Mission Control Report A activity you will organize all the interesting Mars research notes you took into a formal information report structure.
You’ll notice on the Planning: Mars information report worksheet that an information report follows a particular structure:
Title: you could keep your Title simple or be a little creative and use alliteration etc.
Introduction: this is a general statement which outlines what the information report is about.
Paragraphs 1,2,3: each paragraph should begin with a topic sentence and address one aspect of Mars. For example:
- Mars’ Atmosphere
- Mars Discoveries
Conclusion: the final paragraph may include any final interesting facts about Mars or information about the possible future of Mars exploration.
Refer back to your research notes and use the Planning: Mars information report worksheet to plan out what you’re going to include in each paragraph of your information report. Remember, this is not your final version so you can use bullet points.
Once you have finished your planning, you are ready to draft, edit and publish your Mars Information Report.
Now that you are familiar with the structure of an information report, and you have completed your planning sheet, you should find writing your Mars Information Report a whole lot easier.
But as you will see in this lesson, reviewing and improving on your first draft before publishing your final Mars Information Report is a very important step!
Before you start writing the first draft of your Mars Information Report, take a thorough look at the Rubric: Mars information report. It details what information you need to include in your report to impress Mission Control.
You will use this rubric during the last three stages of completing your Mars Information Report:
When you are writing your Mars Information Report refer back to the rubric regularly to ensure you have included everything you need to.
2. Reflecting & Editing
(a) Self-assessment: once you have completed your draft Mars Information Report, use the ‘Self’ column in the rubric to assess your work.
(b) Peer-assessment: your teacher will let you know if they will choose a peer (e.g. a classmate) to assess your work or if you can choose your own peer. Your peer will complete the ‘Peer’ column on your rubric.
(c) Based on your own self-assessment and the feedback from your peer, you should review and edit your Mars Information Report, making improvements to ensure you have completed it to the best of your ability.
Once you have reviewed and edited your Mars Information Report and are confident that you have completed it to the best of your ability, hand your information report, along with your completed rubric, to your teacher.
Refer back to the Chart: KWHLAQ which you began in your first lesson and add the most important things you have learned so far about Mars to the “L - What have you Learned?” column. You can find a copy to refresh your memory in Helpful Resources.
Also, check whether, through researching your Mars Information Report, you have answered any of the questions in the “W - What do you Want to know?” column.
Congratulations - you have completed Mission Control Report A!
You are now ready to move on to Mission Phase 3 where you will begin to learn about Earth and the Goldilocks conditions for life...
1.1.3 - Chart - KWHLAQ Example
In Mission Control Report A you demonstrated to Mission Control that you have a thorough knowledge of Mars by producing an information report.
Now it’s time to revisit your Mission Control Report A learning goals and read through them again carefully.
Well done on completing your Learning summary. Click here to complete your Mission Transmission before moving to the next Phase of your Mars Mission.
Once you have checked the boxes to confirm you have achieved your learning goals for Sequence '2.3 Mission Control Report (PBL Part A)' click on the 'I have achieved my learning goals' button below.
Go to 2.4 Mission Transmission »
|
Effect of Hole Geometry on the Thermal Performance of Fan-Shaped Film Cooling Holes | J. Turbomach. | ASME Digital Collection
Michael Gritsch, Will Colban, Heinz Schär, and K. Döbbeling
Brown Boveri Str. 7, Baden 5401, Switzerland
Gritsch, M., Colban, W., Schär, H., and Döbbeling, K. (April 28, 2005). "Effect of Hole Geometry on the Thermal Performance of Fan-Shaped Film Cooling Holes." ASME. J. Turbomach. October 2005; 127(4): 718–725. https://doi.org/10.1115/1.2019315
This study evaluates the impact of typical cooling hole shape variations on the thermal performance of fan-shaped film holes. A comprehensive set of experimental test cases featuring 16 different film-cooling configurations with different hole shapes has been investigated. The shape variations investigated include hole inlet-to-outlet area ratio, hole coverage ratio, hole pitch ratio, hole length, and hole orientation (compound) angle. Flow conditions applied cover a wide range of film blowing ratios M = 0.5 to 2.5 at an engine-representative density ratio DR = 1.7. An infrared thermography data acquisition system is used for highly accurate and spatially resolved surface temperature mappings. Accurate local temperature data are achieved by an in situ calibration procedure with the help of thermocouples embedded in the test plate. Detailed film-cooling effectiveness distributions and discharge coefficients are used for evaluating the thermal performance of a row of fan-shaped film holes. An extensive variation of the main geometrical parameters describing a fan-shaped film-cooling hole is done to cover a wide range of typical film-cooling applications in current gas turbine engines. Within the range investigated, laterally averaged film-cooling effectiveness was found to show only limited sensitivity to variations of the hole geometry parameters. This offers the potential to tailor the hole geometry according to needs beyond pure cooling performance, e.g., manufacturing facilitations.
gas turbines, orifices (mechanical), cooling, temperature measurement, thermocouples, infrared imaging
Film cooling, Discharge coefficient, Flow (Dynamics)
|
Overview of the powseries Package
List of powseries Package Commands
powseries[command](arguments)
The powseries package contains commands to create and manipulate formal power series represented in general form.
Each command in the powseries package can be accessed by using either the long form or the short form of the command name in the command calling sequence.
evalpow
Other trigonometric functions, such as tan(x) and sec(x), and hyperbolic functions, such as sinh(x), can be generated through the powseries[evalpow] command.
To display the help page for a particular powseries command, see Getting Help with a Command in a Package.
For more information on the general representation of formal power series, see powseries[powseries].
a := powseries[powexp](x):
b := powseries[tpsform](a, x, 5);

        b := 1 + x + 1/2 x^2 + 1/6 x^3 + 1/24 x^4 + O(x^5)

with(powseries):
c := powadd(powpoly(1 + x^2 + x, x), powlog(1 + x)):
d := tpsform(c, x, 6);

        d := 1 + 2 x + 1/2 x^2 + 1/3 x^3 - 1/4 x^4 + 1/5 x^5 + O(x^6)

e := evalpow(sinh(x)):
tpsform(e, x, 5);

        x + 1/6 x^3 + O(x^5)

g := tpsform(powdiff(powsin(x)), x, 6);

        g := 1 - 1/2 x^2 + 1/24 x^4 + O(x^6)

h := evalpow(Tan(x)):
k := tpsform(negative(h), x, 5);

        k := -x - 1/3 x^3 + O(x^5)
|
by Michael Niedermayer
michaelni at gmx dot at
This document assumes familiarity with mathematical and coding concepts such as Range coding and YCbCr colorspaces.
The key words MUST, MUST NOT, SHOULD, and SHOULD NOT in this document are to be interpreted as described in RFC 2119.
For reference, below is an excerpt of RFC 2119:
"MUST" means that the definition is an absolute requirement of the specification.
"MUST NOT" means that the definition is an absolute prohibition of the specification.
"SHOULD" means that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.
"SHOULD NOT" means that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.
ESC Escape symbol to indicate that the symbol to be stored is too large for normal storage and a different method is used to store it.
MSB Most significant bit, the bit that can cause the largest change in magnitude of the symbol.
RCT Reversible Color Transform, a near linear, exactly reversible integer transform that converts between RGB and YCbCr representations of a sample.
VLC Variable length code.
Note: the operators and the order of precedence are the same as used in the C programming language ISO/IEC 9899.
a / b means a divided by b with truncation of the result toward zero.
a % b means remainder of a divided by b.
a >> b means arithmetic right shift of two’s complement integer representation of a by b binary digits.
a++ is equivalent to a = a + 1.
a-- is equivalent to a = a - 1.
a += b is equivalent to a = a + b.
a -= b is equivalent to a = a - b.
a > b means a greater than b.
a >= b means a greater than or equal to b.
a < b means a less than b.
a <= b means a less than or equal b.
a == b means a equal to b.
a != b means a not equal to b.
a ? b : c means b if a is true otherwise c.
a = b, a += b, a -= b
remaining_bits_in_bitstream( ) means the count of remaining bits after the current position in the bitstream. It is computed from the NumBytes value multiplied by 8 minus the count of bits already read by the bitstream parser.
Each frame is split into 1 to 4 planes (Y, Cb, Cr, Alpha). In the case of the normal YCbCr colorspace the Y plane is coded first, followed by the Cb and Cr planes; if an Alpha/transparency plane exists, it is coded last. In the case of the JPEG2000-RCT colorspace the lines are interleaved to improve caching efficiency, since it is most likely that the RCT will immediately be converted to RGB during decoding; the interleaved coding order is also Y, Cb, Cr, Alpha.
For the purpose of the predictor and context, samples above the coded slice are assumed to be 0; samples to the right of the coded slice are identical to the closest left sample; samples to the left of the coded slice are identical to the top right sample (if there is one), otherwise 0.
Note, this is also used in JPEG-LS and HuffYuv.
tl t tr
context=Q_{0}[l-tl]+\left|Q_{0}\right|(Q_{1}[tl-t]+\left|Q_{1}\right|(Q_{2}[t-tr]+\left|Q_{2}\right|(Q_{3}[L-l]+\left|Q_{3}\right|Q_{4}[T-t])))
Q_{i}[a-b]=Table_{i}[(a-b)\&255]
Cb=b-g
Cr=r-g
Y=g+(Cb+Cr)>>2
g=Y-(Cb+Cr)>>2
r=Cr+g
b=Cb+g
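The RCT equations above can be exercised directly. This sketch assumes the shift applies to (Cb + Cr) alone, which is what makes the inverse exact; Python's >> is an arithmetic right shift, matching the operator definition given earlier.

```python
# Forward and inverse JPEG2000-RCT as given above; the round trip is lossless.

def rct_forward(r, g, b):
    cb = b - g
    cr = r - g
    y = g + ((cb + cr) >> 2)
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - ((cb + cr) >> 2)
    r = cr + g
    b = cb + g
    return r, g, b

# The shift term cancels exactly because Cb and Cr are transmitted verbatim.
for rgb in [(0, 0, 0), (255, 0, 128), (17, 200, 3)]:
    assert rct_inverse(*rct_forward(*rgb)) == rgb
```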
Instead of coding the n+1 bits of the sample difference with Huffman or range coding (or n+2 bits in the case of RCT), only the n (or n+1) least significant bits are used, since this is sufficient to recover the original sample. In the equation below, the term "bits" represents bits_per_raw_sample+1 for RCT or bits_per_raw_sample otherwise:
coder\_input=\left[\left(sample\_difference+2^{bits-1}\right)\&\left(2^{bits}-1\right)\right]-2^{bits-1}
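The folding equation above can be written out as a one-liner; this sketch is illustrative, with bits = bits_per_raw_sample (+1 for RCT) as defined in the text.

```python
# Keep only the "bits" least significant bits of the sample difference and
# re-centre the result into [-2**(bits-1), 2**(bits-1) - 1].

def coder_input(sample_difference: int, bits: int) -> int:
    half = 1 << (bits - 1)
    return ((sample_difference + half) & ((1 << bits) - 1)) - half

print(coder_input(255, 8))  # -1: the same sample is recovered modulo 256
```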
Early experimental versions of FFV1 used the CABAC arithmetic coder from H.264, but due to the uncertain patent/royalty situation, as well as its slightly worse performance, CABAC was replaced by a range coder based on an algorithm defined by G. Nigel N. Martin in 1979 RangeCoder.
C_{i}
B_{i}
b_{i}
S_{0,i}
j_{n}
r_{i}=\left\lfloor \frac{R_{i}S_{i,C_{i}}}{2^{8}}\right\rfloor
\begin{array}{ccccccccc} S_{i+1,C_{i}}=zero\_state_{S_{i,C_{i}}} & \wedge & l{}_{i}=L_{i} & \wedge & t_{i}=R_{i}-r_{i} & \Longleftarrow & b_{i}=0 & \Longleftrightarrow & L_{i}<R_{i}-r_{i}\\ S_{i+1,C_{i}}=one\_state_{S_{i,C_{i}}} & \wedge & l_{i}=L_{i}-R_{i}+r_{i} & \wedge & t_{i}=r_{i} & \Longleftarrow & b_{i}=1 & \Longleftrightarrow & L_{i}\geq R_{i}-r_{i} \end{array}
\begin{array}{ccc} S_{i+1,k}=S_{i,k} & \Longleftarrow & C_{i}\neq k \end{array}
\begin{array}{ccccccc} R_{i+1}=2^{8}t_{i} & \wedge & L_{i+1}=2^{8}l_{i}+B_{j_{i}} & \wedge & j_{i+1}=j_{i}+1 & \Longleftarrow & t_{i}<2^{8}\\ R_{i+1}=t_{i} & \wedge & L_{i+1}=l_{i} & \wedge & j_{i+1}=j_{i} & \Longleftarrow & t_{i}\geq2^{8} \end{array}
R_{0}=65280
L_{0}=2^{8}B_{0}+B_{1}
j_{0}=2
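The refinement and renormalization rules above can be sketched as plain functions using the same symbols (R, L, S, B, j). This is a minimal illustration of one decoder step, not the reference implementation; state adaptation (zero_state/one_state) is omitted.

```python
# One range-coder refinement: split the range R at r = floor(R * S / 2**8)
# and decode bit b by comparing the low value L against R - r.

def refine(R, L, S):
    r = (R * S) >> 8
    if L < R - r:                 # b = 0: keep the lower sub-range
        return 0, R - r, L
    return 1, r, L - (R - r)      # b = 1: keep the upper sub-range

# Renormalization: when the new range t drops below 2**8, scale up by one
# byte and pull the next bytestream byte B[j] into the low value.

def renorm(R, L, stream, j):
    if R < 1 << 8:
        return R << 8, (L << 8) + stream[j], j + 1
    return R, L, j
```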
To encode scalar integers it would be possible to encode each bit separately and use the past bits as context. However that would mean 255 contexts per 8-bit symbol which is not only a waste of memory but also requires more past data to reach a reasonably good estimate of the probabilities. Alternatively assuming a Laplacian distribution and only dealing with its variance and mean (as in Huffman coding) would also be possible, however, for maximum flexibility and simplicity, the chosen method uses a single symbol to encode if a number is 0 and if not encodes the number using its exponent, mantissa and sign. The exact contexts used are best described by the following code, followed by some comments.
void put_symbol(RangeCoder *c, uint8_t *state, int v, int is_signed) {
    int i;
    put_rac(c, state + 0, !v);
    if (v) {
        int a = ABS(v);
        int e = log2(a);
        for (i = 0; i < e; i++)
            put_rac(c, state + 1 + MIN(i, 9), 1);  // 1..10
        put_rac(c, state + 1 + MIN(i, 9), 0);
        for (i = e - 1; i >= 0; i--)
            put_rac(c, state + 22 + MIN(i, 9), (a >> i) & 1);  // 22..31
        if (is_signed)
            put_rac(c, state + 11 + MIN(e, 10), v < 0);  // 11..21
    }
}
one\_state_{i}=default\_state\_transition_{i}+state\_transition\_delta_{i}
zero\_state_{i}=256-one\_state_{256-i}
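The two formulas can be realized as in the following sketch. The default_state_transition values used below are a placeholder ramp, not the actual table from this specification; only the construction (adding the deltas, then mirroring) is illustrated.

```c
#include <stdint.h>

/* Derive one_state from the default table plus per-entry deltas, then
 * obtain zero_state by the mirror relation
 * zero_state[i] = 256 - one_state[256 - i].
 * Entry 0 is unused by the coder and left untouched. */
static void build_state_tables(const uint8_t default_state_transition[256],
                               const int16_t state_transition_delta[256],
                               uint8_t one_state[256],
                               uint8_t zero_state[256]) {
    for (int i = 1; i < 256; i++)
        one_state[i] =
            (uint8_t)(default_state_transition[i] + state_transition_delta[i]);
    for (int i = 1; i < 256; i++)
        zero_state[i] = (uint8_t)(256 - one_state[256 - i]);
}
```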
The alternative state transition table has been built using iterative minimization of frame sizes and generally performs better than the default. To use it, the coder_type has to be set to 2 and the difference from the default table has to be stored in the parameters. The reference implementation of FFV1 in FFmpeg uses this table by default at the time of this writing when Range coding is used.
This coding mode uses Golomb-Rice codes. The VLC code is split into two parts: the prefix stores the most significant bits, and the suffix stores the k least significant bits, or stores the whole number in the ESC case. The end of the bitstream (of the frame) is filled with 0-bits so that the bitstream contains a multiple of 8 bits.
Run mode is entered when the context is 0 and is left as soon as a non-0 difference is found. Within a run, the level is identical to the predicted one; the run length and the first different level are coded.
log2_run[41]={
0, 0, 0, 0, 1, 1, 1, 1,
8, 9,10,11,12,13,14,15,
if (run_count == 0 && run_mode == 1) {
    if (get_bits1()) {
        run_count = 1 << log2_run[run_index];
        if (x + run_count <= w)
            run_index++;
    } else {
        if (log2_run[run_index])
            run_count = get_bits(log2_run[run_index]);
        else
            run_count = 0;
        if (run_index)
            run_index--;
        run_mode = 2;
    }
}
The log2_run function is also used within JPEG-LS.
if(diff>0) diff--;
In the case of a bitstream with version >= 2, a configuration record is stored in the underlying container, at the track header level. It contains the parameters used for all frames. The size of the configuration record, NumBytes, is supplied by the underlying container.
ConfigurationRecord( NumBytes ) {
while( remaining_bits_in_bitstream( ) > 32 )
reserved_for_future_use // u(1)
configuration_record_crc_parity // u(32)
configuration_record_crc_parity 32 bits that are chosen so that the configuration record as a whole has a CRC remainder of 0. This is equivalent to storing the CRC remainder in the 32-bit parity. The CRC generator polynomial used is the standard IEEE CRC polynomial (0x104C11DB7) with initial value 0.
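The parity construction can be checked with a straightforward bitwise implementation of that CRC (MSB-first, generator 0x104C11DB7 written as 0x04C11DB7 for the 32 low-order bits, initial value 0, no final inversion). With initial value 0, appending the 4 remainder bytes to the message makes the CRC of the whole equal 0, which is exactly the property the parity field relies on. This sketch is illustrative, not taken from any reference implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 with the standard IEEE generator polynomial,
 * MSB first, initial value 0, no final XOR. */
static uint32_t crc32_ieee(const uint8_t *buf, size_t len) {
    uint32_t crc = 0;  /* initial value 0, as stated above */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)buf[i] << 24;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
    }
    return crc;
}
```

Writing the 4-byte remainder of a message after the message itself and re-running the CRC over the whole buffer yields 0, so a decoder can validate the record with a single CRC pass.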
The Configuration Record extends the stream format chunk ("AVI ", "hdrl", "strl", "strf") with the ConfigurationRecord bitstream. See AVI for more information about chunks.
The Configuration Record extends the sample description box ("moov", "trak", "mdia", "minf", "stbl", "stsd") with a "glbl" box which contains the ConfigurationRecord bitstream. See ISO14496_12 for more information about boxes.
The codec_specific_data element (in "stream_header" packet) contains the ConfigurationRecord bitstream. See NUT for more information about elements.
Frame( ) { type
keyframe br
if( keyframe && !ConfigurationRecordIsPresent )
for( i = 0; i < slice_count; i++ )
Slice( i )
Slice( i ) { type
if( version > 2 )
SliceHeader( i )
if( colorspace_type == 0) {
for( p = 0; p < primary_color_count; p++ ) {
Plane( p )
} else if( colorspace_type == 1 ) {
if( i || version > 2 )
slice_size u(24)
if( ec ) {
error_status u(8)
slice_crc_parity u(32)
slice_crc_parity 32 bits that are chosen so that the slice as a whole has a CRC remainder of 0. This is equivalent to storing the CRC remainder in the 32-bit parity. The CRC generator polynomial used is the standard IEEE CRC polynomial (0x104C11DB7) with initial value 0.
SliceHeader( i ) { type
slice_x ur
slice_y ur
slice_width - 1 ur
slice_height - 1 ur
for( j = 0; j < quant_table_index_count; j++ )
quant_table_index [ i ][ j ] ur
picture_structure ur
sar_num ur
sar_den ur
if( version > 3 ) {
reset_contexts br
slice_coding_mode ur
slice_width indicates the width on the slice raster. Inferred to be 1 if not present.
slice_height indicates the height on the slice raster. Inferred to be 1 if not present.
quant_table_index_count is defined as 1 + ( ( chroma_planes || version < 4 ) ? 1 : 0 ) + ( alpha_plane ? 1 : 0 ).
picture_structure indicates the picture structure used.
Parameters( ) { type
micro_version ur
coder_type ur
if( coder_type > 1 )
state_transition_delta[ i ] sr
colorspace_type ur
bits_per_raw_sample ur
chroma_planes br
log2( h_chroma_subsample ) ur
log2( v_chroma_subsample ) ur
alpha_plane br
num_h_slices - 1 ur
num_v_slices - 1 ur
quant_table_count ur
for( i = 0; i < quant_table_count; i++ )
QuantizationTable( i )
for( i = 0; i < quant_table_count; i++ ) {
states_coded br
if( states_coded )
for( j = 0; j < context_count[ i ]; j++ )
for( k = 0; k < CONTEXT_SIZE; k++ )
initial_state_delta[ i ][ j ][ k ] sr
intra ur
version specifies the version of the bitstream. Each version is incompatible with other versions: decoders SHOULD reject a file with an unknown version. Decoders SHOULD reject a file with version < 2 && ConfigurationRecordIsPresent == 1. Decoders SHOULD reject a file with version >= 2 && ConfigurationRecordIsPresent == 0.
Meaning of micro_version for version 4 (note: at the time of writing of this specification, version 4 is not considered stable, so the first stable version value is to be announced in the future):
1 JPEG 2000 RCT
chroma\_width=2^{-log2\_h\_chroma\_subsample}luma\_width
chroma\_height=2^{-log2\_v\_chroma\_subsample}luma\_height
slice_count indicates the number of slices in the current frame, slice_count is 1 if it is not explicitly coded.
0 32bit CRC on the global header
1 32bit CRC per slice and the global header
0 frames are independent or dependent (key and non key frames)
1 frames are independent (key frames only)
QuantizationTable( i ) { // type
for( j = 0; j < MAX_CONTEXT_INPUTS; j++ ) {
QuantizationTablePerContext( i, j, scale )
scale *= 2 * len_count[ i ][ j ] - 1
context_count[ i ] = ( scale + 1 ) / 2
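The scale accumulation in the pseudocode above determines the number of contexts per quantization table; the following hypothetical helper mirrors it:

```c
/* context_count as accumulated in QuantizationTable( i ):
 * scale starts at 1 and is multiplied by (2*len_count - 1) for each of
 * the context inputs; the final count is (scale + 1) / 2 because the
 * negative half of each symmetric quantization table mirrors the
 * positive half. */
static int context_count_from_lens(const int len_count[], int inputs) {
    int scale = 1;
    for (int j = 0; j < inputs; j++)
        scale *= 2 * len_count[j] - 1;
    return (scale + 1) / 2;
}
```

For example, two context inputs each quantized to 2 levels give 3 * 3 = 9 signed combinations, which collapse to 5 distinct contexts after exploiting the sign symmetry.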
QuantizationTablePerContext(i, j, scale) { type
for( k = 0; k < 128; ) {
len - 1 sr
for( a = 0; a < len; a++ ) {
quant_tables[ i ][ j ][ k ] = scale* v
for( k = 1; k < 128; k++ ) {
quant_tables[ i ][ j ][ 256 - k ] = -quant_tables[ i ][ j ][ k ]
quant_tables[ i ][ j ][ 128 ] = -quant_tables[ i ][ j ][ 127 ]
len_count[ i ][ j ] = v
In version 2 and later, the maximum slice size in pixels is $\frac{width\cdot height}{4}$ unless the frame is smaller than or equal to 352x288; this ensures that fast multithreaded decoding is possible.
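One reading of that constraint, expressed as a hypothetical helper (the name and the interpretation that small frames may be a single slice are assumptions, not taken from a reference implementation):

```c
/* Maximum slice size in pixels for version >= 2: a quarter of the frame,
 * except that frames no larger than 352x288 may consist of a single
 * slice covering the whole frame. */
static int max_slice_pixels(int width, int height) {
    if (width <= 352 && height <= 288)
        return width * height;
    return (width * height) / 4;
}
```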
JPEG-LS FCD 14495 http://www.jpeg.org/public/fcd14495p.pdf
HuffYuv http://cultact-server.novi.dk/kpo/huffyuv/huffyuv.html
FFmpeg http://ffmpeg.org
JPEG2000 http://www.jpeg.org/jpeg2000/
Information technology Coding of audio-visual objects Part 12: ISO base media file format http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=61988
NUT Open Container Format http://www.ffmpeg.org/~michael/nut.txt
|
Asymptotic Self-Similarity for Solutions of Partial Integro-Differential Equations | EMS Press
The question is studied whether weak solutions of linear partial integro-differential equations approach a constant spatial profile after rescaling, as time goes to infinity. The possible limits and corresponding scaling functions are identified and are shown to actually occur. The limiting equations are fractional diffusion equations which are known to have self-similar fundamental solutions. For an important special case, it is shown that the asymptotic profile is Gaussian and convergence holds in
L^2
, that is, solutions behave like fundamental solutions of the heat equation to leading order. Systems of integro-differential equations occurring in viscoelasticity are also discussed, and their solutions are shown to behave like fundamental solutions of a related Stokes system. The main assumption is that the integral kernel in the equation is regularly varying in the sense of Karamata.
Hans Engler, Asymptotic Self-Similarity for Solutions of Partial Integro-Differential Equations. Z. Anal. Anwend. 26 (2007), no. 4, pp. 417–438
|
\mathrm{with}\left(\mathrm{geom3d}\right):
\mathrm{icosidodecahedron}\left(t,\mathrm{point}\left(o,0,0,0\right),1\right)
\textcolor[rgb]{0,0,1}{t}
Access information relating to the icosidodecahedron
t
\mathrm{center}\left(t\right)
\textcolor[rgb]{0,0,1}{o}
\mathrm{form}\left(t\right)
\textcolor[rgb]{0,0,1}{\mathrm{icosidodecahedron3d}}
\mathrm{radius}\left(t\right)
\textcolor[rgb]{0,0,1}{1}
\mathrm{schlafli}\left(t\right)
[[\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{5}]]
\mathrm{sides}\left(t\right)
\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\sqrt{\textcolor[rgb]{0,0,1}{5}}}{\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}\sqrt{\textcolor[rgb]{0,0,1}{5}}}
Define a cuboctahedron with center (1,1,1), radius
\sqrt{2}
\mathrm{QuasiRegularPolyhedron}\left(i,[[3],[4]],\mathrm{point}\left(o,1,1,1\right),1\right)
\textcolor[rgb]{0,0,1}{i}
\mathrm{form}\left(i\right)
\textcolor[rgb]{0,0,1}{\mathrm{cuboctahedron3d}}
|
Take a moment to look closely at the periodic table and see how many elements you immediately recognize.
Step 6. Observe what happens to the pepper granules after you have rotated the water in the cup for a while. Record your observations using labeled diagrams in the Observe box on the worksheet. Complete a “before rotation” diagram of what the pepper looked like beforehand and an “after rotation” diagram of what the pepper looked like afterwards. Write at least one sentence to explain your observation.
Step 7. To finalize the process, explain what you observed in this demonstration and how it is similar to the process of accretion in the final Explain box on the worksheet.
While you watch Mission video 11, “Our sun and neighboring planets”, look out for the answers to the following questions:
When you hear that the circumference of Earth is 25,000 miles or that the solar system stretches for billions of miles, can you really even begin to imagine how big these measurements are?
In this activity you will make a simple Earth, Moon and Mars scale model to try to understand how huge the distances are in our solar system - even between our nearest neighbors. First you will model the relative sizes of Earth, the Moon and Mars. Then you will model the relative distances between Earth, the Moon and Mars.
In this model 1 inch = 1000 miles.
As a point of reference, Earth has a circumference (length around it) of approximately 25,000 miles.
Red, white and blue (or 3 different colored) balloons
Step 1. Inflate your blue balloon until you estimate the circumference of the balloon is approximately 25 inches.
Step 2. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 25 inches.
Step 4. Mars is approximately half the size of Earth so you will inflate your balloon until you estimate the circumference is approximately 13 inches.
Step 5. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 13 inches.
Step 7. Earth’s Moon is approximately half the size of Mars so you will inflate your balloon until you estimate the circumference is approximately 6 inches.
Step 8. Using the tape measure, measure around the balloon and let air out or blow more air in until the circumference of your balloon is 6 inches.
Remember that, as in the size scale model, 1 inch = 1000 miles
Ruler, measuring tape or trundle wheel
The distance between Earth and the moon has been rounded up to 250,000 miles.
Step 2. Using a ruler, measuring tape or trundle wheel, measure out (250 inches) 21 feet and secure the Moon balloon there. This is the distance from Earth to the Moon to scale.
The closest Mars ever orbits to Earth is 37 million miles (rounded down). To accurately represent the distance of Earth to Mars on this scale, you would have to travel (37,000 inches) 3000 feet from your Earth starting point.
It’s pretty difficult to estimate how far 3000 feet is! Your teacher may point out a landmark for you that is 3000 feet away. Or, if you have access to Google Maps, you could work out exactly where 3000 feet away is by:
Adjusting the length of the path until it is 3000 feet
If Mars is Earth’s neighbor, can you imagine how much the greater distance to the furthest planet, Neptune, must be? How long would you estimate it would take to travel to Neptune?
|
factor an ordinal number
Factor(a, output=o, form=f)
(optional) literal keyword; one of full (default), monic, rmonic or pairs
If output=list (the default), a list of ordinals, nonnegative integers and polynomials with positive integer coefficients is returned.
Otherwise, if output=inert is specified, an inert product of ordinal numbers using the inert multiplication and exponentiation operators &. and &^, respectively, is returned. Factors equal to
1
are omitted from this product representation.
The Factor(a) calling sequence computes a factored normal form of
as a product of nonnegative integers and ordinals of the form
{\mathbf{\omega }}^{d}
{\mathbf{\omega }}^{d}+1
a={\mathbf{\omega }}^{{e}_{1}}\cdot {c}_{1}+⋯+{\mathbf{\omega }}^{{e}_{k-1}}\cdot {c}_{k-1}+{\mathbf{\omega }}^{{e}_{k}}\cdot {c}_{k}
, then the full factored normal form is:
{\mathbf{\omega }}^{{d}_{k}}\cdot {c}_{k}\cdot \left({\mathbf{\omega }}^{{d}_{k-1}}+1\right)\cdot {c}_{k-1}\cdot \dots \cdot \left({\mathbf{\omega }}^{{d}_{1}}+1\right)\cdot {c}_{1}
{d}_{k}={e}_{k}
{e}_{i+1}={e}_{i}+{d}_{i}
1\le i<k
Each factor
{b}_{i}={\mathbf{\omega }}^{{d}_{i}}+1
is irreducible in the sense that if
{b}_{i}=u\cdot v
for some ordinals
u
v
u=1
v=1
{b}_{i}={u}^{v}
u
v
u={b}_{i}
v=1
The monic factored normal form is:
{\mathbf{\omega }}^{{d}_{k}}\cdot \left({\mathbf{\omega }}^{{d}_{k-1}}+{c}_{k}\right)\cdot \dots \cdot \left({\mathbf{\omega }}^{{d}_{1}}+{c}_{2}\right)\cdot {c}_{1}
The rmonic factored normal form is:
{\mathbf{\omega }}^{{d}_{k}}\cdot {c}_{k}\cdot \left({\mathbf{\omega }}^{{d}_{k-1}}\cdot {c}_{k-1}+1\right)\cdot \dots \cdot \left({\mathbf{\omega }}^{{d}_{1}}\cdot {c}_{1}+1\right)
If form=pairs is specified, then the result is returned in the form
[[{d}_{k},{c}_{k}],[{d}_{k-1},{c}_{k-1}],\mathrm{...},[{d}_{1},{c}_{1}]]
a
can be parametric. However, unless all coefficients
{c}_{i}
are positive when substituting arbitrary nonnegative integers for all the parameters, an error will be raised.
\mathrm{with}\left(\mathrm{Ordinals}\right)
[\textcolor[rgb]{0,0,1}{\mathrm{`+`}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{`.`}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{`<`}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{<=}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Add}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Base}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Dec}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Decompose}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Div}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Eval}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Factor}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Gcd}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Lcm}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{LessThan}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Log}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Max}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Min}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Mult}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Ordinal}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Power}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Split}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{Sub}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{`^`}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{degree}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{lcoeff}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{log}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{lterm}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{\omega }}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{quo}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{rem}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{tcoeff}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{tdegree}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{tterm}}]
a≔\mathrm{Ordinal}\left([[\mathrm{\omega },5],[9,4],[7,3],[5,3],[3,3],[2,2]]\right)
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{7}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{2}
\mathrm{Factor}\left(a\right)
[{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}]
Display the result as a product, and verify the answer.
\mathrm{Factor}\left(a,\mathrm{output}=\mathrm{inert}\right)
{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left(\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{5}
\mathrm{value}\left(\right)
{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{7}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{2}
Other output forms. Note the grouping of similar factors.
\mathrm{Factor}\left(a,\mathrm{output}=\mathrm{inert},\mathrm{form}=\mathrm{monic}\right)
{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left(\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}{\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{3}\right)}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{4}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\textcolor[rgb]{0,0,1}{5}
\mathrm{Factor}\left(a,\mathrm{output}=\mathrm{inert},\mathrm{form}=\mathrm{rmonic}\right)
\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{2}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left(\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}{\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)\textcolor[rgb]{0.564705882352941,0.564705882352941,0.564705882352941}{\mathbf{\cdot }}\left({\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\right)
Just the bare data of the full factored normal form, and the original data of the Cantor normal form, for comparison.
\mathrm{Factor}\left(a,\mathrm{form}=\mathrm{pairs}\right)
[[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}]]
\mathrm{op}\left(a\right)
[[\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{7}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]]
\mathrm{Factor}\left(a+x\right)
Error, (in Ordinals:-Factor) cannot determine if x is nonzero
\mathrm{Factor}\left(a+x+7,\mathrm{form}=\mathrm{rmonic}\right)
[\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{7}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathbf{\omega }}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{1}]
\mathrm{Mult}\left(\mathrm{op}\left(\right)\right)
{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{9}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{7}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{+}{\textcolor[rgb]{0,0,1}{\mathbf{\omega }}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{\cdot }\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{+}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{7}\right)
The Ordinals[Factor] command was introduced in Maple 2015.
|
I intend to publish a translation of a sketch of the life of Erasmus Darwin by Dr Krause; & I shall prefix to it a preliminary notice by myself, consisting of about 130 folio M.S. pages.1
I intend to have rather large type on thickish paper with cut gold edges—
There will be three wood-cuts & a photograph of Dr D.
The book therefore will be for its size rather expensive.2
I should guess that it wd be about 200 pages. I have endeavoured to make my notice interesting to the public, but whether I have succeeded is quite doubtful. Dr Krause’s part relates chiefly to my grandfather on evolution. I have written to Messrs Clowes to ask them whether they can oblige me by setting up the whole of my preliminary notice in slips.3 You can then if you please see a copy; & decide whether you will publish it on commission for me, or on our old terms of
\frac{2}{3}
profit.4
I am quite incapable of forming any judgement of the chance of the little book selling fairly well.
CD had written a biographical sketch of his grandfather Erasmus Darwin to accompany an English translation of Ernst Krause’s essay (Krause 1879a).
In the event, the published book contained a portrait of Erasmus Darwin as the frontispiece and two woodcuts, one of Elston Hall, where Erasmus was born, and one of Breadsall Priory, where he died (Erasmus Darwin, pp. 3, 125); it was sold for 7s. 6d. (see letter from R. F. Cooke, 25 October 1879).
The letter to William Clowes & Sons, printers to John Murray, has not been found.
Murray usually published CD’s books at his own expense and paid CD a percentage of the profits on publication (an advance against royalties).
Intends to publish a translation of Ernst Krause’s essay on Dr Erasmus Darwin, with a prefatory notice by himself. Asks JM to decide whether to publish it on commission or on usual two-thirds profit terms. CD incapable of judging chance of its selling.
|
Beth number - formulasearchengine
In mathematics, the infinite cardinal numbers are represented by the Hebrew letter
{\displaystyle \aleph }
(aleph) indexed with a subscript that runs over the ordinal numbers (see aleph number). The second Hebrew letter
{\displaystyle \beth }
(beth) is used in a related way, but does not necessarily index all of the numbers indexed by
{\displaystyle \aleph }
To define the beth numbers, start by letting
{\displaystyle \beth _{0}=\aleph _{0}}
be the cardinality of any countably infinite set; for concreteness, take the set
{\displaystyle \mathbb {N} }
of natural numbers to be a typical case. Denote by P(A) the power set of A; i.e., the set of all subsets of A. Then define
{\displaystyle \beth _{\alpha +1}=2^{\beth _{\alpha }},}
which is the cardinality of the power set of A if
{\displaystyle \beth _{\alpha }}
is the cardinality of A.
Given this definition,
{\displaystyle \beth _{0},\ \beth _{1},\ \beth _{2},\ \beth _{3},\ \dots }
are respectively the cardinalities of
{\displaystyle \mathbb {N} ,\ P(\mathbb {N} ),\ P(P(\mathbb {N} )),\ P(P(P(\mathbb {N} ))),\ \dots .}
so that the second beth number
{\displaystyle \beth _{1}}
{\displaystyle {\mathfrak {c}}}
, the cardinality of the continuum, and the third beth number
{\displaystyle \beth _{2}}
is the cardinality of the power set of the continuum.
Because of Cantor's theorem each set in the preceding sequence has cardinality strictly greater than the one preceding it. For infinite limit ordinals λ the corresponding beth number is defined as the supremum of the beth numbers for all ordinals strictly smaller than λ:
{\displaystyle \beth _{\lambda }=\sup\{\beth _{\alpha }:\alpha <\lambda \}.}
One can also show that the von Neumann universes
{\displaystyle V_{\omega +\alpha }\!}
have cardinality
{\displaystyle \beth _{\alpha }\!}
Relation to the aleph numbers
Assuming the axiom of choice, infinite cardinalities are linearly ordered; no two cardinalities can fail to be comparable. Thus, since by definition no infinite cardinalities are between {\displaystyle \aleph _{0}} and {\displaystyle \aleph _{1}}, it follows that {\displaystyle \beth _{1}\geq \aleph _{1}.}
Repeating this argument (see transfinite induction) yields {\displaystyle \beth _{\alpha }\geq \aleph _{\alpha }} for all ordinals {\displaystyle \alpha }.
The continuum hypothesis is equivalent to {\displaystyle \beth _{1}=\aleph _{1}.} The generalized continuum hypothesis says the sequence of beth numbers thus defined is the same as the sequence of aleph numbers, i.e., {\displaystyle \beth _{\alpha }=\aleph _{\alpha }} for all ordinals {\displaystyle \alpha }.
Specific cardinals
Beth null
Since {\displaystyle \beth _{0}} is defined to be {\displaystyle \aleph _{0}}, or aleph null, sets with cardinality {\displaystyle \beth _{0}} include:
the natural numbers N
the algebraic numbers
the computable numbers and computable sets
the set of finite sets of integers
Beth one
Sets with cardinality {\displaystyle \beth _{1}} include:
the transcendental numbers
the complex numbers C
Euclidean space Rn
the power set of the natural numbers (the set of all subsets of the natural numbers)
the set of sequences of integers (i.e. all functions N → Z, often denoted ZN)
the set of sequences of real numbers, RN
the set of all continuous functions from R to R
the set of finite subsets of real numbers
Beth two
{\displaystyle \beth _{2}} (pronounced beth two) is also referred to as 2c (pronounced two to the power of c). Sets with cardinality {\displaystyle \beth _{2}} include:
The power set of the set of real numbers, so it is the number of subsets of the real line, or the number of sets of real numbers
The power set of the power set of the set of natural numbers
The set of all functions from R to R (RR)
The set of all functions from Rm to Rn
The power set of the set of all functions from the set of natural numbers to itself, so it is the number of sets of sequences of natural numbers
The Stone–Čech compactifications of R, Q, and N
Beth omega
{\displaystyle \beth _{\omega }} (pronounced beth omega) is the smallest uncountable strong limit cardinal.
The more general symbol {\displaystyle \beth _{\alpha }(\kappa )}, for ordinals α and cardinals κ, is occasionally used. It is defined by: {\displaystyle \beth _{0}(\kappa )=\kappa ,} {\displaystyle \beth _{\alpha +1}(\kappa )=2^{\beth _{\alpha }(\kappa )},} and {\displaystyle \beth _{\lambda }(\kappa )=\sup\{\beth _{\alpha }(\kappa ):\alpha <\lambda \}} if λ is a limit ordinal. With this notation, {\displaystyle \beth _{\alpha }=\beth _{\alpha }(\aleph _{0}).}
In ZF, for any cardinals κ and μ, there is an ordinal α such that {\displaystyle \kappa \leq \beth _{\alpha }(\mu ).} And in ZF, for any cardinal κ and ordinals α and β: {\displaystyle \beth _{\beta }(\beth _{\alpha }(\kappa ))=\beth _{\alpha +\beta }(\kappa ).}
Consequently, in Zermelo–Fraenkel set theory absent ur-elements with or without the axiom of choice, for any cardinals κ and μ, the equality
{\displaystyle \beth _{\beta }(\kappa )=\beth _{\beta }(\mu )}
holds for all sufficiently large ordinals β (that is, there is an ordinal α such that the equality holds for every ordinal β ≥ α).
This also holds in Zermelo–Fraenkel set theory with ur-elements with or without the axiom of choice provided the ur-elements form a set which is equinumerous with a pure set (a set whose transitive closure contains no ur-elements). If the axiom of choice holds, then any set of ur-elements is equinumerous with a pure set.
Retrieved from "https://en.formulasearchengine.com/index.php?title=Beth_number&oldid=229646"
|
Global Constraint Catalog: Csort
[OlderSwinkelsEmden95]
\mathrm{𝚜𝚘𝚛𝚝}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}\right)
Synonyms: \mathrm{𝚜𝚘𝚛𝚝𝚎𝚍𝚗𝚎𝚜𝚜}, \mathrm{𝚜𝚘𝚛𝚝𝚎𝚍}, \mathrm{𝚜𝚘𝚛𝚝𝚒𝚗𝚐}.
Arguments: \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}: \mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right); \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}: \mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right).
Restrictions: |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|, \mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1},\mathrm{𝚟𝚊𝚛}\right), \mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2},\mathrm{𝚟𝚊𝚛}\right).
First, the variables of the collection \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} correspond to a permutation of the variables of \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}. Second, the variables of \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} are sorted in increasing order. Example: \left(〈1,9,1,5,2,1〉,〈1,1,1,2,5,9〉\right)
The \mathrm{𝚜𝚘𝚛𝚝} constraint holds since values 1, 2, 5 and 9 have the same number of occurrences within both collections 〈1,9,1,5,2,1〉 and 〈1,1,1,2,5,9〉, and since 〈1,1,1,2,5,9〉 is sorted in increasing order. Figure 5.371.1 illustrates this correspondence.
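On ground instances the two conditions (same multiset of values, second collection sorted in increasing order) are easy to check directly. The following is a sketch of such a ground checker, not the catalog's filtering algorithm:

```python
from collections import Counter

def sort_holds(variables1, variables2):
    """Ground check of the sort constraint: variables2 is a
    non-decreasing permutation of variables1."""
    same_multiset = Counter(variables1) == Counter(variables2)
    nondecreasing = all(a <= b for a, b in zip(variables2, variables2[1:]))
    return same_multiset and nondecreasing

# The Example slot instance:
sort_holds([1, 9, 1, 5, 2, 1], [1, 1, 1, 2, 5, 9])  # True
```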
Figure 5.371.1. Correspondence between the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} and \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} collections of the Example slot (note that the items of the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} collection are sorted in increasing order).
A further example with domain variables: \mathrm{𝚜𝚘𝚛𝚝}\left(〈{V}_{1},{V}_{2},{V}_{3},{V}_{4},{V}_{5}〉,〈{S}_{1},{S}_{2},{S}_{3},{S}_{4},{S}_{5}〉\right) with {V}_{1}\in \left[2,3\right], {V}_{2}\in \left[2,3\right], {V}_{3}\in \left[1,2\right], {V}_{4}\in \left[4,5\right], {V}_{5}\in \left[2,4\right], {S}_{1}\in \left[2,3\right], {S}_{2}\in \left[2,3\right], {S}_{3}\in \left[1,3\right], {S}_{4}\in \left[4,5\right], {S}_{5}\in \left[2,5\right].
Typical conditions for the \mathrm{𝚜𝚘𝚛𝚝} constraint: |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|>1 and \mathrm{𝚛𝚊𝚗𝚐𝚎}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.\mathrm{𝚟𝚊𝚛}\right)>1.
Symmetry: the items of \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} are permutable, and \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} is functionally determined by \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.
The main usage of the \mathrm{𝚜𝚘𝚛𝚝} constraint, which was not foreseen when the constraint was invented, is its use in many reformulations. Many constraints involving one or several collections of variables become much simpler to express when the variables of these collections are sorted. In addition, these reformulations typically have a size that is linear in the number of variables of the original constraint. This justifies why the \mathrm{𝚜𝚘𝚛𝚝} constraint is considered to be a core constraint. As illustrative examples of these types of reformulations we successively consider the \mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝} and \mathrm{𝚜𝚊𝚖𝚎} constraints.
The \mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(〈{v}_{1},{v}_{2},\cdots ,{v}_{n}〉\right) constraint can be reformulated as the conjunction \mathrm{𝚜𝚘𝚛𝚝}\left(〈{v}_{1},{v}_{2},\cdots ,{v}_{n}〉,〈{w}_{1},{w}_{2},\cdots ,{w}_{n}〉\right)\wedge \mathrm{𝚜𝚝𝚛𝚒𝚌𝚝𝚕𝚢}_\mathrm{𝚒𝚗𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}\left(〈{w}_{1},{w}_{2},\cdots ,{w}_{n}〉\right).
Similarly, the \mathrm{𝚜𝚊𝚖𝚎}\left(〈{u}_{1},{u}_{2},\cdots ,{u}_{n}〉,〈{v}_{1},{v}_{2},\cdots ,{v}_{n}〉\right) constraint can be reformulated as the conjunction \mathrm{𝚜𝚘𝚛𝚝}\left(〈{u}_{1},{u}_{2},\cdots ,{u}_{n}〉,〈{w}_{1},{w}_{2},\cdots ,{w}_{n}〉\right)\wedge \mathrm{𝚜𝚘𝚛𝚝}\left(〈{v}_{1},{v}_{2},\cdots ,{v}_{n}〉,〈{w}_{1},{w}_{2},\cdots ,{w}_{n}〉\right).
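On ground values both reformulations collapse to ordinary sorting, which makes the equivalences easy to test. A sketch under the obvious ground reading, not a constraint-programming implementation:

```python
def alldifferent_via_sort(vs):
    # alldifferent(vs)  <=>  sort(vs, ws) /\ strictly_increasing(ws)
    ws = sorted(vs)
    return all(a < b for a, b in zip(ws, ws[1:]))

def same_via_sort(us, vs):
    # same(us, vs)  <=>  sort(us, ws) /\ sort(vs, ws) for a common ws
    return sorted(us) == sorted(vs)

alldifferent_via_sort([3, 1, 2])                        # True
same_via_sort([1, 9, 1, 5, 2, 1], [9, 1, 5, 2, 1, 1])   # True
```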
A variant of this constraint called \mathrm{𝚜𝚘𝚛𝚝}_\mathrm{𝚙𝚎𝚛𝚖𝚞𝚝𝚊𝚝𝚒𝚘𝚗} was introduced in [Zhou97]. In this variant, an additional list of domain variables represents the permutation that allows one to go from \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} to \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}.
[GuernalecColmerauer97], [MehlhornThiel00].
sorting in Choco, sorted in Gecode, sort in MiniZinc, sorting in SICStus.
\mathrm{𝚜𝚘𝚛𝚝}_\mathrm{𝚙𝚎𝚛𝚖𝚞𝚝𝚊𝚝𝚒𝚘𝚗} (a \mathrm{𝙿𝙴𝚁𝙼𝚄𝚃𝙰𝚃𝙸𝙾𝙽} parameter added).
\mathrm{𝚕𝚎𝚡}_\mathrm{𝚐𝚛𝚎𝚊𝚝𝚎𝚛𝚎𝚚}
\mathrm{𝚜𝚊𝚖𝚎}
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚜𝚊𝚖𝚎}
characteristic of a constraint: core, sort.
constraint arguments: constraint between two collections of variables, pure functional dependency.
Graph model (first graph constraint):
Arc input(s): \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}, \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}
Arc generator: \mathrm{𝑃𝑅𝑂𝐷𝑈𝐶𝑇}↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
Arc constraint: \mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1}.\mathrm{𝚟𝚊𝚛}=\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
Graph properties:
• for all connected components: \mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}=\mathrm{𝐍𝐒𝐈𝐍𝐊}
• \mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
• \mathrm{𝐍𝐒𝐈𝐍𝐊}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|
Second graph constraint:
Arc input(s): \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}
Arc generator: \mathrm{𝑃𝐴𝑇𝐻}↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
Arc constraint: \mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1}.\mathrm{𝚟𝚊𝚛}\le \mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
Graph property: \mathrm{𝐍𝐀𝐑𝐂}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|-1
Parts (A) and (B) of Figure 5.371.3 respectively show the initial and final graph associated with the first graph constraint of the Example slot. Since it uses the \mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄} and \mathrm{𝐍𝐒𝐈𝐍𝐊} graph properties, the source and sink vertices of this final graph are stressed with a double circle. Since there is a constraint on each connected component of the final graph we also show the different connected components. The \mathrm{𝚜𝚘𝚛𝚝} constraint holds since:
• Each connected component of the final graph of the first graph constraint has the same number of sources and of sinks.
• The number of sources of the final graph of the first graph constraint is equal to |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|.
• The number of sinks of the final graph of the first graph constraint is equal to |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|.
• The number of arcs of the final graph of the second graph constraint is equal to |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|-1, i.e., the items of \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} are sorted in increasing order.
Consider the first graph constraint. Since the initial graph contains only sources and sinks, and since isolated vertices are eliminated from the final graph, we make the following observations:
Sources of the initial graph cannot become sinks of the final graph,
Sinks of the initial graph cannot become sources of the final graph.
From the previous observations and since we use the \mathrm{𝑃𝑅𝑂𝐷𝑈𝐶𝑇} arc generator on the collections \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} and \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}, we have that the maximum number of sources and sinks of the final graph is respectively equal to |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}| and |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|. Therefore we can rewrite the graph property \mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}| to \mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}\ge |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}| and simplify \underline{\overline{\mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}}} to \overline{\mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}}. In a similar way, we can rewrite \mathrm{𝐍𝐒𝐈𝐍𝐊}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}| to \mathrm{𝐍𝐒𝐈𝐍𝐊}\ge |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}| and simplify \underline{\overline{\mathrm{𝐍𝐒𝐈𝐍𝐊}}} to \overline{\mathrm{𝐍𝐒𝐈𝐍𝐊}}.
Since we use the \mathrm{𝑃𝐴𝑇𝐻} arc generator on the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} collection, the maximum number of arcs of the final graph is equal to |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|-1. Therefore we can rewrite the graph property \mathrm{𝐍𝐀𝐑𝐂}=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|-1 to \mathrm{𝐍𝐀𝐑𝐂}\ge |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|-1 and simplify \underline{\overline{\mathrm{𝐍𝐀𝐑𝐂}}} to \overline{\mathrm{𝐍𝐀𝐑𝐂}}.
|
Global Constraint Catalog: Kreverse_of_a_constraint
\mathrm{𝚊𝚖𝚘𝚗𝚐}
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚗𝚞𝚒𝚝𝚢}
\mathrm{𝙲𝚃𝚁}\in \left\{=,\ne \right\}
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚗𝚞𝚒𝚝𝚢}
\mathrm{𝙲𝚃𝚁}\in \left\{<\right\}
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚗𝚞𝚒𝚝𝚢}
\mathrm{𝙲𝚃𝚁}\in \left\{>\right\}
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚗𝚞𝚒𝚝𝚢}
\mathrm{𝙲𝚃𝚁}\in \left\{\le \right\}
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚗𝚞𝚒𝚝𝚢}
\mathrm{𝙲𝚃𝚁}\in \left\{\ge \right\}
\mathrm{𝚍𝚎𝚎𝚙𝚎𝚜𝚝}_\mathrm{𝚟𝚊𝚕𝚕𝚎𝚢}
\mathrm{𝚎𝚡𝚊𝚌𝚝𝚕𝚢}
\mathrm{𝚏𝚞𝚕𝚕}_\mathrm{𝚐𝚛𝚘𝚞𝚙}
\mathrm{𝚐𝚛𝚘𝚞𝚙}
\mathrm{𝚐𝚛𝚘𝚞𝚙}_\mathrm{𝚜𝚔𝚒𝚙}_\mathrm{𝚒𝚜𝚘𝚕𝚊𝚝𝚎𝚍}_\mathrm{𝚒𝚝𝚎𝚖}
\mathrm{𝚑𝚒𝚐𝚑𝚎𝚜𝚝}_\mathrm{𝚙𝚎𝚊𝚔}
\mathrm{𝚒𝚗𝚏𝚕𝚎𝚡𝚒𝚘𝚗}
\mathrm{𝚕𝚎𝚗𝚐𝚝𝚑}_\mathrm{𝚏𝚒𝚛𝚜𝚝}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}
\mathrm{𝚕𝚎𝚗𝚐𝚝𝚑}_\mathrm{𝚕𝚊𝚜𝚝}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}
\mathrm{𝚕𝚘𝚗𝚐𝚎𝚜𝚝}_\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}
\mathrm{𝙲𝚃𝚁}\in \left\{=,\ne \right\}
\mathrm{𝚕𝚘𝚗𝚐𝚎𝚜𝚝}_\mathrm{𝚍𝚎𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}
\mathrm{𝚕𝚘𝚗𝚐𝚎𝚜𝚝}_\mathrm{𝚒𝚗𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}
\mathrm{𝚖𝚊𝚡𝚒𝚖𝚞𝚖}
\mathrm{𝚖𝚊𝚡}_\mathrm{𝚍𝚎𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚕𝚘𝚙𝚎}
\mathrm{𝚖𝚊𝚡}_\mathrm{𝚒𝚗𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚕𝚘𝚙𝚎}
\mathrm{𝚖𝚒𝚗}_\mathrm{𝚍𝚎𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚕𝚘𝚙𝚎}
\mathrm{𝚖𝚒𝚗}_\mathrm{𝚒𝚗𝚌𝚛𝚎𝚊𝚜𝚒𝚗𝚐}_\mathrm{𝚜𝚕𝚘𝚙𝚎}
\mathrm{𝚖𝚒𝚗}_\mathrm{𝚠𝚒𝚍𝚝𝚑}_\mathrm{𝚙𝚎𝚊𝚔}
\mathrm{𝚖𝚒𝚗}_\mathrm{𝚠𝚒𝚍𝚝𝚑}_\mathrm{𝚟𝚊𝚕𝚕𝚎𝚢}
\mathrm{𝚖𝚒𝚗𝚒𝚖𝚞𝚖}
\mathrm{𝚙𝚎𝚊𝚔}
\mathrm{𝚜𝚖𝚘𝚘𝚝𝚑}
\mathrm{𝚟𝚊𝚕𝚕𝚎𝚢}
A constraint which has a reverse constraint, where the reverse is defined in the following way. Consider two constraints \mathrm{ctr}\left(\mathrm{𝑐𝑜𝑙},{r}_{1},\cdots ,{r}_{n}\right) and {\mathrm{ctr}}^{\text{'}}\left(\mathrm{𝑐𝑜𝑙},{r}_{1},\cdots ,{r}_{n}\right) for which, in both cases, the argument \mathrm{𝑐𝑜𝑙} is a collection of items that functionally determines all the other arguments {r}_{1},\cdots ,{r}_{n}. Then {\mathrm{ctr}}^{\text{'}} is the reverse constraint of constraint \mathrm{ctr} if, for any collection of items \mathrm{𝑐𝑜𝑙}, we have the equivalence \mathrm{ctr}\left(\mathrm{𝑐𝑜𝑙},{r}_{1},\cdots ,{r}_{n}\right)⇔{\mathrm{ctr}}^{\text{'}}\left({\mathrm{𝑐𝑜𝑙}}^{\mathrm{𝑟𝑒𝑣}},{r}_{1},\cdots ,{r}_{n}\right), where {\mathrm{𝑐𝑜𝑙}}^{\mathrm{𝑟𝑒𝑣}} denotes the collection \mathrm{𝑐𝑜𝑙} where the items of the collection are reversed. When constraints \mathrm{ctr} and {\mathrm{ctr}}^{\text{'}} are identical we say that constraint \mathrm{ctr} is its own reverse.
The previous enumeration provides the list of reversible constraints where, for each reversible constraint, we give its reverse only when it is different from the original constraint.
Note that if a constraint can be represented by a deterministic counter automaton with one single counter that is only incremented and for which all states are accepting, then, by computing the reverse automaton, the corresponding reverse constraint can be mechanically obtained. However, note that the reverse automaton may be non-deterministic and may contain ϵ transitions [Mohri2009]. Figure 3.7.57 gives an automaton counting the number of occurrences of the word 001 in a sequence and its reverse automaton. Figure 3.7.58 provides an automaton with one counter and its reverse automaton that has a different number of states.
Figure 3.7.57. (A) Counter automaton returning the number of occurrences 𝙽 of the word 001 in a sequence, and (B) its reverse counter automaton (returning the number of occurrences 𝙽 of the word 100 in a sequence).
Figure 3.7.58. (A) Counter automaton, and (B) its reverse counter automaton which has a different number of states
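The equivalence behind Figure 3.7.57 can be checked on ground sequences: counting occurrences of 001 in a word gives the same result as counting occurrences of the reversed word 100 in the reversed sequence. A small sketch (the function name is ours):

```python
def count_word(seq, word):
    """Number of (possibly overlapping) occurrences of word in seq."""
    w = len(word)
    return sum(1 for i in range(len(seq) - w + 1) if seq[i:i + w] == word)

s = "0010011"
# The reverse-constraint equivalence for this pair of automata:
count_word(s, "001") == count_word(s[::-1], "100")  # True for every s
```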
|
International System of Units - Citizendium
The International System of Units, abbreviated SI from its French language name, Le Système International d'Unités, is a comprehensive set of units of measurement. Aside from its dominance in science, the United States Omnibus Trade and Competitiveness Act of 1988 states "the metric system of measurement is the preferred system of weights and measures for United States trade and commerce".[1]
The SI is based on the original metric system developed in France in the 1790s. In October 1960 the 11th international "General Conference on Weights and Measures" met in Paris and renamed the Metric System (MKSA) of units (based on the six base units: meter, kilogram, second, ampere, kelvin and candela—in 1971 mole was added as seventh base unit) to the "International System of Units." The 11th Conference also established the abbreviation "SI" as the official abbreviation, to be used in all languages. Adoption of the abbreviation SI, especially outside scientific circles, is slow. The terms "metric system" or "MKSA units" are still frequently being used.
For more background and a bibliography for the SI units see the SI units page on the NIST website.[2]
The SI is founded on seven SI base units for seven base quantities assumed to be mutually independent:
Name Symbol Quantity Definition
metre m length The meter is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.
kilogram kg mass The kilogram is equal to the mass of the international prototype of the kilogram.
second s time The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom.
ampere A electrical current The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 meter apart in vacuum, would produce between these conductors a force equal to 2 × 10−7 newton per meter of length.
kelvin K temperature The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
mole mol amount of substance The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12.
candela cd luminous intensity The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 × 1012 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
To allow for ease of discussion of quantities orders of magnitude different from the base units, prefixes may be used to form decimal multiples and submultiples of units. The SI prefixes with their meanings and symbols are:
It is important to note that the kilogram is the only SI unit with a prefix as part of its name and symbol. Because multiple prefixes may not be used, in the case of the kilogram the prefix names are used with the unit name "gram" and the prefix symbols are used with the unit symbol "g." With this exception, any SI prefix may be used with any SI unit, including the degree Celsius and its symbol °C.[3]
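The prefix rule amounts to multiplying the unit by a power of ten. A minimal sketch (the dictionary below is an illustrative subset, not a complete SI prefix table, and the function name is ours):

```python
# Illustrative subset of SI prefixes: symbol -> multiplying factor.
SI_PREFIXES = {"G": 1e9, "M": 1e6, "k": 1e3,
               "": 1.0, "m": 1e-3, "µ": 1e-6, "n": 1e-9}

def to_base_unit(value, prefix=""):
    """Express a prefixed quantity in the base unit, e.g. 5 km -> 5000 m."""
    return value * SI_PREFIXES[prefix]

to_base_unit(5, "k")    # 5000.0 (5 km in metres)
to_base_unit(250, "m")  # 0.25   (250 ms in seconds)
```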
Other quantities, called derived quantities, are defined in terms of the seven base quantities via a system of quantity equations. The SI derived units for these derived quantities are obtained from these equations and the seven SI base units.
There are two dimensionless derived units, for plane angle and solid angle:
Dimensionless SI units
radian rad angle The unit of angle is the angle subtended at the centre of a circle by an arc of the circumference equal in length to the radius of the circle. There are {\displaystyle 2\pi } radians in a circle.
steradian sr solid angle The unit of solid angle is the solid angle subtended at the centre of a sphere of radius r by a portion of the surface of the sphere having an area r2. There are {\displaystyle 4\pi } steradians on a sphere.
Twenty other derived units have specific names; most are named after pioneering researchers in the fields in which they are used. These are:
coulomb C Electric charge or flux s∙A s∙A
volt V Electrical potential difference, Electromotive force W/A = J/C m2∙kg∙s−3∙A−1
ohm Ω Electric resistance, Impedance, Reactance V/A m2∙kg∙s−3∙A−2
farad F Electric capacitance C/V m−2∙kg−1∙s4∙A2
tesla T Magnetic flux density, magnetic induction V∙s/m2 = Wb/m2 kg∙s−2∙A−1
siemens S Electric conductance 1/Ω m−2∙kg−1∙s3∙A2
degree Celsius °C Celsius temperature T°C = TK − 273.15
Other derived units
Some derived units are named after the basic units from which they are derived, sometimes including the dimension. Other derived units have names which are a mix of base unit names and derived unit names. Some are listed below:
Compound units derived from basic SI units
Name Symbol Quantity Expression in terms
metre per second m·s−1 speed, velocity m·s−1
metre per second squared m·s−2 acceleration m·s−2
metre per second cubed m·s−3 jerk m·s−3
radian per second rad·s−1 angular velocity s−1
reciprocal metre m−1 wavenumber m−1
kilogram per cubic metre kg·m−3 Density, mass density kg·m−3
cubic metre per kilogram kg−1·m3 specific volume kg−1·m3
mole per cubic metre m−3·mol amount-of-substance concentration m−3·mol
cubic metre per mole m3·mol−1 molar volume m3·mol−1
square metre per second m2·s−1 kinematic viscosity, diffusion coefficient m2·s−1
ampere per square metre A·m−2 electric current density A·m−2
ampere per metre A·m−1 magnetic field strength A·m−1
candela per square metre cd·m−2 luminance cd·m−2
Compound units derived from SI units
newton second N·s momentum, impulse kg·m·s−1
newton metre second N·m·s angular momentum kg·m2·s−1
newton metre N·m torque, moment of force kg·m2·s−2
joule per kelvin J·K−1 heat capacity, entropy kg·m2·s−2·K−1
joule per kelvin mole J·K−1·mol−1 molar heat capacity, molar entropy kg·m2·s−2·K−1·mol−1
joule per kilogram kelvin J·K−1·kg−1 specific heat capacity, specific entropy m2·s−2·K−1
joule per mole J·mol−1 molar energy kg·m2·s−2·mol−1
joule per kilogram J·kg−1 specific energy m2·s−2
joule per cubic metre J·m−3 energy density kg·m−1·s−2
newton per metre N·m−1 = J·m−2 surface tension kg·s−2
watt per square metre W·m−2 heat flux density, irradiance kg·s−3
watt per metre kelvin W·m−1·K−1 thermal conductivity kg·m·s−3·K−1
pascal second Pa·s = N·s·m−2 dynamic viscosity kg·m−1·s−1
coulomb per cubic metre C·m−3 electric charge density m−3·s·A
siemens per metre S·m−1 conductivity kg−1·m−3·s3·A2
siemens square metre per mole S·m2·mol−1 molar conductivity kg-1·s3·mol−1·A2
farad per metre F·m−1 permittivity kg−1·m−3·s4·A2
henry per metre H·m−1 permeability kg·m·s−2·A−2
volt per metre V·m−1 electric field strength kg·m·s−3·A−1
coulomb per kilogram C·kg−1 exposure (X and gamma rays) kg−1·s·A
gray per second Gy·s−1 absorbed dose rate m2·s−3
The 2006 edition of the International System of Units, published by the International Bureau of Weights and Measures (BIPM) includes non-SI units that are accepted for use with the International System because they are widely used in everyday life.[4] Their use is expected to continue indefinitely, and each has an exact definition in terms of an SI unit. The values in the table below were extracted from Tables 6 and 8 of the 2006 Edition:
Non-SI units accepted for use with the International System of Units
area hectare ha 1 ha = 1 hm2 = 104m2
volume litre L or l 1 L = 1 dm3 = 103 cm3 = 10−3 m3
mass tonne t 1 t = 103 kg
plane angle degree ° 1° = ({\displaystyle \pi }/180) rad
minute ′ 1′ = (1/60)° = ({\displaystyle \pi }/10800) rad
second ″ 1″ = (1/60)′ = ({\displaystyle \pi }/648000) rad
pressure bar bar 1 bar = 0.1 MPa = 100 kPa = 105 Pa
millimetre of mercury mmHg 1 mmHg ≈ 133.322 Pa
speed knot kn 1 kn = (1852/3600) m/s
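The definitions in the table translate directly into conversion helpers; the exact factors for bar and knot, and the approximate one for mmHg, come from the table itself (the function names are ours):

```python
def bar_to_pascal(bar):
    return bar * 1e5             # exact: 1 bar = 10^5 Pa

def knot_to_mps(kn):
    return kn * 1852.0 / 3600.0  # exact: 1 kn = (1852/3600) m/s

def mmhg_to_pascal(mmhg):
    return mmhg * 133.322        # approximate: 1 mmHg ~ 133.322 Pa

bar_to_pascal(1)  # 100000.0
```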
Symbols for units are written in lower case, except for symbols derived from the name of a person. For example, the unit of pressure is named after Blaise Pascal, so its symbol is written "Pa" whereas the unit itself is written "pascal".
The one exception is the litre, whose original symbol "l" is unsuitably similar to the numeral "1" or the uppercase letter "i" (depending on the typographic font used), at least in many English-speaking countries. The American National Institute of Standards and Technology recommends that "L" be used instead, a usage which is common in the U.S., Canada, Australia, and New Zealand (but not elsewhere). This has been accepted as an alternative by the CGPM since 1979. The cursive "ℓ" is occasionally seen, especially in Japan and Greece, but this is not currently recommended by any standards body. For more information, see Litre.
The American National Institute of Standards and Technology has defined guidelines for using the SI units in its own publications and for other users of the SI[5]. These guidelines give guidance on pluralizing unit names: the plural is formed by using normal English grammar rules, for example, "henries" is the plural of "henry". The units lux, hertz, and siemens are exceptions from this rule: they remain the same in singular and plural. Note that this rule only applies to the full names of units, not to their symbols.
Symbols are written in upright Roman type (m for metres, L for litres), so as to differentiate from the italic type used for mathematical variables (m for mass, l for length).
A space separates the number and the symbol, e.g. "2.21 kg", "7.3×102 m2", "22 °C" [6]. Exceptions are the symbols for plane angular degrees, minutes and seconds (°, ′ and ″), which are placed immediately after the number with no intervening space.
Spaces may be used to group decimal digits in threes, e.g. "1 000 000" or "342 142" (in contrast to the commas or dots used in other systems, e.g. "1,000,000" or "1.000.000"). This is presumably to reduce confusion because a comma is used as a decimal in many countries while others use a period. In print, the space used for this purpose is typically narrower than that between words.
The 10th resolution of CGPM in 2003 declared that "the symbol for the decimal marker shall be either the point on the line or the comma on the line". In practice, the decimal point is used in English, and the comma in most other European languages.
Symbols for derived units formed from multiple units by multiplication are joined with a space or centre dot (·), e.g. "N m" or "N·m".
Symbols formed by division of two units are joined with a solidus (⁄), or given as a negative exponent. For example, the "metre per second" can be written "m/s", "m s−1", "m·s−1" or {\displaystyle \textstyle {\frac {\mathrm {m} }{\mathrm {s} }}.}
A solidus should not be used if the result is ambiguous, i.e. "kg·m−1·s−2" is preferable to "kg/m·s2". (Taylor (§ 6.1.6) specifically calls for the use of a solidus.[5] Many computer users will type the / character provided on American computer keyboards, which in turn produces the Unicode character U+002F, which is named solidus. Taylor does not offer suggestions about which mark should be used when more sophisticated typesetting options are available.)
In countries using ideographic writing systems such as Chinese and Japanese, often the full symbol of the unit, including prefixes, is placed in one square. (See the "Letterlike Symbols" Unicode subrange.)
Several nations, notably the United States, typically use the spellings "meter" and "liter" instead of "metre" and "litre" in keeping with standard American English spelling, which also corresponds to the official spelling used in several other languages, such as German, Dutch, Swedish, etc. In addition, the official U.S. spelling for the SI prefix "deca" is "deka".[3]
In some English-speaking countries, the unit "ampere" is often shortened to "amp" (singular) or "amps" (plural).
The numerical values of the principal physical constants can be found in a NIST summary.[7] NIST maintains a web site with the current numerical values of the physical constants in SI units.[8]
↑ See the official documentation by BN Taylor listed on the Bibliography page.
↑ International system of units (SI). The NIST reference on constants units and uncertainty. National Institute of Standards and Technology (2000). Retrieved on 2011-03-28.
↑ 3.0 3.1 Taylor, Barry N. (December 2003). The NIST Reference on Constants, Units, and Uncertainty. National Institute of Standards and Technology. Retrieved on 1 March 2007.
↑ 4.0 4.1 Bureau International des Poids et Mesures (2006). The International System of Units (SI). 8th ed.. Retrieved on 14 July 2006.
↑ 5.0 5.1 Taylor, B.N. (1995). NIST Special Publication 811: Guide for the Use of the International System of Units (SI). National Institute of Standards and Technology. Retrieved on 9 June 2006.
↑ Taylor, B.N. (1995). NIST Special Publication 811: Guide for the Use of the International System of Units (SI). National Institute of Standards and Technology. Retrieved on 1 March 2007.
↑ See Table L in PJ Mohr, BN Taylor, and DB Newell (2008). "CODATA recommended values of the fundamental physical constants: 2006". Reviews of Modern Physics vol. 80: p. 633 ff.
↑ CODATA internationally recommended values of the fundamental physical constants. The NIST reference on constants units and uncertainty. NIST. Retrieved on 2011-03-28.
Retrieved from "https://citizendium.org/wiki/index.php?title=International_System_of_Units&oldid=19479"
|
The explicit general solution of trivial Monge-Ampère equation | EMS Press
The explicit general solution of trivial Monge-Ampère equation
The general solution of the equation z_{xx}z_{yy} - z_{xy}^2 = 0 with minimal smoothness requirements is presented in explicit form; it depends on two functions of one variable. In particular, it allows one to describe explicitly all developable surfaces (without planar points) in \Bbb R^3. The domain and singularities of the solution are investigated.
V. Ushakov, The explicit general solution of trivial Monge-Ampère equation. Comment. Math. Helv. 75 (2000), no. 1, pp. 125–133
|
15 February 2018 Groups quasi-isometric to right-angled Artin groups
We characterize groups quasi-isometric to a right-angled Artin group (RAAG) G with finite outer automorphism group. In particular, all such groups admit a geometric action on a CAT\left(0\right) cube complex that has an equivariant "fibering" over the Davis building of G. This characterization will be used in forthcoming work of the first author to give a commensurability classification of the groups quasi-isometric to certain RAAGs.
Jingyin Huang. Bruce Kleiner. "Groups quasi-isometric to right-angled Artin groups." Duke Math. J. 167 (3) 537 - 602, 15 February 2018. https://doi.org/10.1215/00127094-2017-0042
Received: 14 March 2016; Revised: 31 July 2017; Published: 15 February 2018
Keywords: building , cube complex , quasi-isometry , right-angled Artin group , rigidity
Jingyin Huang, Bruce Kleiner "Groups quasi-isometric to right-angled Artin groups," Duke Mathematical Journal, Duke Math. J. 167(3), 537-602, (15 February 2018)
|
Write “theoretical” or “experimental” to describe the probabilities for each of the following situations.
The chance of getting tails when flipping a coin is \frac { 1 } { 2 }.
Remember that theoretical probabilities are calculated mathematically based on what is expected while experimental probabilities are based on data collected in experiments.
I flipped a coin eight times and got heads six times, so the probability is \frac { 6 } { 8 }.
Is this probability based on data from an actual test of flipping a coin?
My mom packed my lunch three of the past five days, so the probability of my mom packing my lunch is \frac { 3 } { 5 }.
Ask yourself the same questions as you did in parts (a) and (b).
The chance of winning the state lottery is \frac { 1 } { 98{,}000{,}000 }.
Did someone actually test the probability of winning the lottery?
Based on mathematical models, the chance of rain today is 60\%.
One way to calculate probability without physically testing it is to use a mathematical model.
Lena got three “hits” in her last seven times at bat, so her chance of getting a hit is \frac { 3 } { 7 }.
How did Lena determine the probability that she would get a hit at her next bat?
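The contrast between the two kinds of probability can be simulated: the theoretical value for tails is exactly 1/2, while an experimental estimate from repeated flips merely gets close to it. A sketch (the function name and seed are ours):

```python
import random

def experimental_probability_tails(trials, seed=0):
    """Estimate P(tails) from `trials` simulated fair-coin flips."""
    rng = random.Random(seed)
    tails = sum(rng.random() < 0.5 for _ in range(trials))
    return tails / trials

theoretical = 1 / 2
estimate = experimental_probability_tails(10_000)
# `estimate` is close to 0.5 but, like 6/8 above, rarely exactly equal to it.
```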
|
Home : Support : Online Help : Education : Student Packages : Linear Algebra : Computation : Solvers : GenerateMatrix
generate the coefficient Matrix from linear equations
GenerateMatrix(eqns, vars, options)
list or set of (linear) equations
(optional) parameters; for a complete list, see LinearAlgebra[GenerateMatrix]
The GenerateMatrix(eqns, vars) command generates the augmented coefficient Matrix or coefficient Matrix and right hand side Vector from the linear system of equations eqns in the unknowns vars. The augmented form is the default, and the last column of this Matrix is the right-hand side values from the original list of equations.
The non-augmented form is returned if the result is assigned to two variables (see the second example below) or if the augmented = false option is provided.
\mathrm{with}\left(\mathrm{Student}[\mathrm{LinearAlgebra}]\right):
\mathrm{sys}≔[3x[1]+2x[2]+3x[3]-2x[4]=1,x[1]+x[2]+x[3]=3,x[1]+2x[2]+x[3]-x[4]=2]:
\mathrm{var}≔[x[1],x[2],x[3],x[4]]:
\mathrm{GenerateMatrix}\left(\mathrm{sys},\mathrm{var}\right)
[\begin{array}{ccccc}3& 2& 3& -2& 1\\ 1& 1& 1& 0& 3\\ 1& 2& 1& -1& 2\end{array}]
A,b≔\mathrm{GenerateMatrix}\left(\mathrm{sys},\mathrm{var}\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{cccc}\textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{-2}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-1}\end{array}]\textcolor[rgb]{0,0,1}{,}[\begin{array}{c}\textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{2}\end{array}]
A≔\mathrm{GenerateMatrix}\left(\mathrm{sys},\mathrm{var}\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{-2}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{3}\\ \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{-1}& \textcolor[rgb]{0,0,1}{2}\end{array}]
Student[LinearAlgebra][GenerateEquations]
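For readers without Maple, the same bookkeeping is easy to sketch in plain Python. The representation below (coefficient dicts plus a right-hand side) is hypothetical, not Maple's own data structure; SymPy users can get the two-output form directly from `sympy.linear_eq_to_matrix`:

```python
def generate_matrix(eqns, vars, augmented=True):
    """Tiny pure-Python sketch of Maple's GenerateMatrix.

    Each equation is (coeffs, rhs), where coeffs maps a variable
    name to its coefficient; missing variables get coefficient 0.
    """
    A = [[eq[0].get(v, 0) for v in vars] for eq in eqns]
    b = [eq[1] for eq in eqns]
    if augmented:
        # Default form: right-hand side appended as the last column.
        return [row + [rhs] for row, rhs in zip(A, b)]
    return A, b

# The same system as in the Maple example above.
sys = [({"x1": 3, "x2": 2, "x3": 3, "x4": -2}, 1),
       ({"x1": 1, "x2": 1, "x3": 1}, 3),
       ({"x1": 1, "x2": 2, "x3": 1, "x4": -1}, 2)]
var = ["x1", "x2", "x3", "x4"]

print(generate_matrix(sys, var))
# [[3, 2, 3, -2, 1], [1, 1, 1, 0, 3], [1, 2, 1, -1, 2]]
A, b = generate_matrix(sys, var, augmented=False)
```

The two-variable form mirrors Maple's `A, b := GenerateMatrix(sys, var)` behavior.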
|
Flowty Docs
Flowty Docs Introduction
Vehicle Routing Problem with Time Windows
Multi-Commodity Flow
Time Constrained Multi-Commodity Flow Problem in Liner Shipping
Custom Subproblem Algorithm and Initialization
User Heuristic and Solution Verification
This is where you will get started solving models like
\begin{aligned} \min{ } & \sum_{k \in K} \sum_{e \in E^k} c_e x_e + \sum_{i \in N} f_i y_i \\ \text{s.t. } & \sum_{k \in K} \sum_{e \in E^k} \alpha_{je} x_e + \sum_{i \in N} \beta_{ji} y_i = b_j & & j \in M \\ & x_e = \sum_{p \in P^k} \Big ( \sum_{ s \in p: s = e} \mathbf{1} \Big ) \lambda_p & & k \in K, e \in E^k \\ & L^k \leq \sum_{p \in P^k} \lambda_p \leq U^k && k \in K \\ & \lambda_p \in \mathbb{Z}^+ && k \in K, p \in P^k \\ & x_e \in \mathbb{Z} && k \in K, e \in E^k \\ & y_i \in \mathbb{Z} && i \in N \\ \end{aligned}
given graphs G(V^k, E^k), k \in K, with paths p \in P^k, subject to resource constraints R^k.
Go to the quick start
Copyright © 2021 Flowty ApS
|
Translations:Sketcher BSplineDecreaseKnotMultiplicity/15/en - FreeCAD Documentation
{\displaystyle n=d-m}
{\displaystyle d=3}
|
2.1 How did everything begin? - Big History School
To work through '2.1 How did everything begin?' you need to complete the Activities in order. So first complete ‘Learning Plan’, then move to Activity '2.1.1', followed by Activity '2.1.2' through to '2.1.5', and then finish with ‘Learning Summary’.
In How did everything begin? you will learn all about the Big Bang, the lifecycle of stars, and how dying stars create the building blocks of the Universe.
To get started, read carefully through the How did everything begin? learning goals below. Make sure you tick each of the check boxes to show that you have read all your How did everything begin? learning goals.
As you read through the learning goals you may come across some words that you haven’t heard before. Please don’t worry. By the time you finish How did everything begin? you will have become very familiar with them!
You will come back to these learning goals at the end of How did everything begin? to see if you have confidently achieved them.
The Big Bang and the expanding universe
Understand the basics of the Big Bang Theory
Identify two of the pieces of evidence which support the Big Bang Theory
Conduct a demonstration of the expanding universe
Stars and the building blocks of our universe
Define what is a star
Understand the lifecycle of stars
Describe how stars create the building blocks of the universe
Welcome to Mars Mission Phase 2!
Now that you’ve completed your pre-mission critical thinking skills training, you’ve received your Mars Mission Brief and you’ve signed your Mission Pledge, you are ready to discover more about the Universe itself. For example, how did it all begin?
While you watch Mission video 8: The Big Bang and the expanding Universe look out for the answers to the following questions:
2. Which experts study the Universe?
3. What is the Big Bang theory?
4. What evidence is there to support the Big Bang theory?
5. What are the four main things that came out of the Big Bang?
When we’re talking about lots of events that happened over billions of years, it’s easy for them to get all jumbled up in your head. That’s why, whenever we learn about a new ‘event’ in Big History, we’re going to refer back to our History of the Universe Timeline.
Either your teacher will help you identify where the event should be placed on the classroom display or you will refer back to the Timeline: History of the Universe worksheet you completed when you did the "3 big mission phase questions" activity. You will find a copy of Timeline: history of the Universe example in Helpful Resources to refresh your memory.
So where does the Big Bang appear on the Timeline?
This should be an easy one because the Big Bang is the beginning of everything!
Mission video 8: The Big Bang and the expanding Universe, which you just watched, makes a pretty bold claim:
“The Universe began 13.8 billion years ago with a Big Bang.”
Do you remember when we talked about ‘claim testers’ in your pre-mission training? If you don’t, you may want to refer to Infographic: the four claim testers in Helpful Resources.
If you are part of a class, your teacher will lead a class discussion where you and your classmates will decide whether you believe the claim above by using your claim testers.
What was your intuition when you first heard the claim? Remember there’s no right or wrong here, it’s just whatever your gut feeling is when you hear the claim.
What does your logic tell you? Edwin Hubble thought that since the Universe is slowly growing bigger then if you were to go backwards to the beginning there must have been a powerful release of energy from an initial small point. When you think it through like this, does it make sense to you?
Do you believe the authorities that make this claim? In order for a scientific theory to be accepted it must be supported by lots of evidence. Edwin Hubble discovered convincing evidence and the majority of scientists today believe it to be true. Do you?
Do you find the evidence for the Big Bang theory convincing? We’ve only been able to briefly cover Redshift and CBR in a very simple way. And there is a whole lot more evidence out there that supports the Big Bang theory. Does this evidence convince you?
Now that you’ve considered all four claim testers, what’s your final decision? Do you believe the Universe began 13.8 billion years ago with a Big Bang? Can you explain why or why not?
When you hear the words ‘Big Bang’ the immediate image that comes to most people’s minds is an explosion. However, you might remember from Mission video 8: The Big Bang and the expanding Universe that the Universe actually ‘inflated’ extremely quickly and violently. And the Universe is still slowly expanding today.
In this activity you are going to use a balloon to conduct a simple demonstration of how the Universe expands over time.
This Big Bang balloon demonstration will help to explain what is meant by ‘inflation’ and ‘expansion’ when we talk about the Big Bang, as well as the concept of ‘redshift.’
Your teacher will instruct you whether you will be doing this activity individually or in pairs or as part of a group.
Use the Demo: the Big Bang balloon instruction sheet, along with a balloon and markers and follow the instructions.
Step 1. Inflate your balloon slightly so it is easier to draw on.
Step 2. Use a marker to draw dots randomly on your balloon to represent stars.
Step 3. Draw squiggly lines randomly on your balloon to represent lightwaves.
Step 4. Completely deflate your balloon to represent the singularity.
Step 5. Blow up your balloon a few inches so you can see the distance between the dots representing the stars. Identify 2 dots on the balloon and observe the distance between them. Also observe the length of the squiggly lines representing lightwaves.
Step 6. Blow 3 more small breaths into your balloon and observe how the distance between the same 2 dots has changed. Also observe how the length of the lightwaves has changed.
Step 7. Blow a final 3 breaths into your balloon and observe how the distance between the same 2 dots has changed. Also observe how the length of the lightwaves has changed.
Once you have completed the demonstration, answer the observation questions on Demo: the Big Bang balloon instruction sheet.
If you are part of a class, your teacher will ask you and your classmates to share your responses to the observation questions on the instruction sheet:
1. What happened to the dots when you blew into the balloon?
2. What does this demonstrate?
3. What happened to the lightwaves when you blew into the balloon?
So, as you learned in Mission video 8: The Big Bang and the expanding Universe, scientists don’t know what existed before the Big Bang and they don’t know why it happened but they do know what came out of the Big Bang.
In this activity you will complete a diagram which shows what came out of the Big Bang and you will have the opportunity to include some of the new vocabulary you have learned.
Complete Diagram: the Big Bang by writing the four main things that came out of the Big Bang in the appropriate places on the diagram.
Make sure you surround the diagram with any images you can recall from your Big Bang activities, for example, wavelengths. You should also include keywords that you remember, for example, singularity and redshift, to demonstrate to Mission Control the new vocabulary you have learned.
If you are part of a class, your teacher will ask you and your classmates to share your Big Bang diagrams, especially the additional images and keywords you have included.
And now you are ready to learn about how the energy force that came out of the Big Bang - gravity - helped to create the first stars in our Universe!
In this activity, you will learn all about stars - how they are born, how they live and how they die. Most importantly, you will learn how they produce the building blocks of everything else in the Universe.
Mission video 9: Stars and the building blocks of our Universe picks up the Big History story about 100 million years after the Big Bang, when the first stars were born...
While you watch Mission video 9: Stars and the building blocks of our Universe look out for the answers to the following questions:
1. When did the first stars appear?
3. What are the main stages in the life cycle of a star?
4. Why are stars so important?
So where do the ‘first stars’ appear on our History of the Universe Timeline?
Either your teacher will help you identify where they appear on the classroom display or you will refer back to the Timeline: history of the Universe worksheet you completed when you did the '3 big mission phase questions' activity. You will find a copy of Timeline: history of the Universe example in Helpful Resources.
And now that you have learned all about stars and their significance to the Universe, in the next activity you will focus on the life cycle of stars, including how the death of stars leads to the creation of chemical elements - the building blocks of our Universe.
As mentioned in Mission video 9: Stars and the building blocks of our Universe, stars have a life cycle much like we do - they are born, they live and then they die. What happens at the end of the life cycle of a star depends on its size.
Massive stars die in spectacular explosions called supernovae. In this activity, you will complete a diagram which illustrates the life cycle of a massive star from its birth in a beautiful cloud of gas and dust called a nebula to its death by supernova which scatters chemical elements throughout the Universe.
You will notice on the Image match: star life cycle diagram that there are blank numbered boxes. Each of the numbered boxes in the diagram represents a different stage in the life cycle of a massive star.
Your teacher will give you a strip of pictures which represent each stage in the life cycle of a massive star. Paste each picture in the correct box in the Image match: star life cycle diagram.
When the diagram is completed, answer the question next to the diagram: “What is a star?”
If you are part of a class, your teacher will ask you and your classmates to share your completed Image match: star life cycle diagram and your answer to the question: ‘What is a star?’
And next time there is a clear night, take the opportunity to go outside, look up and observe the different star constellations and planets. Maybe you will even be lucky enough to see a supernova!
In How did everything begin? you learned all about the Big Bang, the lifecycle of stars, and how dying stars create the building blocks of the Universe.
Now it’s time to revisit your How did everything begin? learning goals and read through them again carefully.
Well done on completing your learning summary.
Once you have checked the boxes to confirm you have achieved your learning goals for Sequence '2.1 How did everything begin?' click on the 'I have achieved my learning goals' button below.
Go to 2.2 How did our solar system form? »
|
Analytic Number Theory | EMS Press
It was an exciting week at the Forschungsinstitut, with reports of important new developments, and intense work on a variety of fronts. The atmosphere was warm and relaxed, almost convivial, and certainly more cooperative than competitive, although the mutual seriousness of purpose was constantly evident.
Of all the new results announced at the meeting, three stand out for special mention:
Jerzy Kaczorowski and Alberto Perelli have shown that there is no member of the Selberg Class with degree in the open interval (1,2). The Selberg Class is an attempt to describe, by means of functional equations and Euler products, those functions for which one feels the Riemann Hypothesis should be true. It is presumed that eventually it will turn out that the Selberg Class is synonymous with the set of automorphic L-functions, but we are very far from proving this. The degree, which relates to the sum of the arguments of the gamma function factors in the functional equation, is conjectured always to be an integer. The Riemann zeta function has degree 1, and H.-E. Richert showed that there is no member with degree < 1. More recently it had been shown that there is no member with degree in the interval (1, 5/3). This is a central problem that many people have attacked, so the realization of (1,2) is a remarkable step forward, albeit a modest advance when compared with the enormous task ahead of us.
The Prouhet–Thue–Morse sequence has been independently discovered three times, in 1851, 1906, and 1921, respectively. Prouhet related the sequence to number theory, Thue applied it to combinatorics on words, and Morse to differential geometry. Let w(n) denote the `binary weight of n', which is to say the number of 1's in the binary expansion of n, so that w(0)=0, w(2n)=w(n), and w(2n+1)=w(2n)+1. Set t_n=0 if w(n) is even and t_n=1 if w(n) is odd. Thus the word t_0t_1t_2\ldots begins 0110100110010110100\ldots. The power series generating function of (-1)^{w(n)} can be written in closed form:
\sum_{n=0}^\infty (-1)^{w(n)}z^n = \prod_{k=0}^\infty\big(1-z^{2^k}\big)\qquad (|z|<1)\,.
We have \big|\sum_{0\le n\le N}(-1)^{w(n)}\big|\le 1 for every N; thus the integers are very equally divided between those for which w(n) is even and those for which w(n) is odd. In 1967, Gelfond (famous for work in transcendence) asked whether w(p) is (asymptotically) equally often even and odd as p ranges over primes p\le x, x\to\infty. The Prime Number Theorem concerns the leading binary digits of p, and Dirichlet's theorem on primes in arithmetic progression relates to the trailing digits. As concerns (-1)^{w(p)}, one is dealing simultaneously with all binary digits of p. Many researchers have worked on this problem without success, including at least one of the conference organizers. Some years ago a solution was announced in C. R. Acad. Sci. Paris, but this was followed neither by a proof nor a retraction. Now at last we have a solution: Joël Rivat and Christian Mauduit have cleverly seen how to show that
\sum_{p\le x}(-1)^{w(p)}=o(\pi(x))
as x\to\infty.
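The elementary facts about w(n) are easy to check numerically. The sketch below is mine, not part of the report; it generates the start of the Thue–Morse word and verifies that the partial sums of (-1)^{w(n)} stay bounded by 1:

```python
def w(n):
    """Binary weight of n: the number of 1's in its binary expansion."""
    return bin(n).count("1")

# The Thue-Morse word t_0 t_1 t_2 ...  (t_n = w(n) mod 2).
word = "".join(str(w(n) % 2) for n in range(19))
print(word)  # 0110100110010110100

# Partial sums of (-1)^w(n) never leave [-1, 1]: consecutive pairs
# (2k, 2k+1) contribute opposite signs and cancel.
s = 0
for n in range(100_000):
    s += (-1) ** w(n)
    assert abs(s) <= 1
```

Gelfond's question, by contrast, concerns the far subtler behavior of w restricted to the primes, where no such cancellation is built in.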
Consider the Pell equation x^2-dy^2=\pm1, which relates to the units in the real quadratic number field \Bbb Q(\sqrt{d}). If d is divisible by a prime p\equiv 3\pmod 4, then x^2-dy^2=-1 has no solution. If d is a prime \equiv 1\pmod 4, then x^2-dy^2=-1 does have a solution. The number of d\le x such that d is composed entirely of primes p\equiv1\pmod 4 is \asymp x/\sqrt{\log x}; thus the case of d prime is negligible among these discriminants. In a spectacular tour de force, Étienne Fouvry and Jürgen Klüners have shown that the `negative Pell equation' x^2-dy^2=-1 has a solution for a positive proportion of discriminants d composed entirely of primes \equiv1\pmod 4.
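As a numerical companion (my sketch, not part of the report): for nonsquare d, the classical criterion says x^2-dy^2=-1 is solvable exactly when the continued fraction expansion of \sqrt{d} has odd period, which is easy to test:

```python
import math

def cf_period_sqrt(d):
    """Period length of the continued fraction of sqrt(d), for nonsquare d > 1.

    Uses the standard integer recurrence m, q, a for sqrt(d);
    the period ends when the partial quotient reaches 2*floor(sqrt(d)).
    """
    a0 = math.isqrt(d)
    m, q, a = 0, 1, a0
    length = 0
    while a != 2 * a0:
        m = q * a - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        length += 1
    return length

def negative_pell_solvable(d):
    """x^2 - d*y^2 = -1 has integer solutions iff the period is odd."""
    return cf_period_sqrt(d) % 2 == 1

print([d for d in [2, 5, 10, 13, 29] if negative_pell_solvable(d)])
# [2, 5, 10, 13, 29]  (e.g. 1^2 - 2*1^2 = -1, 18^2 - 13*5^2 = -1)
print(negative_pell_solvable(3), negative_pell_solvable(12))  # False False
```

Note that 3 and 12 are divisible by the prime 3 ≡ 3 (mod 4), matching the obstruction described above.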
The advances described above could not have been anticipated, and are at once surprising and gratifying. And just a few years before, the team of Goldston, Pintz and Yıldırım excited the world with their proof that p_{n+1}-p_n=o(\log p_n) infinitely often. This brings us a little closer to twin primes. Since p_{n+1}-p_n is about \log p_n on average, it is reasonable to consider the distribution of (p_{n+1}-p_n)/\log p_n. We conjecture that this quantity is asymptotically distributed like an exponential random variable, with density e^{-x}. It would follow that every number in [0,\infty] is a limit point of the numbers (p_{n+1}-p_n)/\log p_n. In the 1930's it was shown that +\infty is a limit point, but it was only with the work of GPY that we could for the first time name a finite real number (namely 0) that is a limit point of this sequence. The GPY technology has been scrutinized, and has matured, but the team had their heads together for long hours during the conference, with the promise of further results.
Other highly active subareas that were represented at the meeting include additive combinatorics and the circle method, rational points on varieties, spectral decompositions for
L
-functions, sieve methods, and others.
The vast array of activity, the overload of talent, the extreme unpredictability of advances all make it challenging to select a fruitful mix of participants. On this occasion we feel that we could not have done better. Several participants, after the evening problem session, said that it was the best such session that they had ever experienced at Oberwolfach---more open, frank, and productive.
This meeting is in the tradition of the Oberwolfach meetings organized by Theodor Schneider in the 1960's and 1970's that some of us remember. We hope to emulate his vision as best we can in modern times by taking a broad view and choosing only the most gifted invitees.
Jörg Brüdern, Hugh L. Montgomery, Robert C. Vaughan, Analytic Number Theory. Oberwolfach Rep. 5 (2008), no. 1, pp. 669–746
|
Chain complex - Wikipedia
Tool in homological algebra
In mathematics, a chain complex is an algebraic structure that consists of a sequence of abelian groups (or modules) and a sequence of homomorphisms between consecutive groups such that the image of each homomorphism is included in the kernel of the next. Associated to a chain complex is its homology, which describes how the images are included in the kernels.
A cochain complex is similar to a chain complex, except that its homomorphisms are in the opposite direction. The homology of a cochain complex is called its cohomology.
In algebraic topology, the singular chain complex of a topological space X is constructed using continuous maps from a simplex to X, and the homomorphisms of the chain complex capture how these maps restrict to the boundary of the simplex. The homology of this chain complex is called the singular homology of X, and is a commonly used invariant of a topological space.
Chain complexes are studied in homological algebra, but are used in several areas of mathematics, including abstract algebra, Galois theory, differential geometry and algebraic geometry. They can be defined more generally in abelian categories.
A chain complex
{\displaystyle (A_{\bullet },d_{\bullet })}
is a sequence of abelian groups or modules ..., A0, A1, A2, A3, A4, ... connected by homomorphisms (called boundary operators or differentials) dn : An → An−1, such that the composition of any two consecutive maps is the zero map. Explicitly, the differentials satisfy dn ∘ dn+1 = 0, or with indices suppressed, d2 = 0. The complex may be written out as follows.
{\displaystyle \cdots {\xleftarrow {d_{0}}}A_{0}{\xleftarrow {d_{1}}}A_{1}{\xleftarrow {d_{2}}}A_{2}{\xleftarrow {d_{3}}}A_{3}{\xleftarrow {d_{4}}}A_{4}{\xleftarrow {d_{5}}}\cdots }
The cochain complex
{\displaystyle (A^{\bullet },d^{\bullet })}
is the dual notion to a chain complex. It consists of a sequence of abelian groups or modules ..., A0, A1, A2, A3, A4, ... connected by homomorphisms dn : An → An+1 satisfying dn+1 ∘ dn = 0. The cochain complex may be written out in a similar fashion to the chain complex.
{\displaystyle \cdots {\xrightarrow {d^{-1}}}A^{0}{\xrightarrow {d^{0}}}A^{1}{\xrightarrow {d^{1}}}A^{2}{\xrightarrow {d^{2}}}A^{3}{\xrightarrow {d^{3}}}A^{4}{\xrightarrow {d^{4}}}\cdots }
The index n in either An or An is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. All the concepts and definitions for chain complexes apply to cochain complexes, except that they will follow this different convention for dimension, and often terms will be given the prefix co-. In this article, definitions will be given for chain complexes when the distinction is not required.
A bounded chain complex is one in which almost all the An are 0; that is, a finite complex extended to the left and right by 0. An example is the chain complex defining the simplicial homology of a finite simplicial complex. A chain complex is bounded above if all modules above some fixed degree N are 0, and is bounded below if all modules below some fixed degree are 0. Clearly, a complex is bounded both above and below if and only if the complex is bounded.
The elements of the individual groups of a (co)chain complex are called (co)chains. The elements in the kernel of d are called (co)cycles (or closed elements), and the elements in the image of d are called (co)boundaries (or exact elements). Right from the definition of the differential, all boundaries are cycles. The n-th (co)homology group Hn (Hn) is the group of (co)cycles modulo (co)boundaries in degree n, that is,
{\displaystyle H_{n}=\ker d_{n}/{\mbox{im }}d_{n+1}\quad \left(H^{n}=\ker d^{n}/{\mbox{im }}d^{n-1}\right)}
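Over a field, these homology groups reduce to dimension counts: b_n = dim ker d_n − dim im d_{n+1}. The following sketch (mine, not from the article) computes these Betti numbers for a small complex of rational vector spaces by Gaussian elimination:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    cols = len(m[0]) if m else 0
    for c in range(cols):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def betti(dims, boundaries):
    """Betti numbers b_n = dim ker d_n - rank d_{n+1} of a bounded complex.

    dims[n] = dim A_n; boundaries[n] = matrix of d_n : A_n -> A_{n-1}.
    """
    b = []
    for n, dim in enumerate(dims):
        rk_dn = rank(boundaries[n]) if n in boundaries else 0
        rk_next = rank(boundaries[n + 1]) if n + 1 in boundaries else 0
        b.append((dim - rk_dn) - rk_next)
    return b

# Simplicial chain complex of a hollow triangle (a circle):
# 3 vertices, 3 edges; d_1 sends edge [v_i, v_j] to v_j - v_i.
d1 = [[-1,  0,  1],
      [ 1, -1,  0],
      [ 0,  1, -1]]
print(betti([3, 3], {1: d1}))  # [1, 1]: one component, one loop
```

The answer [1, 1] matches the singular homology of the circle, as expected.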
An exact sequence (or exact complex) is a chain complex whose homology groups are all zero. This means all closed elements in the complex are exact. A short exact sequence is a bounded exact sequence in which only the groups Ak, Ak+1, Ak+2 may be nonzero. For example, the following chain complex is a short exact sequence.
{\displaystyle \cdots {\xrightarrow {}}\;0\;{\xrightarrow {}}\;\mathbf {Z} \;{\xrightarrow {\times p}}\;\mathbf {Z} \twoheadrightarrow \mathbf {Z} /p\mathbf {Z} \;{\xrightarrow {}}\;0\;{\xrightarrow {}}\cdots }
In the middle group, the closed elements are the elements pZ; these are clearly the exact elements in this group.
Chain maps
A chain map f between two chain complexes
{\displaystyle (A_{\bullet },d_{A,\bullet })}
{\displaystyle (B_{\bullet },d_{B,\bullet })}
is a sequence
{\displaystyle f_{\bullet }}
of homomorphisms
{\displaystyle f_{n}:A_{n}\rightarrow B_{n}}
for each n that commutes with the boundary operators on the two chain complexes, so
{\displaystyle d_{B,n}\circ f_{n}=f_{n-1}\circ d_{A,n}}
. This is written out in the following commutative diagram.
A chain map sends cycles to cycles and boundaries to boundaries, and thus induces a map on homology
{\displaystyle (f_{\bullet })_{*}:H_{\bullet }(A_{\bullet },d_{A,\bullet })\rightarrow H_{\bullet }(B_{\bullet },d_{B,\bullet })}
A continuous map f between topological spaces X and Y induces a chain map between the singular chain complexes of X and Y, and hence induces a map f* between the singular homology of X and Y as well. When X and Y are both equal to the n-sphere, the map induced on homology defines the degree of the map f.
The concept of chain map reduces to the one of boundary through the construction of the cone of a chain map.
Chain homotopy
See also: Homotopy category of chain complexes
A chain homotopy offers a way to relate two chain maps that induce the same map on homology groups, even though the maps may be different. Given two chain complexes A and B, and two chain maps f, g : A → B, a chain homotopy is a sequence of homomorphisms hn : An → Bn+1 such that hdA + dBh = f − g. The maps may be written out in a diagram as follows, but this diagram is not commutative.
The map hdA + dBh is easily verified to induce the zero map on homology, for any h. It immediately follows that f and g induce the same map on homology. One says f and g are chain homotopic (or simply homotopic), and this property defines an equivalence relation between chain maps.
Let X and Y be topological spaces. In the case of singular homology, a homotopy between continuous maps f, g : X → Y induces a chain homotopy between the chain maps corresponding to f and g. This shows that two homotopic maps induce the same map on singular homology. The name "chain homotopy" is motivated by this example.
Singular homology
Main article: Singular homology
Let X be a topological space. Define Cn(X) for natural n to be the free abelian group formally generated by singular n-simplices in X, and define the boundary map
{\displaystyle \partial _{n}:C_{n}(X)\to C_{n-1}(X)}
{\displaystyle \partial _{n}:\,(\sigma :[v_{0},\ldots ,v_{n}]\to X)\mapsto (\sum _{i=0}^{n}(-1)^{i}\sigma :[v_{0},\ldots ,{\hat {v}}_{i},\ldots ,v_{n}]\to X)}
where the hat denotes the omission of a vertex. That is, the boundary of a singular simplex is the alternating sum of restrictions to its faces. It can be shown that ∂2 = 0, so
{\displaystyle (C_{\bullet },\partial _{\bullet })}
is a chain complex; the singular homology
{\displaystyle H_{\bullet }(X)}
is the homology of this complex.
Singular homology is a useful invariant of topological spaces up to homotopy equivalence. The degree zero homology group is a free abelian group on the path-components of X.
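The identity ∂² = 0 can be checked formally on the vertex symbols alone, independent of the space X. A small sketch (my own, with illustrative names): each face of a face is omitted twice, with opposite signs, so everything cancels.

```python
from collections import Counter

def boundary(simplex):
    """Formal boundary of a simplex (v0, ..., vn): alternating sum of faces."""
    chain = Counter()
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        chain[face] += (-1) ** i
    return chain

def boundary_of_chain(chain):
    """Extend the boundary linearly to formal integer combinations."""
    out = Counter()
    for simplex, coeff in chain.items():
        for face, c in boundary(simplex).items():
            out[face] += coeff * c
    return out

# Apply the boundary twice to a 3-simplex: every coefficient vanishes.
dd = boundary_of_chain(boundary(('v0', 'v1', 'v2', 'v3')))
print(all(c == 0 for c in dd.values()))  # True: d o d = 0
```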
de Rham cohomology
Main article: de Rham cohomology
The differential k-forms on any smooth manifold M form a real vector space called Ωk(M) under addition. The exterior derivative d maps Ωk(M) to Ωk+1(M), and d2 = 0 follows essentially from symmetry of second derivatives, so the vector spaces of k-forms along with the exterior derivative are a cochain complex.
{\displaystyle \Omega ^{0}(M)\ {\stackrel {d}{\to }}\ \Omega ^{1}(M)\to \Omega ^{2}(M)\to \Omega ^{3}(M)\to \cdots }
The cohomology of this complex is called the de Rham cohomology of X. The homology group in dimension zero is isomorphic to the vector space of locally constant functions from M to R. Thus for a compact manifold, this is the real vector space whose dimension is the number of connected components of M.
Smooth maps between manifolds induce chain maps, and smooth homotopies between maps induce chain homotopies.
Category of chain complexes
Chain complexes of K-modules with chain maps form a category ChK, where K is a commutative ring.
If {\displaystyle V=V_{*}} and {\displaystyle W=W_{*}} are chain complexes, their tensor product {\displaystyle V\otimes W} is a chain complex with degree n elements given by
{\displaystyle (V\otimes W)_{n}=\bigoplus _{\{i,j|i+j=n\}}V_{i}\otimes W_{j}}
and differential given by
{\displaystyle \partial (a\otimes b)=\partial a\otimes b+(-1)^{\left|a\right|}a\otimes \partial b}
where a and b are any two homogeneous vectors in V and W respectively, and
{\displaystyle \left|a\right|}
denotes the degree of a.
This tensor product makes the category ChK into a symmetric monoidal category. The identity object with respect to this monoidal product is the base ring K viewed as a chain complex in degree 0. The braiding is given on simple tensors of homogeneous elements by
{\displaystyle a\otimes b\mapsto (-1)^{\left|a\right|\left|b\right|}b\otimes a}
The sign is necessary for the braiding to be a chain map.
Moreover, the category of chain complexes of K-modules also has internal Hom: given chain complexes V and W, the internal Hom of V and W, denoted Hom(V,W), is the chain complex with degree n elements given by
{\displaystyle \Pi _{i}{\text{Hom}}_{K}(V_{i},W_{i+n})}
and differential given by
{\displaystyle (\partial f)(v)=\partial (f(v))-(-1)^{\left|f\right|}f(\partial (v))}
We have a natural isomorphism
{\displaystyle {\text{Hom}}(A\otimes B,C)\cong {\text{Hom}}(A,{\text{Hom}}(B,C))}
Amitsur complex
A complex used to define Bloch's higher Chow groups
Buchsbaum–Rim complex
Cousin complex
Eagon–Northcott complex
Gersten complex
Graph complex[1]
Schur complex
Differential graded Lie algebra
Dold–Kan correspondence says there is an equivalence between the category of chain complexes and the category of simplicial abelian groups.
Buchsbaum–Eisenbud acyclicity criterion
^ "Graph complex".
Bott, Raoul; Tu, Loring W. (1982), Differential Forms in Algebraic Topology, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90613-3
Hatcher, Allen (2002). Algebraic Topology. Cambridge: Cambridge University Press. ISBN 0-521-79540-0.
|
Unexpected Solutions of the Nehari Problem
A. Gritsans, F. Sadyrbaev, "Unexpected Solutions of the Nehari Problem", International Journal of Analysis, vol. 2014, Article ID 467831, 5 pages, 2014. https://doi.org/10.1155/2014/467831
A. Gritsans1 and F. Sadyrbaev2
1University of Daugavpils, Parades Street 1, Daugavpils, LV-5400, Latvia
2Institute of Mathematics and Computer Science, University of Latvia, Raynis Boulevard 29, Riga, LV-1459, Latvia
Academic Editor: Luc Molinet
The Nehari characteristic numbers are the minimal values of an integral functional associated with a boundary value problem (BVP) for a nonlinear ordinary differential equation. In case of multiple solutions of the BVP, the problem of identifying minimizers arises. It was observed earlier that for nonoscillatory (positive) solutions of the BVP, those with asymmetric shape can provide the minimal value of the functional. At the same time, an even solution with regular shape is not a minimizer. We show by constructing an example that the same phenomenon can be observed in the Nehari problem for the fifth characteristic number, which is associated with oscillatory solutions of the BVP (namely, with those having exactly four zeros in the open interval).
The variational theory of eigenvalues in Sturm-Liouville problems for linear ordinary differential equations provides variational interpretation of eigenvalues which emerge as minima of some quadratic functionals being considered with certain restrictions [1].
As to nonlinear boundary value problems for ordinary differential equations, the Nehari theory of characteristic values provides some analogue of the linear theory. The Nehari theory deals in particular with superlinear differential equations of the form The Nehari numbers , by definition, are minimal values of the functional over the set of all functions , which are continuous and piecewise continuously differentiable in ; there exist numbers such that and in any ; in any , and It was proved in [2] (see also [3]) that minimizers in the above variational problem are -solutions of the boundary value problem Putting (3) into (2) one gets where is an appropriate solution of the BVP (4). The BVP (4) may have multiple solutions but not all of them are minimizers.
It appears that in order to detect it is sufficient to consider solutions of the boundary value problem (4).
Z. Nehari posed the question of whether there is only one minimizer associated with each characteristic number. It was shown implicitly in [4] that there may be multiple minimizers associated with the first characteristic number . These minimizers are positive solutions of the problem (4).
Later, in the work by the authors [5], an example was constructed showing three solutions of the BVP (4). They are depicted in Figure 1.
Three solutions of the BVP from [5].
Two of these solutions are asymmetric and one is an even function. Surprisingly, two asymmetric solutions are the minimizers.
The same phenomenon was observed later by Kajikiya [6] who studied “the Emden-Fowler equation whose coefficient is even in the interval under the Dirichlet boundary condition.” It was proved that if the density of the coefficient function is thin in the interior of and thick on the boundary, then a least energy solution is not even. Therefore, the equation has at least three positive solutions: the first one is even, the second one is a non-even least energy solution , and the third one is the reflection . Similar phenomena were discussed in [7, 8].
In this note, we confirm this phenomenon for the characteristic value . Solutions for this characteristic value have exactly four zeros in an open interval . We construct the example and provide all the calculations and visualizations.
The problem of finding characteristic values is called the Nehari problem. A function that supplies minimal value in the Nehari problem will be called the Nehari solution. Nehari solutions associated with the equation, the interval , and the number , all are solutions of the respective boundary value problem (4).
In our constructions below we use the auxiliary functions called the lemniscatic sine and cosine . These functions can be introduced as solutions of the Cauchy problems, respectively, as follows: for the equation . These functions are much like the usual sine and cosine functions, but they are not the derivatives of each other. Instead the following holds: A complete list of formulas for the lemniscatic functions can be found in [9]. The lemniscatic functions can be handled symbolically by the Wolfram Mathematica program using the representation in terms of the built-in Jacobi functions.
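The defining Cauchy problems can be checked numerically. Below is a minimal pure-Python sketch, assuming the standard normalisation in which the lemniscatic sine solves x'' = -2x³ with x(0) = 0, x'(0) = 1, so that the first integral (x')² + x⁴ = 1 holds along solutions (the function names are illustrative, not the authors' notation):

```python
# Numerical sketch of the lemniscatic sine sl(t), assumed here to solve
# the Cauchy problem x'' = -2 x^3, x(0) = 0, x'(0) = 1 (standard definition).
# Along solutions the first integral (x')^2 + x^4 = 1 is conserved,
# which we use as a correctness check.

def lemniscatic_sine(t, steps=10000):
    """Integrate x'' = -2 x^3 with fixed-step RK4; return (x(t), x'(t))."""
    x, v = 0.0, 1.0
    h = t / steps
    f = lambda y: -2.0 * y ** 3  # acceleration
    for _ in range(steps):
        k1x, k1v = v, f(x)
        k2x, k2v = v + h / 2 * k1v, f(x + h / 2 * k1x)
        k3x, k3v = v + h / 2 * k2v, f(x + h / 2 * k2x)
        k4x, k4v = v + h * k3v, f(x + h * k3x)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x, v

if __name__ == "__main__":
    x, v = lemniscatic_sine(1.0)
    print(v * v + x ** 4)  # should stay very close to 1 (first integral)
```

With this normalisation the maximum value 1 is attained at half the lemniscate constant, t ≈ 1.3110, which the integrator reproduces.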
3. Construction of the Equation
Consider the interval . Define the piecewise linear function where and is a selected number. Define . The function (depicted in Figure 2) is a U-shaped function, “thin” in the middle of the interval .
The function in (9) for and .
Consider equation together with the boundary conditions
Consider the Cauchy problems Let be a solution of (11) in , a solution of (12) in . Then, the function is a -solution of the problem (9) and the problem (10) if the continuity and smoothness conditions are satisfied. The problems (11) and (12) can be solved explicitly as where The derivatives can be computed as In order to get an explicit formula for a solution of the BVP (9) and (10), one has to solve a system of two equations with respect to : This system after replacements and simplifications is In new variables and , the system takes the form Notice that if a solution of the system (21) is known, then a solution of the BVP (9) and (10) can be constructed such that If we introduce the functions the system (21) can be rewritten as
Zeros of the functions and in the square are depicted in Figure 3. Notice that a set of zeros of consists of the diagonal and the “wings.”
Zeros of (solid line) and (dashed line), .
The intersection points of smaller “hat” with the zero set of reflect three solutions of the problem (9) and (10) depicted in Figure 1. The cross-point on the bisectrix relates to the even solution, and two symmetric cross-points on the “wings” relate to the remaining two solutions of asymmetric shape. The latter two solutions are “unexpected” minimizers (or, as in [6], “non-even least energy solutions”).
It was proved in [5, Proposition 1] that for sufficiently large there are exactly three cross-points on a smaller “hat” (probably, for any ). The similar proof can be conducted for the middle “hat” in Figure 3. There are exactly three cross-points (and exactly three solutions of the system (24)) that give rise to solutions of the problem (9) and the problem (10) that have exactly two zeros in the interval . The respective values of the Nehari functional (5) were calculated and the result is the same: the even solution supplies , the two solutions of asymmetric shape supply the value .
Therefore, we confirm the phenomenon observed in [5, 6] also for oscillatory (with exactly two zeros in ) solutions.
3.1. Nehari Solutions with Four Internal Zeros
We study in detail the case of the Nehari characteristic number . Related solutions of the boundary value problem have exactly four zeros in the interval . Solving the system (24) on the third (counting from the origin) “hat” provides us with values The respective solutions are known analytically through (15) and (16) and can be computed numerically. The second way yields the three graphs depicted in Figures 4, 5 and 6.
In order to detect the Nehari solutions, we compute the expression in (5) for the three solutions depicted in Figure 4 to Figure 6. Recall that , , and .
Let be the numerical value of the above expression for solutions defined by the initial data , , .
One gets that . The integral over the even solution is . The Nehari solutions are those of asymmetric shape.
R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. 1, Interscience, New York, NY, USA, 1953.
F. Sadyrbaev, “Solutions of an equation of Emden-Fowler type,” Differential Equations, vol. 25, no. 5, pp. 560–565, 1989.
A. Gritsans and F. Sadyrbaev, “Characteristic numbers of non-autonomous Emden-Fowler type equations,” Mathematical Modelling and Analysis, vol. 11, no. 3, pp. 243–252, 2006.
R. Kajikiya, “Non-even least energy solutions of the Emden-Fowler equation,” Proceedings of the American Mathematical Society, vol. 140, no. 4, pp. 1353–1362, 2012.
R. Kajikiya, “Least energy solutions of the generalized Hénon equation in reflectionally symmetric or point symmetric domains,” Journal of Differential Equations, vol. 253, no. 5, pp. 1621–1646, 2012.
R. Kajikiya, “Non-even positive solutions of the one-dimensional p-Laplace Emden-Fowler equation,” Applied Mathematics Letters, vol. 25, no. 11, pp. 1891–1895, 2012.
A. Gritsans and F. Sadyrbaev, “Lemniscatic functions in the theory of the Emden-Fowler differential equation,” Mathematics. Differential Equations, vol. 3, pp. 5–27, 2003.
Copyright © 2014 A. Gritsans and F. Sadyrbaev. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
Solar-like oscillations - Wikipedia
Solar-like oscillations are oscillations in distant stars that are excited in the same way as those in the Sun, namely by turbulent convection in its outer layers. Stars that show solar-like oscillations are called solar-like oscillators. The oscillations are standing pressure and mixed pressure-gravity modes that are excited over a range in frequency, with the amplitudes roughly following a bell-shaped distribution. Unlike opacity-driven oscillators, all the modes in the frequency range are excited, making the oscillations relatively easy to identify. The surface convection also damps the modes, and each is well-approximated in frequency space by a Lorentzian curve, the width of which corresponds to the lifetime of the mode: the faster it decays, the broader is the Lorentzian. All stars with surface convection zones are expected to show solar-like oscillations, including cool main-sequence stars (up to surface temperatures of about 7000K), subgiants and red giants. Because of the small amplitudes of the oscillations, their study has advanced tremendously thanks to space-based missions[1] (mainly COROT and Kepler).
Solar-like oscillations have been used, among other things, to precisely determine the masses and radii of planet-hosting stars and thus improve the measurements of the planets' masses and radii.[2][3]
Echelle diagrams
An echelle diagram for the Sun, using data for low-angular-degree modes from the Birmingham Solar Oscillations Network (BiSON).[7][8] Modes of the same angular degree ℓ form roughly vertical lines at high frequencies, as expected from the asymptotic behaviour of the mode frequencies.
The peak of the oscillation power roughly corresponds to lower frequencies and radial orders for larger stars. For the Sun, the highest amplitude modes occur around a frequency of 3 mHz with order n_max ≈ 20, and no mixed modes are observed. For more massive and more evolved stars, the modes are of lower radial order and overall lower frequencies. Mixed modes can be seen in the evolved stars. In principle, such mixed modes may also be present in main-sequence stars but they are at too low frequency to be excited to observable amplitudes. High-order pressure modes of a given angular degree ℓ are expected to be roughly evenly spaced in frequency, with a characteristic spacing known as the large separation Δν.[9] This motivates the echelle diagram, in which the mode frequencies are plotted as a function of the frequency modulo the large separation, and modes of a particular angular degree form roughly vertical ridges.
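The construction of an echelle diagram is simple to sketch in code. The frequencies below are illustrative, not BiSON data: for a perfectly evenly spaced ridge, all points share one abscissa.

```python
# Sketch of building an echelle diagram: plot each mode frequency nu
# against nu modulo the large separation Dnu. Perfectly evenly spaced
# modes of one degree then share a single abscissa (a vertical ridge).

def echelle_coords(frequencies, dnu):
    """Return (nu mod dnu, nu) pairs for an echelle diagram."""
    return [(nu % dnu, nu) for nu in frequencies]

if __name__ == "__main__":
    dnu = 135.0  # large separation in microhertz (illustrative)
    # an idealised l=0 ridge: nu_n = (n + offset) * dnu
    ridge = [(n + 0.3) * dnu for n in range(15, 25)]
    coords = echelle_coords(ridge, dnu)
    xs = {round(x, 6) for x, _ in coords}
    print(xs)  # a single abscissa -> one vertical ridge
```

Real mode frequencies deviate slightly from exact even spacing, so observed ridges curve rather than being perfectly vertical.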
Scaling relations
The frequency of maximum oscillation power, ν_max, is known to be roughly proportional to the surface gravity divided by the square root of the effective temperature:

{\displaystyle \nu _{\mathrm {max} }\propto {\frac {g}{\sqrt {T_{\mathrm {eff} }}}}}

Similarly, the large frequency separation

{\displaystyle \Delta \nu }

is known to be roughly proportional to the square root of the mean density:

{\displaystyle \Delta \nu \propto {\sqrt {\frac {M}{R^{3}}}}}

Combining these two relations (using g ∝ M/R²) gives scaling relations for the stellar mass and radius:

{\displaystyle M\propto {\frac {\nu _{\mathrm {max} }^{3}}{\Delta \nu ^{4}}}T_{\mathrm {eff} }^{3/2}}

{\displaystyle R\propto {\frac {\nu _{\mathrm {max} }}{\Delta \nu ^{2}}}T_{\mathrm {eff} }^{1/2}}

Equivalently, if one knows the star's luminosity, then the temperature can be replaced via the blackbody luminosity relationship

{\displaystyle L\propto R^{2}T_{\mathrm {eff} }^{4}}

which gives

{\displaystyle M\propto {\frac {\nu _{\mathrm {max} }^{12/5}}{\Delta \nu ^{14/5}}}L^{3/10}}

{\displaystyle R\propto {\frac {\nu _{\mathrm {max} }^{4/5}}{\Delta \nu ^{8/5}}}L^{1/10}}
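These scaling relations can be evaluated in solar units. A minimal sketch follows; the solar reference values below are commonly used but differ slightly between papers, so they are an assumption of this example:

```python
# Sketch of the asteroseismic scaling relations in solar units:
#   M/Msun = (numax/numax_sun)^3 * (dnu/dnu_sun)^-4 * (Teff/Teff_sun)^(3/2)
#   R/Rsun = (numax/numax_sun)   * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^(1/2)
# Solar reference values (assumed, commonly used):

NUMAX_SUN = 3090.0  # muHz
DNU_SUN = 135.1     # muHz
TEFF_SUN = 5777.0   # K

def scaling_mass_radius(numax, dnu, teff):
    """Return (M, R) in solar units from the direct scaling relations."""
    nu = numax / NUMAX_SUN
    d = dnu / DNU_SUN
    t = teff / TEFF_SUN
    mass = nu ** 3 * d ** -4 * t ** 1.5
    radius = nu * d ** -2 * t ** 0.5
    return mass, radius

if __name__ == "__main__":
    # Solar inputs recover M = R = 1 by construction; a red-giant-like
    # input (numax ~ 30 muHz, dnu ~ 4 muHz) gives a radius of ~10 Rsun.
    print(scaling_mass_radius(3090.0, 135.1, 5777.0))
    print(scaling_mass_radius(30.0, 4.0, 4800.0))
```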
Some bright solar-like oscillators
^ Chaplin, W. J.; Miglio, A. (2013). "Asteroseismology of Solar-Type and Red-Giant Stars". Annual Review of Astronomy and Astrophysics. 51 (1): 353–392. arXiv:1303.1957. Bibcode:2013ARA&A..51..353C. doi:10.1146/annurev-astro-082812-140938. S2CID 119222611.
^ Davies, G. R.; et al. (2016). "Oscillation frequencies for 35 Kepler solar-type planet-hosting stars using Bayesian techniques and machine learning". Monthly Notices of the Royal Astronomical Society. 456 (2): 2183–2195. arXiv:1511.02105. Bibcode:2016MNRAS.456.2183D. doi:10.1093/mnras/stv2593.
^ Silva Aguirre, V.; et al. (2015). "Ages and fundamental properties of Kepler exoplanet host stars from asteroseismology". Monthly Notices of the Royal Astronomical Society. 452 (2): 2127–2148. arXiv:1504.07992. Bibcode:2015MNRAS.452.2127S. doi:10.1093/mnras/stv1388.
^ Bedding, Timothy R.; et al. (2011). "Gravity modes as a way to distinguish between hydrogen- and helium-burning red giant stars". Nature. 471 (7340): 608–11. arXiv:1103.5805. Bibcode:2011Natur.471..608B. doi:10.1038/nature09935. PMID 21455175. S2CID 4338871.
^ Beck, Paul G.; et al. (2012). "Fast core rotation in red-giant stars as revealed by gravity-dominated mixed modes". Nature. 481 (7379): 55–7. arXiv:1112.2825. Bibcode:2012Natur.481...55B. doi:10.1038/nature10612. PMID 22158105. S2CID 4310747.
^ Fuller, J.; Cantiello, M.; Stello, D.; Garcia, R. A.; Bildsten, L. (2015). "Asteroseismology can reveal strong internal magnetic fields in red giant stars". Science. 350 (6259): 423–426. arXiv:1510.06960. Bibcode:2015Sci...350..423F. doi:10.1126/science.aac6933. PMID 26494754. S2CID 17161151.
^ Broomhall, A.-M.; et al. (2009). "Definitive Sun-as-a-star p-mode frequencies: 23 years of BiSON observations". Monthly Notices of the Royal Astronomical Society. 396 (1): L100–L104. arXiv:0903.5219. Bibcode:2009MNRAS.396L.100B. doi:10.1111/j.1745-3933.2009.00672.x. S2CID 18297150.
^ Davies, G. R.; Chaplin, W. J.; Elsworth, Y.; Hale, S. J. (2014). "BiSON data preparation: a correction for differential extinction and the weighted averaging of contemporaneous data". Monthly Notices of the Royal Astronomical Society. 441 (4): 3009–3017. arXiv:1405.0160. Bibcode:2014MNRAS.441.3009D. doi:10.1093/mnras/stu803.
^ Tassoul, M. (1980). "Asymptotic approximations for stellar nonradial pulsations". The Astrophysical Journal Supplement Series. 43: 469. Bibcode:1980ApJS...43..469T. doi:10.1086/190678.
^ Kjeldsen, H.; Bedding, T. R. (1995). "Amplitudes of stellar oscillations: the implications for asteroseismology". Astronomy and Astrophysics. 293: 87. arXiv:astro-ph/9403015. Bibcode:1995A&A...293...87K.
|
Global Constraint Catalog: dominating_queens
Related constraint: nvalue.
A constraint that can be used for modelling the dominating queens problem. Place a number of queens on an n×n chessboard in such a way that all squares are either attacked by a queen or are occupied by a queen. A queen can attack all squares located on the same column, on the same row or on the same diagonal. Values of the minimum number of queens for n less than or equal to 120 are reported in [OstergardWeakley01]. Most of them are in fact equal to either ⌊\frac{n+1}{2}⌋ or ⌊\frac{n+1}{2}⌋+1. The values n=3 and n=11 are the only two values below 120 for which the previous assertion is not true, since in these two cases we only need ⌊\frac{n}{2}⌋ queens.
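The domination condition itself is easy to check by brute force. A small sketch (function names are illustrative, not catalog notation):

```python
# Sketch: check whether a set of queens dominates an n x n board,
# i.e. every square is occupied or attacked. Squares are (row, col), 0-based.

def attacks(q, s):
    """True if a queen on square q occupies or attacks square s."""
    dr, dc = s[0] - q[0], s[1] - q[1]
    return dr == 0 or dc == 0 or abs(dr) == abs(dc)

def is_dominating(queens, n):
    """True if every square of the n x n board is occupied or attacked."""
    return all(any(attacks(q, (r, c)) for q in queens)
               for r in range(n) for c in range(n))

if __name__ == "__main__":
    # n = 3 needs only floor(3/2) = 1 queen: the centre square dominates.
    print(is_dominating([(1, 1)], 3))   # True
    print(is_dominating([(0, 0)], 3))   # False: (1, 2) is not attacked
```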
|
Global Constraint Catalog: floor_planning_problem
Related constraints: diffn, geost, lex_chain_greater, lex_chain_less.
A constraint that can be used for the floor planning problem. The floor planning problem [Pfefferkorn75], [Tong87], [Maculet91], [Charman95], [Medjdoub96] involves various types of spaces, such as the placement space itself (i.e., the floor), the rooms to place within the placement space, and the circulation between the rooms. The placement space can be located on a single level or on several levels. Very often the placement space corresponds to a single rectangle and all rooms are rectangles with their borders parallel to the contour of the placement space. Circulation typically corresponds to corridors or stairs that respectively allow access from one room to another room or from one level to another level. Within the context of floor planning three main classes of constraints have been identified [MedjdoubYannou00], namely dimensional, topological, and implicit constraints:
A dimensional constraint usually restricts the length, the width or the surface of a single space. Ratio constraints enforce aesthetic proportions between the length and the width of a single space or constraint the surfaces of two closely related spaces such as the toilets and the shower. Dimensional constraints can be expressed by reducing the domain of some variable or by stating some arithmetic constraints between two variables.
A topological constraint imposes a condition between two spaces. Typical topological constraints are:
Adjacency constraints with a minimum contact between a room and a corridor or another room allow expressing that there must be enough place to put a door between two given spaces. In the context of staircases one has to enforce the fact that the first and last stairs are completely accessible. When a corridor is made up from two parts, one also has to enforce that the two parts are fully in contact.
Adjacency with the contour constraints between a room and a specified (or not) side of the contour allow expressing the orientation of a room (or just that a room must have some window).
Relative positioning constraints between two specified rooms allow for instance expressing the fact that a room is located to the north of another room.
Minimum and maximum distance constraints between two rooms allow expressing the proximity between two given rooms.
Topological constraints occur naturally in the preliminary design phase in architecture and can typically be expressed by using reified or global constraints.
An implicit constraint puts a global condition that is inherent to floor planning problems between all the spaces of the floor. We typically have:
Inclusion of each room and circulation within the contour.
Partitioning of the placement space (i.e., no wasted space is permitted). This is usually a hard constraint which requires specific propagation in order to prevent the creation of wasted space.
Non-overlapping between rooms.
Symmetry breaking constraints between identical rooms imposes for instance a lexicographic order between their respective lower leftmost corners.
Such constraints can typically be expressed by using global constraints, such as diffn, geost and lex_chain_less.
Finally, in order to allocate as much surface as possible to the rooms, one wants sometimes to minimise the total circulation area between the different rooms.
Figure 3.7.27. A solution to Maculet floor planning problem which minimises the total area of the corridors
In order to illustrate these constraints we now consider an example of a floor planning problem taken from R. Maculet's PhD thesis [Maculet91] involving 11 spaces. Constraints on the dimensions of these spaces are:
The floor where to place everything has a size of 12 by 10 meters.
The living has a surface between 33 and 42 square meters and a minimum size of 4 by 4.
The kitchen has a surface between 9 and 15 square meters and a minimum size of 3 by 3.
The shower has a surface between 6 and 9 square meters and a minimum size of 2 by 2.
The toilet has a surface between 1 and 2 square meters and a minimum size of 1 by 1.
The first and second parts of the corridor have both a surface between 1 and 12 square meters and a minimum size of 1 by 1.
The first, second and third rooms have all a surface between 11 and 15 square meters and a minimum size of 3 by 3.
The fourth room has a surface between 15 and 20 square meters and a minimum size of 3 by 3.
Topological constraints between spaces are:
The living is located on the south-west contour. The kitchen, the first, second and third rooms are either located on the south or on the north contour. The fourth room is on the south contour.
All spaces, except the kitchen, are adjacent to one of the corridors with at least 1 meter of full contact.
The kitchen is adjacent to the living and to the shower.
The toilet is adjacent to the kitchen or to the shower.
The first and the second parts of the corridor are adjacent and fully in contact.
Finally no wasted space is permitted. Figure 3.7.27 presents a solution to the corresponding floor planning problem that minimises the area of the two corridors.
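Two of the conditions above, non-overlapping and adjacency with a minimum length of contact, can be sketched as predicates on axis-aligned rectangles. The representation (x, y, w, h) and the function names are assumptions of this sketch, not catalog notation:

```python
# Sketch of two core floor-planning conditions on axis-aligned rectangles
# given as (x, y, w, h): non-overlapping, and adjacency with a minimum
# length of full contact (e.g. enough room for a door).

def no_overlap(a, b):
    """True if rectangles a and b have no interior point in common."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax + aw <= bx or bx + bw <= ax or ay + ah <= by or by + bh <= ay

def adjacent_min_contact(a, b, m):
    """True if a and b share an edge segment of length at least m."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    if ax + aw == bx or bx + bw == ax:      # shared vertical edge
        return min(ay + ah, by + bh) - max(ay, by) >= m
    if ay + ah == by or by + bh == ay:      # shared horizontal edge
        return min(ax + aw, bx + bw) - max(ax, bx) >= m
    return False

if __name__ == "__main__":
    room = (0, 0, 4, 4)
    corridor = (4, 0, 1, 6)
    print(no_overlap(room, corridor))                 # True (they only touch)
    print(adjacent_min_contact(room, corridor, 1))    # True (4 m of contact)
```

In a constraint solver these conditions would be posted through diffn/geost and reified adjacency constraints rather than checked after the fact; the predicates only illustrate the geometry.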
|
Maple 2017 includes the following enhancements to Maple language and programming facilities.
Improvements to Debugging Facilities
The function showsource (and the corresponding debugger command of the same name) takes a procedure name and statement number or range thereof, and displays the source line(s) corresponding to the specified statement(s). The lines are displayed preceded by one or two numbers, the first being the source line number, and the second being the statement number if the source line corresponds to a statement. See showsource.
The existing function and debugger command, stopat, can be used to set breakpoints based on source line numbers by calling it in the form, stopat(filePathString,lineNums), where filePathString is a string (instead of a procedure name). See stopat.
Note that to do source level debugging, you must have the source. You cannot do source debugging on libraries provided only in .mla format (such as the libraries that ship with Maple).
Inspecting Variables on the Maple Stack
The inspect debugger command is a general purpose command for inspecting the current state of the procedure activation stack. It can be used to inspect parameters, local variables, and the code of currently active procedures. See the inspect command in debugger.
In support of the inspect command, the output of the where debugger command (which shows the activation stack) now has numbered separating lines between the displayed stack levels.
Counted Breakpoints
The stopat function and debugger command can now be passed an integer (N) or range of integers (N..M) for its optional condition argument. This specifies that the debugger should be invoked only the Nth (through Mth) time that the specified statement is executed. See stopat.
Depth-limited Tracing
The trace function now takes an optional keyword argument, depth=N, where N is an integer, and limits tracing to N levels of invocation of the specified procedure. This makes it easier to trace highly recursive procedures when only the top few levels are of interest.
The output of showstat now respects the settings of interface(indentamount) and interface(longdelim).
The index command constructs an indexed expression. The calling sequence index(f, x) is equivalent to constructing the expression f[x]:

index( f, x );

f[x]
If the first argument, f, is an indexable expression, then the index command returns the result of indexing the first argument by the second argument:

index( [a, b, c ], 2 );

b
MTM:-unwrap
The MTM:-unwrap command modifies phase angles in radians in the array M by adding integer multiples of 2*Pi to ensure the difference between consecutive elements in the result is less than Pi.

A := <3.8699, 0.62832, -1.2566, 0>;

A := [3.8699, 0.62832, -1.2566, 0]  (a column Vector)

MTM:-unwrap( A );

[3.8699, 6.911505308, 5.026585308, 6.283185308]  (a column Vector)
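The unwrapping rule itself can be sketched in pure Python (an illustration of the algorithm described above, not the Maple implementation); on the example vector it reproduces the output shown:

```python
import math

# Pure-Python sketch of phase unwrapping: add integer multiples of 2*pi
# so that each consecutive difference falls within (-pi, pi].

def unwrap(phases):
    out = [phases[0]]
    correction = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        # number of whole turns needed to shift d into (-pi, pi]
        turns = math.floor((math.pi - d) / (2 * math.pi))
        correction += turns * 2 * math.pi
        out.append(cur + correction)
    return out

if __name__ == "__main__":
    A = [3.8699, 0.62832, -1.2566, 0.0]
    print(unwrap(A))  # matches the Maple result above
```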
The verify/wildcard command verifies a relation between two expressions, independent of variable names.
For example, the following returns true because both expressions contain a single name and substituting y for x in the first expression yields the second:
verify(x^2-x, y^2-y, wildcard)
true
andmap and ormap
Maple uses three-valued logic (true, false, FAIL) for its Boolean operators. The commands andmap and ormap now use the same three-valued logic. For example, the following returns FAIL because true and FAIL evaluates to FAIL.
andmap(x->evalb(x<4),[3,I])
FAIL
In prior versions of Maple, this example raised an exception.
Improvements to max and min
When programming with max and min, you now have a new way to handle cases when the function is called with no arguments or only empty container arguments. You can now opt to have max (or min) return NULL in those cases instead of negative (or positive) infinity by using the index nodefault.
min();

∞

min[nodefault]();

(nothing is displayed, since NULL is returned)
Sorting of Data Frames, Data Series, and Objects
You can now sort data frames and data series. For details, see DataFrame/sort and DataSeries/sort.
Additionally, object overloading of the sort function is now supported. For details, see object/builtins.
|
Euler's equations (rigid body dynamics) - Wikipedia
In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with its axes fixed to the body and parallel to the body's principal axes of inertia. Their general form is:
{\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .}
where M is the applied torque, I is the inertia matrix, and ω is the angular velocity about the principal axes.
In three-dimensional principal orthogonal coordinates, they become:
{\displaystyle {\begin{aligned}I_{1}{\dot {\omega }}_{1}+(I_{3}-I_{2})\omega _{2}\omega _{3}&=M_{1}\\I_{2}{\dot {\omega }}_{2}+(I_{1}-I_{3})\omega _{3}\omega _{1}&=M_{2}\\I_{3}{\dot {\omega }}_{3}+(I_{2}-I_{1})\omega _{1}\omega _{2}&=M_{3}\end{aligned}}}
where Mk are the components of the applied torques, Ik are the principal moments of inertia and ωk are the components of the angular velocity about the principal axes.
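The component form can be integrated directly. A minimal sketch with illustrative moments of inertia and a fixed-step RK4 scheme, checking the two quantities conserved in the torque-free case (twice the kinetic energy, Σ I_k ω_k², and the squared angular momentum, Σ (I_k ω_k)²):

```python
# Sketch: integrate the torque-free Euler equations
#   I1 w1' = (I2 - I3) w2 w3  (and cyclic permutations)
# with fixed-step RK4, and check conservation of 2T = sum Ik wk^2
# and L^2 = sum (Ik wk)^2. Inertia values and initial spin are illustrative.

I = (1.0, 2.0, 3.0)

def deriv(w):
    w1, w2, w3 = w
    I1, I2, I3 = I
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, h):
    k1 = deriv(w)
    k2 = deriv(tuple(wi + h / 2 * ki for wi, ki in zip(w, k1)))
    k3 = deriv(tuple(wi + h / 2 * ki for wi, ki in zip(w, k2)))
    k4 = deriv(tuple(wi + h * ki for wi, ki in zip(w, k3)))
    return tuple(wi + h / 6 * (a + 2 * b + 2 * c + d)
                 for wi, a, b, c, d in zip(w, k1, k2, k3, k4))

def invariants(w):
    two_T = sum(Ik * wk ** 2 for Ik, wk in zip(I, w))
    L2 = sum((Ik * wk) ** 2 for Ik, wk in zip(I, w))
    return two_T, L2

if __name__ == "__main__":
    w = (0.1, 2.0, 0.1)  # spin mostly about the intermediate axis
    e0, l0 = invariants(w)
    for _ in range(2000):
        w = rk4_step(w, 0.01)
    e1, l1 = invariants(w)
    print(abs(e1 - e0), abs(l1 - l0))  # both remain very small
```

Spin about the intermediate axis is unstable (the body tumbles), but the two invariants are still conserved, which makes them a good sanity check on the integration.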
Motivation and derivation
Starting from Newton's second law, in an inertial frame of reference (subscripted "in"), the time derivative of the angular momentum L equals the applied torque
{\displaystyle {\frac {d\mathbf {L} _{\text{in}}}{dt}}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {d}{dt}}\left(\mathbf {I} _{\text{in}}{\boldsymbol {\omega }}\right)=\mathbf {M} _{\text{in}}}
where Iin is the moment of inertia tensor calculated in the inertial frame. Although this law is universally true, it is not always helpful in solving for the motion of a general rotating rigid body, since both Iin and ω can change during the motion.
Therefore, we change to a coordinate frame fixed in the rotating body, and chosen so that its axes are aligned with the principal axes of the moment of inertia tensor. In this frame, at least the moment of inertia tensor is constant (and diagonal), which simplifies calculations. As described in the moment of inertia, the angular momentum L can be written
{\displaystyle \mathbf {L} \ {\stackrel {\mathrm {def} }{=}}\ L_{1}\mathbf {e} _{1}+L_{2}\mathbf {e} _{2}+L_{3}\mathbf {e} _{3}=I_{1}\omega _{1}\mathbf {e} _{1}+I_{2}\omega _{2}\mathbf {e} _{2}+I_{3}\omega _{3}\mathbf {e} _{3}}
where Mk, Ik and ωk are as above.
In a rotating reference frame, the time derivative must be replaced with (see time derivative in rotating reference frame)
{\displaystyle \left({\frac {d\mathbf {L} }{dt}}\right)_{\mathrm {rot} }+{\boldsymbol {\omega }}\times \mathbf {L} =\mathbf {M} }
where the subscript "rot" indicates that it is taken in the rotating reference frame. The expressions for the torque in the rotating and inertial frames are related by
{\displaystyle \mathbf {M} _{\text{in}}=\mathbf {Q} \mathbf {M} ,}
where Q is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by
{\displaystyle {\boldsymbol {\omega }}\times {\boldsymbol {v}}={\dot {\mathbf {Q} }}\mathbf {Q} ^{-1}{\boldsymbol {v}}}
for any vector v.
In general, L = Iω is substituted and the time derivatives are taken realizing that the inertia tensor, and so also the principal moments, do not depend on time. This leads to the general vector form of Euler's equations
{\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .}
If the principal-axis angular momentum components

{\displaystyle L_{k}\ {\stackrel {\mathrm {def} }{=}}\ I_{k}\omega _{k}}

are substituted, then taking the cross product and using the fact that the principal moments do not change with time, we arrive at the Euler equations in components at the beginning of the article.
Torque-free solutions
When the right-hand sides are equal to zero there are non-trivial solutions: torque-free precession. Notice that since I is constant (because the inertia tensor is a 3×3 diagonal matrix (see the previous section), because we work in the intrinsic frame, or because the torque is driving the rotation around the same axis
{\displaystyle \mathbf {\hat {n}} }
so that I is not changing) then we may write
{\displaystyle \mathbf {M} \ {\stackrel {\mathrm {def} }{=}}\ I{\frac {d\omega }{dt}}\mathbf {\hat {n}} =I\alpha \,\mathbf {\hat {n}} }
α is called the angular acceleration (or rotational acceleration) about the rotation axis
{\displaystyle \mathbf {\hat {n}} }
However, if I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then we cannot take the I outside the derivative. In this case we will have torque-free precession, in such a way that I(t) and ω(t) change together so that their derivative is zero. This motion can be visualized by Poinsot's construction.
It is also possible to use these equations if the axes in which
{\displaystyle \left({\frac {d\mathbf {L} }{dt}}\right)_{\mathrm {relative} }}
is described are not connected to the body. Then ω should be replaced with the rotation of the axes instead of the rotation of the body. It is, however, still required that the chosen axes are still principal axes of inertia. This form of the Euler equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely.
Poinsot's construction
C. A. Truesdell, III (1991) A First Course in Rational Continuum Mechanics. Vol. 1: General Concepts, 2nd ed., Academic Press. ISBN 0-12-701300-8. Sects. I.8-10.
C. A. Truesdell, III and R. A. Toupin (1960) The Classical Field Theories, in S. Flügge (ed.) Encyclopedia of Physics. Vol. III/1: Principles of Classical Mechanics and Field Theory, Springer-Verlag. Sects. 166–168, 196–197, and 294.
Landau L.D. and Lifshitz E.M. (1976) Mechanics, 3rd. ed., Pergamon Press. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover).
Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison-Wesley. ISBN 0-201-02918-9
|
Energy and Chemical Changes – user's Blog!
Home › SPM Science › Archive for Energy and Chemical Changes
5.4.3 Extraction of Metals from their Ores Using Coke
1. In industry, ores of metals which are less reactive than carbon are heated with carbon to obtain the pure metal.
2. Pure metals which can be extracted using carbon are zinc, iron, tin and lead.
Extracting tin ore in a blast furnace
1. Tin ore exists naturally as cassiterite (or tin oxide).
2. Tin ore is washed with water to remove sand, clay and other impurities.
3. After that, the tin ore is roasted to remove foreign matter such as carbon, sulphur and oil.
4. Lastly, the tin ore is mixed with carbon (in the form of coke) and limestone and is heated in a blast furnace at a high temperature.
5. The function of the limestone is to remove impurities.
6. A reduction reaction occurs during heating: carbon, which is more reactive than tin, removes oxygen from the tin oxide to produce pure tin and carbon dioxide.
7. Pure tin flows out from the furnace into moulds to harden as tin ingots.
8. At the same time, the limestone (calcium carbonate) breaks down to form quicklime (calcium oxide) which reacts with impurities to form slag.
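For clarity, the reactions described in steps 6 and 8 can be written as balanced chemical equations (standard for this process, added here as a summary):

Reduction of tin ore: SnO2 + C → Sn + CO2
Decomposition of limestone: CaCO3 → CaO + CO2
Slag formation (quicklime with sand impurities): CaO + SiO2 → CaSiO3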
5.9.4 Electrolysis (Structured Questions)
Diagram 1 shows the setting up of apparatus in an experiment.
(a) Name the process in Diagram 1. [1 mark]
(b)(i) Name metal Q. [1 mark]
(ii) What happens to metal Q during the process in Diagram 1? [1 mark]
(c) Which metal functions as the cathode? [1 mark]
(d)(i) What will happen to the iron key at the end of the experiment? [1 mark]
(ii) State one method to get a good result in (d)(i) [1 mark]
(b)(i) Copper
(b)(ii) Metal Q dissolves in the copper(II) sulphate solution to form copper ions and becomes thinner.
(c) Iron key.
(d)(i) The surface of the iron key will be coated with a brown layer of copper.
(d)(ii) The surface of the metal to be plated must be cleaned with sandpaper before electrolysis begins.
5.9.3 Application of Reactivity Series of Metals (Structured Questions)
5.9.2 The Reactivity Series of Metals (Structured Questions)
5.9.1 Heat Change in Chemical Reactions (Structured Questions)
5.5.3 Electrolysis of Molten lead (II) bromide
1. Figure above shows the apparatus set up for electrolysis of molten lead (II) bromide.
2. Lead (II) bromide powder in a crucible is heated.
3. The electrolysis process starts when the lead (II) bromide starts melting.
4. At the Cathode
When electricity is flowing, a silvery deposit of lead metal forms on the cathode.
5. At the Anode
When electricity is flowing, brown fumes of bromine gas are seen at the anode.
6. Thus, electrolysis of lead (II) bromide produces lead and bromine gas.
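The two electrode processes can be summarised by half-equations (standard chemistry, not spelled out in the text above):

```latex
\text{Cathode: } \mathrm{Pb^{2+}} + 2e^{-} \rightarrow \mathrm{Pb}
\qquad
\text{Anode: } 2\,\mathrm{Br^{-}} \rightarrow \mathrm{Br_{2}} + 2e^{-}
```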
5.5.2 Electrolysis of Copper (II) Chloride Solution
1. Electrolysis of copper (II) chloride solution.
(a) The copper (II) ions, which are positively charged, are attracted to the cathode, where they are discharged as copper.
(b) The chloride ions are attracted to the anode, where they are discharged as chlorine gas.
(c) At the anode, chloride ions lose electrons; a greenish gas which bleaches litmus paper is produced.
(d) At the cathode, copper (II) ions receive electrons; a brown solid is deposited on the surface of the electrode.
2. Thus, electrolysis of copper (II) chloride produces copper and chlorine gas.
1. Electrolysis is a process where a compound is separated into its constituent elements when electric current passes through an electrolyte.
2. In electrolysis, energy is changed as shown below:
\overline{)\text{Electrical energy}\to \text{chemical energy}}
3. The apparatus used in an electrolytic cell consists of a dry cell or battery, an electrolyte and two electrodes as shown below.
(a) An electrolyte is a compound in a molten form or in aqueous solution which conducts electric current.
(b) Electrolyte contains two types of charged ions which move freely:
(i) Ion with positive charge (cation), for example, metal ions and hydrogen ions.
(ii) Ion with negative charge (anion), for example, non-metal ions.
(c) Examples of electrolytes: molten potassium chloride and hydrochloric acid.
(a) Electrode is a conductor which is immersed in an electrolyte and connected to an electric source.
(b) Examples of electrode: carbon (graphite) and platinum.
(c) The electrode connected to the positive terminal of the cell is positive electrode and is given a name, anode.
(d) The electrode connected to the negative terminal of the cell is negative electrode and is called the cathode.
An ammeter is used to detect the flow of current in the circuit.
Dry cell or battery
The source that generates electrical energy.
5.4.2 Reactivity Series and Extraction of Metals
1. The method that is used in the extraction of a metal from its ore depends on the position of the metal in the reactivity series of metals.
2. Metals which are located higher than carbon in the reactivity series are extracted from their molten ores using the electrolysis method.
3. Metals which are located lower than carbon in the reactivity series are extracted using the reduction method with coke (or carbon).
4. Carbon is used in the extraction process because
(a) it is cheap
(b) it is easily obtained
5. Metals located the lowest in the reactivity series like silver and gold can be extracted naturally without any complex chemical reaction. These metals exist as free elements in the Earth’s crust.
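The rules above can be sketched as a small lookup. This is an illustrative sketch only: the series order and the function name are my own, chosen to match the metals named in the text (zinc, iron, tin and lead below carbon; silver and gold occurring native).

```python
# Sketch: pick a textbook extraction route from a metal's position
# relative to carbon in the reactivity series (most reactive first).
REACTIVITY_SERIES = [
    "potassium", "sodium", "calcium", "magnesium", "aluminium",
    "carbon",  # reference element for the comparison
    "zinc", "iron", "tin", "lead",
    "copper", "mercury", "silver", "gold",
]

def extraction_method(metal: str) -> str:
    """Return the extraction route suggested by the reactivity series."""
    if metal in ("silver", "gold"):
        # Least reactive metals exist as free elements in the Earth's crust.
        return "occurs native (no extraction needed)"
    if REACTIVITY_SERIES.index(metal) < REACTIVITY_SERIES.index("carbon"):
        # More reactive than carbon: carbon cannot reduce the ore.
        return "electrolysis of molten ore"
    # Less reactive than carbon: heat the ore with coke.
    return "reduction of ore with carbon (coke)"
```

For example, tin and iron fall below carbon and are reduced with coke, while aluminium sits above carbon and must be extracted by electrolysis.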
|
DeWiki > Atomorbital
An orbital {\displaystyle \psi _{nlm_{l}}({\vec {r}})} is characterised by the principal quantum number {\displaystyle n}, the orbital angular momentum quantum number {\displaystyle l} and the magnetic quantum number {\displaystyle m_{l}}.
{\displaystyle H\Psi ({\vec {r}})=E\Psi ({\vec {r}})}
{\displaystyle \Psi ({\vec {r}})=R(r)Y_{l}^{m}(\theta ,\phi )}
{\displaystyle {\text{const}}=|\Psi ({\vec {r}})|^{2}=|R(r)|^{2}|Y_{l}^{m}(\theta ,\phi )|^{2}}
Principal quantum number n: shell
Azimuthal or orbital angular momentum quantum number l
{\displaystyle l=0,1,2,\dotsc ,(n-1)}
{\displaystyle |{\vec {l}}|=\hbar \cdot {\sqrt {l(l+1)}}}
In atoms with several electrons, the inner electrons screen the attractive nuclear charge, which lowers the binding energy of the outer electrons. Because the mean distance from the nucleus depends on the azimuthal quantum number, states with the same {\displaystyle n} but different {\displaystyle l} differ in energy.
Magnetic quantum number ml: orientation of the angular momentum vector
{\displaystyle m_{l}=-l,-(l-1),\dotsc ,0,\dotsc ,(l-1),l}
{\displaystyle m_{l}\hbar }
{\displaystyle \cos \vartheta ={\frac {m_{l}}{\sqrt {l(l+1)}}}.}
{\displaystyle m_{l}=+l\Leftrightarrow \cos \vartheta ={\text{max}}\Leftrightarrow \vartheta \approx 0^{\circ }}
{\displaystyle m_{l}=-l\Leftrightarrow \cos \vartheta ={\text{min}}\Leftrightarrow \vartheta \approx 180^{\circ }}
Total angular momentum j and magnetic quantum number mj
{\displaystyle j=l\pm {\tfrac {1}{2}}.}
{\displaystyle m_{j}=-j,-(j-1),\dotsc ,+j}
{\displaystyle {\hat {H}}={\frac {{\hat {p}}^{2}}{2m}}+V(r)}
{\displaystyle V(r)=-{\frac {Ze^{2}}{4\pi \varepsilon _{0}r}}}
{\displaystyle {\hat {H}}\cdot \psi _{n,l,m_{l}}(r,\vartheta ,\phi )=E_{n,l,m_{l}}\cdot \psi _{n,l,m_{l}}(r,\vartheta ,\phi )}
{\displaystyle \psi _{n,l,m_{l}}(r,\vartheta ,\phi )=Y_{lm_{l}}(\vartheta ,\varphi )\cdot \Phi _{nl}(r)}
The hydrogen-like wavefunctions {\displaystyle \psi _{n,l,m_{l}}(r,\theta ,\phi )} for the lowest states are:
1s: {\displaystyle {\frac {1}{\sqrt {\pi }}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}e^{-\textstyle {\frac {Zr}{a_{0}}}}}
2s: {\displaystyle {\frac {1}{4{\sqrt {2\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}\left(2-{\frac {Zr}{a_{0}}}\right)e^{-\textstyle {\frac {Zr}{2a_{0}}}}}
2p0: {\displaystyle {\frac {1}{4{\sqrt {2\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}{\frac {Zr}{a_{0}}}e^{-\textstyle {\frac {Zr}{2a_{0}}}}\cos \theta }
2p±1: {\displaystyle {\frac {1}{8{\sqrt {\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}{\frac {Zr}{a_{0}}}e^{-\textstyle {\frac {Zr}{2a_{0}}}}\sin \theta e^{\pm i\phi }}
3s: {\displaystyle {\frac {1}{81{\sqrt {3\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}\left(27-18{\frac {Zr}{a_{0}}}+2{\frac {Z^{2}r^{2}}{a_{0}^{2}}}\right)e^{-\textstyle {\frac {Zr}{3a_{0}}}}}
3p0: {\displaystyle {\frac {\sqrt {2}}{81{\sqrt {\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}\left(6-{\frac {Zr}{a_{0}}}\right){\frac {Zr}{a_{0}}}e^{-\textstyle {\frac {Zr}{3a_{0}}}}\cos \theta }
3p±1: {\displaystyle {\frac {1}{81{\sqrt {\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}\left(6-{\frac {Zr}{a_{0}}}\right){\frac {Zr}{a_{0}}}e^{-\textstyle {\frac {Zr}{3a_{0}}}}\sin \theta e^{\pm i\phi }}
3d0: {\displaystyle {\frac {1}{81{\sqrt {6\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}{\frac {Z^{2}r^{2}}{a_{0}^{2}}}e^{-\textstyle {\frac {Zr}{3a_{0}}}}(3\cos ^{2}\theta -1)}
3d±1: {\displaystyle {\frac {1}{81{\sqrt {\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}{\frac {Z^{2}r^{2}}{a_{0}^{2}}}e^{-\textstyle {\frac {Zr}{3a_{0}}}}\sin \theta \cos \theta e^{\pm i\phi }}
3d±2: {\displaystyle {\frac {1}{162{\sqrt {\pi }}}}\left({\frac {Z}{a_{0}}}\right)^{\frac {3}{2}}{\frac {Z^{2}r^{2}}{a_{0}^{2}}}e^{-\textstyle {\frac {Zr}{3a_{0}}}}\sin ^{2}\theta e^{\pm 2i\phi }}
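As a numerical sanity check (a sketch in atomic units, taking Z = 1 and a0 = 1; function names are my own), the 1s wavefunction listed above is normalised: the integral of |ψ|² · 4πr² dr over [0, ∞) equals 1.

```python
import math

def psi_1s(r, Z=1.0, a0=1.0):
    # 1s hydrogen-like wavefunction: (1/sqrt(pi)) (Z/a0)^(3/2) exp(-Z r / a0)
    return (1.0 / math.sqrt(math.pi)) * (Z / a0) ** 1.5 * math.exp(-Z * r / a0)

def norm_1s(r_max=30.0, n=200000):
    # Midpoint-rule approximation of the normalisation integral.
    dr = r_max / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += psi_1s(r) ** 2 * 4.0 * math.pi * r * r * dr
    return total
```

The result is 1 to within the quadrature error, confirming the normalisation constant of the tabulated formula.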
Natural orbital
Many-electron wave functions
Hydrogen eigenstate n3 l1 m-1.png
Calculated 3p-1 orbital of an electron's eigenstate in the Coulomb field of a hydrogen nucleus. An eigenstate is a state which keeps its shape up to a complex phase when the Hamilton operator is applied, thus being invariant in time while obeying the Schrödinger equation. The orbital is aligned along the z-axis, but remains an eigenfunction if rotated to any direction.
The wavefunction is:
{\displaystyle \psi _{3,1,-1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 4!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)\cdot L_{1}^{3}(2\,r/(3\,a_{0}))\cdot Y_{1}^{-1}(\vartheta ,\varphi )}
The state is an eigenstate of H, L² and Lz, which constitute a complete set of commuting observables.
The quantum numbers mean that the following quantities have a sharp certain value:
n = 3: Energy:
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
l = 1: Angular momentum:
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
m = -1: Angular momentum in z-direction:
{\displaystyle L_{z}=m\,\hbar =-\hbar }
Since l=1, this is called a p-orbital.
The depicted rigid body is where the probability density exceeds a certain value. The color shows the complex phase of the wavefunction, where blue means real positive, red means imaginary positive, yellow means real negative and green means imaginary negative. The image is raytraced using modified Phong lighting.
The fine structure is neglected.
Atomic-orbital-cloud n4 px.png
Atomic hydrogen-like single-electron orbital showing
{\displaystyle (\psi _{n{=}4,l{=}1,m{=}-1}-\psi _{n{=}4,l{=}1,m{=}1})/{\sqrt {2}}}
, also called 4px-orbital for its alignment in x-direction. The image is a 3D rendering of the spatial density distribution of |𝜓|² with the color depicting the phase of 𝜓. The spatial distribution is smooth and vanishes for large radii. The cloud is a more realistic representation of an orbital than the more common solid-body approximations. At full resolution, 1Å=6.3px.
Orbitalesd.JPG
Author: J3D3, license: CC BY-SA 4.0
Model of the d orbitals
Hydrogen eigenstate n3 l2 m1.png
Calculated 3d1 orbital of an electron's eigenstate in the Coulomb field of a hydrogen nucleus. An eigenstate is a state which keeps its shape up to a complex phase when the Hamilton operator is applied, thus being invariant in time while obeying the Schrödinger equation. The orbital is aligned along the z-axis, but remains an eigenfunction if rotated to any direction.
{\displaystyle \psi _{3,2,1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 5!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)^{2}\cdot Y_{2}^{1}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {6}}\,\hbar }
m = 1: Angular momentum in z-direction:
{\displaystyle L_{z}=m\,\hbar =\hbar }
Since l=2, this is called a d-orbital.
{\displaystyle \psi _{2,1,-1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {1}{a_{0}}}\right)}^{3}{\frac {1}{24}}}}\cdot e^{\textstyle -r/(2\,a_{0})}\cdot \left({\frac {r}{a_{0}}}\right)\cdot Y_{1}^{-1}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /4}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =-\hbar }
Calculated 2p0 orbital of an electron's eigenstate in the Coulomb field of a hydrogen nucleus. An eigenstate is a state which keeps its shape up to a complex phase when the Hamilton operator is applied, thus being invariant in time while obeying the Schrödinger equation. The orbital is aligned along the z-axis, but remains an eigenfunction if rotated to any direction.
{\displaystyle \psi _{2,1,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {1}{a_{0}}}\right)}^{3}{\frac {1}{24}}}}\cdot e^{\textstyle -r/(2\,a_{0})}\cdot \left({\frac {r}{a_{0}}}\right)\cdot Y_{1}^{0}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /4}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =0}
{\displaystyle \psi _{2,1,1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {1}{a_{0}}}\right)}^{3}{\frac {1}{24}}}}\cdot e^{\textstyle -r/(2\,a_{0})}\cdot \left({\frac {r}{a_{0}}}\right)\cdot Y_{1}^{1}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /4}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =\hbar }
{\displaystyle \psi _{3,1,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 4!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)\cdot L_{1}^{3}(2\,r/(3\,a_{0}))\cdot Y_{1}^{0}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =0}
Hydrogen eigenstate n1 l0 m0 wedgecut.png
Computed 1s orbital of an electron's eigenstate of the energy and angular momentum operators in the Coulomb field of a hydrogen nucleus. Such an eigenstate keeps its spatial shape over time while obeying the Schrödinger equation and only advances its complex phase.
{\displaystyle \psi _{1,0,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{a_{0}}}\right)}^{3}{\frac {1}{2}}}}\cdot e^{\textstyle -r/a_{0}}\cdot {\frac {1}{\sqrt {4\,\pi }}}}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} }
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar =0}
{\displaystyle L_{z}=m\,\hbar =0}
Since l=0, this is called a s-orbital.
A wedge was cut away from the orbital to make the internal structure visible.
AOs-3D-dots.png
{\displaystyle \psi _{3,2,2}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 5!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)^{2}\cdot Y_{2}^{2}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {6}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =2\,\hbar }
{\displaystyle \psi _{3,2,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 5!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)^{2}\cdot Y_{2}^{0}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {6}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =0}
Calculated 3d-1 orbital of an electron's eigenstate in the Coulomb field of a hydrogen nucleus. An eigenstate is a state which keeps its shape up to a complex phase when the Hamilton operator is applied, thus being invariant in time while obeying the Schrödinger equation. The orbital is aligned along the z-axis, but remains an eigenfunction if rotated to any direction.
{\displaystyle \psi _{3,2,-1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 5!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)^{2}\cdot Y_{2}^{-1}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {6}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =-\hbar }
{\displaystyle \psi _{3,2,-2}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 5!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)^{2}\cdot Y_{2}^{-2}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {6}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =-2\,\hbar }
{\displaystyle \psi _{3,0,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{18}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot L_{2}^{1}(2\,r/(3\,a_{0}))\cdot {\frac {1}{\sqrt {4\,\pi }}}}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar =0}
{\displaystyle L_{z}=m\,\hbar =0}
Author: RJHall, license: CC BY-SA 3.0
{\displaystyle \psi _{2,0,0}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {1}{a_{0}}}\right)}^{3}{\frac {1}{8}}}}\cdot e^{\textstyle -r/(2\,a_{0})}\cdot L_{1}^{1}(r/a_{0})\cdot {\frac {1}{\sqrt {4\,\pi }}}}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /4}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar =0}
{\displaystyle L_{z}=m\,\hbar =0}
{\displaystyle \psi _{3,1,1}(r,\vartheta ,\varphi )={\sqrt {{\left({\frac {2}{3\,a_{0}}}\right)}^{3}{\frac {1}{6\cdot 4!}}}}\cdot e^{\textstyle -r/(3\,a_{0})}\cdot \left({\frac {2\,r}{3\,a_{0}}}\right)\cdot L_{1}^{3}(2\,r/(3\,a_{0}))\cdot Y_{1}^{1}(\vartheta ,\varphi )}
{\displaystyle E=-1\,\mathrm {Ry} /n^{2}=-13.6\,\mathrm {eV} /9}
{\displaystyle |L|={\sqrt {l\,(l+1)}}\,\hbar ={\sqrt {2}}\,\hbar }
{\displaystyle L_{z}=m\,\hbar =\hbar }
Pz orbital.png
Author: Dhatfield, license: CC BY-SA 3.0
Pz orbital. Quantum numbers: n=2, l=1, mz=0.
|
Molar heat capacity - Simple English Wikipedia, the free encyclopedia
The molar heat capacity of a substance is the energy needed to raise the temperature of one mole of it by one kelvin (equivalently, one degree Celsius).
When using SI units, it can be calculated with the equation
{\displaystyle c_{n}={\frac {Q}{n\,\Delta T}}}
where {\displaystyle c_{n}} refers to the molar heat capacity (in joules per kelvin per mole), {\displaystyle Q} to the heat supplied (in joules), {\displaystyle n} to the amount of substance (in moles) and {\displaystyle \Delta T} to the temperature change in the substance (in kelvin).[1]
The molar heat capacity of a given substance can be found by heating the substance by releasing a known amount of energy into the substance and measuring the temperature change.
For example, a common school experiment to find the molar heat capacity of water involves heating a beaker of water with an immersion heater (that can display the heat released in joules on a display) and stirring the water, while checking the temperature at specific intervals.
For more accurate results, a bomb calorimeter can be used. It contains a chamber of fuel (here, a compound that releases heat when ignited) inside a chamber of water, with the water chamber protected by heat-proof walls to minimise heat loss, which would otherwise affect the recorded heat capacity.
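As a minimal sketch of the calculation (the numbers below are illustrative, not measured data), the molar heat capacity follows from the measured heat, the amount of substance, and the temperature change:

```python
def molar_heat_capacity(q_joules, n_moles, delta_t_kelvin):
    """Molar heat capacity c_n = Q / (n * dT), in J/(K*mol)."""
    return q_joules / (n_moles * delta_t_kelvin)

# Illustrative example: 753 J warming 2.0 mol of water by 5.0 K
# gives 75.3 J/(K*mol), close to water's accepted molar heat capacity.
c_n = molar_heat_capacity(753.0, 2.0, 5.0)
```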
↑ "Molar Heat Capacity Definition and Examples". Thoughtco. Retrieved 7 August 2019.
|
The catalogue contains the following types of figures:
Figures that give the normalised signature tree of the arguments of a global constraint These figures are located in Section 3.5.
Figures that provide the implication graph between global constraints that have the same normalised signature tree for their arguments (e.g., see the figure embedded in the lower part of Table 3.5.1).
Figures that illustrate a global constraint or a keyword (e.g., see Figure 3.7.37 that illustrates the keyword limited discrepancy search).
Figures that depict the initial as well as the final graphs associated with a global constraint (e.g., see Figure 5.61.2 that provides the initial and final graphs of the
\mathrm{𝚌𝚑𝚊𝚗𝚐𝚎}
constraint).
Figures that provide an automaton that only recognises the solutions associated with a given global constraint (e.g., see Figure 5.168.3 that gives the automaton of the
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚐𝚞𝚒𝚝𝚢}
Figures that give the hypergraph associated with the decomposition of an automaton in terms of signature and transition constraints (e.g., see Figure 5.168.4 that gives the hypergraph of the automaton-based reformulation of the
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚘𝚗𝚝𝚒𝚐𝚞𝚒𝚝𝚢}
Figures for the graph structure of the XML schema of the parameters of a global constraint. They are only available in the on-line version of the catalogue.
Figures for visualising different views (i.e., compulsory part and cumulative profile) of two-dimensional placement of constraints. These figures are only available in the on-line version of the catalogue. They are accessible from the table containing the squared squares problem instances.
Most of the graph figures that depict the initial and final graph of a global constraint of this catalogue, as well as the graph structure of the XML schema of the parameters of a global constraint, were automatically generated with the open source graph drawing software Graphviz [GansnerNorth00], available from AT&T (http://www.research.att.com/sw/tools/graphviz). Since late 2012, TikZ [Tantau12] has been used for generating all new figures and for converting the old Xfig, PSTricks [Voss07] and Graphviz figures, so that all figures are now done with TikZ.
|
\mathrm{with}\left(\mathrm{EssayTools}\right):
\mathrm{Reduce}\left("The car was super fast. It was rocket screaming fast. Nothing else could touch it."\right)
[\textcolor[rgb]{0,0,1}{"car be super fast"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"car be rocket screaming fast"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"nothing can touch touch"}]
\mathrm{Reduce}\left("The tortoise and hare was a great story because it showed how an underdog can succeed with dedication and perseverance."\right)
[\textcolor[rgb]{0,0,1}{"tortoise hare be great story"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"tortoise show how underdog can succeed dedication perseverance"}]
|
This problem is a checkpoint for solving for one variable in an equation with two or more variables. It will be referred to as Checkpoint 4B.
Rewrite the following equations so that you could enter them into a graphing calculator. In other words, solve for y.
x − 3(y + 2) = 6
\frac { 6 x - 1 } { y } - 3 = 2
\sqrt { y - 4 } = x + 1
\sqrt { y + 4 } = x + 2
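As an illustration of the process (a worked sketch, not one of the textbook answers), solving the first equation for y:

```latex
x - 3(y + 2) = 6
\;\Rightarrow\; x - 3y - 6 = 6
\;\Rightarrow\; -3y = 12 - x
\;\Rightarrow\; y = \frac{x - 12}{3}
```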
Answers and extra practice for the Checkpoint problems are located in the back of your printed textbook or in the Reference Tab of your eBook. If you have an eBook for CCA2, login and then click the following link: Checkpoint 4B: Solving for One Variable in an Equation with Two or More Variables
|
Adjoint (operator theory) - Citizendium
In mathematics, the adjoint of an operator is a generalization of the notion of the Hermitian conjugate of a complex matrix to linear operators on complex Hilbert spaces. In this article the adjoint of a linear operator M will be indicated by M∗, as is common in mathematics. In physics the notation M† is more usual.
2 Existence of the adjoint
3 Formal definition of the adjoint of an operator
Consider a complex n×n matrix M. Apart from being an array of complex numbers, M can also be viewed as a linear map or operator from ℂn to itself. In order to generalize the idea of the Hermitian conjugate of a complex matrix to linear operators on more general complex Hilbert spaces, it is necessary to be able to characterize the Hermitian conjugate as an operator. The crucial observation here is the following: for any complex matrix M, its Hermitian transpose, denoted by M∗, is the unique linear operator on ℂn satisfying:
{\displaystyle \langle Mx,y\rangle =\langle x,M^{*}y\rangle \quad \forall x,y\in \mathbb {C} ^{n}.}
This suggests that the "Hermitian conjugate" or, as it is more commonly known in mathematics, the adjoint of a linear operator T on an arbitrary complex Hilbert space H, with inner product ⟨ ⋅, ⋅ ⟩H, could be defined generally as an operator T∗ on H satisfying the so-called "turn-over rule":
{\displaystyle \langle Tx,y\rangle _{H}=\langle x,T^{*}y\rangle _{H}\qquad \forall x,y\in H.\qquad \qquad \qquad \qquad (1)}
It turns out that this idea is almost correct. It is correct, and a unique T∗ exists, if T is a bounded operator on H, but additional care has to be taken on infinite-dimensional Hilbert spaces, since operators on such spaces can be unbounded and an operator T∗ satisfying (1) need not exist.
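The matrix case of the turn-over rule is easy to check numerically. Below is a small pure-Python sketch (the matrix and vectors are arbitrary examples of my own) verifying ⟨Mx, y⟩ = ⟨x, M∗y⟩ for a 2×2 complex matrix, with the inner product taken conjugate-linear in the first argument:

```python
def inner(a, b):
    # Inner product on C^n, conjugate-linear in the first argument.
    return sum(x.conjugate() * y for x, y in zip(a, b))

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def conj_transpose(M):
    # Hermitian transpose: (M*)[j][i] = conjugate of M[i][j].
    n, m = len(M), len(M[0])
    return [[M[i][j].conjugate() for i in range(n)] for j in range(m)]

M = [[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]]
x = [1 - 1j, 2 + 3j]
y = [0 + 2j, 1 + 1j]

lhs = inner(matvec(M, x), y)                  # <Mx, y>
rhs = inner(x, matvec(conj_transpose(M), y))  # <x, M*y>
```

The two sides agree exactly, illustrating why the Hermitian transpose is the finite-dimensional adjoint.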
Existence of the adjoint
Suppose that T is a densely defined operator on H with domain D(T). Consider the vector space
{\displaystyle K(T)=\{\;v\in H\;\mid \;{\underset {u\in D(T),\,\|u\|\leq 1}{\sup }}|\langle Tu,v\rangle _{H}|<\infty \;\},}
that is, the space of all vectors v for which the supremum of the absolute value of ⟨Tu, v ⟩H over the unit ball of D(T) is finite. Since T has a dense domain in H and
{\displaystyle f_{v}(u)\equiv \langle Tu,v\rangle _{H}}
is a continuous linear functional on D(T) for any v ∈ K(T), fv can be extended to a unique continuous linear functional
{\displaystyle {\tilde {f}}_{v}}
on H. By the Riesz representation theorem there is a unique element v∗ ∈ H such that
{\displaystyle {\tilde {f}}_{v}(u)=\langle u,v^{*}\rangle _{H}\quad \forall u\in H.}
A linear operator T∗ with domain D(T∗) = K(T) may now be defined as the map
{\displaystyle T^{*}v=v^{*}\quad \forall v\in D(T^{*}).}
By construction, the operator T∗ satisfies:
{\displaystyle \langle Tx,y\rangle _{H}=\langle x,T^{*}y\rangle _{H}\qquad \forall x\in D(T),\quad \forall y\in D(T^{*}).\qquad \qquad \qquad \qquad (2)}
When T is a bounded operator (hence D(T) = H) then it can be shown, again using the Riesz representation theorem, that T∗ is the unique bounded linear operator satisfying equation (2).
Formal definition of the adjoint of an operator
Let T be an operator on a Hilbert space H with dense domain D(T). Then the adjoint T∗ of T is an operator with domain
{\displaystyle D(T^{*})=\{\;v\in H\mid {\underset {u\in D(T),\,\|u\|\leq 1}{\sup }}|\langle Tu,v\rangle _{H}|<\infty \;\}}
defined as the map
{\displaystyle T^{*}v=v^{*}\quad \forall v\in D(T^{*}),}
where for each v in D(T∗), v∗ is the unique element of H such that
{\displaystyle \langle u,v^{*}\rangle =\langle Tu,v\rangle _{H}\quad \forall u\in D(T).}
Additionally, if T is a bounded operator then T∗ is the unique bounded operator satisfying
{\displaystyle \langle Tx,y\rangle _{H}=\langle x,T^{*}y\rangle _{H}\quad \forall x,y\in H.}
Consider two linear operators S and T on H with overlapping domains. For convenience we assume D(T) = D(S) and D(T∗) = D(S∗). Then
{\displaystyle \langle \;aTS\,u,\;v\;\rangle _{H}=\langle \;u,\;(aTS)^{*}\,v\;\rangle _{H}=\langle \;u,\;{\overline {a}}S^{*}T^{*}\,v\;\rangle _{H},\quad a\in \mathbb {C} .}
The complex conjugate of the complex number a appears because the inner product on a complex Hilbert space is conjugate-linear in one argument. That the order of multiplication of the operators is reversed under the turn-over rule follows from the chain
{\displaystyle \langle TS(u),v\rangle _{H}=\langle Tu',v\rangle _{H}=\langle u',T^{*}(v)\rangle _{H}=\langle u',v'\rangle _{H}=\langle S(u),v'\rangle _{H}=\langle u,S^{*}v'\rangle _{H}=\langle u,S^{*}\,T^{*}(v)\rangle _{H},}
{\displaystyle u'\equiv S(u),\quad v'\equiv T^{*}(v).}
Retrieved from "https://citizendium.org/wiki/index.php?title=Adjoint_(operator_theory)&oldid=330247"
|
Measurable function - formulasearchengine
A function is Lebesgue measurable if and only if the preimage of each of the sets
{\displaystyle [a,\infty ]}
is a Lebesgue measurable set.
3 Special measurable functions
4 Properties of measurable functions
5 Non-measurable functions
Let (X, Σ) and (Y, Τ) be measurable spaces, meaning that X and Y are sets equipped with respective sigma algebras Σ and Τ. A function f: X → Y is said to be measurable if the preimage of E under f is in Σ for every E ∈ Τ; i.e.
{\displaystyle f^{-1}(E):=\{x\in X|\;f(x)\in E\}\in \Sigma ,\;\;\forall E\in T.}
The notion of measurability depends on the sigma algebras Σ and Τ. To emphasize this dependency, if f: X → Y is a measurable function, we will write
{\displaystyle f\colon (X,\Sigma )\rightarrow (Y,T)}
This definition can be deceptively simple, however, as special care must be taken regarding the σ-algebras involved. In particular, when a function f: R → R is said to be Lebesgue measurable what is actually meant is that
{\displaystyle f:(\mathbf {R} ,{\mathcal {L}})\to (\mathbf {R} ,{\mathcal {B}})}
is a measurable function—that is, the domain and range represent different σ-algebras on the same underlying set (here
{\displaystyle {\mathcal {L}}}
is the sigma algebra of Lebesgue measurable sets, and
{\displaystyle {\mathcal {B}}}
is the Borel algebra on R). As a result, the composition of Lebesgue-measurable functions need not be Lebesgue-measurable.
Special measurable functions
If (X, Σ) and (Y, Τ) are Borel spaces, a measurable function f: (X, Σ) → (Y, Τ) is also called a Borel function. Continuous functions are Borel functions but not all Borel functions are continuous. However, a measurable function is nearly a continuous function; see Luzin's theorem. If a Borel function happens to be a section of some map
{\displaystyle Y{\stackrel {\pi }{\to }}X}
, it is called a Borel section.
A Lebesgue measurable function is a measurable function
{\displaystyle f:(\mathbf {R} ,{\mathcal {L}})\to (\mathbf {C} ,{\mathcal {B}}_{\mathbf {C} })}
{\displaystyle {\mathcal {L}}}
{\displaystyle {\mathcal {B}}_{\mathbf {C} }}
is the Borel algebra on the complex numbers C. Lebesgue measurable functions are of interest in mathematical analysis because they can be integrated. In the case
{\displaystyle f:X\to \mathbf {R} }
{\displaystyle f}
is Lebesgue measurable iff
{\displaystyle \{f>\alpha \}=\{x\in X:f(x)>\alpha \}}
is measurable for all real
{\displaystyle \alpha }
. This is also equivalent to any of
{\displaystyle \{f\geq \alpha \},\{f<\alpha \},\{f\leq \alpha \}}
being measurable for all
{\displaystyle \alpha }
. Continuous functions, monotone functions, step functions, semicontinuous functions, Riemann-integrable functions, and functions of bounded variation are all Lebesgue measurable.[2] A function
{\displaystyle f:X\to \mathbb {C} }
is measurable iff the real and imaginary parts are measurable.
Random variables are by definition measurable functions defined on sample spaces.
The composition of measurable functions is measurable; i.e., if f: (X, Σ1) → (Y, Σ2) and g: (Y, Σ2) → (Z, Σ3) are measurable functions, then so is g(f(⋅)): (X, Σ1) → (Z, Σ3).[1] But see the caveat regarding Lebesgue-measurable functions in the introduction.
The pointwise limit of a sequence of measurable functions is measurable (if the codomain is endowed with the Borel algebra); note that the corresponding statement for continuous functions requires stronger conditions than pointwise convergence, such as uniform convergence. (This is correct when the codomain of the elements of the sequence is a metric space. It is false in general; see pages 125 and 126 of [5].)
Real-valued functions encountered in applications tend to be measurable; however, it is not difficult to find non-measurable functions.
So long as there are non-measurable sets in a measure space, there are non-measurable functions from that space. If (X, Σ) is some measurable space and A ⊂ X is a non-measurable set, i.e. if A ∉ Σ, then the indicator function 1A: (X, Σ) → R is non-measurable (where R is equipped with the Borel algebra as usual), since the preimage of the measurable set {1} is the non-measurable set A. Here 1A is given by
{\displaystyle \mathbf {1} _{A}(x)={\begin{cases}1&{\text{ if }}x\in A\\0&{\text{ otherwise}}\end{cases}}}
Any non-constant function can be made non-measurable by equipping the domain and range with appropriate σ-algebras. If f: X → R is an arbitrary non-constant, real-valued function, then f is non-measurable if X is equipped with the indiscrete algebra Σ = {∅, X}, since the preimage of any point in the range is some proper, nonempty subset of X, and therefore does not lie in Σ.
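On a finite space the definition can be checked by brute force. The sketch below (plain Python, hypothetical helper names) tests measurability by verifying that the preimage of every set in the target σ-algebra lies in the domain σ-algebra; it reproduces the indicator-function example: 1A is measurable for the power-set σ-algebra but not for the indiscrete one.

```python
from itertools import combinations

def subsets(s):
    # all subsets of s, i.e. the power-set sigma-algebra
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def preimage(f, X, B):
    return frozenset(x for x in X if f(x) in B)

def is_measurable(f, X, sigma, tau):
    # f: (X, sigma) -> (Y, tau) is measurable iff the preimage of every
    # set in tau belongs to sigma
    return all(preimage(f, X, B) in sigma for B in tau)

X = frozenset({1, 2, 3, 4})
A = frozenset({1, 2})
indicator = lambda x: 1 if x in A else 0

borel = subsets({0, 1})           # stand-in for the sigma-algebra on the codomain
full = subsets(X)                 # power-set sigma-algebra on X: A is an event
trivial = [frozenset(), X]        # indiscrete sigma-algebra: A is not an event

print(is_measurable(indicator, X, full, borel))     # True
print(is_measurable(indicator, X, trivial, borel))  # False: preimage of {1} is A
```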
Vector spaces of measurable functions: the Lp spaces
Retrieved from "https://en.formulasearchengine.com/index.php?title=Measurable_function&oldid=222141"
|
Error, invalid input: factor uses a 1st argument, xFP, which is missing - Maple Help
i
\mathrm{sum}\left(i\right)
i.
\mathrm{sum}\left(i,i\right)
\frac{1}{2}{i}^{2}-\frac{1}{2}i
\mathrm{factor}\left(\right)
\mathrm{factor}\left(6 {x}^{2}+18\cdot x-24\right)
6\left(x+4\right)\left(x-1\right)
\mathrm{Adder}≔\mathbf{proc}\left(a∷\mathrm{integer} ,b\right) a+b \mathbf{end} \mathbf{proc};
\mathrm{Adder}:=\mathbf{proc}\left(a::\mathrm{integer},b\right)\;a+b\;\mathbf{end}\;\mathbf{proc}
\mathrm{Adder}\left(3\right)
\mathrm{Adder}\left(3,2.5\right)
5.5
\mathrm{eliminate}\left(\left\{{x}^{2}+{y}^{2}-1,{x}^{3}-{y}^{2}x+xy-3\right\}\right)
\mathrm{eliminate}\left(\left\{{x}^{2}+{y}^{2}-1,{x}^{3}-{y}^{2}x+xy-3\right\},x\right)
\left[\left\{x=\frac{3}{-2{y}^{2}+y+1}\right\},\left\{4{y}^{6}-4{y}^{5}-7{y}^{4}+6{y}^{3}+4{y}^{2}-2y+8\right\}\right]
|
Various Consequences: No Interactions? OFAT is still a Bad Idea
Y={b}_{1}{X}_{1}+{b}_{2}{X}_{2}+{b}_{3}{X}_{3}+{b}_{4}{X}_{4}+{b}_{5}{X}_{5}+{b}_{6}{X}_{6}
{b}_{i}
{b}_{7}{X}_{1}{X}_{4}
X
Table 2 shows a 36 run OFAT design. There are three repeated cases for each treatment. Table 1 shows a 32 run D-optimal design. There are no repeated runs. You might expect that you would be better able to estimate error from the design in Table 2 because of replication, but you’d be wrong. In fact, as Figure 1 shows,
Figure 1: 1000 fits of the model in equation 1 to synthetic data
the average standard error of the coefficient estimates for the model in equation 1 is significantly lower for the D-optimal design most of the time, even though it has fewer runs than the OFAT design.
Why does this happen? Each run in the D-optimal design contributes to the estimate of every term in the model. However, each run in the OFAT design can only contribute to the estimate of a single term in the model. The “error bars” for OFAT designs will almost always be significantly larger than D-optimal designs (other optimality criteria give largely the same improvement over OFAT in practice).
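The variance argument can be illustrated without any simulation: under unit noise the covariance of the least-squares estimates is (X'X)^-1, so the two kinds of design can be compared directly. A small numpy sketch (not the post's original R script; the 14-run OFAT layout below is an assumed illustrative one, compared against the full 2^6 factorial on a per-run basis):

```python
import numpy as np

k = 6  # six main effects, as in equation 1

# OFAT design: a baseline run at (-1, ..., -1) plus one run per factor with
# only that factor raised to +1; the whole 7-run set is replicated twice.
base = -np.ones(k)
ofat = np.vstack([base] + [np.where(np.arange(k) == i, 1.0, -1.0)
                           for i in range(k)])
ofat = np.vstack([ofat, ofat])                     # 14 runs

# Orthogonal comparison design: the full 2^6 factorial (64 runs).
fact = np.array([[1 if (r >> i) & 1 else -1 for i in range(k)]
                 for r in range(2 ** k)], dtype=float)

def scaled_slope_variance(X):
    """n * mean variance of the slope estimates under unit noise:
    the diagonal of n * (X'X)^-1, averaged over the k slopes."""
    Xd = np.column_stack([np.ones(len(X)), X])     # add intercept column
    cov = np.linalg.inv(Xd.T @ Xd)
    return len(X) * float(np.mean(np.diag(cov)[1:]))

print(scaled_slope_variance(fact))  # 1.0: an orthogonal design gives Var = 1/n
print(scaled_slope_variance(ofat))  # 3.5 (up to rounding): OFAT pays ~3.5x per run
```

The orthogonal design spends every run on every coefficient, so its per-run variance cannot be beaten; the OFAT layout wastes most of each run.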
Table 1: D-Optimal Design
Run X1 X2 X3 X4 X5 X6
12 1 1 -1 1 -1 -1
14 1 -1 1 1 -1 -1
15 -1 1 1 1 -1 -1
17 -1 -1 -1 -1 1 -1
20 1 1 -1 -1 1 -1
23 -1 1 1 -1 1 -1
26 1 -1 -1 1 1 -1
27 -1 1 -1 1 1 -1
29 -1 -1 1 1 1 -1
34 1 -1 -1 -1 -1 1
35 -1 1 -1 -1 -1 1
37 -1 -1 1 -1 -1 1
41 -1 -1 -1 1 -1 1
49 -1 -1 -1 -1 1 1
55 -1 1 1 -1 1 1
59 -1 1 -1 1 1 1
Table 2: OFAT Design
onlyvix.blogspot.com Sunday, February 03, 2013
Hello VC, I am a bit confused by the topic of the post (and also I am not good with R). Basic question: in the table 1 shouldn't these be 0 and 1, not -1 and 1?
It's common practice to center the factor levels so a two-level factor takes values -1 and 1.
With the AlgDesign function gen.factorial used in the script above you can change this with the 'center' option (center=FALSE instead of center=TRUE).
My goal with the post certainly wasn't to confuse, so please ask more questions if you've got them. Anything in particular that is especially confusing?
|
Surrogate optimization for global minimization of time-consuming objective functions - MATLAB surrogateopt - MathWorks Nordic
\underset{x}{\mathrm{min}}f\left(x\right)\text{ such that }\left\{\begin{array}{l}\text{lb}\le x\le \text{ub}\\ A·x\le b\\ \text{Aeq}·x=\text{beq}\\ c\left(x\right)\le 0\\ {x}_{i}\text{ integer, }i\in \text{intcon}\text{.}\end{array}
100{\left(x\left(2\right)-x{\left(1\right)}^{2}\right)}^{2}+{\left(1-x\left(1\right)\right)}^{2}
{\left(x\left(1\right)-1/3\right)}^{2}+{\left(x\left(2\right)-1/3\right)}^{2}\le {\left(1/3\right)}^{2}
c\left(x\right)\le 0
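As a plain illustration of this feasible set (not of the surrogate algorithm itself), a brute-force grid search in Python over the disk constraint recovers a feasible point at least as good as the disk's center; the grid resolution is an arbitrary choice:

```python
def rosen(x1, x2):
    # the objective above, written for scalars
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def feasible(x1, x2):
    # c(x) <= 0 with c(x) = (x1 - 1/3)^2 + (x2 - 1/3)^2 - (1/3)^2
    return (x1 - 1 / 3) ** 2 + (x2 - 1 / 3) ** 2 <= (1 / 3) ** 2

n = 400                                    # grid resolution (assumed)
best = (float("inf"), (0.0, 0.0))
for i in range(n + 1):
    for j in range(n + 1):
        x1, x2 = (2 / 3) * i / n, (2 / 3) * j / n   # the disk fits in [0, 2/3]^2
        if feasible(x1, x2):
            best = min(best, (rosen(x1, x2), (x1, x2)))
# best[0] is no worse than the value at the disk's center (1/3, 1/3)
```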
|
Spin | Brilliant Math & Science Wiki
July Thomas and Manjunadh Ch contributed
Spin is one of the angular momenta carried by particles. It is considered an intrinsic property along the lines of electric charge and mass.
As a metaphor, consider the Earth in its motion around the sun. The angular momentum of the Earth due to the orbit is akin to the orbital angular momentum of a particle. The angular momentum of the Earth about its own axis due to its rotation represents the spin.
Every particle can be classified as a boson or a fermion by its spin:
Bosons are said to have integer spin as a result of their symmetric wave functions. This means they are capable of occupying the same state, which is leveraged in applications such as lasers and superfluids. While the elementary particles classified as bosons are the force carriers and the Higgs boson, atoms can also be bosons if they have integer spin.
Fermions are said to have half-integer spin, such as spin-
\frac12
, as a result of their antisymmetric wave functions. Thus they are incapable of occupying the same state and forming condensates. The elementary fermions are quarks and leptons (electrons, muons, taus, and neutrinos), while composite fermions include protons and neutrons. Collectively, fermions are known as matter.
The isotope helium-4 has spin-0, while the much less common isotope helium-3 has spin-1/2 and thus is classified as a fermion.[2] Is helium-4 classified as a boson or a fermion?
Only bosons are permitted to have spin-0, so
^4\text{He}
is classified as a boson. And in fact, at low temperatures, it forms liquid helium which exhibits superfluid properties.
Spoon, S. Earth's axis. Retrieved May 4, 2016, from https://commons.wikimedia.org/wiki/File:Earth%27s_Axis.gif
W., H. Helium 3 and Helium 4. Retrieved May 4, 2016, from https://commons.wikimedia.org/wiki/File:Helium-3_and_Helium-4.gif
Cite as: Spin. Brilliant.org. Retrieved from https://brilliant.org/wiki/spin/
|
Global Constraint Catalog: Kalpha-acyclic_constraint_networkNb2
\mathrm{𝚊𝚖𝚘𝚗𝚐}
\mathrm{𝚊𝚖𝚘𝚗𝚐}_\mathrm{𝚍𝚒𝚏𝚏}_\mathtt{0}
\mathrm{𝚊𝚖𝚘𝚗𝚐}_\mathrm{𝚒𝚗𝚝𝚎𝚛𝚟𝚊𝚕}
\mathrm{𝚊𝚖𝚘𝚗𝚐}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚊𝚖𝚘𝚗𝚐}_\mathrm{𝚖𝚘𝚍𝚞𝚕𝚘}
\mathrm{𝚊𝚝𝚕𝚎𝚊𝚜𝚝}
\mathrm{𝚊𝚝𝚖𝚘𝚜𝚝}
\mathrm{𝚌𝚘𝚞𝚗𝚝}
\mathrm{𝚌𝚘𝚞𝚗𝚝𝚜}
\mathrm{𝚍𝚒𝚏𝚏𝚎𝚛}_\mathrm{𝚏𝚛𝚘𝚖}_\mathrm{𝚊𝚝}_\mathrm{𝚕𝚎𝚊𝚜𝚝}_𝚔_\mathrm{𝚙𝚘𝚜}
\mathrm{𝚎𝚡𝚊𝚌𝚝𝚕𝚢}
\mathrm{𝚏𝚞𝚕𝚕}_\mathrm{𝚐𝚛𝚘𝚞𝚙}
\mathrm{𝚐𝚛𝚘𝚞𝚙}
\mathrm{𝚐𝚛𝚘𝚞𝚙}_\mathrm{𝚜𝚔𝚒𝚙}_\mathrm{𝚒𝚜𝚘𝚕𝚊𝚝𝚎𝚍}_\mathrm{𝚒𝚝𝚎𝚖}
\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚌𝚊𝚛𝚍}_\mathrm{𝚜𝚔𝚒𝚙}\mathtt{0}
Before defining alpha-acyclic constraint network(2) we first need to introduce the following notions:
The dual graph of a constraint network
𝒩
is defined in the following way: to each constraint of
𝒩
corresponds a vertex in the dual graph and if two constraints have a non-empty set
S
of shared variables, there is an edge labelled
S
between their corresponding vertices in the dual graph.
An edge in the dual graph of a constraint network is redundant if its variables are shared by every edge along an alternative path between the two end points [Dechter89].
If the subgraph resulting from the removal of the redundant edges of the dual graph is a tree, the original constraint network is called
\alpha
-acyclic [Fagin83].
Alpha-acyclic constraint network(2) denotes an
\alpha
-acyclic constraint network such that, for any pair of constraints, the two sets of involved variables share at most two variables.
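These definitions are easy to make concrete. The following Python sketch (hypothetical helper names) builds the dual graph of a small constraint network and tests edge redundancy by searching for an alternative path whose every edge label contains the shared variables:

```python
from itertools import combinations

def dual_graph(constraints):
    """constraints: dict name -> set of variables.
    Returns the dual graph as a dict (u, v) -> shared variable set."""
    edges = {}
    for a, b in combinations(constraints, 2):
        shared = constraints[a] & constraints[b]
        if shared:
            edges[(a, b)] = shared
    return edges

def is_redundant(edge, edges):
    """Edge (u, v) is redundant if an alternative u-v path exists whose
    every edge label contains all of this edge's shared variables."""
    (u, v), S = edge, edges[edge]
    adj = {}
    for (a, b), label in edges.items():
        if (a, b) != edge and S <= label:      # only edges carrying all of S
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    seen, stack = {u}, [u]                     # depth-first search from u
    while stack:
        node = stack.pop()
        if node == v:
            return True
        for m in adj.get(node, []):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

constraints = {"A": {"x", "y"}, "B": {"y", "z"}, "C": {"x", "y", "z"}}
edges = dual_graph(constraints)
# (A, B) shares {y}, covered by the path A-C-B, so it is redundant;
# removing it leaves the tree A-C, B-C: the network is alpha-acyclic.
print(sorted(k for k in edges if is_redundant(k, edges)))  # [('A', 'B')]
```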
|
Eigenvalues - MATLAB & Simulink
The symbolic eigenvalues of a square matrix A or the symbolic eigenvalues and eigenvectors of A are computed, respectively, using the commands E = eig(A) and [V,E] = eig(A).
The variable-precision counterparts are E = eig(vpa(A)) and [V,E] = eig(vpa(A)).
The eigenvalues of A are the zeros of the characteristic polynomial of A, det(A-x*I), which is computed by charpoly(A).
The matrix H from the last section provides the first example:
H = sym([8/9 1/2 1/3; 1/2 1/3 1/4; 1/3 1/4 1/5])
The matrix is singular, so one of its eigenvalues must be zero. The statement
[T,E] = eig(H)
produces the matrices T and E. The columns of T are the eigenvectors of H and the diagonal elements of E are the eigenvalues of H:
[ 3/10, 218/285 - (4*12589^(1/2))/285, (4*12589^(1/2))/285 + 218/285]
[ -6/5, 292/285 - 12589^(1/2)/285, 12589^(1/2)/285 + 292/285]
[ 1, 1, 1]
[ 0, 0, 0]
[ 0, 32/45 - 12589^(1/2)/180, 0]
[ 0, 0, 12589^(1/2)/180 + 32/45]
It may be easier to understand the structure of the matrices of eigenvectors, T, and eigenvalues, E, if you convert T and E to decimal notation. To do so, use the commands
Td = double(T)
Ed = double(E)
The first eigenvalue is zero. The corresponding eigenvector (the first column of Td) is the same as the basis for the null space found in the last section. The other two eigenvalues are the result of applying the quadratic formula to
{x}^{2}-\frac{64}{45}x+\frac{253}{2160}
which is the quadratic factor in factor(charpoly(H, x)):
g = factor(charpoly(H, x))/x
solve(g(3))
[ 1/(2160*x), 1, (2160*x^2 - 3072*x + 253)/x]
32/45 - 12589^(1/2)/180
12589^(1/2)/180 + 32/45
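The quadratic factor can be verified exactly without the toolbox: for a 3-by-3 matrix, charpoly(H, x) = x^3 - tr(H)*x^2 + m2*x - det(H), where m2 is the sum of the principal 2-by-2 minors. A Python check with exact rational arithmetic (a sketch, not MATLAB code):

```python
from fractions import Fraction as F

H = [[F(8, 9), F(1, 2), F(1, 3)],
     [F(1, 2), F(1, 3), F(1, 4)],
     [F(1, 3), F(1, 4), F(1, 5)]]

def minor(m, i, j):
    # 2x2 determinant left after deleting row i and column j
    r = [k for k in range(3) if k != i]
    c = [k for k in range(3) if k != j]
    return m[r[0]][c[0]] * m[r[1]][c[1]] - m[r[0]][c[1]] * m[r[1]][c[0]]

det = sum((-1) ** j * H[0][j] * minor(H, 0, j) for j in range(3))
trace = H[0][0] + H[1][1] + H[2][2]
m2 = sum(minor(H, i, i) for i in range(3))   # sum of principal 2x2 minors

# charpoly(H, x) = x^3 - trace*x^2 + m2*x - det
print(det)    # 0        -> H is singular, so 0 is an eigenvalue
print(trace)  # 64/45    -> matches x^2 - 64/45 x + 253/2160
print(m2)     # 253/2160
```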
Closed form symbolic expressions for the eigenvalues are possible only when the characteristic polynomial can be expressed as a product of rational polynomials of degree four or less. The Rosser matrix is a classic numerical analysis test matrix that illustrates this requirement. The statement
R = sym(rosser)
p = charpoly(R, x);
The characteristic polynomial (of degree 8) factors nicely into the product of two linear terms and three quadratic terms. You can see immediately that four of the eigenvalues are 0, 1020, and a double root at 1000. The other four roots are obtained from the remaining quadratics. Use
eig(R)
to find all these values
The Rosser matrix is not a typical example; it is rare for a full 8-by-8 matrix to have a characteristic polynomial that factors into such simple form. If you change the two “corner” elements of R from 29 to 30 with the commands
p = charpoly(S, x)
x^8 - 4040*x^7 + 5079941*x^6 + 82706090*x^5...
- 5327831918568*x^4 + 4287832912719760*x^3...
- 1082699388411166000*x^2 + 51264008540948000*x...
You also find that factor(p) is p itself. That is, the characteristic polynomial cannot be factored over the rationals.
For this modified Rosser matrix
F = eig(S)
Notice that these values are close to the eigenvalues of the original Rosser matrix.
It is also possible to try to compute eigenvalues of symbolic matrices, but closed form solutions are rare. The Givens transformation is generated as the matrix exponential of the elementary matrix
A=\left[\begin{array}{cc}0& 1\\ -1& 0\end{array}\right].
Symbolic Math Toolbox™ commands
A = sym([0 1; -1 0]);
G = expm(t*A)
[ exp(-t*1i)/2 + exp(t*1i)/2,
(exp(-t*1i)*1i)/2 - (exp(t*1i)*1i)/2]
[ - (exp(-t*1i)*1i)/2 + (exp(t*1i)*1i)/2,
exp(-t*1i)/2 + exp(t*1i)/2]
You can simplify this expression using simplify:
[ cos(t), sin(t)]
[ -sin(t), cos(t)]
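This simplification can be confirmed numerically: since A^2 = -I, the series for expm(t*A) collapses to cos(t)*I + sin(t)*A. A dependency-free Python check using a truncated power series (again a sketch, not the toolbox commands):

```python
import math

def matmul2(a, b):
    # product of two 2x2 matrices stored as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.7                             # arbitrary test value
A = [[0.0, 1.0], [-1.0, 0.0]]
G = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at the identity
term = [[1.0, 0.0], [0.0, 1.0]]     # holds (t*A)^k / k!
for k in range(1, 25):
    tA_over_k = [[t * A[i][j] / k for j in range(2)] for i in range(2)]
    term = matmul2(term, tA_over_k)
    G = [[G[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# G is now [[cos t, sin t], [-sin t, cos t]] to machine precision
```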
Next, the command
g = eig(G)
cos(t) - sin(t)*1i
cos(t) + sin(t)*1i
You can rewrite g in terms of exponents:
g = rewrite(g, 'exp')
exp(-t*1i)
exp(t*1i)
|
17.1 Collecting and organising data | Data collection and presentation | Siyavula
\times
\text{2}\times {5}
\text{3}
\text{13}
\text{5 ; 1 ; 5 ; 1 ; 1 ; 2 ; 3 ; 5 ; 6 ; 4 ; 2 ; 4 ; 2 ; 5 ; 5}
\text{5 ; 1 ; 3 ; 3 ; 4 ; 5 ; 3 ; 1 ; 1 ; 3 ; 6 ; 3 ; 3 ; 6 ; 4}
\text{5 ; 7 ; 5 ; 7 ; 6 ; 7 ; 7 ; 5 ; 8 ; 8 ; 5 ; 5 ; 8 ; 10 ; 6 ; 5 ; 5 ; 10 ; 7 ; 6}
\text{2; 6; 8; 10; 12; 14; 18}
\dfrac{\text{the sum of the data values}}{\text{the number of data values}} = \dfrac{\text{2 }\text{+ 6 }\text{+ 8 }\text{+ 10 }\text{+ 12 }\text{+ 14 }\text{+ 18 }}{ \text{7}} = \text{10}
\text{4; 6; 7; 3; 4; 8; 4; 2; 9}
\text{4}
\text{6}
\text{7}
\text{3}
\text{4}
\text{8}
\text{4}
\text{2}
\text{9}
\text{47}
\dfrac{\text{the sum of the data values}}{\text{the number of data values}} = \dfrac{\text{47}}{\text{9}} = \text{5.2}
\text{8}
\text{6}
\text{9}
\text{5}
\text{11}
\text{6}
\text{7}
\text{8}
\text{60}
\dfrac{\text{the sum of the data values}}{\text{the number of data values}} = \dfrac{\text{60}}{\text{8}} = \text{7.5}
\text{6 + 8 + 7 + 5 + 10 + 5 + 7}
\text{48}
\dfrac{\text{48}}{\text{7}} = \text{6.86}
\text{8 + 7 + 10 + 8 + 7 + 6 + 5 + 5 + 6 + 11}
\text{73}
\dfrac{\text{73}}{\text{10}} = \text{7.3}
\text{2; 5; 7; 7; 7; 10; 12; 12; 15}
\text{10; 7; 7; 6; 9; 5}
\text{10; 8; 8; 9; 7; 9; 10; 11; 6; 11}
\text{1; 2; 3; 4; 5; 1; 2; 3; 4; 1; 2; 3; 1; 2; 1}
\text {1; 1; 1; 1; 1; 2; 2; 2; 2; 3; 3; 3; 4; 4; 5}
\text{5; 5; 6; 8; 9; 9; 11; 11}
\text{ 4 m; 7 m; 6 m; 11 m; 8 m; 5 m; 9 m; 6 m; 11 m}
\text{4 m; 5 m; 6 m; 6 m; 7 m; 8 m; 9 m; 11 m; 11 m}
\text{2; 3; 4; 5; 6; 7; 8}
\text{4; 6; 7; 4; 3; 4; 8; 2; 9; 7; 2}
\text{4; 6; 4; 7; 2; 3; 8; 9; 7; 4}
\frac{\text{4} + \text{6}}{\text{2}} = \text{5}
\text{8; 5; 10; 9; 5; 7; 9; 10; 11; 7; 9; 5}
\text{5; 5; 5; 7; 7; 8; 9; 9; 9; 10; 10; 11}
\frac{\text{8} + \text{9}}{\text{2}} = \text{8.5}
\text{6; 7; 10; 5; 9; 7; 7; 5}
\text{5; 5; 6; 7; 7; 7; 9; 10}
\frac{\text{7} + \text{7}}{\text{2}} = \text{7}
\text{7; 6; 11; 5; 10; 7; 11; 6; 9}
\text{5; 6; 6; 7; 7; 9; 10; 11; 11}
\text{5; 11; 9; 9; 10; 8; 9; 10; 6}
\text{The mean}
\dfrac{\text{the sum of the data values}}{\text{the number of data values}} = \dfrac{\text{5 + 11 + 9 + 9 + 10 + 8 + 9 + 10 + 6}}{\text{9}} = \dfrac{\text{77}}{\text{9}} = \text{8.56}
\text{5; 6; 8; 9; 9; 9; 10; 10; 11}
\text{5; 6; 8; 9; 9; 9; 10; 10; 11}
\text{11; 7; 9; 5; 7; 8; 7}
\text{Mean}
\dfrac{\text{the sum of the data values}}{\text{the number of data values}} = \dfrac{\text{11 + 7 + 9 + 5 + 7 + 8 + 7}}{\text{7}} = \dfrac{\text{54}}{\text{7}} = \text{7.71}
\text{5 ; 7 ; 7 ; 7 ; 8 ; 9 ; 11}
\text{9; 6; 5; 9; 6; 10; 11; 8}
\text{Mean}
\dfrac{\text{9 + 6 + 5 + 9 + 6 + 10 + 11 + 8}}{\text{8}} = \dfrac{\text{64}}{\text{8}} = \text{8}
\text{5 ; 6 ; 6 ; 8 ; 9 ; 9 ; 10 ; 11}
=\frac{\text{8} + \text{9}}{\text{2}} = \text{8.5}
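The worked mean and median values above can be confirmed with Python's statistics module:

```python
import statistics

# Means from the worked examples in this section
assert statistics.mean([2, 6, 8, 10, 12, 14, 18]) == 10
assert round(statistics.mean([4, 6, 7, 3, 4, 8, 4, 2, 9]), 1) == 5.2
assert statistics.mean([8, 6, 9, 5, 11, 6, 7, 8]) == 7.5

# Medians: the middle value of the ordered data (the mean of the two
# middle values when the count is even)
assert statistics.median([8, 5, 10, 9, 5, 7, 9, 10, 11, 7, 9, 5]) == 8.5
assert statistics.median([6, 7, 10, 5, 9, 7, 7, 5]) == 7
```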
|
\begin{array}{l}\text{cos}{70}^{o}=\frac{1}{h}\\ h=\frac{1}{\text{cos}{70}^{o}}\\ h=\frac{1}{0.342}\\ h=2.924\text{ m}\\ \\ \text{Hence, the length of the ladder}\\ =2.924\text{ m}\end{array}
\begin{array}{l}\text{tan}{70}^{o}=\frac{T}{1}\\ T=\text{tan}{70}^{o}×1\\ \text{ }=2.747×1\\ \text{ }=2.747\text{ m}\\ \\ \text{The height of the lamp post}\\ =2.747+1.2\\ =3.947\text{ m}\end{array}
In diagram below, AEC and BCD are straight lines. E is the midpoint of AC.
\text{Given }\mathrm{cos}x=\frac{5}{13}\text{ and }\mathrm{sin}y=\frac{3}{5}
(a) find the value of tan x.
(b) Calculate the length, in cm, of BD.
\begin{array}{l}\text{(a)}\\ \text{Given cos }x=\frac{5}{13},\text{ therefore }BC=5,\text{ }AB=13\\ AC=\sqrt{{13}^{2}-{5}^{2}}\\ \text{ }=\sqrt{169-25}\\ \text{ }=\sqrt{144}\\ \text{ }=12\text{ cm}\\ \\ \text{tan }x=\frac{AC}{BC}=\frac{12}{5}\end{array}
\begin{array}{l}\text{(b)}\\ \text{For }\Delta DCE:\\ \mathrm{sin}y=\frac{3}{5}\\ \frac{EC}{DE}=\frac{3}{5}\\ \frac{EC}{10}=\frac{3}{5}\\ EC=\frac{3}{5}×10=6\text{ cm}\\ \\ D{C}^{2}={10}^{2}-{6}^{2}\\ \text{ }=64\\ \text{ }DC=8\text{ cm}\\ \\ \text{For }\Delta ABC:\\ AC=2×6=12\text{ cm}\\ \\ \mathrm{tan}x=\frac{12}{5}\\ \frac{12}{CB}=\frac{12}{5}\\ CB=5\text{ cm}\\ \\ BD=DC+CB\\ =\text{8 cm + 5 cm}\\ =\text{13 cm}\end{array}
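The arithmetic in this solution can be checked mechanically (a Python sketch; variable names follow the diagram's labels):

```python
import math

# (a) In triangle ABC, cos x = 5/13 gives BC = 5 and hypotenuse AB = 13.
AC = math.sqrt(13 ** 2 - 5 ** 2)     # 12.0 by Pythagoras' theorem
tan_x = AC / 5                       # tan x = 12/5

# (b) In triangle DCE, sin y = 3/5 with hypotenuse DE = 10.
EC = (3 / 5) * 10                    # 6.0
DC = math.sqrt(10 ** 2 - EC ** 2)    # 8.0

# E is the midpoint of AC, so AC = 2 * EC = 12, and tan x = AC / CB.
CB = 12 / tan_x                      # 5.0 (up to float rounding)
BD = DC + CB                         # 13.0
```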
In diagram below, T is the midpoint of the line PR.
(a) Find the value of tan xo.
(b) Calculate the length, in cm, of PQ.
\begin{array}{l}\text{(a)}\\ T{R}^{2}={13}^{2}-{12}^{2}\\ \text{ }=169-144\\ \text{ }=25\\ TR=\sqrt{25}\\ \text{ }=5\text{ cm}\\ \mathrm{tan}{x}^{o}=\frac{12}{5}\end{array}
\begin{array}{l}\text{(b)}\\ PR=2×5\text{ cm}\\ \text{ }=10\text{ cm}\\ P{Q}^{2}={10}^{2}-{8}^{2}\\ \text{ }=100-64\\ \text{ }=36\\ PQ=\sqrt{36}\\ \text{ }=6\text{ cm}\end{array}
In diagram below, ABE and DBC are two right-angled triangles ABC and DEB are straight lines.
\text{It is given that }\mathrm{cos}{y}^{o}=\frac{3}{5}.
(b) Calculate the length, in cm, of DE.
\text{(a) }\mathrm{tan}{x}^{o}=\frac{7}{24}
\begin{array}{l}\text{(b)}\\ \mathrm{cos}{y}^{o}=\frac{BC}{20}\\ \text{ }\frac{3}{5}=\frac{BC}{20}\\ BC=\frac{3}{5}×20\\ \text{ }=12\text{ cm}\\ \\ \therefore B{D}^{2}={20}^{2}-{12}^{2}\\ \text{ }=400-144\\ \text{ }=256\\ BD=\sqrt{256}\\ \text{ }=16\text{ cm}\\ \\ DE=16-7\\ \text{ }=9\text{ cm}\end{array}
Diagram below shows a vertical pole, PQ. At 2.30 p.m. and 5.00 p.m., the shadow of the pole falls on QR and QS respectively.
(a) the height, in m, of the pole.
(b) the value of w.
\begin{array}{l}\text{tan }{55}^{o}=\frac{\text{Height of the pole}}{3.2}\\ \text{Height of the pole}=\text{tan }{55}^{o}×3.2\\ \text{}=1.428×3.2\\ \text{}=4.57\text{ m}\end{array}
\begin{array}{l}\text{tan }w=\frac{4.57}{3.20+2}\\ \text{ }=\frac{4.57}{5.20}\\ \text{ }=0.879\\ \text{ }w={41}^{o}18\text{'}\end{array}
Diagram below shows a right-angled triangle ABC.
\mathrm{cos}{x}^{o}=\frac{5}{13}
, calculate the length, in cm, of AB.
\begin{array}{l}\mathrm{cos}{x}^{o}=\frac{AB}{AC}\\ \mathrm{cos}{x}^{o}=\frac{5}{13}\\ \frac{AB}{39}=\frac{5}{13}\\ AB=\frac{5}{13}×39\\ \text{ }=15\text{ cm}\end{array}
In the diagram, PQR and QTS are straight lines.
\mathrm{tan}y=\frac{3}{4}
, calculate the length, in cm, of RS.
\begin{array}{l}\text{In }△\text{ }PQT,\\ \mathrm{tan}y=\frac{PQ}{QT}\\ \frac{3}{4}=\frac{6}{QT}\\ QT=6×\frac{4}{3}\\ \text{ }=8\text{ cm}\\ \\ \text{In }△QRS,\text{ }QS=8+8=16\text{ cm}\\ \\ \therefore R{S}^{2}={12}^{2}+{16}^{2}←\overline{)\text{ pythagoras' Theorem }}\\ \text{ }=144+256\\ \text{ }=400\\ RS=\sqrt{400}\\ \text{ }=20\text{ cm}\end{array}
In the diagram, PQR is a straight line.
\mathrm{cos}{x}^{o}=\frac{3}{5}
, hence sin yo =
\begin{array}{l}\mathrm{cos}{x}^{o}=\frac{PQ}{PS}\\ \frac{PQ}{10}=\frac{3}{5}\\ PQ=\frac{3}{5}×10\\ \text{ }=6\text{ cm}\\ QR=PR-PQ\\ =21-6\\ =15\text{ cm}\end{array}
\begin{array}{l}Q{S}^{2}={10}^{2}-{6}^{2}←\overline{)\text{ pythagoras' Theorem }}\\ \text{ }=100-36\\ \text{ }=64\\ QS=\sqrt{64}\\ \text{ }=8\text{ cm}\\ R{S}^{2}={15}^{2}+{8}^{2}\\ \text{ }=225+64\\ \text{ }=289\\ RS=\sqrt{289}\\ \text{ }=17\text{ cm}\\ \mathrm{sin}{y}^{o}=\frac{15}{17}\end{array}
Diagram below consists of two right-angled triangles.
Determine the value of cos xo.
\begin{array}{l}AC=\sqrt{{13}^{2}-{12}^{2}}\\ \text{ }=\sqrt{25}\\ \text{ }=5\text{ cm}\\ CD=\sqrt{{5}^{2}-{3}^{2}}\\ \text{ }=\sqrt{16}\\ \text{ }=4\text{ cm}\\ \mathrm{cos}{x}^{o}=\frac{CD}{AC}\\ \text{ }=\frac{4}{5}\end{array}
Diagram below consists of two right-angled triangles ABE and DBC.
ABC and EBD are straight lines.
sin{x}^{o}=\frac{5}{13}\text{ and }\mathrm{cos}{y}^{o}=\frac{3}{5}.
(b) Calculate the length, in cm, of ABC.
\begin{array}{l}sin{x}^{o}=\frac{5}{13},\text{ }DC=13\text{ cm}\\ BC=\sqrt{{13}^{2}-{5}^{2}}\\ \text{ }=\sqrt{144}\\ \text{ }=12\text{ cm}\\ \text{Thus},\text{ }\mathrm{tan}{x}^{o}=\frac{5}{12}\end{array}
\begin{array}{l}\mathrm{cos}{y}^{o}=\frac{AB}{15}\\ \text{ }\frac{3}{5}=\frac{AB}{15}\\ \text{}AB=9\text{ cm}\\ \text{Thus, }ABC=9+12\\ \text{ }=21\text{ cm}\end{array}
15.1.1 Trigonometrical Ratios of an Acute Angle
2. Hypotenuse is the longest side of the right-angled triangle which is opposite the right angle.
3. Adjacent side is the side, other than the hypotenuse, which has direct contact with the given angle, θ.
4. Opposite side is the side which is opposite the given angle, θ.
5. In a right-angled triangle,
\overline{)\begin{array}{l}\text{ }\\ \text{ sin}\theta =\frac{\text{opposite side}}{\text{hypotenuse}}\text{ }\\ \\ \text{ }\mathrm{cos}\theta =\frac{\text{adjacent side}}{\text{hypotenuse}}\\ \\ \text{ }\mathrm{tan}\theta =\frac{\text{opposite side}}{\text{adjacent side}}\text{ }\\ \text{ }\end{array}}
6. When the size of an angle θ increases from 0o to 90o,
· sin θ increases.
· cos θ decreases.
· tan θ increases.
7. The values of sin θ, cos θ and tan θ remain the same even though the size of the triangle has changed.
Find the sine, cosine and tangent of the give angle, θ, in triangle ABC.
\begin{array}{l}\mathrm{sin}\theta =\frac{BC}{AB}=\frac{8}{17}\\ \mathrm{cos}\theta =\frac{AC}{AB}=\frac{15}{17}\\ tan\theta =\frac{BC}{AC}=\frac{8}{15}\end{array}
15.1.2 Values of Tangent, Sine and Cosine
1. The values of the trigonometric ratios of 30o, 45o and 60o (special angle) are as below.
2. 1 degree is equal to 60 minutes.
1o= 60’
3. A scientific calculator can be used to find the value of the sine, cosine or tangent of an angle.
sin 40.6o = 0.6508
Calculator Computation
Press [sin] [40.6] [=] 0.6508
4. Given the values of sine, cosine and tangent, we can find the angles using a scientific calculator.
tan x = 2.4142
x = 67o30’
Press [shift] [tan][2.4142]
[=][o’’’] 67o30’
\begin{array}{l}\text{The speed of the bicycle from Town }P\text{ to Town }Q\\ =\frac{\text{Distance}}{\text{Time}}\\ =\frac{10}{2}\\ =5\text{ km/h}\end{array}
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \\ \text{Rahim took a 30-minute rest at Town }Q\text{.}\\ \text{Time taken for his journey to Town }R\text{ at three times}\\ \text{his earlier speed:}\\ =\frac{25}{5×3}\\ =\frac{5}{3}\\ =1\frac{2}{3}\text{ hours}\\ =1\text{ hour 40 minutes }←\overline{)\begin{array}{l}\frac{2}{3}×60\\ =40\text{ minutes}\end{array}}\\ \\ \text{Total time taken from Town }P\text{ to Town }Q\text{ and Town }Q\text{ to Town }R\\ =2\text{ hours }+30\text{ minutes}+1\text{ hour 40 minutes}\\ =4\text{ hours 10 minutes}\\ \\ \text{Hence, he reached Town }R\text{ at 1}\text{.10 p}\text{.m}\text{.}\end{array}
\begin{array}{l}\text{Mass of concrete pipe on the trailer}\\ =500\text{ kg}×8\\ =4000\text{ kg}\\ =\frac{4000}{1000}\\ =4\text{ tonnes}\\ \\ \text{Total mass of the trailer and its load}\\ =1.5+4.0\\ =5.5\text{ tonnes}\end{array}
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \\ \text{Time taken to travel from the Factory to Location }P\\ =10.00\text{ a}\text{.m}\text{.}-8.00\text{ a}\text{.m}\text{.}\\ =2\text{ hours}\\ \\ \text{Speed of the trailer}\\ =\frac{80}{2}\\ =40\text{ km/h}\\ \\ \text{It stopped for }1\frac{1}{2}\text{ hours to unload half of the concrete pipes}\text{.}\\ \\ \text{Time taken to continue its journey to Location }Q\\ =\frac{\text{Distance}}{\text{Speed}}\\ =\frac{200}{40×2}\\ =2\frac{1}{2}\text{ hours}\\ \\ \text{The time the trailer reached at Location }Q\\ =1400\text{ hours or }2.00\text{ p}\text{.m}\text{.}\end{array}
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \text{Total distance from Town }A\text{ to Town }B\\ =\left(60×3\right)+\left(75×2\right)\\ =180+150\\ =330\text{ km}\\ \\ \text{Time taken by Mary}\\ =\frac{330}{100}\\ =3.3\text{ hours}\\ =3\text{ hours }18\text{ minutes }←\overline{)\begin{array}{l}0.3×60\\ =18\text{ minutes}\end{array}}\\ \\ \text{The time Mary started her journey from Town }A=9.42\text{ a}\text{.m}\text{.}\end{array}
\begin{array}{l}\text{Speed from Kuantan to Kuala Terengganu}\\ =\frac{170}{2}=85{\text{ kmh}}^{-1}\\ \\ \text{Acceleration}\\ =\frac{\text{Final speed}-\text{Initial speed}}{\text{Time taken}}\\ =\frac{100-85}{\frac{45}{60}}\\ =20{\text{ kmh}}^{-2}\end{array}
\begin{array}{l}\text{The time taken from }K\text{ to }L\\ =\frac{160}{80}\\ =2\text{ hrs}\\ \\ \text{The average speed from }L\text{ to }K\\ =80×1.2\\ =96{\text{ kmh}}^{-1}\\ \\ \text{The time taken from }L\text{ to }K\\ =\frac{160}{96}\\ =1\frac{2}{3}\text{ hrs}\\ \\ \text{The total time taken for the whole journey}\\ =2+1\frac{1}{2}+1\frac{2}{3}\\ =5\frac{1}{6}\text{ hrs}\\ =5\text{ hour }10\text{ minutes}\end{array}
\begin{array}{l}\text{Distance travelled at the first }\frac{1}{2}\text{ hour}\\ =70×\frac{1}{2}\\ =35\text{ km}\\ \\ \text{Remaining distance}=60\text{ km}-35\text{ km}\\ \text{ }=25\text{ km}\\ \text{Time taken for the remaining journey}\\ =\frac{25}{75}\\ =\frac{1}{3}×60\\ =20\text{ minutes}\\ \\ \text{Mr Wong arrives at}\\ =1.25\text{ p}\text{.m}+30\text{ minutes}+20\text{ minutes}\\ =2.15\text{ p}\text{.m}\\ \\ \text{Yes, he will arrive before 2}\text{.30 p}\text{.m}\\ \end{array}
\begin{array}{l}\text{From }P\text{ to }Q\text{:}\\ \text{Average speed}=80\text{ km/h}\\ \text{Time taken = 2 hours 15 minutes}\\ \text{ = 2}\frac{1}{4}\text{ hours}\\ \text{Distance}=\text{average speed}×\text{time taken}\\ \text{Distance }=80×2\frac{1}{4}\\ \text{ }=80×\frac{9}{4}\\ \text{ }=180\text{ km}\\ \\ \text{From }Q\text{ to }R\text{:}\\ \text{Distance }=90\text{ km}\\ \text{Time taken = 45 minutes}\\ \text{ = }\frac{3}{4}\text{ hour}\\ \\ \text{Average speed from }P\text{ to }R\\ =\frac{180+90}{2\frac{1}{4}+\frac{3}{4}}←\overline{)\frac{\text{Total distance}}{\text{Total time}}}\\ =\frac{270}{3}\\ =90\text{ km/h}\end{array}
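The last computation (total distance over total time) is easy to verify exactly with rational arithmetic:

```python
from fractions import Fraction as F

# P -> Q: 80 km/h for 2 h 15 min; Q -> R: 90 km in 45 min.
t_pq = 2 + F(1, 4)                    # 9/4 h
d_pq = 80 * t_pq                      # 180 km
t_qr = F(3, 4)                        # 3/4 h
d_qr = 90

avg = (d_pq + d_qr) / (t_pq + t_qr)   # total distance / total time
print(d_pq, avg)                      # 180 90
```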
|
Global Constraint Catalog: Cconnect_points
\mathrm{𝚌𝚘𝚗𝚗𝚎𝚌𝚝}_\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\left(\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1},\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2},\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3},\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿},\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}\right)
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}
\mathrm{𝚍𝚟𝚊𝚛}
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(𝚙-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1}>0
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2}>0
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3}>0
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}\ge 0
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}\le |\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}|
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1}*\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2}*\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3}=|\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂},𝚙\right)
On a 3-dimensional grid of variables, NGROUP is the number of groups, where a group consists of a connected set of variables that all have the same value, distinct from 0.
\left(\begin{array}{c}8,4,2,2,〈\begin{array}{c}𝚙-0,𝚙-0,\hfill \\ 𝚙-1,𝚙-1,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-1,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-1,\hfill \\ 𝚙-1,𝚙-1,\hfill \\ 𝚙-1,𝚙-1,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-1,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-2,𝚙-2,\hfill \\ 𝚙-2,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0,\hfill \\ 𝚙-0,𝚙-2,\hfill \\ 𝚙-0,𝚙-0\hfill \end{array}〉\hfill \end{array}\right)
Figure 5.85.1 corresponds to the solution where we describe separately each layer of the grid. The
\mathrm{𝚌𝚘𝚗𝚗𝚎𝚌𝚝}_\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}
constraint holds since we have two groups (
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}=2
): a first one for the variables of the
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}
collection assigned to value 1, and a second one for the variables assigned to value 2.
Figure 5.85.1. The two layers of the solution
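A direct way to compute NGROUP for a ground instance is a flood fill over the grid. A Python sketch (6-neighbour connectivity on the 3-dimensional grid is assumed; a small toy grid is used rather than the 8×4×2 example):

```python
from collections import deque

def ngroup(grid):
    """NGROUP for connect_points: the number of maximal connected sets of
    grid cells (6-neighbour connectivity) holding the same nonzero value."""
    X, Y, Z = len(grid), len(grid[0]), len(grid[0][0])
    seen, groups = set(), 0
    for cell in ((i, j, k) for i in range(X) for j in range(Y) for k in range(Z)):
        i, j, k = cell
        v = grid[i][j][k]
        if v == 0 or cell in seen:
            continue
        groups += 1                       # flood-fill one new group
        seen.add(cell)
        q = deque([cell])
        while q:
            x, y, z = q.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if (0 <= n[0] < X and 0 <= n[1] < Y and 0 <= n[2] < Z
                        and n not in seen and grid[n[0]][n[1]][n[2]] == v):
                    seen.add(n)
                    q.append(n)
    return groups

# A 2x2x2 toy grid: value 1 forms one group, value 2 another -> NGROUP = 2
toy = [[[1, 1], [0, 2]],
       [[1, 0], [0, 2]]]
print(ngroup(toy))  # 2
```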
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1}>1
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2}>1
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}>0
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}<|\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}|
|\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}|>3
All occurrences of two distinct values of
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}.𝚙
that are both different from 0 can be swapped; all occurrences of a value of
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}.𝚙
that is different from 0 can be renamed to any unused value that is also different from 0.
\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1}
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2}
\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3}
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}
Wiring problems [Simonis90], [Zhou96].
Since the graph corresponding to the 3-dimensional grid is symmetric one could certainly use as a starting point the filtering algorithm associated with the number of connected components graph property described in [BeldiceanuPetitRochart06] (see the paragraphs “Estimating
\underline{\mathrm{𝐍𝐂𝐂}}
” and “Estimating
\overline{\mathrm{𝐍𝐂𝐂}}
”). One may also try to take advantage of the fact that the considered initial graph is a grid in order to simplify the previous filtering algorithm.
final graph structure: strongly connected component, symmetric.
problems: channel routing.
\mathrm{𝙿𝙾𝙸𝙽𝚃𝚂}
\mathrm{𝐺𝑅𝐼𝐷}
\left(\left[\mathrm{𝚂𝙸𝚉𝙴}\mathtt{1},\mathrm{𝚂𝙸𝚉𝙴}\mathtt{2},\mathrm{𝚂𝙸𝚉𝙴}\mathtt{3}\right]\right)↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\mathtt{1},\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\mathtt{2}\right)
•\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\mathtt{1}.𝚙\ne 0
•\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\mathtt{1}.𝚙=\mathrm{𝚙𝚘𝚒𝚗𝚝𝚜}\mathtt{2}.𝚙
\mathrm{𝐍𝐒𝐂𝐂}
=\mathrm{𝙽𝙶𝚁𝙾𝚄𝙿}
\mathrm{𝚂𝚈𝙼𝙼𝙴𝚃𝚁𝙸𝙲}
Figure 5.85.2 gives the initial graph constructed by the
\mathrm{𝐺𝑅𝐼𝐷}
arc generator associated with the Example slot.
Figure 5.85.2. Graph generated by
\mathrm{𝐺𝑅𝐼𝐷}
([8,4,2])
|
Inequalities for the Minimum Eigenvalue of Doubly Strictly Diagonally Dominant M-Matrices
Ming Xu, Suhua Li, Chaoqian Li, "Inequalities for the Minimum Eigenvalue of Doubly Strictly Diagonally Dominant M-Matrices", Journal of Applied Mathematics, vol. 2014, Article ID 535716, 8 pages, 2014. https://doi.org/10.1155/2014/535716
Ming Xu,1,2 Suhua Li,2 and Chaoqian Li2
1School of Mathematical Sciences, Kaili University, Kaili 556011, China
Let be a doubly strictly diagonally dominant M-matrix. Inequalities on upper and lower bounds for the entries of the inverse of are given. And some new inequalities on the lower bound for the minimal eigenvalue of and the corresponding eigenvector are presented to establish an upper bound for the -norm of the solution for the linear differential system , .
For a positive integer , denotes the set . For , we write () if all (), . If (), we say is nonnegative (positive, resp.). Let denote the class of all real matrices all of whose off-diagonal entries are nonpositive. A matrix is called an -matrix [1] if and the inverse of , denoted by , is nonnegative.
Let be an -matrix. Then there exist a positive eigenvalue of , , and a corresponding eigenvector , where is the Perron eigenvalue of the nonnegative matrix , , and denotes the spectrum of . is called the minimum eigenvalue of [2, 3]. If, in addition, is irreducible, then and is simple and , which is unique if we assume that the -norm of equals ; that is, [3]. If is the diagonal matrix of an -matrix and , then the spectral radius of the Jacobi iterative matrix of is denoted by . For a set , we denote by the cardinality of . Note that if and only if .
For convenience, we employ the following notations throughout. Let be nonsingular with , for all , and . We denote, for any ,
Definition 1 (see [4]). A matrix is called(i)(strictly) diagonally dominant, if (, resp.) for all , and is called doubly (strictly) diagonally dominant if (, resp.) for all ;(ii)weakly chained diagonally dominant, if , and for all , there exist indices in with , , where and .
Remark. (i) It is well known that a doubly strictly diagonally dominant matrix is nonsingular and that [5]. If , we denote by the unique element throughout; that is, . Meanwhile, if is doubly strictly diagonally dominant and , then is strictly diagonally dominant.
(ii) It is clear that a strictly diagonally dominant matrix is doubly strictly diagonally dominant and also weakly chained diagonally dominant. Also clearly, for a doubly strictly diagonally dominant matrix , if , then is weakly chained diagonally dominant; otherwise, is not weakly chained diagonally dominant.
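The inline formulas of Definition 1 were lost in conversion. Using the standard definitions (with r_i(A) denoting the sum of the moduli of the off-diagonal entries of row i: strict diagonal dominance requires |a_ii| > r_i(A) for all i, and double strict diagonal dominance requires |a_ii|·|a_jj| > r_i(A)·r_j(A) for all i ≠ j), a small sketch that checks both properties (function names are mine):

```python
def offdiag_row_sum(A, i):
    """r_i(A): sum of |a_ij| over j != i."""
    return sum(abs(A[i][j]) for j in range(len(A)) if j != i)

def is_sdd(A):
    """Strictly diagonally dominant: |a_ii| > r_i(A) for every row i."""
    return all(abs(A[i][i]) > offdiag_row_sum(A, i) for i in range(len(A)))

def is_dsdd(A):
    """Doubly strictly diagonally dominant:
    |a_ii| * |a_jj| > r_i(A) * r_j(A) for all i != j."""
    n = len(A)
    return all(abs(A[i][i]) * abs(A[j][j]) >
               offdiag_row_sum(A, i) * offdiag_row_sum(A, j)
               for i in range(n) for j in range(n) if i != j)
```

As remark (ii) notes, every strictly diagonally dominant matrix passes the doubly-strict test, but not conversely: [[2, -2], [-0.5, 2]] is doubly strictly diagonally dominant (2·2 > 2·0.5) while its first row fails strict dominance (2 is not greater than 2).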
Estimating the bounds of the minimum eigenvalue of an -matrix and its corresponding eigenvector is an interesting subject in matrix theory and has important applications in many practical problems; see [4, 6–8]. In particular, these bounds are used to estimate upper bounds of the -norm of the solution for the following system of ordinary differential equations: where , , and is a constant -matrix. And it is proved in [6] that where and is the positive eigenvector of corresponding to . When the order of is large, it is difficult to compute and . Hence it is necessary to estimate the bounds of and .
In [4], Shivakumar et al. obtained the following bounds of when is a weakly chained diagonally dominant -matrix.
Theorem 2 (see [4, Theorem 4.1]). Let be a weakly chained diagonally dominant -matrix and . Then
Recently, Tian and Huang [9] provided lower bounds of by using the spectral radius of the Jacobi iterative matrix for a general -matrix .
Theorem 3 (see [9, Theorem 3.1]). Let be an -matrix and . Then
Also in [9], a lower bound of , which depends only on the entries of , has been presented when is a strictly diagonally dominant -matrix.
Theorem 4 (see [9, Corollary 3.4]). Let be a strictly diagonally dominant -matrix. Then
As shown in [9], it is possible that equals zero or that is very small, and moreover, whenever is not weakly chained diagonally dominant, Theorems 2 and 4 cannot be used to estimate the bounds of effectively. On the other hand, it is difficult to estimate by using Theorem 3 because of the difficulty of computing the diagonal elements of and when is very large.
In this paper, we continue to research the problems mentioned previously. For a doubly strictly diagonally dominant -matrix , we in Section 3 give some inequalities on the bounds of the entries of . And in Section 4, some inequalities on bounds of and the corresponding eigenvector are established. Lastly, an example, in which we estimate the -norm of the solution for the system (2) when is a doubly strictly diagonally dominant -matrix, is given in Section 5.
In this section, we give a lemma which involves some results for a doubly strictly diagonally dominant -matrix. First, some notations are listed: for a doubly strictly diagonally dominant matrix and , where Note here that let if ().
Lemma 5. Let be a doubly strictly diagonally dominant -matrix and . And, for any , let , where and , . Then is a strictly diagonally dominant -matrix. Furthermore, , for and for any .
Proof. Since is a doubly strictly diagonally dominant -matrix and , we have hence, from , And, for any , if , and if , inequality (11) is obvious.
From inequality (11), we have Let . Then From inequality (10), we have And, for any , from inequality (12), we have From inequality (14) and inequality (15), is strictly diagonally dominant. Moreover, it is clear that and , which implies that is an -matrix.
Furthermore, from the definition of , we have that and for any , We now prove for any . Since is doubly strictly diagonally dominant, we get that there is , , such that (otherwise, a contradiction to the definition of doubly strictly diagonally dominant matrices). Hence and equivalently, And for any , Hence, from inequality (19), inequality (20), and the fact that is an -matrix, we have that, for any , The proof is completed.
Lemma 6 (see [10, Page 719]). Let be an complex matrix and let be positive real numbers. Then all the eigenvalues of lie in the
3. Bounds for the Entries of the Inverse of a Doubly Strictly Diagonally Dominant M-Matrix
In this section, upper and lower bounds for the entries of are given when is a doubly strictly diagonally dominant -matrix.
Lemma 7 (see [11, Lemma 2.2]). Let be a strictly diagonally dominant -matrix and let . Then, for all ,
Next, we present a similar result for a doubly strictly diagonally dominant -matrix.
Theorem 8. Let be a doubly strictly diagonally dominant -matrix and let . Then, for all ,
Proof. If , then is strictly diagonally dominant and the conclusion follows from Lemma 7. We next suppose that . From Lemma 5, we get that is a strictly diagonally dominant -matrix for any , where , , and , . Let and . Then If , from Lemma 7, we have that that is, If and , from Lemma 7, then that is, moreover, by , we have And if and , from Lemma 7, then that is, Hence, from inequality (30) and inequality (32) and letting , we have that, for any , The conclusion follows from inequality (27) and inequality (33).
We next establish the upper and lower bounds for the diagonal entries of the inverse of a doubly strictly diagonally dominant -matrix.
Proof. If , then the conclusion follows from Lemma 2.2 of [9]. We next suppose that . Since is a doubly strictly diagonally dominant -matrix, and , , . By , we have that, for all , which implies Moreover, from equality (35) and Theorem 8, we have that, for any , And similar to the proof of Theorem 8, is a strictly diagonally dominant -matrix, where is given in Lemma 5. Let . Then, from , we have that Hence, from inequality (37), inequality (38), and Lemma 5, we obtain that for any The conclusion follows from inequality (36) and inequality (39).
Next a lower bound of the entries of the inverse of a doubly strictly diagonally dominant -matrix will be established. Firstly, a lemma is given.
Lemma 10 (see [4, Theorem 3.5]). Let be a weakly chained diagonally dominant -matrix and let . Then
Theorem 11. Let be a doubly strictly diagonally dominant -matrix and let . Then where
Proof. If , then is a strictly diagonally dominant -matrix, also a weakly chained diagonally dominant -matrix. The conclusion is evident from Lemma 10. We next suppose that . Similar to the proof of Theorem 8, is a strictly diagonally dominant -matrix, where is given in Lemma 5. Let and . By Lemma 10, we have that Moreover, note that and ; we have And also note that, for any , Hence, we need only prove that for any . In fact, if , then If , then If , then Hence, for any , The conclusion follows from inequalities (44), (45), and (49).
4. Bounds for the Minimum Eigenvalue of a Doubly Strictly Diagonally Dominant M-Matrix
In this section, we give some lower bounds for which depend only on the entries of when is a doubly strictly diagonally dominant -matrix. First, for , we give an upper bound of , where .
Theorem 12. Let be a doubly strictly diagonally dominant -matrix. Then
Proof. Let . Then The proof is completed.
Proof. If is irreducible, then ; meanwhile, from the irreducibility of and the definition of , we have for any . We next consider the spectral radius of . From Lemma 6, we have that there is such that which, from [12], leads to Hence,
If is reducible, then we can obtain a doubly strictly diagonally dominant -matrix such that is irreducible by replacing some nondiagonal zero entries of with sufficiently small negative real number . Now replace with in the previous case. Let approach 0; the conclusion follows by the continuity of about the entries of .
From Theorems 12 and 13, we have the following result.
Theorem 14. Let be a doubly strictly diagonally dominant -matrix. Then where
Proof. By Theorem 12 and the fact that , we have that Hence, from Theorem 13, .
We now give upper and lower bounds for the components of the eigenvector corresponding to the minimum eigenvalue for an irreducible doubly strictly diagonally dominant -matrix.
Theorem 15. Let be an irreducible doubly strictly diagonally dominant -matrix and let . And let be the positive eigenvector of corresponding to with . Then, for all , Furthermore,
Proof. It is clear that exists and . From and , we have and ; hence, where . The lower bound for is proved similarly. Furthermore, by Theorem 3.1 of [12], By Theorem 8, . Hence, The proof is completed.
Consider the following system: where It is easy to verify that is an irreducible doubly strictly diagonally dominant -matrix and that . Hence is not a weakly chained diagonally dominant -matrix. We now establish the upper bound for the -norm of the solution . Let . By Theorems 8 and 9, we have By Theorem 11, we have By Theorem 14, we have Hence, by inequality (3) and Theorem 15, we have Hence, Note here that we cannot estimate the lower bound of by using Theorem 2 (Theorem 4.1 of [4]) and Theorem 4 (Corollary 3.4 of [9]) because is not a strictly diagonally dominant -matrix and not a weakly chained diagonally dominant -matrix.
Ming Xu, Suhua Li, and Chaoqian Li contributed equally to this work. All authors read and approved the final paper.
The authors are grateful to the referees for their useful and constructive suggestions. The first author is supported by Science Foundation of Guizhou Province (20132260, LKK201331, and LKK201424). The third author is supported by National Natural Science Foundations of China (11326242, 11361074), Natural Science Foundations of Yunnan Province (2013FD002), and IRTSTYN.
A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, NY, USA, 1979. View at: MathSciNet
M. Fiedler and T. L. Markham, “An inequality for the Hadamard product of an M-matrix and an inverse M-matrix,” Linear Algebra and Its Applications, vol. 101, pp. 1–8, 1988. View at: Publisher Site | Google Scholar | MathSciNet
R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991. View at: Publisher Site | MathSciNet
P. N. Shivakumar, J. J. Williams, Q. Ye, and C. A. Marinov, “On two-sided bounds related to weakly diagonally dominant M-matrices with application to digital circuit dynamics,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 2, pp. 298–312, 1996. View at: Publisher Site | Google Scholar | MathSciNet
R. S. Varga, Geršgorin and His Circles, Springer, Berlin, Germany, 2004.
C. Corduneanu, Principles of Differential and Integral Equations, Chelsea, New York, NY, USA, 1988.
C. Q. Li, Y. T. Li, and R. J. Zhao, “New inequalities for the minimum eigenvalue of M-matrices,” Linear and Multilinear Algebra, vol. 61, no. 9, pp. 1267–1279, 2013. View at: Publisher Site | Google Scholar | MathSciNet
W. Walter, Differential and Integral Inequalities, Springer, New York, NY, USA, 1970. View at: MathSciNet
G. X. Tian and T. Z. Huang, “Inequalities for the minimum eigenvalue of M-matrices,” Electronic Journal of Linear Algebra, vol. 20, pp. 291–302, 2010. View at: Google Scholar | MathSciNet
R. S. Varga, “Minimal Gerschgorin sets,” Pacific Journal of Mathematics, vol. 15, pp. 719–729, 1965. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
Y. Li, F. Chen, and D. Wang, “New lower bounds on eigenvalue of the Hadamard product of an M-matrix and its inverse,” Linear Algebra and its Applications, vol. 430, no. 4, pp. 1423–1431, 2009. View at: Publisher Site | Google Scholar | Zentralblatt MATH | MathSciNet
H. Minc, Nonnegative Matrices, John Wiley & Sons, New York, NY, USA, 1987.
|
Global Constraint Catalog: same_and_global_cardinality_low_up
\mathrm{𝚜𝚊𝚖𝚎}
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚊𝚗𝚍}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2},\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\right)
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚕}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚘𝚖𝚒𝚗}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚘𝚖𝚊𝚡}-\mathrm{𝚒𝚗𝚝}\right)
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1},\mathrm{𝚟𝚊𝚛}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2},\mathrm{𝚟𝚊𝚛}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\left[\mathrm{𝚟𝚊𝚕},\mathrm{𝚘𝚖𝚒𝚗},\mathrm{𝚘𝚖𝚊𝚡}\right]\right)
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\mathrm{𝚟𝚊𝚕}\right)
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚒𝚗}\ge 0
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚒𝚗}\le \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}
The variables of the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2} collection correspond to the variables of the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} collection according to a permutation. In addition, each value \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\left[i\right].\mathrm{𝚟𝚊𝚕} (i\in \left[1,|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|\right]) should be taken by at least \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\left[i\right].\mathrm{𝚘𝚖𝚒𝚗} and at most \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\left[i\right].\mathrm{𝚘𝚖𝚊𝚡} variables of the \mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1} collection.
\left(\begin{array}{c}〈1,9,1,5,2,1〉,\hfill \\ 〈9,1,1,1,2,5〉,\hfill \\ 〈\begin{array}{ccc}\mathrm{𝚟𝚊𝚕}-1\hfill & \mathrm{𝚘𝚖𝚒𝚗}-2\hfill & \mathrm{𝚘𝚖𝚊𝚡}-3,\hfill \\ \mathrm{𝚟𝚊𝚕}-2\hfill & \mathrm{𝚘𝚖𝚒𝚗}-1\hfill & \mathrm{𝚘𝚖𝚊𝚡}-1,\hfill \\ \mathrm{𝚟𝚊𝚕}-5\hfill & \mathrm{𝚘𝚖𝚒𝚗}-1\hfill & \mathrm{𝚘𝚖𝚊𝚡}-1,\hfill \\ \mathrm{𝚟𝚊𝚕}-7\hfill & \mathrm{𝚘𝚖𝚒𝚗}-0\hfill & \mathrm{𝚘𝚖𝚊𝚡}-2,\hfill \\ \mathrm{𝚟𝚊𝚕}-9\hfill & \mathrm{𝚘𝚖𝚒𝚗}-1\hfill & \mathrm{𝚘𝚖𝚊𝚡}-1\hfill \end{array}〉\hfill \end{array}\right)
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚊𝚗𝚍}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|
The values 1, 2, 5, 7 and 9 are respectively used 3 (2\le 3\le 3), 1 (1\le 1\le 1), 1 (1\le 1\le 1), 0 (0\le 0\le 2) and 1 (1\le 1\le 1) times.
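A minimal Python sketch of the constraint's ground check, assuming exactly the two conditions above: the second collection is a permutation of the first, and each listed value occurs in the first collection within its [omin, omax] bounds (the function name is mine):

```python
from collections import Counter

def same_and_gcc_low_up(vars1, vars2, values):
    """values is a list of (val, omin, omax) triples.
    True iff vars2 is a permutation of vars1 and every listed value
    occurs in vars1 between omin and omax times."""
    if Counter(vars1) != Counter(vars2):      # the 'same' part
        return False
    counts = Counter(vars1)                   # the gcc_low_up part
    return all(omin <= counts[val] <= omax for val, omin, omax in values)
```

Running it on the example above (vars1 = [1,9,1,5,2,1], vars2 = [9,1,1,1,2,5]) returns True; tightening the bound on value 7 to (7, 1, 2) makes it fail.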
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|>1
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.\mathrm{𝚟𝚊𝚛}\right)>1
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}.\mathrm{𝚟𝚊𝚛}\right)>1
|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|>1
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚒𝚗}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}>0
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}<|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|>|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}\right)
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\right)
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚒𝚗}
\ge 0
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}
\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚊𝚗𝚍}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
constraint can be used for modelling the following assignment problem with a single constraint. The organisation Doctors Without Borders has a list of doctors and a list of nurses, each of whom volunteered to go on one rescue mission. Each volunteer specifies a list of possible dates and each mission should include one doctor and one nurse. In addition we have for each date the minimum and maximum number of missions that should be effectively done. The task is to produce a list of pairs such that each pair includes a doctor and a nurse who are available on the same date and each volunteer appears in exactly one pair so that for each day we build the required number of missions.
In [BeldiceanuKatrielThiel05b], the flow network that was used to model the \mathrm{𝚜𝚊𝚖𝚎} constraint [BeldiceanuKatrielThiel04a], [BeldiceanuKatrielThiel04b] is extended to support the cardinalities. Figure 3.7.31 illustrates this flow model. Then, algorithms are developed to compute arc-consistency and bound-consistency.
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚊𝚗𝚍}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}
\mathrm{𝚏𝚒𝚡𝚎𝚍}
\mathrm{𝚒𝚗𝚝𝚎𝚛𝚟𝚊𝚕}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}_\mathrm{𝚗𝚘}_\mathrm{𝚕𝚘𝚘𝚙}
\mathrm{𝚜𝚊𝚖𝚎}
filtering: bound-consistency, arc-consistency, flow.
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}
\mathrm{𝑃𝑅𝑂𝐷𝑈𝐶𝑇}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1}.\mathrm{𝚟𝚊𝚛}=\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}.\mathrm{𝚟𝚊𝚛}
•\text{for}\text{all}\text{connected}\text{components:}
\mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}
=
\mathrm{𝐍𝐒𝐈𝐍𝐊}
•
\mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}
=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}|
•
\mathrm{𝐍𝐒𝐈𝐍𝐊}
=|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{2}|
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\mathtt{1}
\mathrm{𝑆𝐸𝐿𝐹}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\right)
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}.\mathrm{𝚟𝚊𝚛}=\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
•
\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}
\ge \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚒𝚗}
•
\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}
\le \mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚘𝚖𝚊𝚡}
\mathrm{𝐍𝐒𝐎𝐔𝐑𝐂𝐄}
\mathrm{𝐍𝐒𝐈𝐍𝐊}
\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚊𝚗𝚍}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
|
Global Constraint Catalog: strong_articulation_point
\mathrm{𝚝𝚛𝚎𝚎}
A constraint for which the filtering algorithm uses the notion of strong articulation point. A strong articulation point of a strongly connected digraph G is a vertex such that if we remove it, G is broken into at least two strongly connected components. Figure 3.7.75 illustrates the notion of strong articulation point on the digraph depicted by part (A). The vertex labelled by 3 is a strong articulation point since its removal creates the three strongly connected components depicted by part (B) (i.e., the first, second and third strongly connected components correspond respectively to the sets of vertices \left\{1,4\right\}, \left\{2\right\} and \left\{5\right\}). From an algorithmic point of view, it was shown in [ItalianoLauraSantaroni10] how to compute all the strong articulation points of a digraph G in linear time with respect to the number of arcs of G.
Figure 3.7.75. (A) A connected digraph, (B) its three strongly connected components {\mathrm{𝑠𝑐𝑐}}_{1}, {\mathrm{𝑠𝑐𝑐}}_{2} and {\mathrm{𝑠𝑐𝑐}}_{3} when its unique strong articulation point (i.e., the vertex labelled by 3) is removed
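As a contrast to the linear-time algorithm of [ItalianoLauraSantaroni10], a naive O(|V|·(|V|+|E|)) sketch simply removes each vertex in turn and counts the strongly connected components of what remains (function names and the test digraphs are mine, not the digraph of Figure 3.7.75):

```python
def sccs(vertices, edges):
    """Kosaraju's algorithm: return the strongly connected components."""
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        radj[w].append(u)

    order, seen = [], set()
    def dfs1(v):                       # first pass: record finish order
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    comps, seen = [], set()
    def dfs2(v, comp):                 # second pass: on the reversed graph
        seen.add(v)
        comp.add(v)
        for w in radj[v]:
            if w not in seen:
                dfs2(w, comp)
    for v in reversed(order):
        if v not in seen:
            comp = set()
            dfs2(v, comp)
            comps.append(comp)
    return comps

def strong_articulation_points(vertices, edges):
    """v is a strong articulation point of a strongly connected digraph
    iff removing v leaves at least two strongly connected components."""
    saps = set()
    for v in vertices:
        rest = [u for u in vertices if u != v]
        rest_edges = [(a, b) for a, b in edges if v not in (a, b)]
        if len(sccs(rest, rest_edges)) >= 2:
            saps.add(v)
    return saps
```

For instance, the complete digraph on three vertices has no strong articulation point, while the shared vertex of two directed cycles joined in a figure-eight is one.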
|
Power transmission system with tightly wound rope around cylindrical drum - MATLAB - MathWorks Nordic
Power transmission system with tightly wound rope around cylindrical drum
The Rope Drum block represents a power transmission system where a rope is tightly wound around a cylindrical drum at a sufficient tension level to prevent slipping. You can configure the rope so that the ends point in the same or opposite directions. You can set the direction that the ends move with the Rope windup type parameter:
If the rope ends point in the same direction, they translate in opposite directions.
If the rope ends point in opposite directions, they translate in the same direction.
The equations refer to rope ends A and B, as illustrated by the figure.
During normal operation, a driving torque causes the cylindrical drum to rotate about its axis of symmetry. The rotating drum transmits tensile force to the rope ends, which translate with respect to the drum centerline. The relative direction of translation between the two rope ends depends on the setting of the Rope windup type parameter. However, a positive drum rotation always corresponds to a positive translation at port A.
The block assumes that each rope end remains taut during simulation because slack rope ends do not transmit force. You can set the block to warn you if the rope loses tension.
You can optionally set parameters for:
Viscous friction at the drum bearing
The net torque acting on the cylinder satisfies the force balance equation
T={F}_{B}·R·\delta -{F}_{A}·R+\mu ·\omega ,
T is the net torque acting on the drum.
FA and FB are the external forces pulling on rope ends A and B.
R is the drum radius.
μ is the viscous friction coefficient at the drum bearings.
ω is the drum angular velocity.
δ is the rope windup type according to the parameter settings:
When you set Rope windup type to Ends move in the same direction, δ = -1.
When you set Rope windup type to Ends move in the opposite directions, δ = +1.
The figure shows the equation variables.
The translational velocities of the two rope ends are functions of the drum radius and angular velocity. Each translational velocity is equal to the tangential velocity of a point at the drum periphery according to the expressions
\begin{array}{c}{V}_{A}=-\omega \text{\hspace{0.17em}}·R\\ {V}_{B}=+\omega \text{\hspace{0.17em}}·R\end{array}
VA is the translational velocity of rope end A.
VB is the translational velocity of rope end B.
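To make the sign conventions concrete, here is a small numeric sketch of the two relations above (function names and the illustration values are my own, not part of the block reference):

```python
def net_torque(F_A, F_B, R, mu, omega, same_direction):
    """T = F_B*R*delta - F_A*R + mu*omega, with delta = -1 when the
    rope ends move in the same direction and delta = +1 otherwise."""
    delta = -1 if same_direction else +1
    return F_B * R * delta - F_A * R + mu * omega

def end_velocities(omega, R):
    """V_A = -omega*R, V_B = +omega*R."""
    return -omega * R, +omega * R
```

With equal end forces of 10 N, a 0.5 m drum, a friction coefficient of 0.1 and the ends moving in opposite directions, the tension terms cancel and only the bearing term remains.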
The block ignores slip at the rope-drum contact surface.
The block ignores rope compliance.
S — Drum shaft
Mechanical rotational conserving port associated with the drum shaft.
A — Rope end A
Mechanical translational conserving port associated with rope end A.
B — Rope end B
Mechanical translational conserving port associated with rope end B.
Drum radius — Size of drum
Distance between the drum center and its periphery. The drum periphery coincides with the contact surface between the drum and the rope.
Rope windup type — Direction of B rope end with respect to A rope end
Ends move in same direction (default) | Ends move in opposite direction
Relative direction of the translation motion of the two rope ends, A and B, where B is the following end.
Tension warning — Optional lost tension warning
Do not check tension (default) | Warn if either end loses tension
Option to generate a warning when a rope end becomes slack. A rope end becomes slack when the net tensile forces responsible for keeping it taut fall below zero. Because slack cables do not transmit force, the simulation results may not be accurate when the rope loses tension.
Bearing viscous friction coefficient — Viscous friction for drum rotation
Linear damping coefficient in effect at the drum bearing. This coefficient determines the power losses due to viscous friction at a given drum angular velocity.
Inertia — Resistance to angular acceleration
Option to account for drum inertia.
No inertia — Select this setting if drum inertia has a negligible impact on driveline dynamics. This setting sets drum inertia to zero.
Specify inertia and initial velocity — Select this setting if drum inertia has a significant impact on driveline dynamics.
Drum inertia — Drum tendency to remain in motion
Moment of inertia of the drum about its rotation axis. In the simple case of a solid cylindrical drum, the moment of inertia is
I=\frac{1}{2}M{R}^{2},
where M is the drum mass and R is the drum radius.
Drum initial velocity — Initial drum state
Angular velocity of the drum at the start of the simulation.
Rope | Wheel and Axle
|
04.10 Customizing Ticks
Matplotlib's default tick locators and formatters are designed to be generally sufficient in many common situations, but are in no way optimal for every plot. This section will give several examples of adjusting the tick locations and formatting for the particular plot type you're interested in.
Before we go into examples, it will be best for us to understand further the object hierarchy of Matplotlib plots. Matplotlib aims to have a Python object representing everything that appears on the plot: for example, recall that the figure is the bounding box within which plot elements appear. Each Matplotlib object can also act as a container of sub-objects: for example, each figure can contain one or more axes objects, each of which in turn contain other objects representing plot contents.
The tick marks are no exception. Each axes has attributes xaxis and yaxis, which in turn have attributes that contain all the properties of the lines, ticks, and labels that make up the axes.
Within each axis, there is the concept of a major tick mark, and a minor tick mark. As the names would imply, major ticks are usually bigger or more pronounced, while minor ticks are usually smaller. By default, Matplotlib rarely makes use of minor ticks, but one place you can see them is within logarithmic plots:
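For example, a log-log axes shows minor ticks between the decades. This sketch stays close to what the original notebook cell likely contained; the non-interactive Agg backend is my choice:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# A logarithmic plot: major ticks at the decades, minor ticks between them
ax = plt.axes(xscale='log', yscale='log')
ax.grid(True, which='both')  # draw gridlines at both major and minor ticks
```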
We see here that each major tick shows a large tickmark and a label, while each minor tick shows a smaller tickmark with no label.
Each of these tick properties—that is, the locations and labels—can be customized by setting the formatter and locator objects of each axis. Let's examine these for the x axis of the plot just shown:
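A plausible reconstruction of the inspection cell that produced the two output lines below (the object addresses, and even the exact minor formatter class, vary with the Matplotlib version):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

ax = plt.axes(xscale='log', yscale='log')
print(ax.xaxis.get_major_locator(), ax.xaxis.get_minor_locator())
print(ax.xaxis.get_major_formatter(), ax.xaxis.get_minor_formatter())
```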
<matplotlib.ticker.LogLocator object at 0x10dbaf630> <matplotlib.ticker.LogLocator object at 0x10dba6e80>
<matplotlib.ticker.LogFormatterMathtext object at 0x10db8dbe0> <matplotlib.ticker.NullFormatter object at 0x10db9af60>
We see that both major and minor tick labels have their locations specified by a LogLocator (which makes sense for a logarithmic plot). Minor ticks, though, have their labels formatted by a NullFormatter: this says that no labels will be shown.
We'll now show a few examples of setting these locators and formatters for various plots.
Perhaps the most common tick/label formatting operation is the act of hiding ticks or labels. This can be done using plt.NullLocator() and plt.NullFormatter(), as shown here:
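A sketch of such a cell (the random data is my stand-in for the notebook's plot):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

ax = plt.axes()
ax.plot(np.random.rand(50))

ax.xaxis.set_major_formatter(plt.NullFormatter())  # x: keep ticks, hide labels
ax.yaxis.set_major_locator(plt.NullLocator())      # y: remove ticks entirely
```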
Notice that we've removed the labels (but kept the ticks/gridlines) from the x axis, and removed the ticks (and thus the labels as well) from the y axis. Having no ticks at all can be useful in many situations—for example, when you want to show a grid of images. For instance, consider the following figure, which includes images of different faces, an example often used in supervised machine learning problems (see, for example, In-Depth: Support Vector Machines):
Notice that each image has its own axes, and we've set the locators to null because the tick values (pixel number in this case) do not convey relevant information for this particular visualization.
One common problem with the default settings is that smaller subplots can end up with crowded labels. We can see this in the plot grid shown here:
Particularly for the x ticks, the numbers nearly overlap and make them quite difficult to decipher. We can fix this with the plt.MaxNLocator(), which allows us to specify the maximum number of ticks that will be displayed. Given this maximum number, Matplotlib will use internal logic to choose the particular tick locations:
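A sketch of the fix on a shared-axes grid (the 4×4 layout mirrors the crowded grid described above; the data is omitted since only the tick locators matter here):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(4, 4, sharex=True, sharey=True)
for axi in ax.flat:
    axi.xaxis.set_major_locator(plt.MaxNLocator(3))  # at most 3 x ticks
    axi.yaxis.set_major_locator(plt.MaxNLocator(3))  # at most 3 y ticks
```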
This makes things much cleaner. If you want even more control over the locations of regularly-spaced ticks, you might also use plt.MultipleLocator, which we'll discuss in the following section.
Matplotlib's default tick formatting can leave a lot to be desired: it works well as a broad default, but sometimes you'd like to do something more. Consider this plot of a sine and a cosine:
There are a couple changes we might like to make. First, it's more natural for this data to space the ticks and grid lines in multiples of \pi. We can do this by setting a MultipleLocator, which locates ticks at a multiple of the number you provide. For good measure, we'll add both major and minor ticks in multiples of \pi/4.
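A sketch of this step (major ticks every π/2 and minor ticks every π/4; the plot styling is my assumption):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 3 * np.pi, 1000)
ax.plot(x, np.sin(x), lw=3, label='Sine')
ax.plot(x, np.cos(x), lw=3, label='Cosine')
ax.grid(True)
ax.legend(frameon=False)
ax.set_xlim(0, 3 * np.pi)

# major ticks at multiples of pi/2, minor ticks at multiples of pi/4
ax.xaxis.set_major_locator(plt.MultipleLocator(np.pi / 2))
ax.xaxis.set_minor_locator(plt.MultipleLocator(np.pi / 4))
```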
But now these tick labels look a little bit silly: we can see that they are multiples of \pi, but the decimal representation does not immediately convey this. To fix this, we can change the tick formatter. There's no built-in formatter for what we want to do, so we'll instead use plt.FuncFormatter, which accepts a user-defined function giving fine-grained control over the tick outputs:
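A sketch of such a formatter function, which rewrites each tick value as a multiple of π/2 using LaTeX-style labels (the branch structure is one reasonable way to cover the special cases 0, π/2 and π):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

def format_func(value, tick_number):
    # express the tick value as an integer number N of pi/2 units
    N = int(np.round(2 * value / np.pi))
    if N == 0:
        return "0"
    elif N == 1:
        return r"$\pi/2$"
    elif N == 2:
        return r"$\pi$"
    elif N % 2 > 0:
        return r"${0}\pi/2$".format(N)
    else:
        return r"${0}\pi$".format(N // 2)

fig, ax = plt.subplots()
x = np.linspace(0, 3 * np.pi, 1000)
ax.plot(x, np.sin(x))
ax.xaxis.set_major_locator(plt.MultipleLocator(np.pi / 2))
ax.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
```

The dollar signs wrap each label in a math-text string, so Matplotlib renders the π symbol instead of the decimal value.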
|
How Price Converges - Premia
The Premia pricing model enables pools to converge to the optimal market price and liquidity utilization rate in minimal time.
The price convergence mechanism uses a continuous, on-chain reinforcement learning model to consistently move towards the optimal market price and liquidity utilization rate over time.
Each transaction takes place in a discrete time interval from t to t+1. After every transaction is made, the pool updates its price level according to the nature of the transaction. Intuitively, in order to find the relationship between the BS pricing model and the actual market pricing curve, we're going to start with a random guess for this constant, C_0, and let the market forces make the adjustment over every ensuing capital flow in and out of the pool.
Liquidity adjusted price in relation to simplified Black Scholes price curve.
Starting from an overpriced "guess constant" and converging to a market clearing equilibrium
To illustrate the convergence mechanisms, there are some standard assumptions/practicalities to consider:
Market participants collectively know the market clearing price for any option.
If the price quoted by the pool at any point in time is greater than the true market clearing price, option buyers will buy fewer options, compared to equilibrium, and LPs will seek to provide more capital, compared to equilibrium.
If the price quoted by the pool at any point in time is smaller than the true market clearing price, option buyers will buy more options, compared to equilibrium, and LPs will seek to provide less capital, compared to equilibrium.
Option buyer demand can only be observed when a purchase is made and capital within a pool is locked as "booked". In other words, if there is no capital in the pool that can be used in option underwriting, the amount of demand by option buyers is unobservable.
Buyers and LPs will asymptotically act rationally, but bursts of random behavior are tolerated, unless they are highly un-economical and exceed the variance of the rational behavior model.
The best way to illustrate the convergence is to go through how it works in practice. Suppose the magic fairy this time graciously tells us that the market clearing constant between the BS model output, BS(V_i), and the actual market price is 2, so C_{clearing}=2.0. Keep in mind, in reality the "true" C_{clearing} value is never observable.
At initialization, there are zero assets in the pool, and a guess, C_{0}, which will be used as the initial multiple to BS(V_i), to produce the option quote provided to the buyer. Suppose we set C_0=5.
Step 0: Since we know that at initialization the pool overprices the options substantially (by design), the LPs are incentivized to provide capital, as they would expect to earn above market returns for their capital. Suppose that during step 0, some LP joins a pool and provides 100 ETH. The pool makes no adjustments to the C-value at step 0, as the initial liquidity level needs to first be established.
Step 1: The options are still overpriced, so further LPs are attracted to provide liquidity in expectation of above market returns. Suppose another LP joins in with an additional 100 ETH. Now, the pool observes that the market pressures are biased towards the supply of liquidity. The "step size" is calculated as the change in available liquidity relative to the pool's previous state. In this case, the
stepSize = \frac{(200-100)}{200}=0.5
This step size is used as input to an exponential decay function, to get
e^{-(0.5)}=0.6065
. We multiply this number by our initial
C_{0}
to update our beliefs of what the market clearing C-value should be. Our improved guess after step 1 is now
C_{1}=C_0*0.6065=5*0.6065=3.0325
, which is much closer to the market clearing C-value of 2.
Step -> n:
Price level will trend towards the market-clearing C-level and is resilient to high volatility.
Due to the volatility of user behavior, we are likely to see option purchases even if the buying price has not yet reached market equilibrium, so the C-value will move up and down until it ultimately converges to within a single-digit percentage error of the market clearing conditions.
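The C-value update described in these steps can be sketched as follows (a minimal illustration; `update_c` is a hypothetical helper name, and a real pool would also handle the demand-side pressure that pushes C back up):

```python
import math

def update_c(c_prev, liq_prev, liq_new):
    # Step size: relative change in available liquidity in the pool.
    step = (liq_new - liq_prev) / liq_new
    # Exponential decay of the step size damps C when liquidity flows in.
    return c_prev * math.exp(-step)

# Step 1 from the walkthrough: 100 ETH -> 200 ETH with C_0 = 5.
c1 = update_c(5.0, 100.0, 200.0)  # ~3.0327, approaching C_clearing = 2
```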
Key insight: The convergence is conditional on
C_{0}
being initialized above the market clearing level because demand cannot be observed unless there is capital in the pool.
Key explorations to follow:
The initial
C_0
value has a direct impact on convergence dynamics. The higher this initial value, the greater the likelihood of asymptotic convergence. This relationship flips for limited step count models.
The exponent base in the model directly influences the convergence profile. It is not immediately obvious what the key characteristics are of a convergence process that constitute a healthy market.
Is there a better base pricing alternative to vanilla BS?
Does a multi-dimensional C-value, specific to option strike price and maturity, provide a more optimal pricing mechanism than the current single-dimensional value?
|
Ratio, Rates and Proportions II
Diagram below shows the distance from Town P to Town Q and Town Q to Town R.
(a) Rahim rode his bicycle from Town P at 9.00 a.m. and took 2 hours to reach Town Q.
What is the speed, in km/h, of the bicycle?
(b) Rahim took 30 minutes rest at Town Q and continued his journey to Town R three times faster than his earlier speed.
State the time he reached Town R.
\begin{array}{l}\text{The speed of the bicycle from Town }P\text{ to Town }Q\\ =\frac{\text{Distance}}{\text{Time}}\\ =\frac{10}{2}\\ =5\text{ km/h}\end{array}
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \\ \text{Rahim took a 30-minute rest at Town }Q\text{.}\\ \text{Time taken for his journey to Town }R\text{ at three times}\\ \text{his earlier speed}\\ =\frac{25}{5×3}\\ =\frac{5}{3}\\ =1\frac{2}{3}\text{ hours}\\ =1\text{ hour 40 minutes }←\overline{)\begin{array}{l}\frac{2}{3}×60\\ =40\text{ minutes}\end{array}}\\ \\ \text{Total time taken from Town }P\text{ to Town }Q\text{ and Town }Q\text{ to Town }R\\ =2\text{ hours }+30\text{ minutes}+1\text{ hour 40 minutes}\\ =4\text{ hours 10 minutes}\\ \\ \text{The time he reached Town }R\text{ is 1}\text{.10 p}\text{.m}\text{.}\end{array}
Diagram below shows a trailer travelling from a factory to location P and location P to location Q. The trailer departs at 8.00 a.m.
(a) Based on the Table, calculate the total mass, in tonne, of the trailer and its load.
(b) The trailer arrived at location P at 10.00 a.m. and it stopped for 1½ hours to unload half of the concrete pipes. The trailer then continued its journey to location Q two times faster than its earlier speed. State the time the trailer reached location Q.
\begin{array}{l}\text{Mass of concrete pipe on the trailer}\\ =500\text{ kg}×8\\ =4000\text{ kg}\\ =\frac{4000}{1000}\\ =4\text{ tonnes}\\ \\ \text{Total mass of the trailer and its load}\\ =1.5+4.0\\ =5.5\text{ tonnes}\end{array}
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \\ \text{Time taken to travel from the Factory to Location }P\\ =10.00\text{ a}\text{.m}\text{.}-8.00\text{ a}\text{.m}\text{.}\\ =2\text{ hours}\\ \\ \text{Speed of the trailer}\\ =\frac{80}{2}\\ =40\text{ km/h}\\ \\ \text{It stopped for }1\frac{1}{2}\text{ hours to unload half of the concrete pipes}\text{.}\\ \\ \text{Time taken to continue its journey to Location }Q\\ =\frac{\text{Distance}}{\text{Speed}}\\ =\frac{200}{40×2}\\ =2\frac{1}{2}\text{ hours}\\ \\ \text{The time the trailer reached Location }Q\\ =1400\text{ hours or }2.00\text{ p}\text{.m}\text{.}\end{array}
Diagram below shows travel information of Jason and Mary from Town A to Town B. Jason drives a lorry while Mary drives a car.
(a) Jason started his journey from Town A at 7.00 a.m.
State the time Jason reached Town B.
(b) If both of them reached Town B at the same time, state the time Mary started her journey from Town A.
Total time taken from Town A to Town B
= 3 hours + 1 hour + 2 hours
= 6 hours
Time Jason reached Town B
= 1300 hours → 1.00 p.m.
\begin{array}{l}\text{Speed}=\frac{\text{Distance}}{\text{Time}}\\ \text{Total distance from Town }A\text{ to Town }B\\ =\left(60×3\right)+\left(75×2\right)\\ =180+150\\ =330\text{ km}\\ \\ \text{Time taken by Mary}\\ =\frac{330}{100}\\ =3.3\text{ hours}\\ =3\text{ hours }18\text{ minutes }←\overline{)\begin{array}{l}0.3×60\\ =18\text{ minutes}\end{array}}\\ \\ \text{The time Mary started her journey from Town }A=9.42\text{ a}\text{.m}\text{.}\end{array}
Mei Ling drives her car from Kuantan to Kuala Terengganu for a distance of 170 km for 2 hours. She then continues her journey to Kota Bahru and increases her speed to 100 kmh-1 for 45 minutes.
Calculate the acceleration, in kmh-2, of the car.
\begin{array}{l}\text{Speed from Kuantan to Kuala Terengganu}\\ =\frac{170}{2}=85{\text{ kmh}}^{-1}\\ \\ \text{Acceleration}\\ =\frac{\text{Final speed}-\text{Initial speed}}{\text{Time taken}}\\ =\frac{100-85}{\frac{45}{60}}\\ =20{\text{ kmh}}^{-2}\end{array}
Diagram shows the distance between K and L.
A car moves from K to L with an average speed of 80 kmh-1. After rest for 1 hour 30 minutes, the car then returns to K. The average speed of the car from L to K increases 20%. If the car reaches K at 5 p.m., calculate the time the car starts its journey from K.
\begin{array}{l}\text{The time taken from }K\text{ to }L\\ =\frac{160}{80}\\ =2\text{ hrs}\\ \\ \text{The average speed from }L\text{ to }K\\ =80×1.2\\ =96{\text{ kmh}}^{-1}\\ \\ \text{The time taken from }L\text{ to }K\\ =\frac{160}{96}\\ =1\frac{2}{3}\text{ hrs}\\ \\ \text{The total time taken for the whole journey}\\ =2+1\frac{1}{2}+1\frac{2}{3}\\ =5\frac{1}{6}\text{ hrs}\\ =5\text{ hour }10\text{ minutes}\end{array}
The car starts its journey from K at 11:50 a.m.
Mr Wong is going to watch a movie at 2.30 p.m at a cinema that is 60 km away from his house. He leaves at 1.25 p.m and drives at an average speed of 70 km/h for half an hour. If he drives at an average speed of 75 km/h for the remaining journey, will he arrive before the movie starts?
Give your reason with calculation.
\begin{array}{l}\text{Distance travelled at the first }\frac{1}{2}\text{ hour}\\ =70×\frac{1}{2}\\ =35\text{ km}\\ \\ \text{Remaining distance}=60\text{ km}-35\text{ km}\\ \text{ }=25\text{ km}\\ \text{Time taken for the remaining journey}\\ =\frac{25}{75}\\ =\frac{1}{3}×60\\ =20\text{ minutes}\\ \\ \text{Mr Wong arrives at}\\ =1.25\text{ p}\text{.m}+30\text{ minutes}+20\text{ minutes}\\ =2.15\text{ p}\text{.m}\\ \\ \text{Yes, he will arrive before 2}\text{.30 p}\text{.m}\\ \end{array}
Karen drives her car from town P to town Q at an average speed of 80 km/h for 2 hours 15 minutes. She continues her journey for a distance of 90 km from town Q to town R and takes 45 minutes.
Calculate the average speed, in km/h, for the journey from P to R.
\begin{array}{l}\text{From }P\text{ to }Q\text{:}\\ \text{Average speed}=80\text{ km/h}\\ \text{Time taken = 2 hours 15 minutes}\\ \text{ = 2}\frac{1}{4}\text{ hours}\\ \text{Distance}=\text{average speed}×\text{time taken}\\ \text{Distance }=80×2\frac{1}{4}\\ \text{ }=80×\frac{9}{4}\\ \text{ }=180\text{ km}\\ \\ \text{From }Q\text{ to }R\text{:}\\ \text{Distance }=90\text{ km}\\ \text{Time taken = 45 minutes}\\ \text{ = }\frac{3}{4}\text{ hour}\\ \\ \text{Average speed from }P\text{ to }R\\ =\frac{180+90}{2\frac{1}{4}+\frac{3}{4}}←\overline{)\frac{\text{Total distance}}{\text{Total time}}}\\ =\frac{270}{3}\\ =90\text{ km/h}\end{array}
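The total-distance-over-total-time rule used in this solution can be checked with exact fractions (an illustration only, using the same leg values as above):

```python
from fractions import Fraction

# From P to Q: 80 km/h for 2 h 15 min; from Q to R: 90 km in 45 min.
t_pq = Fraction(9, 4)                 # 2 1/4 hours
t_qr = Fraction(3, 4)                 # 3/4 hour
d_pq = 80 * t_pq                      # 180 km
d_qr = 90                             # km
avg = (d_pq + d_qr) / (t_pq + t_qr)   # total distance / total time
# avg == 90 km/h
```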
\begin{array}{l}\text{Time taken}=\frac{\text{distance travelled}}{\text{average speed}}\\ =\frac{120\text{ km}}{80{\text{ km h}}^{-1}}\\ =\frac{3}{2}\text{ hours}\\ \text{= 1 hour 30 minutes}\\ \text{1 hour 30 minutes after 11}\text{.00 a}\text{.m}\text{. is 12}\text{.30 p}\text{.m}\text{.}\end{array}
\begin{array}{l}\text{Time taken by Faisal}\\ \text{= 3 hours}-30\text{ minutes}\\ \text{= 3 hours}-\frac{1}{2}\text{ hour}\\ \text{= 2}\frac{1}{2}\text{ hours}\\ \\ \text{Average speed of Faisal's car}\\ =\frac{\text{distance travelled}}{\text{time taken}}\\ =\frac{180\text{ km}}{\text{2}\frac{1}{2}\text{ hours}}\\ =72\text{ km/h}\end{array}
\begin{array}{l}\text{Distance}=\text{average speed}×\text{time taken}\\ \text{Distance from }L\text{ to }M=90×2\frac{40}{60}\\ \text{ }=90×2\frac{2}{3}\\ \text{ }=90×\frac{8}{3}\\ \text{ }=240\text{ km}\\ \\ \text{Total distance from }L\text{ to }N=240+100\\ \text{ }=340\text{ km}\\ \\ \text{Total time taken = 2 hours 40 minutes + 1 hour 20 minutes}\\ \text{ = 4 hours}\\ \text{Average speed for the journey from }L\text{ to }N=\frac{340\text{ km}}{4\text{ h}}\\ \text{ }=85\text{ km/h}\end{array}
\begin{array}{l}\text{Distance from }F\text{ to }G\\ \text{= 105 km/h}×\text{3 hours}\\ \text{=315 km}\\ \\ \text{Average speed for return journey}\\ =\frac{\text{distance travelled}}{\text{time taken}}\\ =\frac{315\text{ km}}{3\frac{1}{2}\text{ hours}}\\ =90\text{ km/h}\end{array}
\begin{array}{l}\text{Time taken for vehicle }A=\frac{230}{115}=2\text{ hours}\\ \text{Time taken for vehicle }B=\frac{250}{100}=2\frac{1}{2}\text{ hours}\\ \text{Time taken for vehicle }C=\frac{170}{85}=2\text{ hours}\\ \text{Time taken for vehicle }D=\frac{245}{60}=3\frac{1}{2}\text{ hours}\end{array}
|
Immigration, obesity and labor market outcomes in the UK | IZA Journal of Development and Migration | Full Text
{H}_{\mathit{it}}=\alpha +\beta {X}_{\mathit{it}}+\phi IM{M}_{i}+\tau DU{R}_{\mathit{it}}+\theta Y{2006}_{t}+{ϵ}_{\mathit{it}}
{L}_{\mathit{it}}=\alpha +\beta {X}_{\mathit{it}}+\gamma O{W}_{\mathit{it}}+\eta O{B}_{\mathit{it}}+\tau DU{R}_{\mathit{it}}+\theta Y{2006}_{t}+{ϵ}_{\mathit{it}}
{L}_{\mathit{it}}=\alpha +\beta {X}_{\mathit{it}}+\Gamma IM{M}_{i}+\gamma O{W}_{\mathit{it}}+\eta O{B}_{\mathit{it}}+\varphi \left(IM{M}_{i}*O{W}_{\mathit{it}}\right)+\lambda \left(IM{M}_{i}*O{B}_{\mathit{it}}\right)+\tau DU{R}_{\mathit{it}}+\theta Y{2006}_{t}+{ϵ}_{\mathit{it}}\phantom{\rule{0.25em}{0ex}}
|
Hölder's Inequality | Brilliant Math & Science Wiki
Contributed by Seq O, Alexander Katz, Hamza A, and Durward McDonell
Hölder's inequality is a statement about sequences that generalizes the Cauchy-Schwarz inequality to multiple sequences and different exponents.
Hölder's inequality states that, for nonnegative sequences
{a_i}, {b_i}, \ldots , {z_i}
and positive exponents satisfying
\lambda_a + \lambda_b + \cdots + \lambda_z = 1 ,
we have
(a_1 + a_2 + \cdots + a_n)^{ \lambda_a} \cdots (z_1 + z_2 + \cdots + z_n)^{ \lambda_z } \geq a_1^{ \lambda_a } b_1^{ \lambda_b} \cdots z_1^{ \lambda_z } + \cdots + a_n^{ \lambda_a } b_n^{ \lambda_b } \cdots z_n^{ \lambda_z } .
For instance, in the case of
\lambda_a = \lambda_b = \frac{1}{2},
Hölder's inequality reduces to
(a_1 + a_2 + \cdots + a_n)^{ \frac{1}{2}} (b_1 + b_2 + \cdots + b_n)^{ \frac{1}{2} } \geq (a_1 b_1)^{\frac{1}{2}} + (a_2 b_2)^{\frac{1}{2}} + \cdots + (a_n b_n)^{ \frac{1}{2}},
which is the Cauchy-Schwarz inequality.
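A quick numeric spot-check of this two-sequence case (an illustration, not a proof; the sequences are random positive values):

```python
import random

random.seed(0)
a = [random.uniform(0.1, 10.0) for _ in range(5)]
b = [random.uniform(0.1, 10.0) for _ in range(5)]

# (sum a)^(1/2) * (sum b)^(1/2) >= sum sqrt(a_i * b_i)
lhs = sum(a) ** 0.5 * sum(b) ** 0.5
rhs = sum((x * y) ** 0.5 for x, y in zip(a, b))
assert lhs >= rhs
```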
Hölder's inequality can be proven using Jensen's inequality. In particular, we may seek to prove the following form of Hölder's inequality:
\prod _{j=1} ^z \left( \int f _j(x) ^{p_j} \, dx \right) ^{1 /p _j} \ge \int \prod _{j=1} ^z f _j(x) \, dx,
\frac{1}{p _j} = \lambda _j,\ \sum_{j=1}^z \lambda _j = 1,\ \big\{ f _1(x_i) ^{p _1} dx \big\} = \big\{ a_1, a_2, \dots , a_n \big\},\ \big\{ f _2(x_i) ^{p _2} dx \big\} = \big\{ b_1, b_2, \dots , b_n \big\}, \dots,
\big\{ f _z(x_i) ^{p _z} dx \big\} = \{ z_1, z_2, \dots , z_n \}.
First, please refer to the proof presented in the Wikipedia article on Hölder's inequality for the simpler case of
\left( \int f ^p d\mu \right) ^ {1 /p} \left( \int g ^q d\mu \right) ^{1 /q} \ge \int fg\, d\mu
with
\frac {1} {p} +\frac {1} {q} = 1
.
Proof (from Wikipedia):
Recall Jensen's inequality for the convex function
x^p
(it is convex because
p\geq1
):
\int h\,d\nu\leq\left(\int h^pd\nu \right )^{\frac{1}{p}}
where
\nu
is any probability distribution and
h
is any
\nu
-measurable function. Let
\mu
be any measure, and let
\nu
be the distribution whose density w.r.t.
\mu
is proportional to
g^q
, i.e.
d\nu = \frac{g^q}{\int g^q\,d\mu}d\mu
. Then, using
\frac{1}{p}+\frac{1}{q}=1
(so that
p(1-q)+q=0
) and setting
h=fg^{1-q}
, we have
\int fg\,d\mu = \left (\int g^q\,d\mu \right )\int \underbrace{fg^{1-q}}_h\underbrace{\frac{g^q}{\int g^q\,d\mu}d\mu}_{d\nu} \leq \left (\int g^qd\mu \right ) \left (\int \underbrace{f^pg^{p(1-q)}}_{h^p}\underbrace{\frac{g^q}{\int g^q\,d\mu}\,d\mu}_{d\nu} \right )^{\frac{1}{p}} = \left (\int g^q\,d\mu \right ) \left (\int \frac{f^p}{\int g^q\,d\mu}\,d\mu \right )^{\frac{1}{p}}.
Thus
\int fg\,d\mu \leq \left(\int f^p\,d\mu \right )^{\frac{1}{p}} \left(\int g^q\,d\mu \right )^{\frac{1}{q}}
_\square
\left( \int f ^p d\mu \right) ^ {1 /p} \left( \int g ^q d\mu \right) ^{1 /q} \ge \left( \int (fg)^r d\mu \right) ^{1 /r}
for
p
and
q \gt 0
satisfying
\frac {1} {p} +\frac {1} {q} = \frac {1} {r}
. Since
\frac {r} {p} +\frac {r} {q} = \frac {r} {r} = 1
, we have
\textstyle \left( \int F ^{p /r} d\mu \right) ^ {r /p} \left( \int G ^{q /r} d\mu \right) ^{r /q} \ge \int FG\, d\mu
, using the proof above for the simpler case. Now substitute
f = F ^{1/r}
and
g = G ^{1/r}
. With some algebra, we can see that
\textstyle \left( \left( \int f ^p d\mu \right) ^ {r /p} \left( \int g ^q d\mu \right) ^{r /q} \right) ^{1/r} \ge \left( \int f ^r g ^r d\mu \right) ^{1 /r}
Given the above, we can show that
\left( \int f _1 ^{p _1} d\mu \right) ^{1 /p _1} \left( \int f _2 ^{p _2} d\mu \right) ^{1 /p _2} \left( \int f _3 ^{p _3} d\mu \right) ^{1 /p _3} \ge \left( \int (f _1 f _2) ^{\frac {1} {(1 /p _1) +(1 /p _2)}} d\mu \right) ^{(1 /p _1) + (1 /p_2)} \left( \int f _3 ^{p _3} d\mu \right) ^{1 /p _3} \ge \left( \int (f _1 f _2 f _3) ^{\frac {1} {(1 /p _1) +(1 /p _2) +(1 /p _3)}} d\mu \right) ^{(1 /p _1) +(1 /p_2) +(1 /p _3)}.
In fact, define
s(m) = \sum_{j=1}^{m} \frac{1}{p _j}.
If
\prod _{j=1}^{m-1} \left( \int f _j ^{p _j} d\mu \right) ^{1 /p _j} \ge \left( \int \left( \prod_{j=1}^{m-1} f _j \right) ^{\frac {1} {s(m-1)}} d\mu \right) ^{s(m-1)},
then
\prod _{j=1}^{m-1} \left( \int f _j ^{p _j} d\mu \right) ^{1 /p _j} \left( \int f _m ^{p _m} d\mu \right) ^ {1 /p _m} \ge \left( \int \left( \prod_{j=1}^{m-1} f _j \right) ^{\frac {1} {s(m-1)}} d\mu \right) ^{s(m-1)} \left( \int f _m ^{p _m} d\mu \right) ^ {1 /p _m} \ge \left( \int \left( \prod_{j=1}^{m} f _j \right) ^{\frac {1} {s(m)}} d\mu \right) ^{s(m)}.
Therefore, by induction,
\prod _{j=1}^{z} \left( \int f _j ^{p _j} d\mu \right) ^{1 /p _j} \ge \left( \int \left( \prod_{j=1}^{z} f _j \right) ^{\frac {1} {s(z)}} d\mu \right) ^{s(z)}.
Since
s(z) = 1
, we arrive at the original claim that we sought to prove.
Hölder's inequality may also be proven using Young's inequality. Young's inequality states that if
p,q
are positive reals satisfying
\frac{1}{p}+\frac{1}{q}=1
then
ab \leq \frac{a^p}{p}+\frac{b^q}{q}
for all nonnegative reals
a,b
It follows from the concavity of the logarithm function and Jensen's inequality; in particular,
\log\left(\frac{1}{p}a^{p}+\frac{1}{q}b^q\right) \geq \frac{1}{p}\log a^p+\frac{1}{q}\log b^q=\log a+\log b=\log ab
and exponentiating gives Young's inequality.
Hölder's inequality is often used to deal with square (or higher-power) roots of expressions in inequalities since those can be eliminated through successive multiplication. Here is an example:
Let
a,b,c
be positive reals satisfying
a+b+c=3
. What is the minimum possible value of
\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}?
\begin{aligned} \left(\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}\right)^{1/3}\left(\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}\right)^{1/3}(a+b+c)^{1/3} &\geq 1^{1/3}+1^{1/3}+1^{1/3} \\ &= 3 \\ \Rightarrow \left(\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}\right)\left(\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}\right)(a+b+c) &\geq 3^3. \end{aligned}
Since
a+b+c=3
, the minimum possible value of
\frac{1}{\sqrt{a}}+\frac{1}{\sqrt{b}}+\frac{1}{\sqrt{c}}
is
3
, attained at
a=b=c=1.\ _\square
(2001 IMO, Problem 2)
Prove that, for positive reals
a,b,c
,
\frac{a}{\sqrt{a^{2}+8bc}}+\frac{b}{\sqrt{b^{2}+8ca}}+\frac{c}{\sqrt{c^{2}+8ab}}\ge 1.
By Hölder's inequality,
\left(\sum\frac{a}{\sqrt{a^{2}+8bc}}\right)\left(\sum\frac{a}{\sqrt{a^{2}+8bc}}\right)\left(\sum a(a^{2}+8bc)\right)\ge (a+b+c)^{3}.
Thus it suffices to show that
(a+b+c)^{3}\ge a^{3}+b^{3}+c^{3}+24abc,
which follows immediately from the famous inequality (incidentally, easily provable with Hölder's)
(a+b)(b+c)(c+a)\ge 8abc.\ _\square
A cycling strategy can also be used, as in the demonstration that Hölder's inequality implies AM-GM:
(a_1+a_2+\ldots+a_n)^{\frac{1}{n}}(a_2+a_3+\cdots+a_1)^{\frac{1}{n}}\cdots(a_n+a_1+\cdots+a_{n-1})^{\frac{1}{n}} \geq n\sqrt[n]{a_1a_2\cdots a_n},
which rearranges to AM-GM immediately.
_\square
Another useful strategy is to insert constants (especially 1) as members of a sequence, especially to "reduce" powers. For instance,
a,b
be positive real numbers. Show that
4\big(a^3+b^3\big) \geq (a+b)^3.
\big(a^3+b^3\big)^{\frac{1}{3}}(1+1)^{\frac{1}{3}}(1+1)^{\frac{1}{3}} \geq \big(a^3\big)^{\frac{1}{3}}1^{\frac{1}{3}}1^{\frac{1}{3}}+\big(b^3\big)^{\frac{1}{3}}1^{\frac{1}{3}}1^{\frac{1}{3}},
which rearranges to the desired inequality.
_\square
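The inequality in this example can also be spot-checked numerically over a grid of positive values (an illustration only):

```python
# Check 4(a^3 + b^3) >= (a + b)^3 for a grid of positive a, b.
vals = [0.5, 1.0, 2.0, 3.7]
ok = all(4 * (a**3 + b**3) >= (a + b) ** 3 for a in vals for b in vals)
# ok is True; equality holds when a == b
```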
Practice problem: If
a
and
b
are positive reals satisfying
a^{2015}+b^{2015}=2015,
find the maximum value of
a+b.
Practice problem: If
a,b,c
and
d
are positive reals satisfying
a^{2016}+b^{2016}+c^{2016}+d^{2016}=4032
, maximize
a^2 +b^2+c^2+d^2
to 3 decimal places.
Minkowski's inequality follows easily from Hölder's inequality.
Minkowski's inequality states that
\left(\sum _{ n=1 }^{ k } ({ x }_{ n }+{ y }_{ n })^p\right)^{ \frac { 1 }{ p } }\le \left(\sum _{ n=1 }^{ k }{ { x }_{ n }^{ p } } \right) ^{ \frac { 1 }{ p } }+\left(\sum _{ n=1 }^{ k }{ { y }_{ n }^{ p } } \right)^{ \frac { 1 }{ p } }
p>1
{x}_{n},{y}_{n}\ge0.
Note that
\sum _{ n=1 }^{ k } ({ x }_{ n }+{ y }_{ n })^{ p }=\displaystyle\sum _{ n=1 }^{ k } { x }_{ n }({ x }_{ n }+{ y }_{ n })^{ p-1 } +\displaystyle\sum _{ n=1 }^{ k } { y }_{ n }({ x }_{ n }+{ y }_{ n })^{ p-1 }.
Setting
a=\frac { p }{ p-1 } ,
by Hölder's inequality we have
\sum_{n \mathop = 1}^k x_n \left({x_n + y_n}\right)^{p-1} + \sum_{n \mathop = 1}^k y_n \left({x_n + y_n}\right)^{p-1} \le \left[ \left(\sum _{ n=1 }^{ k }{ { x }_{ n }^{ p } } \right)^{ \frac { 1 }{ p } }+\left(\sum _{ n=1 }^{ k }{ { y }_{ n }^{ p } } \right)^{ \frac { 1 }{ p } }\right] \left[ \sum _{ n=1 }^{ k }{ ({ x }_{ n }+{ y }_{ n })^{ p } } \right]^{ \frac { 1 }{ a } }.
Dividing both sides by
\left[\sum _{ n=1 }^{ k }{ ({ x }_{ n }+{ y }_{ n })^{ p } } \right]^{ \frac { 1 }{ a }},
we obtain
\left(\sum _{ n=1 }^{ k } ({ x }_{ n }+{ y }_{ n })^p\right)^{ \frac { 1 }{ p } }\le \left(\sum _{ n=1 }^{ k }{ { x }_{ n }^{ p } } \right) ^{ \frac { 1 }{ p } }+\left(\sum _{ n=1 }^{ k }{ { y }_{ n }^{ p } } \right)^{ \frac { 1 }{ p } }.
_\square
Inequalities - Convexity Arguments
Classical Inequalities Problem Solving - Basic
Classical Inequalities Problem Solving - Intermediate
Classical Inequalities Problem Solving - Advanced
Cite as: Hölder's Inequality. Brilliant.org. Retrieved from https://brilliant.org/wiki/holders-inequality/
|
Imports coordinate system information into images.
This task reads coordinate system information from an AST file and uses it to modify the World Coordinate System (WCS) components of the given images. A new coordinate system is added (the same for each image) within which a set of images can be aligned. The newly added coordinate system becomes the Current one.
If a coordinate system with the same Domain (name) already exists it will be overwritten, and a warning message issued.
AST files for use by this program will normally be those written by the ASTEXP program, and may either be standard ones designed for use with a particular instrument, or prepared by the user.
ASTIMP in astfile indomain
A file containing a sequence of framesets describing the relative coordinate systems of images from different sources.
It is intended that this file should be one written by the ASTEXP application when a successful registration is made, and the user need not be aware of its internal structure. The files are readable text however, and can in principle be written by other applications or doctored by hand, if this is done with care, and with knowledge of AST objects (SUN/210). The format of the file is explained in the Notes section.
The name of a FITS header keyword whose value gives a number of degrees to rotate the coordinate system by when it is imported. This rotation is done after the mappings given in the AST file itself have been applied. If any lower case characters are given, they are converted to upper case. This may be a compound name to handle hierarchical keywords, in which case it has the form keyword1.keyword2 etc. Each keyword must be no longer than 8 characters.
It will normally not be necessary to supply this keyword, since it can be given instead within the AST file. If it is supplied however, it overrides any value given there. [!]
A list of image names whose WCS components are to be modified according to ASTFILE. The image names may be specified using wildcards, or may be specified using an indirection file (the indirection character is "^").
INDICES( * ) = _INTEGER (Read)
This parameter is a list of integers with as many elements as there are images accessed by the IN parameter. If the frameset identifiers are of the type "INDEX" then it indicates, for each image, what its index number is. Thus if only one image is given in the IN list, and the value of INDICES is [3], then the frameset with the identifier "INDEX 3" will be chosen. If set null (!) the images will be considered in the order 1,2,3,…which will be appropriate unless the images are being presented in a different order from that in which they were presented to ASTEXP when generating the AST file. [!]
INDOMAIN = LITERAL (Read)
The Domain name to be used for the Current frames of the framesets which are imported. If a null (!) value is given, the frames will assume the same name as in the AST file. [!]
ROT = _DOUBLE (Read)
An angle through which all the imported frames should be rotated. This rotation is done after the mappings in the AST file itself have been applied. [0]
astimp data* camera.ast obs1
This will apply the AST file "camera.ast" to all the images in the current directory with names beginning "data". The file "camera.ast" has previously been written using ASTEXP with the parameter ASTFILE=camera.ast. A new frame with a Domain called "OBS1" is added to the WCS component of each image.
astimp "data3,data4" instrum.ast obs1 indices=[3,4]
This imports frameset information from the AST file instrum.ast which was written by ASTEXP with the IDTYPE parameter set to INDEX. In this case images of only the third and fourth types described in that file are being modified. The name of the new coordinate system will be OBS1, overriding the name used when the AST file was written.
astimp astfile=instrum.ast in=! logto=terminal accept
This will simply report on the framesets contained within the AST file "instrum.ast", writing the ID of each to the terminal only.
“Re-use of coordinate system information with AST files”, ASTEXP.
|
High-resolution FFT of a portion of a spectrum - Simulink - MathWorks España
High-resolution FFT of a portion of a spectrum
The Zoom FFT block computes the fast Fourier Transform (FFT) of a signal over a portion of frequencies in the Nyquist interval. By setting an appropriate decimation factor D, and sampling rate Fs, you can choose the bandwidth of frequencies to analyze BW, where BW = Fs/D. You can also select a specific range of frequencies to analyze in the Nyquist interval by choosing the center frequency of the desired band.
The resolution of a signal is the ratio of Fs and the FFT length (L). Using zoom FFT, you can retain the same resolution you would achieve with a full-size FFT on your original signal by computing a small FFT on a shorter signal. The shorter signal comes from decimating the original signal. The savings come from being able to compute a much shorter FFT while achieving the same resolution. For a decimation factor of D, the new sampling rate, Fsd, is Fs/D, and the new frame size (and FFT length) is Ld = L/D. The resolution of the decimated signal is Fsd/Ld = Fs/L. To achieve a higher resolution of the shorter band, use the original FFT length, L, instead of the decimated FFT length, Ld.
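A small numeric check of this resolution claim (the example values are chosen arbitrarily):

```python
# Full-size FFT: L-point at rate Fs. Zoom FFT: decimate by D, then
# take an (L/D)-point FFT at rate Fs/D. The resolution is unchanged.
fs, L, D = 48000, 4096, 8
fsd = fs / D          # decimated sampling rate
Ld = L // D           # decimated frame size and FFT length
assert fsd / Ld == fs / L   # both resolutions are 11.71875 Hz
```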
Data input whose zoom FFT the block computes, specified as a vector or a matrix. The number of input rows must be a multiple of the decimation factor.
This block supports variable-size input signals, as long as the input frame size is a multiple of the decimation factor. That is, you can change the input frame size (number of rows) during the simulation. However, the number of channels (number of columns) must remain constant.
This port is unnamed until you select the Specify center frequency from input port parameter and click Apply.
Example: randn(22,2)
Fc — Center frequency input
Center frequency of the desired band in Hz, passed through this input port as a real scalar in the range (– SampleRate/2, SampleRate/2). SampleRate is the input sample rate either inherited from the input signal or specified through the Input sample rate (Hz) parameter.
This port appears only when you select the Specify center frequency from input port check box and click Apply.
Port_1 — Zoom FFT output
Zoom FFT output, returned as a vector or matrix. If you select the Inherit FFT Length from input dimensions check box, the output frame size equals the input frame size divided by the decimation factor. If you clear the Inherit FFT Length from input dimensions check box and specify the FFT length, the output frame size equals the specified FFT length. The output data type matches the input data type.
Decimation factor, specified as a positive integer. This value specifies the factor by which the block reduces the bandwidth of the input signal. The number of rows in the input signal must be a multiple of the decimation factor.
Specify center frequency from input port — Flag to specify center frequency from input port
When you select this option and click Apply, the input port Fc appears on the block icon. You can pass the center frequency through this input port as a scalar.
Center frequency (Hz) — Center frequency
Center frequency of the desired band in Hz, specified as a real scalar in the range (– SampleRate/2, SampleRate/2). SampleRate is the input sample rate either inherited from the input or specified through the Input sample rate (Hz) parameter.
This parameter applies when you clear the Specify center frequency from input port check box.
Inherit FFT Length from input dimensions — Flag to inherit FFT length from input dimensions
When you select this option, the FFT length is the ratio of the input frame size (number of rows in the input) and the Decimation factor.
FFT length, specified as a positive integer. The FFT length must be greater than or equal to the ratio of the frame size (number of input rows) and the Decimation factor.
This parameter applies when you clear the Inherit FFT Length from input dimensions check box.
Inherit Sample rate from input — Flag to inherit sample rate from input
When you select this check box, the block inherits the sample rate from the input signal. When you clear this check box, you specify the sample rate through the Input sample rate (Hz) parameter.
Input sample rate in Hz, specified as a positive real scalar.
The zoom FFT algorithm leverages bandpass filtering before computing the FFT of the signal. The concept of bandpass filtering is that suppose you are interested in the band [F1, F2] of the original input signal, sampled at the rate Fs Hz. If you pass this signal through a complex (one-sided) bandpass filter centered at Fc = (F1+F2)/2, with the bandwidth BW = F2 – F1, and then downsample the signal by a factor of D = floor(Fs/BW), the desired band comes down to the baseband.
If Fc cannot be expressed in the form of k×Fs/D, where k is an integer, then the shifted, decimated spectrum is not centered at DC. In this case, the center frequency gets translated to Fd.
{F}_{d}={F}_{c}-\left({F}_{s}/D\right)×floor\left(\left(D×{F}_{c}+{F}_{s}/2\right)/{F}_{s}\right)
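The translation formula can be written out directly (`translated_center` is a hypothetical helper name; the example values are arbitrary):

```python
import math

def translated_center(fc, fs, d):
    # Fd = Fc - (Fs/D) * floor((D*Fc + Fs/2) / Fs), per the formula above.
    return fc - (fs / d) * math.floor((d * fc + fs / 2) / fs)

# When Fc is a multiple of Fs/D, the band lands exactly at DC:
translated_center(300.0, 1000.0, 10)   # -> 0.0
# Otherwise it lands at a nonzero offset Fd:
translated_center(325.0, 1000.0, 10)   # -> 25.0
```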
The complex bandpass filter is obtained by first designing a lowpass filter prototype and then multiplying the lowpass coefficients with a complex exponential. This algorithm uses a multirate, multistage FIR filter as the lowpass filter prototype. To obtain the bandpass filter, the coefficients of each stage are frequency shifted. The decimation factor is the cumulative decimation factor of each stage. The complex bandpass filter followed by the decimator are implemented using an efficient polyphase structure. For more details on the design of the complex bandpass filter from the multirate multistage FIR filter prototype, see Zoom FFT and Complex Bandpass Filter Design.
[1] Harris, F.J. Multirate Signal Processing for Communication Systems. Prentice Hall, 2004, pp. 208–209.
dsp.ZoomFFT | dsp.FFT | dsphdl.FFT (DSP HDL Toolbox) | dsp.IFFT | dsphdl.IFFT (DSP HDL Toolbox)
FFT | FFT (DSP HDL Toolbox) | Magnitude FFT | Short-Time FFT
|
Global Constraint Catalog: Kstrong_bridge
\mathrm{𝚌𝚒𝚛𝚌𝚞𝚒𝚝}
\mathrm{𝚌𝚢𝚌𝚕𝚎}
A constraint for which the filtering algorithm may use the notion of strong bridge (i.e., enforce arcs corresponding to strong bridges to be part of the solution in order to avoid creating too many strongly connected components). A strong bridge of a strongly connected digraph
G
is an arc such that, if we remove it,
G
is broken into at least two strongly connected components. Figure 3.7.76 illustrates the notion of strong bridge on the digraph depicted by part (A). The arc from the vertex labelled by 2 to the vertex labelled by 1 is a strong bridge since its removal creates the three strongly connected components depicted by part (B) (i.e., the first, second and third strongly connected components correspond respectively to the sets of vertices
\left\{1,3,4\right\}
\left\{2\right\}
\left\{5\right\}
). The other strong bridges of the digraph depicted by part (A) are the arcs
1\to 3
5\to 2
. From an algorithmic point of view, it was shown in [ItalianoLauraSantaroni10] how to compute all the strong bridges of a digraph
G
.
Figure 3.7.76, part (B): the digraph
G
broken into the three strongly connected components
{\mathrm{𝑠𝑐𝑐}}_{1}
,
{\mathrm{𝑠𝑐𝑐}}_{2}
and
{\mathrm{𝑠𝑐𝑐}}_{3}
when one of its strong bridges, the arc
2\to 1
, is removed.
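A naive quadratic-time sketch of this definition is to remove each arc in turn and count the strongly connected components that remain (this is not the linear-time algorithm of the cited reference, and the example digraph below is illustrative, not the one from the figure):

```python
def reachable(arcs, s):
    # Vertices reachable from s by following arcs.
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in arcs:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def scc_count(vertices, arcs):
    # Two vertices share an SCC iff each reaches the other.
    reach = {v: reachable(arcs, v) for v in vertices}
    comps = {frozenset(w for w in vertices if w in reach[v] and v in reach[w])
             for v in vertices}
    return len(comps)

def strong_bridges(vertices, arcs):
    # An arc of a strongly connected digraph is a strong bridge if its
    # removal leaves at least two strongly connected components.
    return [a for a in arcs
            if scc_count(vertices, [b for b in arcs if b != a]) > 1]

# Small strongly connected digraph: a 3-cycle plus the chord 1 -> 3.
V = [1, 2, 3]
A = [(1, 2), (2, 3), (3, 1), (1, 3)]
# Every cycle arc is a strong bridge; the chord is not.
```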
|
4.1. Differences from the 2000 report
4.3. Graph invariants
4.3.1. Graph classes
4.3.2. Format of an invariant
4.3.3. Using the database of invariants
4.3.4. The database of graph invariants
4.3.4.1. one parameter/one final graph
4.3.4.2. two parameters/one final graph
4.3.4.3. three parameters/one final graph
4.3.4.4. four parameters/one final graph
4.3.4.5. five parameters/one final graph
4.3.4.6. two parameters/two final graphs
4.3.4.7. three parameters/two final graphs
4.3.4.8. four parameters/two final graphs
4.3.4.9. five parameters/two final graphs
4.3.4.10. six parameters/two final graphs
4.4. Functional dependency invariants
4.4.1. Functional dependency invariants involving two constraints
4.4.2. Functional dependency invariants involving three constraints
4.4.3. Functional dependency invariants involving four constraints
4.5. The electronic version of the catalogue
4.5.1. Prolog facts describing a constraint
4.5.2. XML schema associated with a global constraint
4.5.2.1. Related work
4.5.2.3. Structure of schema
4.5.2.4. Generating schema from the catalogue
This section summarises the main differences with the SICS report [Beldiceanu00] as well as of the corresponding article [BeldiceanuR00]. The main differences are listed below:
We have both simplified and extended the way to generate the vertices of the initial graph, and we have introduced a new way of defining sets of vertices. We have also removed the CLIQUE(MAX) set of vertices generator since it cannot in general be evaluated in polynomial time. Therefore, we have modified the description of the constraints assign_and_counts, assign_and_nvalues, interval_and_count, interval_and_sum, bin_packing, cumulative, cumulatives, coloured_cumulative, coloured_cumulatives and cumulative_two_d, which all used this feature.
We have introduced the new arc generators PATH_1 and PATH_N, which allow for specifying an n-ary constraint for which n is not fixed.
The size_max_starting_seq_alldifferent and size_max_seq_alldifferent constraints are examples of global constraints that use these arc generators in order to generate a set of sliding alldifferent constraints.
In addition to traditional domain variables we have introduced float, set and multiset variables, as well as several global constraints mentioning float and set variables (see for instance the choquet [LeHuedeGrabischLabreucheSaveant06] and the alldifferent_between_sets constraints). This decision was initially motivated by the fact that several constraint systems and articles mention global constraints dealing with these types of variables. Later on, we realised that set variables also greatly simplify the interface of existing global constraints. This was especially true for those global constraints that explicitly deal with a graph, like clique and cutset. In this context, using a set variable for catching the successors of a vertex is quite natural. This is especially true when a vertex of the final graph can have more than one successor, since it allows for avoiding a lot of 0-1 variables.
We have introduced the possibility of using more than one graph constraint for defining a given global constraint (see for instance the cumulative and sort constraints). Therefore we have removed the notion of dual graph, which was initially introduced in the original report. In this context, we now use two graph constraints (see for instance change_continuity).
On the one hand, we have introduced the following new graph parameters: MAX_DRG, MAX_OD, MIN_DRG, MIN_ID, MIN_OD, NTREE, PATH_FROM_TO, PROD, RANGE, RANGE_DRG, RANGE_NCC, SUM and SUM_WEIGHT_ARC.
On the other hand, we have removed the following graph parameters: NCC(COMP,val), NSCC(COMP,val), NTREE(ATTR,COMP,val), NSOURCE_EQ_NSINK and NSOURCE_GREATEREQ_NSINK. In addition, MAX_IN_DEGREE has been renamed MAX_ID.
We have introduced an iterator over the items of a collection in order to specify in a generic way a set of similar elementary constraints or a set of similar graph properties. This was required for describing some global constraints such as global_cardinality, cycle_resource and stretch. All these global constraints mention a condition involving some limit depending on the specific values that are actually used. For instance the global_cardinality constraint forces each value v to be used at least atleast_v and at most atmost_v times. This iterator was also necessary in the context of graph covering constraints where one wants to cover a digraph with some patterns. Each pattern consists of one resource and several tasks. One can now attach specific constraints to the different resources. Both the cycle_resource and tree_resource constraints illustrate this point.
We have added some standard existing global constraints that were obviously missing from the previous report. This was for instance the case of the element constraint.
In order to make clear the notion of family of global constraints we have computed for each global constraint a signature, which summarises its structure. Each signature was inserted into the index so that one can retrieve all the global constraints sharing the same structure.
We have generalised some existing global constraints. For instance the change_pair constraint extends the change constraint. Finally we have introduced some novel global constraints like disjoint_tasks and symmetric_gcc.
We have defined the rules for specifying arc constraints.
|
Mathematical set formed from two given sets. For example, the Cartesian product $A \times B$ of the sets $A = \{x, y, z\}$ and $B = \{1, 2, 3\}$ consists of all ordered pairs whose first element comes from $A$ and whose second element comes from $B$; in general,

$A \times B = \{(a, b) \mid a \in A \ \text{and}\ b \in B\}.$
A deck of cards
An illustrative example is the standard 52-card deck. The standard playing card ranks {A, K, Q, J, 10, 9, 8, 7, 6, 5, 4, 3, 2} form a 13-element set. The card suits {♠, ♥, ♦, ♣} form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs, which correspond to all 52 possible playing cards.
Ranks × Suits returns a set of the form {(A, ♠), (A, ♥), (A, ♦), (A, ♣), (K, ♠), …, (3, ♣), (2, ♠), (2, ♥), (2, ♦), (2, ♣)}.
Suits × Ranks returns a set of the form {(♠, A), (♠, K), (♠, Q), (♠, J), (♠, 10), …, (♣, 6), (♣, 5), (♣, 4), (♣, 3), (♣, 2)}.
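The deck example is easy to reproduce with Python's `itertools.product`, which computes the Cartesian product of finite iterables (suit names are spelled out here instead of the symbols):

```python
from itertools import product

ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["spades", "hearts", "diamonds", "clubs"]

deck = list(product(ranks, suits))   # Ranks x Suits: 13 * 4 ordered pairs
assert len(deck) == 52
assert deck[0] == ("A", "spades")
assert deck[-1] == ("2", "clubs")

# Suits x Ranks is a different set -- in fact disjoint from Ranks x Suits,
# because every pair has its coordinates in the opposite order.
assert set(deck).isdisjoint(product(suits, ranks))
```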
These two sets are distinct, even disjoint.
A two-dimensional coordinate system
Cartesian coordinates of example points
Most common implementation (set theory)
Main article: Implementation of mathematics in set theory
A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, Kuratowski's definition, is $(x,y) = \{\{x\},\{x,y\}\}$. Under this definition, $(x,y)$ is an element of $\mathcal{P}(\mathcal{P}(X \cup Y))$, and $X \times Y$ is a subset of that set, where $\mathcal{P}$ represents the power set operator. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, power set, and specification. Since functions are usually defined as a special case of relations, and relations are usually defined as subsets of the Cartesian product, the definition of the two-set Cartesian product is necessarily prior to most other definitions.
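Kuratowski's encoding can be modelled with nested `frozenset`s in Python (a sketch, and only an analogy: real set theory makes no type distinction between elements and sets):

```python
def kpair(x, y):
    """Kuratowski's ordered pair: (x, y) = {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

# The whole point of the encoding: equality holds iff both coordinates match.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)                  # order matters
assert kpair(3, 3) == frozenset({frozenset({3})})  # degenerate case {{x}}

# A two-set Cartesian product assembled from such pairs:
X, Y = {1, 2}, {"a", "b"}
XxY = {kpair(x, y) for x in X for y in Y}
assert len(XxY) == len(X) * len(Y)   # 4 distinct pairs
```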
Non-commutativity and non-associativity
Let A, B, C, and D be sets.
The Cartesian product A × B is not commutative: $A \times B \neq B \times A$, unless A is equal to B, or A or B is the empty set. Strictly speaking, the Cartesian product is also not associative (unless one of the involved sets is empty): $(A \times B) \times C \neq A \times (B \times C)$.
If for example A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A).
Intersections, unions, and subsets
See also: List of set identities and relations
A = {y ∈ ℝ : 1 ≤ y ≤ 4}, B = {x ∈ ℝ : 2 ≤ x ≤ 5},
and C = {x ∈ ℝ : 4 ≤ x ≤ 7}, demonstrating
A × (B∩C) = (A×B) ∩ (A×C),
A × (B∪C) = (A×B) ∪ (A×C), and
A × (B \ C) = (A×B) \ (A×C)
A = {x ∈ ℝ : 2 ≤ x ≤ 5}, B = {x ∈ ℝ : 3 ≤ x ≤ 7},
C = {y ∈ ℝ : 1 ≤ y ≤ 3}, D = {y ∈ ℝ : 2 ≤ y ≤ 4}, demonstrating
(A∩B) × (C∩D) = (A×C) ∩ (B×D).
(A∪B) × (C∪D) ≠ (A×C) ∪ (B×D) can be seen from the same example.
{\displaystyle (A\cap B)\times (C\cap D)=(A\times C)\cap (B\times D)}
{\displaystyle (A\cup B)\times (C\cup D)\neq (A\times C)\cup (B\times D)}
In fact, we have that:
{\displaystyle (A\times C)\cup (B\times D)=[(A\setminus B)\times C]\cup [(A\cap B)\times (C\cup D)]\cup [(B\setminus A)\times D]}
For the set difference, we also have the following identity:
{\displaystyle (A\times C)\setminus (B\times D)=[A\times (C\setminus D)]\cup [(A\setminus B)\times C]}
{\displaystyle {\begin{aligned}A\times (B\cap C)&=(A\times B)\cap (A\times C),\\A\times (B\cup C)&=(A\times B)\cup (A\times C),\\A\times (B\setminus C)&=(A\times B)\setminus (A\times C),\end{aligned}}}
{\displaystyle (A\times B)^{\complement }=\left(A^{\complement }\times B^{\complement }\right)\cup \left(A^{\complement }\times B\right)\cup \left(A\times B^{\complement }\right)\!,}
{\displaystyle A^{\complement }}
denotes the absolute complement of A.
Other properties related with subsets are:
{\displaystyle {\text{if }}A\subseteq B{\text{, then }}A\times C\subseteq B\times C;}
{\displaystyle {\text{if both }}A,B\neq \emptyset {\text{, then }}A\times B\subseteq C\times D\!\iff \!A\subseteq C{\text{ and }}B\subseteq D.}
Cardinality
See also: Cardinal arithmetic
The cardinality of a set is the number of elements of the set. For example, define two sets: A = {a, b} and B = {5, 6}. Both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set which has the following elements: A × B = {(a,5), (a,6), (b,5), (b,6)}. In this case, |A × B| = 4.
Cartesian products of several sets
n-ary Cartesian product

The Cartesian product can be generalized to the n-ary Cartesian product over n sets $X_1, \ldots, X_n$ as the set

$X_1 \times \cdots \times X_n = \{(x_1, \ldots, x_n) \mid x_i \in X_i \ \text{for every}\ i \in \{1, \ldots, n\}\}$

of n-tuples. If tuples are defined as nested ordered pairs, it can be identified with $(X_1 \times \cdots \times X_{n-1}) \times X_n$. If a tuple is defined as a function on $\{1, 2, \ldots, n\}$ that takes its value at $i$ to be the $i$th element of the tuple, then the Cartesian product $X_1 \times \cdots \times X_n$ is the set of functions
{\displaystyle \{x:\{1,\ldots ,n\}\to X_{1}\cup \cdots \cup X_{n}\ |\ x(i)\in X_{i}\ {\text{for every}}\ i\in \{1,\ldots ,n\}\}.}
n-ary Cartesian power

The n-ary Cartesian power of a set X, denoted $X^n$, can be defined as

$X^n = \underbrace{X \times X \times \cdots \times X}_{n} = \{(x_1, \ldots, x_n) \mid x_i \in X \ \text{for every}\ i \in \{1, \ldots, n\}\}.$
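In Python, `itertools.product` exposes the n-ary power directly through its `repeat` argument, and the "tuple as a function on {1, …, n}" view can be mimicked with dicts:

```python
from itertools import product

X = {0, 1}
cube = list(product(sorted(X), repeat=3))   # X^3: all 3-tuples over X
assert len(cube) == len(X) ** 3             # |X^n| = |X|^n = 8
assert (0, 1, 1) in cube

# Each tuple seen as a function on {1, 2, 3} taking values in X:
funcs = [dict(enumerate(t, start=1)) for t in cube]
assert all(f[i] in X for f in funcs for i in (1, 2, 3))
```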
Infinite Cartesian products
It is possible to define the Cartesian product of an arbitrary (possibly infinite) indexed family of sets. If I is any index set, and
{\displaystyle \{X_{i}\}_{i\in I}}
is a family of sets indexed by $I$, then the Cartesian product of the sets in $\{X_i\}_{i\in I}$ is defined to be

$\prod_{i\in I} X_i = \left\{ f : I \to \bigcup_{i\in I} X_i \;\middle|\; (\forall i \in I)(f(i) \in X_i) \right\},$

that is, the set of all functions defined on $I$ whose value at each index $i$ is an element of $X_i$.
For each j in I, the function

$\pi_j : \prod_{i\in I} X_i \to X_j, \qquad \pi_j(f) = f(j),$

is called the jth projection map.
If all of the sets $X_i$ are equal to a fixed set $X$, then

$\prod_{i\in I} X_i = \prod_{i\in I} X$

is the set of all functions from I to X, and is frequently denoted $X^I$. This case is important in the study of cardinal exponentiation. An important special case is when the index set is
{\displaystyle \mathbb {N} }
, the natural numbers: this Cartesian product is the set of all infinite sequences with the ith term in its corresponding set Xi. For example, each element of
{\displaystyle \prod _{n=1}^{\infty }\mathbb {R} =\mathbb {R} \times \mathbb {R} \times \cdots }
can be visualized as a vector with countably infinite real number components. This set is frequently denoted $\mathbb{R}^{\omega}$, or $\mathbb{R}^{\mathbb{N}}$.
Abbreviated form
Cartesian product of functions

If f is a function from X to A and g is a function from Y to B, then their Cartesian product f × g is a function from X × Y to A × B with

$(f \times g)(x, y) = (f(x), g(y)).$
Cylinder

Let $A$ be a set and $B \subseteq A$. Then the cylinder of $B$ with respect to $A$ is the Cartesian product $B \times A$ of $B$ and $A$. Normally, $A$ is considered to be the universe of the context and is left away. For example, if $B$ is a subset of the natural numbers $\mathbb{N}$, then the cylinder of $B$ is $B \times \mathbb{N}$.
Definitions outside set theory
Concatenation of sets of strings
Join (SQL) § Cross join
Axiom of power set (to prove the existence of the Cartesian product)
|
Allison Arieff, Bryan Burkhardt, Spa, Taschen
Do males go to spas? I don't know. In every photo you ever see there's usually a pretty female reclining in some sort of liquid with, maybe, cucumbers covering her eyes. I'm quite curious about this. Maybe the blokes are banished to a beer shed or something ... or maybe the next town.
The way they're depicted, they're a sort of opposite to, say, a rugby tour of New Zealand or one of those faraway fishing places or anywhere for grownups with "camp" in the name.
Anyway, this is a big beautiful book about spas around the world. It is primarily pictorial but there are descriptions for each of the eighty-three spas that are featured. They are in all parts of the world -- Africa, the Middle East, Asia, Caribbean, Europe, North America, South Pacific, and Mexico and South America.
As you make your way through you will discover the mention of one celebrity male -- Robert De Niro in connection with Shambhala at Parrot Cay.
The object of course, for the humans involved, is beautification and purification, and so some pains have been gone to with all these places to make them attractive and soothing and generally good for the old bien-etre. And all of that makes for some pretty pictures.
If you are actually in the market to go to such a place then this book might be an excellent place to go to sort out the contenders.
Review of The Road to Reality: A Complete Guide to the Laws of the Universe
Roger Penrose, The Road to Reality: A Complete Guide to the Laws of the Universe, Knopf
Roger Penrose states in the preface of this book, "The purpose of this book is to convey to the reader some feeling for what is surely one of the most important and exciting voyages of discovery that humanity has embarked upon." He then proceeds to justify his inclusion of mathematical formulas, but to advise the reader, "Do not be afraid to skip equations (I do this frequently myself) and, if you wish, whole chapters or parts of chapters, when they begin to get a mite too turgid!" He also says, "My hope is that the extensive cross-referencing may sufficiently illuminate unfamiliar notions, so it should be possible to track down needed concepts and notation by turning back to earlier unread sections for clarification."
The first thing you may notice about this description of how to read the book is that it isn't much like bedtime reading. For a book to be good bedtime reading, it needs to meet several conditions:
You must physically be able to see and turn the pages from a recumbent or semi-recumbent position. "The Road to Reality", at 1000+ pages and about 4 pounds, makes this difficult but not impossible.
The writing needs to be modular enough that you can imbibe some satisfactory idea in the time between getting into bed and turning out the light to go to sleep. It is conceivable that the kind of browsing that Penrose recommends would turn up some sections that would work like this, but I found that trying to read the first few chapters straight through didn't do this for me.
Understanding the work must not involve extensive use of paper and pencil to work out problems, indexes, references to other books, and other study methods more appropriate to sitting at a desk. I have not attempted to work the exercises. But I found that almost all the writing in this book was dense enough to require reference to other parts of the book, often separated by many pages or chapters and findable only by using the index.
This book is being marketed by Knopf as being aimed at the popular science market, presumably the people who read Scientific American and Stephen Hawking. I found it much harder reading than those books.
For a great deal of my life I've been an enthusiastic consumer of the literature of popular science. In elementary school, I read and reread the book about the solar system by Patrick Moore. In high school I devoured the books about cosmology, stellar evolution, and quantum physics by George Gamow and Fred Hoyle. I was led by the picture of what physicists do that I derived from these books to make one of the major mistakes of my life and major in physics in college. One book that allowed me to survive my four years as a physics major was "The Feynman Lectures on Physics" by Feynman, Leighton and Sands. This was not marketed as popular science writing, but Penrose is claiming to write on something like the level of this book, which is a heavily edited version of the lectures Richard Feynman gave to a freshman physics class at CalTech.
I did in fact read the Feynman lectures in bed as an undergraduate, although I also did the reading at a desk with index and comparisons to my textbook and working out problems with pencil and paper. Since I was finding Penrose's writing much harder than I remembered Feynman's, I decided to compare them on the same topic.
Here's the first mention of the term "partial derivative" from the index of both books. In Feynman, it's in chapter 14, "Work and Potential Energy (conclusion)":
We find that the force is:

$F = -\Delta U/\Delta x$

Of course this is not exact. What we really want is the limit as $\Delta x$ gets smaller and smaller, because it is only exactly right in the limit of infinitesimal $\Delta x$. This we recognize as the derivative of $U$ with respect to $x$, and we would be inclined, therefore, to write $-dU/dx$. But $U$ depends on $x$, $y$, and $z$, and the mathematicians have invented a different symbol to remind us to be very careful when we are differentiating such a function, so as to remember that we are considering that only $x$ varies, and $y$ and $z$ do not vary. Instead of a $d$ they simply make a "backwards 6", or $\partial$. (A $\partial$ should have been used in the beginning of calculus because we always want to cancel that $d$, but we never want to cancel a $\partial$!) So they write $\partial U/\partial x$, and furthermore, in moments of duress, if they want to be very careful, they put a line beside it with a little $yz$ at the bottom ($\partial U/\partial x|_{yz}$), which means "Take the derivative of $U$ with respect to $x$, keeping $y$ and $z$ constant." Most often we leave out the remark about what is kept constant because it is usually evident from the context, so we usually do not use the line with the $y$ and $z$. However, always use a $\partial$ instead of a $d$ as a warning that it is a derivative with some other variables kept constant. This is called a partial derivative; it is a derivative in which we vary only $x$.
And in Penrose, it's in Chapter 10, "Surfaces":
I have not yet explained what 'differentiability' is to mean for a function of more than one variable. Although intuitively clear, the precise definition is a little too technical for me to go into thoroughly here. Some clarifying comments are nevertheless appropriate.

First of all, for $f$ be [sic] differentiable, as a function of the pair of variables $(x,y)$, it is certainly necessary that if we consider $f(x,y)$ in its capacity as a function of only the one variable $x$, where $y$ is held to some constant value, then this function must be smooth (at least $C^1$), as a function of $x$, in the sense of functions of a single variable (see §6.3); moreover, if we consider $f(x,y)$ as a function of just the one variable $y$, where it is $x$ that is now to be held constant, then it must be smooth ($C^1$) as a function of $y$. However, this is far from sufficient. There are many functions $f(x,y)$ which are separately smooth in $x$ and in $y$, but for which it would be quite unreasonable to call smooth in the pair $(x,y)$. A sufficient additional requirement for smoothness is that the derivatives with respect to $x$ and $y$ separately are each continuous functions of the pair $(x,y)$. Similar statements ... would hold if we consider functions of more than two variables. We use the 'partial derivative' symbol $\partial$ to denote differentiation with respect to one variable, holding the other(s) fixed. The partial derivatives of $f(x,y)$ with respect to $x$ and with respect to $y$, respectively, are written $\partial f/\partial x$ and $\partial f/\partial y$. (As an example, we note that if $f(x,y) = x^2 + xy^2 + y^3$, then $\partial f/\partial x = 2x + y^2$ and $\partial f/\partial y = 2xy + 3y^2$.) If these quantities exist and are continuous, then we say that $\Phi$ is a ($C^1$-)smooth function on the surface.

Note that this appears on pages 183-4; the definitions of $C^n$ smoothness are on page 110. You can find this from the index.
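Penrose's worked example can be checked mechanically. A small Python sketch that approximates each partial derivative by a central finite difference, holding the other variable constant exactly as both quoted passages prescribe:

```python
# Numerical check of Penrose's example f(x, y) = x^2 + x*y^2 + y^3,
# whose claimed partials are df/dx = 2x + y^2 and df/dy = 2xy + 3y^2.

def f(x, y):
    return x**2 + x * y**2 + y**3

def partial(g, x, y, wrt, h=1e-6):
    """Central-difference partial derivative, one variable held constant."""
    if wrt == "x":
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x, y = 1.5, -0.75
assert abs(partial(f, x, y, "x") - (2 * x + y**2)) < 1e-6
assert abs(partial(f, x, y, "y") - (2 * x * y + 3 * y**2)) < 1e-6
```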
My guess is that if you don't already have some idea of why you want to know what a partial derivative is, you won't make it through either quote very easily. But I think you'd have a lot better chance of figuring out what it is from the Feynman explanation.
I was hoping when I read about the Penrose book that it would be as interesting and well-written as I had found the Feynman book, but have the stuff that's been done in the forty years since the Feynman book was written. It does indeed have some of that stuff, but I think the reader will have to be very determined indeed to get to it. I don't see any chance at all of someone who isn't already excited about modern physics becoming so by reading this book.
various, A Pocket Guide to O'Reilly Animals, O'Reilly (free)
This is a nice little free guide to the animals used on O'Reilly covers, complete with an explanation by Tim O'Reilly about how the series came to be.
Each animal has a page with an illustration and information about it including its degree of endangerment. At the bottom of the pages, there are discreet little ads about the books that have that animal on its cover.
It's in the pocket book format, and should be available from larger stores.
William Gibson, Pattern Recognition, Penguin
This is newish in paperback but has been out for a while in hardback. Also, the publisher is different in the USA I think. Our copy came from the UK.
William Gibson first surfaced in a big way with 1984's Neuromancer, with big thoughts and concepts with punk sensibilities. Since then there really haven't been ideas with the scope of Neuromancer's (the Net!) but more an examination, usually in a near dystopian future, of some way in which things aren't working too well.
In this novel we're right in an undefined present (Putin is the only head of state mentioned) and we scoot around between NYC, London, Tokyo, and Moscow while on a mission. The mission involves cult video footage, spies and demi-gangsters, and a cool and very likeable female protagonist.
The flow is nice and the descriptions, just as in other Gibson books, occasionally inspired. Characters are getting more fully developed these days and aren't just attitude on a stick or plot handholds.
I like the quote from one of the characters towards the end "I think it's all actually about the money for him." He grimaces. "Ultimately I find that that was the whole problem, with most of the dot-com people ..."
Mike Hally, Electronic Brains: stories from the dawn of the computer age, www.granta.com
This was originally a radio series broadcast by the BBC and this book is an enlargement of the script into a series of stories that cover the USA, UK, Russia, and Australia. As we go through there is a quest to see who was first.
It's not just about the machines though. There is a fair bit of context as well to give us a greater feel for what was happening and we make our way around the world following the realpolitik of the World Wars and the Cold War and the personalities who range from inspired dreamer to cut throat business people -- as it ever was.
One of the interesting things is the juxtaposition of the clueless of that time with the globalists of today: if you miss out on a technology boat, it hardly matters because of the movement of bits between various parts of the world. If your homegrown companies are all taken over by a foreign behemoth, who should worry? Yes, well, we don't deal with the Alfred E. Neumans (Mad magazine!) of the world further except to say ... oh, you know, what Will Robinson's robot said.
So anyway, there is a fair balance of the clueless and the clueful in the book and it all makes very entertaining reading.
The only thing I found annoying was the dismissal of Turing as being too airy-fairy (not a quote from the book) to be of relevance to these wrench-whirlers. Perhaps he wasn't a direct influence on the mechanical work, but his thoughts and dreams of intelligent machines almost certainly were, even if indirectly. Another thing I get tired of is the constant reference these days to his sexuality. Is this the tabloid mind at work? What possible relevance does it have? Some people will say "well, he committed suicide because of it, and thus a great career was cut short". Except that he left no evidence of his state of mind, only a portion of apple soaked in cyanide.
Stuart Russell, Peter Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall
Summer vacation/holidays coming up and we're reviewing a textbook? It is indeed, and a very widely used one at that. But never fear, when texts aren't read under the dumbed-down hammer of continuous assessment, they can actually be fun.
I've had a habit for a while of scanning the shelves in computer sections looking for a good, fairly all-purpose book, that acted as an introduction but also took you on a fair journey through what has been done and what is happening now. This book does those things and is also written in a way that is engaging.
The book first looks at the question of what AI is and then goes on to problem solving and knowledge and reasoning, planning, uncertainty or fuzziness, learning and perception. Leaving aside philosophical questions about any of this (which will bring everything to a grinding halt but questions of "right" and "wrong" do have to be addressed at some stage), this is as good a guide as there is.
One appealing thing about this area of study is that it hasn't all been done yet. There has been a huge amount of work done, but the early breakthroughs that were expected have not come, so, in addition to the fine tuning of improved academic hypotheses in existing areas, there's also the possibility of great logical leaps being rewarded.
|
Global Constraint Catalog: roots
[BessiereHebrardHnichKiziltanWalsh05IJCAI]
roots(S, T, VARIABLES)

Arguments: S (svar), T (svar), VARIABLES (collection(var-dvar))

Restrictions: S ≤ |VARIABLES|; required(VARIABLES, var)

Purpose: S is the set of indices of the variables in the collection VARIABLES taking their values in T, i.e. S = {i | VARIABLES[i].var ∈ T}. Positions are numbered from 1.
Example: ({2,4,5}, {2,3,8}, ⟨1,3,1,2,3⟩)

The roots constraint holds since values 2 and 3 in T occur in the collection ⟨1,3,1,2,3⟩ only at positions S = {2,4,5}, while value 8 ∈ T does not occur within the collection ⟨1,3,1,2,3⟩ at all.
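The definition is easy to state as executable code. A minimal Python check of roots(S, T, VARIABLES) against the catalogue's example, with positions numbered from 1 as in the text:

```python
# roots(S, T, VARIABLES) holds iff S = { i | VARIABLES[i] in T },
# positions numbered from 1.

def roots_holds(S, T, variables):
    return S == {i for i, v in enumerate(variables, start=1) if v in T}

# The catalogue example: roots({2,4,5}, {2,3,8}, <1,3,1,2,3>) holds.
assert roots_holds({2, 4, 5}, {2, 3, 8}, [1, 3, 1, 2, 3])
# Dropping position 4 from S violates the constraint.
assert not roots_holds({2, 5}, {2, 3, 8}, [1, 3, 1, 2, 3])
```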
Typical: |VARIABLES| > 1; range(VARIABLES.var) > 1.
Bessière et al. showed [BessiereHebrardHnichKiziltanWalsh05IJCAI] that many counting and occurrence constraints can be specified with two global primitives: roots and range. For instance, the count constraint can be decomposed into one roots constraint: count(VAL, VARS, OP, NVAR) becomes roots(S, {VAL}, VARS) ∧ |S| OP NVAR. roots does not count but collects the set of variables using particular values. It provides then a way of channeling.
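The count decomposition just described can be sketched directly: compute the roots set S for T = {VAL}, then test |S| OP NVAR. The operator table below is an assumption about which comparisons OP ranges over:

```python
import operator

# Sketch of the decomposition count(VAL, VARS, OP, NVAR) =
#   roots(S, {VAL}, VARS)  /\  |S| OP NVAR
# The set of operators chosen for OP is illustrative, not from the catalogue.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def count_via_roots(val, vars_, op, nvar):
    S = {i for i, v in enumerate(vars_, start=1) if v in {val}}  # roots part
    return OPS[op](len(S), nvar)                                 # |S| OP NVAR

assert count_via_roots(3, [1, 3, 1, 2, 3], "=", 2)   # value 3 occurs twice
assert count_via_roots(1, [1, 3, 1, 2, 3], ">=", 2)
assert not count_via_roots(2, [1, 3, 1, 2, 3], ">", 1)
```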
roots generalises, for instance, the link_set_to_booleans constraint: link_set_to_booleans(S, BOOLEANS) can be expressed as roots(S, {1}, BOOLEANS.bool), or may be used instead of the domain_constraint.
Other examples of reformulations are given in [BessiereHebrardHnichKiziltanWalsh09].
In [BessiereHebrardHnichKiziltanWalsh06CP], Bessière et al. show that enforcing hybrid consistency on roots is NP-hard. They consider the decomposition of roots into a network of ternary constraints: for all i, i ∈ S ⇒ VARIABLES[i].var ∈ T and VARIABLES[i].var ∈ T ⇒ i ∈ S. Enforcing bound consistency on the decomposition achieves bound consistency on roots. Enforcing hybrid consistency on the decomposition achieves at least bound consistency on roots, and even hybrid consistency in some special cases:

dom(VARIABLES[i].var) ⊆ T̲ for all i ∈ S̲;
dom(VARIABLES[i].var) ∩ T̄ = ∅ for all i ∉ S̄;
all variables of VARIABLES are ground;
T is ground.
Enforcing hybrid consistency on the decomposition can be done in O(nd) time, where n = |VARIABLES| and d is the maximum domain size of the variables VARIABLES[i].var and of T.
roots in Gecode, roots in MiniZinc.
See also: link_set_to_booleans (constraint involving set variables); among, assign_and_nvalues, atleast, atmost, common, count, domain_constraint, global_cardinality, global_contiguity, symmetric_alldifferent and uses (all of which can be expressed with roots, possibly together with range).
characteristic of a constraint: disequality.
constraint type: counting constraint, value constraint, decomposition.
Derived collection: col(SETS-collection(s-svar, t-svar), [item(s-S, t-T)])
Arc input(s): SETS VARIABLES
Arc generator: PRODUCT ↦ collection(sets, variables)
Arc constraint(s): in_set(variables.key, sets.s) ⇔ in_set(variables.var, sets.t)
Graph property(ies): NARC = |VARIABLES|
\mathrm{𝚛𝚘𝚘𝚝𝚜}
|
Volatility Surface Oracle - Premia
Premia has partnered with Chainlink to provide our Volatility Surface on-chain for all of DeFi.
Based on our extensive background of research on options volatility, specifically in crypto markets, and inspired by the extreme lack of volatility tools in DeFi, we have decided to launch our own Volatility Surface oracle. Similar to existing Price Feed oracles (such as those used by Premia pools), our Volatility Surface oracle will allow any smart contract to query the data from on-chain, allowing external contracts to accurately price options or measure risk at a granular level using this volatility data.
Early on in our research, we realized it was extremely important to provide a full 3-dimensional volatility surface for pricing options. The term structure (how the volatility changes over time) and the volatility smile/skew (how the volatility changes in regards to option "moneyness") cannot be ignored, otherwise options will be wildly mis-priced. This, coupled with the low-compute constraints of the Ethereum mainnet, means we are forced to calculate the ideal volatility surface off-chain, and periodically report the updated surface to the on-chain oracle through the use of decentralized technology such as Chainlink nodes.
We use a volume-weighted combination of the most liquid CEX options data, along with data from trades executed on the Premia platform, to form our three-dimensional, off-chain volatility surface for each pool. This surface is then parameterized so it can be put on-chain as a simple polynomial, allowing for any area on the surface to be accurately priced.
To price an option, a user (or contract) can call getBlackScholesPrice on the Volatility Surface oracle contract, passing in the token pair, call or put, the spot price, the strike price, and the maturity of the option. This will return the Black-Scholes price, with the volatility surface value for that specific option plugged in as the σ value in the original Black-Scholes (BS) formula.
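As a rough illustration of what such a query computes, here is a plain Black-Scholes pricer with σ supplied from a volatility-surface lookup. This is a hedged sketch only: the actual oracle works on-chain in 64x64 fixed-point, and the function and parameter names below are illustrative, not the contract's.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_price(spot, strike, maturity_years, sigma, is_call=True, r=0.0):
    """Black-Scholes price, with sigma assumed to come from a
    volatility-surface lookup for this strike and maturity."""
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma**2) * maturity_years) / (
        sigma * math.sqrt(maturity_years))
    d2 = d1 - sigma * math.sqrt(maturity_years)
    if is_call:
        return spot * norm_cdf(d1) - strike * math.exp(-r * maturity_years) * norm_cdf(d2)
    return strike * math.exp(-r * maturity_years) * norm_cdf(-d2) - spot * norm_cdf(-d1)

# e.g. an at-the-money 30-day call with a surface-implied vol of 80%:
price = black_scholes_price(2000.0, 2000.0, 30 / 365, 0.80)
```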
However, option pricing is not the only usage for volatility surface data. If one would like to get the entire surface, they can call getVolatilitySurface , passing in the token pair. Alternatively, one can call getAnnualizedVolatility64x64, with the same parameters as passed to getBlackScholesPrice, to get the 64x64 fixed decimal representation of the volatility for a specific option.
The pairs currently supported by the Premia Volatility Surface oracle:
WETH/DAI Call and Put - Market-based surface
WBTC/DAI Call and Put - Market-based surface
LINK/DAI Call and Put - Synthetic surface (no sufficiently liquid external option markets exist)
We are excited to make this Volatility Surface data available for all of DeFi to build on-top of.
|
Pole-Zero Simplification - MATLAB & Simulink - MathWorks 한국
Pole-Zero Simplification in the Model Reducer App
Generate MATLAB Code for Pole-Zero Simplification
Pole-Zero Cancellation at the Command Line
Pole-zero simplification reduces the order of your model exactly by canceling pole-zero pairs or eliminating states that have no effect on the overall model response. Pole-zero pairs can be introduced, for example, when you construct closed-loop architectures. Normal small errors associated with numerical computation can convert such canceling pairs to near-canceling pairs. Removing these states preserves the model response characteristics while simplifying analysis and control design. Types of pole-zero simplification include:
Structural elimination — Eliminate states that are structurally disconnected from the inputs or outputs. Eliminating structurally disconnected states is a good first step in model reduction because the process does not involve any numerical computation. It also preserves the state structure of the remaining states. Such structurally nonminimal states can arise, for example, when you linearize a Simulink® model that includes some unconnected state-space or transfer function blocks. At the command line, perform structural elimination with sminreal.
Pole-zero cancellation or minimal realization — Eliminate canceling or near-canceling pole-zero pairs from transfer functions. Eliminate unobservable or uncontrollable states from state-space models. At the command line, perform this kind of simplification with minreal.
In the Model Reducer app and the Reduce Model Order Live Editor task, the Pole-Zero Simplification method automatically eliminates structurally disconnected states and also performs pole-zero cancellation or minimal realization.
Model Reducer provides an interactive tool for performing model reduction and examining and comparing the responses of the original and reduced-order models. To reduce a model by pole-zero simplification in Model Reducer:
Open the app and import a model to reduce. For instance, suppose that there is a model named build in the MATLAB® workspace. The following command opens Model Reducer and imports the LTI model build.
In the Data Browser, select the model to reduce. Click Pole-Zero Simplification.
In the Pole-Zero Simplification tab, Model Reducer displays a plot of the frequency response of the original model and a reduced version of the model. The app also displays a pole-zero map of both models.
The pole-zero map marks pole locations with x and zero locations with o.
The frequency response is a Bode plot for SISO models, and a singular-value plot for MIMO models.
Optionally, change the tolerance with which Model Reducer identifies canceling pole-zero pairs. Model Reducer cancels pole-zero pairs that fall within the tolerance specified by the Simplification of pole-zero pairs value. In this case, no pole-zero pairs are close enough together for Model Reducer to cancel them at the default tolerance of 1e-05. To cancel pairs that are a little further apart, move the slider to the right or enter a larger value in the text box.
The blue x and o marks on the pole-zero map show the near-canceling pole-zero pairs in the original model that are eliminated from the simplified model. Poles and zeros remaining in the simplified model are marked with red x and o.
Try different simplification tolerances while observing the frequency response of the original and simplified model. Remove as many poles and zeros as you can while preserving the system behavior in the frequency region that is important for your application. Optionally, examine absolute or relative error between the original and simplified model. Select the error-plot type using the buttons on the Pole-Zero Simplification tab.
When you have a simplified model that you want to store and analyze further, click . The new model appears in the Data Browser with a name that reflects the reduced model order.
After creating a reduced model in the Data Browser, you can continue changing the simplification parameters and create reduced models with different orders for analysis and comparison.
Model Reducer creates a script that uses the minreal command to perform model reduction with the parameters you have set on the Pole-Zero Simplification tab. The script opens in the MATLAB editor.
To reduce the order of a model by pole-zero cancellation at the command line, use minreal.
Create a model of the following system, where C is a PI controller, and G has a zero at
3\times 10^{-8}
rad/s. Such a low-frequency zero can arise from derivative action somewhere in the plant dynamics. For example, the plant may include a component that computes speed from position measurements.
G = zpk(3e-8,[-1,-3],1);
C = pid(1,0.3);
(s+0.3) (s-3e-08)
s (s+4.218) (s+0.7824)
In the closed-loop model T, the integrator
\left(1/s\right)
from C very nearly cancels the low-frequency zero of G.
Force a cancellation of the integrator with the zero near the origin.
Tred = minreal(T,1e-7)
Tred =
By default, minreal reduces transfer function order by canceling exact pole-zero pairs or near pole-zero pairs within sqrt(eps). Specifying 1e-7 as the second input causes minreal to eliminate pole-zero pairs within
10^{-7}
rad/s of each other.
The reduced model Tred includes all the dynamics of the original closed-loop model T, except for the near-canceling zero-pole pair.
Compare the frequency responses of the original and reduced systems.
bode(T,Tred,'r--')
legend('T','Tred')
Because the canceled pole and zero do not match exactly, some extreme low-frequency dynamics evident in the original model are missing from Tred. In many applications, you can neglect such extreme low-frequency dynamics. When you increase the matching tolerance of minreal, make sure that you do not eliminate dynamic features that are relevant to your application.
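The pair-matching idea behind this cancellation can be sketched outside MATLAB as well. The following Python function is an illustrative stand-in for what minreal does with a SISO zero-pole-gain model, not the actual MathWorks algorithm: it removes zero-pole pairs that lie within a tolerance of each other.

```python
def cancel_pairs(zeros, poles, tol=1e-7):
    """Cancel zero-pole pairs that lie within tol of each other
    (simplified sketch of pole-zero cancellation for zpk data)."""
    zeros, poles = list(zeros), list(poles)
    for z in zeros[:]:
        if not poles:
            break
        # Find the nearest remaining pole to this zero.
        dists = [abs(z - p) for p in poles]
        if min(dists) < tol:
            poles.pop(dists.index(min(dists)))
            zeros.remove(z)
    return zeros, poles

# Zeros/poles of the closed-loop model T from the example above:
# the integrator pole at 0 cancels against the zero at 3e-8 rad/s.
z, p = cancel_pairs([-0.3, 3e-8], [0.0, -4.218, -0.7824], tol=1e-7)
```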
minreal | sminreal
|
N-Channel metal oxide semiconductor field effect transistor using either Shichman-Hodges equation or surface-potential-based model - MATLAB - MathWorks 日本
I_{DS}=0
I_{DS}=K\left(\left(V_{GS}-V_{th}\right)V_{DS}-V_{DS}^{2}/2\right)\left(1+\lambda\left|V_{DS}\right|\right)
I_{DS}=\left(K/2\right)\left(V_{GS}-V_{th}\right)^{2}\left(1+\lambda\left|V_{DS}\right|\right)
V_{BS}\le 0
V_{th}=V_{T0}+\gamma\left(-\sqrt{2\phi_{B}}\right)+\gamma\sqrt{2\phi_{B}-V_{BS}}
0<V_{BS}\le 4\phi_{B}
V_{th}=V_{T0}-\frac{\gamma V_{BS}}{2\sqrt{2\phi_{B}}}
V_{BS}>4\phi_{B}
V_{th}=V_{T0}+\gamma\left(-\sqrt{2\phi_{B}}\right)
\frac{\partial^{2}\psi}{\partial x^{2}}+\frac{\partial^{2}\psi}{\partial y^{2}}=\frac{qN_{A}}{\varepsilon_{Si}}\left[1-\exp\left(\frac{-\psi}{\phi_{T}}\right)+\exp\left(\frac{\psi-2\phi_{B}-V_{CB}}{\phi_{T}}\right)\right]
\phi_{T}=\frac{k_{B}T}{q}
\left(V_{GB}-V_{FB}-\psi_{s}\right)^{2}=\gamma^{2}\left[\psi_{s}+\phi_{T}\left(\exp\left(\frac{-\psi_{s}}{\phi_{T}}\right)-1\right)+\phi_{T}\exp\left(-\frac{2\phi_{B}+V_{CB}}{\phi_{T}}\right)\left(\exp\left(\frac{\psi_{s}}{\phi_{T}}\right)-1\right)\right]
\gamma=\frac{\sqrt{2q\varepsilon_{Si}N_{A}}}{C_{ox}}
I_{D}=\frac{W\mu_{0}}{LG_{\Delta L}G_{mob}\sqrt{1+\left(\theta_{sat}\Delta\psi\right)^{2}}}\left[-\overline{Q}_{inv}\Delta\psi+\phi_{T}\left(Q_{invL}-Q_{inv0}\right)\right]
\overline{Q}_{inv}
G_{\Delta L}=1-\frac{\Delta L}{L}=1-\alpha\ln\left[\frac{V_{DB}-V_{DB,eff}+\sqrt{\left(V_{DB}-V_{DB,eff}\right)^{2}+V_{p}^{2}}}{V_{p}}\right]
\beta=\frac{W\mu_{0}C_{ox}}{L}
V_{T}=V_{FB}+2\phi_{B}+2\phi_{T}+\gamma\sqrt{2\phi_{B}+2\phi_{T}}
I_{dio}=I_{s}\left[\exp\left(-\frac{V_{DB}}{n\phi_{T}}\right)-1\right]
C_{j}=\frac{C_{j0}}{\sqrt{1+\frac{V_{DB}}{V_{bi}}}}
C_{diff}=\frac{\tau I_{s}}{n\phi_{T}}\exp\left(-\frac{V_{DB}}{n\phi_{T}}\right)
K_{Ts}=K_{Tm1}\left(\frac{T_{s}}{T_{m1}}\right)^{BEX}
V_{BS}\le 0
\frac{dV_{th}}{dT}=\frac{dV_{T0}}{dT}-\frac{\gamma}{2\sqrt{2\phi_{B}}}\frac{d\left(2\phi_{B}\right)}{dT}+\frac{\gamma}{2\sqrt{2\phi_{B}-V_{BS}}}\frac{d\left(2\phi_{B}\right)}{dT}
0<V_{BS}\le 4\phi_{B}
\frac{dV_{th}}{dT}=\frac{dV_{T0}}{dT}-\frac{\gamma V_{BS}}{4}\left(2\phi_{B}\right)^{-\frac{3}{2}}\frac{d\left(2\phi_{B}\right)}{dT}
V_{BS}>4\phi_{B}
\frac{dV_{th}}{dT}=\frac{dV_{T0}}{dT}-\frac{\gamma}{2\sqrt{2\phi_{B}}}\frac{d\left(2\phi_{B}\right)}{dT}
\phi_{B}=\frac{kT}{q}\ln\left(\frac{N_{B}}{n_{i}}\right)
\frac{d\left(2\phi_{B}\right)}{dT}=\frac{1}{T}\left[2\phi_{B}-\left(\frac{E_{g}\left(0\right)}{q}+\frac{3kT}{q}\right)\right]
\mathrm{Parameter}\left(t\right)=\mathrm{Parameter}_{faulted}-\left(\mathrm{Parameter}_{faulted}-\mathrm{Parameter}_{unfaulted}\right)\operatorname{sech}\left(\frac{t-t_{th}}{\tau}\right)
G_{mob}=\sqrt{1+\left(\theta_{sr}V_{eff}\right)^{4}}
I_{s}=I_{s,m1}\left(\frac{T_{s}}{T_{m1}}\right)^{\eta_{Is}}\cdot\exp\left(\frac{E_{G}}{k_{B}}\cdot\left(\frac{1}{T_{m1}}-\frac{1}{T_{s}}\right)\right).
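The first three equations above (the cutoff, triode, and saturation regions of the Shichman-Hodges model) can be sketched as a small Python function; the parameter values chosen here are illustrative, not the block's defaults.

```python
def shichman_hodges_ids(vgs, vds, vth=1.0, K=5e-3, lam=0.02):
    """Drain-source current per the Shichman-Hodges equations above.
    vth, K, and lam are illustrative placeholder values."""
    vov = vgs - vth                  # gate overdrive V_GS - V_th
    if vov <= 0:
        return 0.0                   # cutoff: I_DS = 0
    if vds < vov:                    # triode region
        return K * (vov * vds - vds ** 2 / 2) * (1 + lam * abs(vds))
    # saturation region
    return (K / 2) * vov ** 2 * (1 + lam * abs(vds))
```

Note that the two conduction expressions agree at the boundary V_DS = V_GS − V_th, so the modeled current is continuous across regions.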
|
Dynamic range expander - MATLAB - MathWorks 한국
Assuming a hard knee characteristic and a steady-state input such that x[n] dB < thresholdValue, the expansion ratio is defined as
R=\frac{y\left[n\right]-T}{x\left[n\right]-T}
y=x+\frac{\left(1-R\right)\times\left(x-T-\frac{W}{2}\right)^{2}}{2\times W}
\left(2\times\left|x-T\right|\right)\le W
x_{\text{dB}}\left[n\right]=20\times\log_{10}\left|x\left[n\right]\right|
x_{\text{sc}}\left(x_{\text{dB}}\right)=\begin{cases}T+\left(x_{\text{dB}}-T\right)\times R & x_{\text{dB}}<\left(T-\frac{W}{2}\right)\\ x_{\text{dB}}+\frac{\left(1-R\right)\left(x_{\text{dB}}-T-\frac{W}{2}\right)^{2}}{2W} & \left(T-\frac{W}{2}\right)\le x_{\text{dB}}\le\left(T+\frac{W}{2}\right)\\ x_{\text{dB}} & x_{\text{dB}}>\left(T+\frac{W}{2}\right)\end{cases}
x_{\text{sc}}\left(x_{\text{dB}}\right)=\begin{cases}T+\left(x_{\text{dB}}-T\right)\times R & x_{\text{dB}}<T\\ x_{\text{dB}} & x_{\text{dB}}\ge T\end{cases}
g_{\text{c}}\left[n\right]=x_{\text{sc}}\left[n\right]-x_{\text{dB}}\left[n\right].
g_{\text{s}}\left[n\right]=\begin{cases}\alpha_{\text{A}}g_{\text{s}}\left[n-1\right]+\left(1-\alpha_{\text{A}}\right)g_{\text{c}}\left[n\right] & \left(C_{\text{A}}>T_{\text{H}}\right)\ \&\ \left(g_{\text{c}}\left[n\right]\le g_{\text{s}}\left[n-1\right]\right)\\ g_{\text{s}}\left[n-1\right] & C_{\text{A}}\le T_{\text{H}}\\ \alpha_{\text{R}}g_{\text{s}}\left[n-1\right]+\left(1-\alpha_{\text{R}}\right)g_{\text{c}}\left[n\right] & g_{\text{c}}\left[n\right]>g_{\text{s}}\left[n-1\right]\end{cases}
\alpha_{\text{A}}=\exp\left(\frac{-\log\left(9\right)}{Fs\times T_{\text{A}}}\right).
\alpha_{\text{R}}=\exp\left(\frac{-\log\left(9\right)}{Fs\times T_{\text{R}}}\right).
g_{\text{lin}}\left[n\right]=10^{\left(\frac{g_{\text{s}}\left[n\right]}{20}\right)}
y\left[n\right]=x\left[n\right]\times g_{\text{lin}}\left[n\right].
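The soft-knee static characteristic translates directly into code. The following Python sketch computes the static gain g_c in dB; the threshold, ratio, and knee-width values used as defaults here are illustrative, not the block's.

```python
def expander_gain_db(x_db, T=-10.0, R=5.0, W=5.0):
    """Static gain g_c (in dB) of a soft-knee dynamic range expander,
    following the piecewise characteristic x_sc(x_dB) above."""
    if x_db < T - W / 2:
        x_sc = T + (x_db - T) * R              # below the knee: expand
    elif x_db <= T + W / 2:
        x_sc = x_db + (1 - R) * (x_db - T - W / 2) ** 2 / (2 * W)  # knee
    else:
        x_sc = x_db                            # above the knee: unity gain
    return x_sc - x_db
```

Since R > 1 for an expander, the computed gain is negative (attenuation) below the threshold and zero above the knee.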
|
Harmonic Mean | Brilliant Math & Science Wiki
Lawrence Chiou, Tom Verhoeff, Worranat Pakornrat, and
The harmonic mean f_{\text{HM}} of a_1, \ldots, a_k is the mean with the property that the reciprocal of the mean is equal to the average of the reciprocals.
Suppose one is given data that are to be weighted by their reciprocals. In that case, one might consider a mean with the property that
k times the reciprocal of the mean equals the sum of the reciprocals of the values:
\frac{k}{f_{\text{HM}}} = \frac 1{a_1} + \frac 1{a_2} + \frac 1{a_3} + \cdots + \frac 1{a_k}.
This leads to the harmonic mean defined as
f_{\text{HM}} = \frac{k}{\frac 1{a_1} + \frac 1{a_2} + \frac 1{a_3} + \cdots + \frac 1{a_k}}.
What is the harmonic mean of
1, \frac{3}{2}, 3?
\frac{3}{1 + \frac{2}{3} + \frac{1}{3}} = \frac{3}{2}.\ _\square
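The definition maps directly to code; Python's standard library also ships a harmonic_mean function, which reproduces the worked example above.

```python
from statistics import harmonic_mean

# The worked example above: HM(1, 3/2, 3) = 3/2.
hm_builtin = harmonic_mean([1, 1.5, 3])

# Directly from the definition k / (1/a_1 + ... + 1/a_k):
values = [1, 1.5, 3]
hm_manual = len(values) / sum(1 / a for a in values)
```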
Ben, the ultimate hero, spends 10 minutes to solve a math problem.
The pretty and smarter Gwen only needs 6 minutes to come up with an answer.
Kevin, on the other hand, doesn't attend any school, so he takes 15 minutes for the same task.
What is the average time per person required for this team to solve a math problem?
The head chef A can cook one dish in 4 minutes.
His fellow chef B can finish in 8 minutes,
A young chef C has an unknown capability.
Then two more chefs are hired into the team.
Chef D can cook one dish in 6 minutes.
Chef E can cook one dish in 9 minutes.
If the average cooking time per person before and after the new hiring is unchanged, in how many minutes can chef C cook one dish?
The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals.
By the QM-AM-GM-HM inequality, the harmonic mean is less than or equal to both the arithmetic mean and the geometric mean, and is the smallest of the classical (Pythagorean) means.
Cite as: Harmonic Mean. Brilliant.org. Retrieved from https://brilliant.org/wiki/harmonic-mean/
|
Essential extension - Wikipedia
In mathematics, specifically module theory, given a ring R and R-modules M with a submodule N, the module M is said to be an essential extension of N (or N is said to be an essential submodule or large submodule of M) if for every submodule H of M,
{\displaystyle H\cap N=\{0\}} implies {\displaystyle H=\{0\}.}
As a special case, an essential left ideal of R is a left ideal that is essential as a submodule of the left module RR. The left ideal has non-zero intersection with any non-zero left ideal of R. Analogously, an essential right ideal is exactly an essential submodule of the right R module RR.
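As a quick illustration (a standard example, not drawn from this article): over R = ℤ, every nonzero ideal is an essential submodule of ℤ.

```latex
R=\mathbb{Z},\quad N=n\mathbb{Z}\ (n\neq 0):\qquad
\text{for every } m\neq 0,\quad 0\neq mn\in m\mathbb{Z}\cap n\mathbb{Z},
\ \text{so}\ n\mathbb{Z}\subseteq_{e}\mathbb{Z}.
```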
The usual notations for essential extensions include the following two expressions:
{\displaystyle N\subseteq _{e}M\,}
(Lam 1999), and
{\displaystyle N\trianglelefteq M}
(Anderson & Fuller 1992)
The dual notion of an essential submodule is that of superfluous submodule (or small submodule). A submodule N is superfluous if for any other submodule H,
{\displaystyle N+H=M} implies {\displaystyle H=M.}
The usual notations for superfluous submodules include:
{\displaystyle N\subseteq _{s}M\,}
{\displaystyle N\ll M}
Here are some of the elementary properties of essential extensions, given in the notation introduced above. Let M be a module, and K, N and H be submodules of M with K ⊆ N ⊆ M.
Clearly M is an essential submodule of M, and the zero submodule of a nonzero module is never essential.
{\displaystyle K\subseteq _{e}M} if and only if {\displaystyle K\subseteq _{e}N} and {\displaystyle N\subseteq _{e}M}.
{\displaystyle K\cap H\subseteq _{e}M} if and only if {\displaystyle K\subseteq _{e}M} and {\displaystyle H\subseteq _{e}M}.
Using Zorn's Lemma it is possible to prove another useful fact: For any submodule N of M, there exists a submodule C such that
{\displaystyle N\oplus C\subseteq _{e}M}
Furthermore, a module with no proper essential extension (that is, if the module is essential in another module, then it is equal to that module) is an injective module. It is then possible to prove that every module M has a maximal essential extension E(M), called the injective hull of M. The injective hull is necessarily an injective module, and is unique up to isomorphism. The injective hull is also minimal in the sense that any other injective module containing M contains a copy of E(M).
Many properties dualize to superfluous submodules, but not everything. Again let M be a module, and K, N and H be submodules of M with K ⊆ N ⊆ M.
The zero submodule is always superfluous, and a nonzero module M is never superfluous in itself.
If {\displaystyle N\subseteq _{s}M}, then {\displaystyle K\subseteq _{s}M} and {\displaystyle N/K\subseteq _{s}M/K}.
{\displaystyle K+H\subseteq _{s}M} if and only if {\displaystyle K\subseteq _{s}M} and {\displaystyle H\subseteq _{s}M}.
Since every module can be mapped via a monomorphism whose image is essential in an injective module (its injective hull), one might ask if the dual statement is true, i.e. for every module M, is there a projective module P and an epimorphism from P onto M whose kernel is superfluous? (Such a P is called a projective cover). The answer is "No" in general, and the special class of rings whose right modules all have projective covers is the class of right perfect rings.
One form of Nakayama's lemma is that J(R)M is a superfluous submodule of M when M is a finitely-generated module over R.
This definition can be generalized to an arbitrary abelian category C. An essential extension is a monomorphism u : M → E such that for every non-zero subobject s : N → E, the fibre product N ×E M ≠ 0.
In a general category, a morphism f : X → Y is essential if any morphism g : Y → Z is a monomorphism if and only if g ° f is a monomorphism (Porst 1981, Introduction). Taking g to be the identity morphism of Y shows that an essential morphism f must be a monomorphism.
If X has an injective hull Y, then Y is the largest essential extension of X (Porst 1981, Introduction (v)). But the largest essential extension may not be an injective hull. Indeed, in the category of T1 spaces and continuous maps, every object has a unique largest essential extension, but no space with more than one element has an injective hull (Hoffmann 1981).
Dense submodules are a special type of essential submodule
Anderson, F.W.; Fuller, K.R. (1992), Rings and Categories of Modules, Graduate Texts in Mathematics, vol. 13 (2nd ed.), Springer-Verlag, ISBN 3-540-97845-3
Hoffmann, Rudolf-E. (1981), "Essential extensions of T1-spaces", Canadian Mathematical Bulletin, 24 (2): 237–240, doi:10.4153/CMB-1981-037-1
Lam, Tsit-Yuen (1999), Lectures on modules and rings, Graduate Texts in Mathematics No. 189, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98428-5, MR 1653294
Mitchell, Barry (1965). Theory of categories. Pure and applied mathematics. Vol. 17. Academic Press. ISBN 978-0-124-99250-4. MR 0202787. Section III.2
Porst, Hans-E. (1981), "Characterization of injective envelopes", Cahiers de Topologie et Géométrie Différentielle Catégoriques, 22 (4): 399–406
Retrieved from "https://en.wikipedia.org/w/index.php?title=Essential_extension&oldid=1055853970"
|
Annealing | Brilliant Math & Science Wiki
Simulated Annealing is a heuristic technique used to find the global optimal solution to a function. It is a probabilistic technique, similar to a Monte Carlo method. In fact, simulated annealing was adapted from the Metropolis-Hastings algorithm, a Monte Carlo method.
Other techniques, such as hill climbing, gradient descent, or brute-force search, are used when finding a local maximum/minimum is more important than finding a global maximum/minimum. These techniques are faster than simulated annealing, but they don't guarantee that their results are globally optimal.
In the below graphic, the simulated annealing algorithm is used to find the global maximum for the graph. It chooses a random value and calculates its cost. Then, it chooses a neighboring solution and calculates its cost. If the new cost is better, the new solution is chosen; even if it's not better, the new solution is sometimes chosen. It stops when the desired cost is found.
An example of simulated annealing [1]
Simulated annealing is a good probabilistic technique because it does not mistake a local extremum for a global extremum. It is especially useful when the search space has many local maxima/minima and when the search function is non-linear. Simulated annealing is used in many algorithm-related problems such as the set cover problem, the maximum cut problem, or the independent set problem. It is also used to solve design problems such as water transportation and diesel engine design[2].
Overview of Annealing
Annealing is a heuristic. Heuristics are rules of thumb that can be used to find a good answer to a question. They are often used to solve large, non-linear, and non-convex problems. Heuristics won't guarantee an optimal answer, but they can get very close and are often a good trade-off between precision and computation. They often incorporate randomization. Annealing introduces just the right amount of randomness and probability to help it escape local optima.
There are two kinds of heuristics: construction and improvement. Construction heuristics find a feasible solution and improve upon it. Improvement heuristics start with some feasible solution and only try to improve it. Annealing is an improvement heuristic.
Annealing was made to mirror the cooling of atoms to a state of minimum energy. There is an analogy to cooling a system and bringing it to its lowest energy state and finding a global optimum for that system. If a liquid cools and anneals too quickly, then the material will solidify into a sub-optimal configuration. However, if it is cooled slowly, then the crystals inside will solidify optimally into a state of minimum energy. This state of minimum energy is analogous to the minimum cost of the function being analyzed by simulated annealing.
Two systems of atoms, with different energy levels[3]
Annealing was introduced in 1983 as a method of solving combinatorial optimization by applying statistical mechanics.
Annealing requires a few things to get started; it uses the following elements:
Cost Function - this function is what is being minimized by annealing.
Search Space - the space of possible solutions.
Temperature - the "temperature" of the system, a function of which iteration the annealing process is on.
For example, take the traveling salesperson problem. In this problem, the goal is to find a path that a salesperson can take through various cities, visiting each city once, and returning to their original city. The cost function of a given tour (a path that ends where it began) will just be the length of that tour. The search space will be the set of all possible tours. The temperature of this process is up to the programmer, but it usually starts at 1 and decays, following some sort of cooling schedule. A typical cooling schedule might just multiply the temperature by some
\alpha
(typically .99 or .8). The annealing process stops when the temperature reaches a certain point.
The annealing algorithm works as follows:
Pick an initial, random solution, x_0.
Calculate the cost of your solution, c_{x_0}.
Pick a new solution that neighbors your current solution, x_1.
Calculate the cost of the new solution, c_{x_1}.
Compare the cost of your current and new solutions. If c_{x_1} < c_{x_0}, move to the new solution. Otherwise, move to the new solution with probability β.
Update system temperature.
Repeat steps 3 - 6 until the desired temperature is reached.
In relation to the gif in the introduction (where the highest point is being searched for), this is what is happening:
A random point x on the graph is found.
The "cost" of this point is something like the inverse of its height at x.
Pick a random solution x_1 on either side of x.
Calculate the cost of this new solution.
If the cost of x_1 is lower than the cost of x, move to x_1. If not, move to x_1 with probability β.
Update system temperature. In this case, it lowers temperature and at the same time lowers β. By lowering β, it decreases the chance that it chooses a sub-optimal solution as time goes on.
The main appeal of this procedure is that in step 5, there's a chance that you might actually take the suboptimal choice between the two choices. This allows annealing to avoid potential pitfalls in local optimums.
β, the chance that this process chooses the new solution suboptimally, is a function of the difference in cost of the two potential solutions and of the current system temperature.
In the case of the traveling salesperson problem, if a new path is much shorter than the other, annealing will favor switching to it and β will be high. However, if the two are similar in length, there is a more even chance of either. Furthermore, as the annealing process continues (and the temperature drops), annealing should focus on following the lower-cost solution rather than on escaping local optima, so β decreases over time.
The idea of a neighboring solution is important, but what that means will vary from solution space to solution space. In the case of the traveling salesperson problem, there are many ways to define two neighboring solutions. For example, two neighboring solutions might be two paths that have the smallest possible section reversed. A poorly devised neighboring system (such as a random configuration) in the solution space can adversely affect the outcome of the annealing process. However, this can be mitigated by more computation.
This general process is very basic, and steps are often modified to achieve best results. For example, a slower cooling schedule for the temperature can be used that causes the annealing process to take longer. However, this will give it more time to escape local maximas and minimas. Furthermore, production-level implementations of annealing often compare multiple pairs of neighbors in every iteration.
The following pseudocode is a more computer science oriented approach to understanding this algorithm.
Annealing(number_of_iterations)
    s = random_solution()
    for k from 0 through number_of_iterations
        T = temperature(k, number_of_iterations)
        s_new = random_neighbor(s)
        beta = beta_func(C(s), C(s_new), T)
        if C(s_new) < C(s) or random() < beta:
            s = s_new
    return s
The general process of this algorithm is finding neighbors of the current state to move to. In general, better neighbors are chosen, but sometimes less optimal neighbors are accepted in order to avoid local maxima and minima. Here, beta is the probability of accepting a worse move; it is computed from the relationship between the costs of the two candidate solutions and the temperature T.
The following code is a quick implementation of annealing. In this example, a set of possible solutions is generated and then, through annealing, the optimal solution is found.
import math
import random

def annealing():
    solutions = generateSolutions()
    random_index = random.randint(0, len(solutions) - 1)
    s = solutions[random_index]
    T = 1.0                  # initial temperature
    T_accept = 0.0000001     # stop once the system has cooled this far
    alpha = 0.99             # cooling rate
    while T > T_accept:
        s_new = randomNeighbor(s, solutions)
        beta = betaFunc(cost(s), cost(s_new), T)
        if cost(s_new) < cost(s):
            s = s_new        # always accept a better neighbor
        elif random.random() < beta:
            s = s_new        # occasionally accept a worse neighbor
        T = T * alpha
    return s

def randomNeighbor(sln, solutions):
    # Neighbors are the adjacent entries in the solution list.
    currentIndex = solutions.index(sln)
    randomStep = random.randint(1, 2)
    if randomStep == 1:
        if currentIndex > 0:
            return solutions[currentIndex - 1]
        return solutions[currentIndex + 1]
    else:
        if currentIndex < len(solutions) - 1:
            return solutions[currentIndex + 1]
        return solutions[currentIndex - 1]

def cost(sln):
    # The optimum of this toy search is the value closest to 401.
    return abs(401 - sln)

def betaFunc(oldCost, newCost, T):
    # Metropolis acceptance probability for a (possibly) worse move.
    return math.exp(min(0.0, oldCost - newCost) / T)

def generateSolutions():
    nums = []
    for a in range(500):
        nums.append(500 - a)
    return nums
What is the optimal solution for this annealing process? Try to figure it out from the code before you run it or check the answer.
The optimal answer is 401. If you look inside the cost() function, it's checking to see how close the candidate solution is to the number 401.
Annealing can be used in any optimization scenario. If the global optimum is not the goal, there are faster techniques, such as the hill climbing algorithm. However, when a global optimum is needed, and especially when the input signal is non-linear with many local maxima/minima, annealing is useful because it will do the best job of getting close to the optimum.
Annealing can be used to find an optimal solution for graph-related problems such as the set cover problem, the maximum cut problem, or the independent set problem. Set cover, for example, can use many heuristics. Simulated annealing will produce a better output than a greedy heuristic, if that is important to an application. Coupling annealing with recursive backtracking will guarantee an optimal result, though it will be more computationally expensive.
Annealing is also frequently used in system design. Often, a system has a set of requirements and limit on cost. Annealing can help find the maximum value for a given cost limit. This is very similar to the knapsack problem in computer science. For example, a building might have a certain budget, but it still needs to reach a certain level of structural support. Annealing can find the global maximum for the related function.
|
IDICURS
Views and writes position lists interactively
This program displays an image or Set of images on the screen and provides a graphical user interface for marking points on it. Points can be read in from a position list file at the start (if READLIST is true) or written out to a position list file at the end (if WRITELIST is true) or both. If OVERWRITE is true then a position list file can be viewed and edited in place.
The graphical interface used for marking features on the image should be fairly self-explanatory. The image can be scrolled using the scrollbars, the window can be resized, and there are controls for zooming the image in or out, changing the style of display, and altering the percentile cutoff limits which control the mapping of pixel value to displayed colour. The position of the cursor is reported below the display using the coordinates of the selected coordinate frame for information, but the position list written out is always written in Pixel coordinates, since that is how all CCDPACK applications expect to find it written. Points are marked on the image by clicking mouse button 1 (usually the left one) and may be removed using mouse button 3 (usually the right one). When you have marked all the points that you wish to, click the ’Done’ button.
idicurs in
CENTROID = _LOGICAL (Read and Write)
This parameter controls whether points marked on the image are to be centroided. If true, then when you click on the image to add a new point IDICURS will attempt to find a centroidable object near to where the click was made and add the point there. If no centroidable feature can be found nearby, you will not be allowed to add a point. Note that the centroiding routine is capable of identifying spurious objects in noise, but where a genuine feature is nearby this should find its centre.
Having centroiding turned on does not guarantee that all points on the image have been centroided; it only affects points added by clicking on the image. In particular, any points read from the INLIST file will not be automatically centroided.
This parameter only gives the initial centroiding state - centroiding can be turned on and off interactively while the program is running.
Gives the name of the images to display and get coordinates from. If multiple images are specified using wildcards or separating their names with commas, the program will run on each one in turn, or on each Set in turn if applicable (see the USESET parameter).
INEXT = _LOGICAL (Read)
If the READLIST parameter is true, then this parameter determines where the input position list comes from. If it is true, then the position list currently associated with the image will be used. If it is false, then the input position list names will be obtained from the INLIST parameter. [FALSE]
INLIST = FILENAME (Read)
If the READLIST parameter is true, and the INEXT parameter is false, then this parameter gives the names of the files in which the input position list is stored. This parameter may use modifications of the input image name.
MARKSTYLE = LITERAL (Read and Write)
A string indicating how markers are initially to be plotted on the image. It consists of a comma-separated list of "attribute=value" type strings. The available attributes are:
showindex – 1 to show index numbers, 0 not to do so.
This parameter only gives the initial marker type; it can be changed interactively while the program is running. If specifying this value on the command line, it is not necessary to give values for all the attributes; missing ones will be given sensible defaults. ["showindex=1"]
READLIST = _LOGICAL (Read)
If this parameter is true, then the program will start up with some positions already marked (where the points come from depends on the INEXT and INLIST parameters). If it is false, the program will start up with no points initially plotted. [FALSE]
OUTLIST = FILENAME (Write)
If WRITELIST is true, and OVERWRITE is false, then this parameter determines the names of the files to use to write the position lists into. It can be given as a comma-separated list with the same number of filenames as there are IN files, but wildcards can also be used to act as modifications of the input image names.
This parameter is ignored if WRITELIST is false or READLIST and OVERWRITE are true.
OVERWRITE = _LOGICAL (Read)
If READLIST and WRITELIST are both true, then setting OVERWRITE to true causes the input position list file to be used as the output position list file as well. Thus, setting this parameter to true allows position list files to be edited in place. [FALSE]
PERCENTILES( 2 ) = _DOUBLE (Read and Write)
The initial values for the low and high percentiles of the data range to use when displaying the images; any pixels with a value lower than the first element will have the same colour, and any with a value higher than the second will have the same colour. Must be in the range 0 <= PERCENTILES(1) <= PERCENTILES(2) <= 100. These values can be changed interactively while the program runs. [2,98]
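The percentile mapping described here is a standard contrast stretch; a rough Python sketch of the idea (purely illustrative, not part of IDICURS):

```python
def percentile_stretch(values, lo_pct, hi_pct):
    # Map pixel values to 0..1, clipping below the lo_pct percentile and
    # above the hi_pct percentile, so out-of-range pixels share one colour.
    ordered = sorted(values)
    lo = ordered[int(lo_pct / 100.0 * (len(ordered) - 1))]
    hi = ordered[int(hi_pct / 100.0 * (len(ordered) - 1))]
    return [min(max((v - lo) / float(hi - lo), 0.0), 1.0) for v in values]
```

With the default [2,98] limits, the darkest 2% and brightest 2% of pixels are clipped, which usually reveals faint structure that a full-range mapping would hide.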
USESET = _LOGICAL (Read)
This parameter determines whether Set header information should be used or not. If USESET is true, IDICURS will try to group images according to their Set Name attribute before displaying them, rather than treating them one by one. All images which share the same (non-blank) Set Name attribute, and which have a CCD_SET attached coordinate system, will be shown together in the viewer resampled into their CCD_SET coordinates.
If USESET is false, then regardless of Set headers, each individual image will be displayed for marking separately.
If the input images have no Set headers, or if they have no CCD_SET coordinates in their WCS components, the value of USESET will make no difference.
VERBOSE = _LOGICAL (Read)
If this parameter is true, then at the end of processing all the positions will be written through the CCDPACK log system. [TRUE]
WINX = _INTEGER (Read and Write)
The width in pixels of the window to display the image and associated controls in. If the image is larger than the area allocated for display, it can be scrolled around within the window. The window can be resized in the normal way using the window manager while the program is running. [450]
WINY = _INTEGER (Read and Write)
The height in pixels of the window to display the image and associated controls in. If the image is larger than the area allocated for display, it can be scrolled around within the window. The window can be resized in the normal way using the window manager while the program is running. [600]
WRITELIST = _LOGICAL (Read)
This parameter determines whether an output position list file will be written and associated with the input images.
If the program exits normally, points are marked on the image, and WRITELIST is true, then the points will be written to a position list file and that file will be associated with the image file. The name of the position list file is determined by the OUTLIST and OVERWRITE parameters. The positions will be written to the file using the standard CCDPACK format as described in the Notes section.
If WRITELIST is false, then no position lists are written and no changes are made to the image associated position lists. [FALSE]
ZOOM = _INTEGER (Read and Write)
A factor giving the initial level to zoom in to the image displayed, that is the number of screen pixels to use for one image pixel. It will be rounded to one of the values ... 3, 2, 1, 1/2, 1/3 .... The zoom can be changed interactively from within the program. The initial value may be limited by MAXCANV. [1]
idicurs mosaic mos.lis
This starts up the graphical user interface, allowing you to select a number of points which will be written to the position list file ’mos.lis’, which will be associated with the image file.
idicurs in='*' out='*.pts' percentiles=[10,90] useset=false
Each of the images in the current directory will be displayed, and the positions marked on it written to a list with the same name as the image but the extension '.pts', which will be associated with the image in question. The display will initially be scaled so that pixels with a value higher than the 90th percentile will all be displayed as the brightest colour and those with a value lower than the 10th percentile as the dimmest colour, but this may be changed interactively while the program is running. Since USESET is explicitly set to false, each input image will be viewed and marked separately, even if some of them have Set headers and Set alignment coordinates.
idicurs in=gc6253 readlist inlist=found.lis outlist=out.lis markstyle="colour=skyblue,showindex=0"
The image gc6253 will be displayed, with the points stored in the position list ’found.lis’ already plotted on it. These may be added to, moved and deleted, and the resulting list will be written to the file out.lis. Points will initially be marked using skyblue markers, and not labelled with index numbers.
idicurs in='*' readlist writelist inext overwrite
All the images in the current directory will be displayed, one after the other, with the points which are in their currently associated position lists already plotted. You can add and remove points, and the modified position lists will be written back into the same files.
CCDALIGN, PAIRNDF.
Input position lists read when READLIST is true may be in either of these formats. The output list named by the OUTLIST parameter will be written in CCDPACK (3 column) format.
On normal exit, unless OUTLIST is set to null (!), the CURRENT_LIST items in the CCDPACK extensions (.MORE.CCDPACK) of the input NDFs are set to the name of the output list. These items will be used by other CCDPACK position list processing routines to automatically access the list.
Some of the parameters (MAXCANV, PERCENTILES, WINX, WINY, ZOOM, MARKSTYLE, CENTROID) give initial values for quantities which can be modified while the program is running. Although these may be specified on the command line, it is normally easier to start the program up and modify them using the graphical user interface. If the program exits normally, their values at the end of the run will be used as defaults next time the program starts up.
|
Create 3-D Convolution Layer
Specify Initial Weights and Biases in 3-D Convolutional Layer
Create Convolutional Layer That Fully Covers 3-D Input
A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input. The layer convolves the input by moving the filters along the input vertically, horizontally, and along the depth, computing the dot product of the weights and the input, and then adding a bias term.
For 3-D image input (data with five dimensions corresponding to pixels in three spatial dimensions, the channels, and the observations), the layer convolves over the spatial dimensions.
For 3-D image sequence input (data with six dimensions corresponding to the pixels in three spatial dimensions, the channels, the observations, and the time steps), the layer convolves over the spatial dimensions.
For 2-D image sequence input (data with five dimensions corresponding to the pixels in two spatial dimensions, the channels, the observations, and the time steps), the layer convolves over the spatial and time dimensions.
layer = convolution3dLayer(filterSize,numFilters,Name,Value) sets the optional Stride, DilationFactor, NumChannels, Parameters and Initialization, Learning Rate and Regularization, and Name properties using name-value pairs. To specify input padding, use the 'Padding' name-value pair argument. For example, convolution3dLayer(11,96,'Stride',4,'Padding',1) creates a 3-D convolutional layer with 96 filters of size [11 11 11], a stride of [4 4 4], and padding of size 1 along all edges of the layer input. You can specify multiple name-value pairs. Enclose each property name in single quotes.
Example: convolution3dLayer(3,16,'Padding','same') creates a 3-D convolutional layer with 16 filters of size [3 3 3] and 'same' padding. At training time, the software calculates and sets the size of the padding so that the layer output has the same size as the input.
Height, width, and depth of the filters, specified as a vector [h w d] of three positive integers, where h is the height, w is the width, and d is the depth. FilterSize defines the size of the local regions to which the neurons connect in the input.
When creating the layer, you can specify FilterSize as a scalar to use the same value for the height, width, and depth.
Example: [5 5 5] specifies filters with a height, width, and depth of 5.
Factor for dilated convolution (also known as atrous convolution), specified as a vector [h w d] of three positive integers, where h is the vertical dilation, w is the horizontal dilation, and d is the dilation along the depth. When creating the layer, you can specify DilationFactor as a scalar to use the same value for dilation in all three directions.
The layer expands the filters by inserting zeros between each filter element. The dilation factor determines the step size for sampling the input or equivalently the upsampling factor of the filter. It corresponds to an effective filter size of (Filter Size – 1) .* Dilation Factor + 1. For example, a 3-by-3-by-3 filter with the dilation factor [2 2 2] is equivalent to a 5-by-5-by-5 filter with zeros between the elements.
Example: [2 3 1] dilates the filter vertically by a factor of 2, horizontally by a factor of 3, and along the depth by a factor of 1.
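The effective-size formula above can be checked with a small sketch (Python used here purely for illustration):

```python
def effective_filter_size(filter_size, dilation_factor):
    # Effective size = (FilterSize - 1) .* DilationFactor + 1, element-wise.
    return [(f - 1) * d + 1 for f, d in zip(filter_size, dilation_factor)]
```

For a 3-by-3-by-3 filter with dilation factor [2 2 2] this gives [5 5 5], matching the example in the text.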
The following matrices illustrate the available padding methods for a 3-by-3 input with padding of size 2, shown here in two dimensions. Padding with zeros (padding value 0):

\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 3& 1& 4& 0& 0\\ 0& 0& 1& 5& 9& 0& 0\\ 0& 0& 2& 6& 5& 0& 0\\ 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0\end{array}\right]

'symmetric-include-edge' padding (mirror reflection including the edge values):

\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}5& 1& 1& 5& 9& 9& 5\\ 1& 3& 3& 1& 4& 4& 1\\ 1& 3& 3& 1& 4& 4& 1\\ 5& 1& 1& 5& 9& 9& 5\\ 6& 2& 2& 6& 5& 5& 6\\ 6& 2& 2& 6& 5& 5& 6\\ 5& 1& 1& 5& 9& 9& 5\end{array}\right]

'symmetric-exclude-edge' padding (mirror reflection excluding the edge values):

\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}5& 6& 2& 6& 5& 6& 2\\ 9& 5& 1& 5& 9& 5& 1\\ 4& 1& 3& 1& 4& 1& 3\\ 9& 5& 1& 5& 9& 5& 1\\ 5& 6& 2& 6& 5& 6& 2\\ 9& 5& 1& 5& 9& 5& 1\\ 4& 1& 3& 1& 4& 1& 3\end{array}\right]

'replicate' padding (repeating the edge values):

\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}3& 3& 3& 1& 4& 4& 4\\ 3& 3& 3& 1& 4& 4& 4\\ 3& 3& 3& 1& 4& 4& 4\\ 1& 1& 1& 5& 9& 9& 9\\ 2& 2& 2& 6& 5& 5& 5\\ 2& 2& 2& 6& 5& 5& 5\\ 2& 2& 2& 6& 5& 5& 5\end{array}\right]
At training time, Weights is a FilterSize(1)-by-FilterSize(2)-by-FilterSize(3)-by-NumChannels-by-NumFilters array.
At training time, Bias is a 1-by-1-by-1-by-NumFilters array.
Create a 3-D convolution layer with 16 filters, each with a height, width, and depth of 5. Use a stride (step size) of 4 in all three directions.
layer = convolution3dLayer(5,16,'Stride',4)
FilterSize: [5 5 5]
DilationFactor: [1 1 1]
Include a 3-D convolution layer in a Layer array.
Create a 3-D convolutional layer with 32 filters, each with a height, width, and depth of 5. Specify the weights initializer to be the He initializer.
Create a convolutional layer with 32 filters, each with a height, width, and depth of 5. Specify initializers that sample the weights and biases from a Gaussian distribution with a standard deviation of 0.0001.
Create a 3-D convolutional layer compatible with color images. Set the weights and bias to W and b in the MAT file Conv3dWeights.mat respectively.
Weights: [5-D double]
Bias: [1x1x1x32 double]
Suppose the size of the input is 28-by-28-by-28-by-1. Create a 3-D convolutional layer with 16 filters, each with a height of 6, a width of 4, and a depth of 5. Set the stride in all dimensions to 4.
Make sure the convolution covers the input completely. For the convolution to fully cover the input, the output dimensions must be integer numbers. When there is no dilation, the i-th output dimension is calculated as (imageSize(i) - filterSize(i) + padding(i)) / stride(i) + 1.
For the vertical output dimension to be an integer, two rows of padding are required: (28 – 6 + 2)/4 + 1 = 7. Distribute the padding symmetrically by adding one row of padding at the top and bottom of the image.
For the horizontal output dimension to be an integer, no padding is required: (28 – 4 + 0)/4 + 1 = 7.
For the depth output dimension to be an integer, one plane of padding is required: (28 – 5 + 1)/4 + 1 = 7. You must distribute the padding asymmetrically across the front and back of the image. This example adds one plane of padding to the back of the image.
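The three calculations above all use the same formula; a small sketch (Python, for illustration only) makes the bookkeeping explicit:

```python
def conv_output_size(image_size, filter_size, stride, pre_pad, post_pad):
    # Output length along one dimension: (N - F + P) / S + 1, where
    # P = pre_pad + post_pad is the total padding in that dimension.
    span = image_size - filter_size + pre_pad + post_pad
    if span % stride != 0:
        raise ValueError("convolution does not cover the input exactly")
    return span // stride + 1
```

Plugging in the three dimensions of this example (28 with filters 6, 4, and 5) reproduces the output size of 7 in each case.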
Construct the convolutional layer. Specify 'Padding' as a 2-by-3 matrix. The first row specifies prepadding and the second row specifies postpadding in the three dimensions.
layer = convolution3dLayer([6 4 5],16,'Stride',4,'Padding',[1 0 0;1 0 1])
A convolutional layer applies sliding convolutional filters to the input. A 3-D convolutional layer extends the functionality of a 2-D convolutional layer to a third dimension, depth. The layer convolves the input by moving the filters along the input vertically, horizontally, and along the depth, computing the dot product of the weights and the input, and then adding a bias term. To learn more, see the definition of convolutional layer on the convolution2dLayer reference page.
convolution2dLayer | globalAveragePooling3dLayer | maxPooling3dLayer | image3dInputLayer | averagePooling3dLayer
|
Transform lists of positions
This routine transforms positions stored in position lists. Transformations are defined either by a set of 6 coefficients for the linear transform, by an algebraic expression given by you, by using a forward or inverse mapping from a TRANSFORM structure, or by a mapping between two frames stored in a WCS component.
tranlist inlist outlist trtype
EPOCHIN = _DOUBLE (Read)
If a "Sky Co-ordinate System" specification is supplied (using parameter FRAMEIN) for a celestial co-ordinate system, then an epoch value is needed to qualify it. This is the epoch at which the supplied sky positions were determined. It should be given as a decimal years value, with or without decimal places ("1996.8" for example). Such values are interpreted as a Besselian epoch if less than 1984.0 and as a Julian epoch otherwise. [Dynamic]
EPOCHOUT = _DOUBLE (Read)
If a "Sky Co-ordinate System" specification is supplied (using parameter FRAMEOUT) for a celestial co-ordinate system, then an epoch value is needed to qualify it. This is the epoch at which the supplied sky positions were determined. It should be given as a decimal years value, with or without decimal places ("1996.8" for example). Such values are interpreted as a Besselian epoch if less than 1984.0 and as a Julian epoch otherwise. [Dynamic]
These parameters supply the values of "sub-expressions" used in the expressions XFOR and YFOR. These parameters should be used when repeated expressions are present in complex transformations. Sub-expressions may contain references to other sub-expressions and constants (PA-PZ). An example of using sub-expressions is:
      XFOR > PA*ASIND(FX/PA)*X/FX
      YFOR > PA*ASIND(FX/PA)*Y/FX
      FX > SQRT(X*X+Y*Y)
FORWARD = _LOGICAL (Read)
If TRTYPE="STRUCT" is chosen then this parameter’s value controls whether the forward or inverse mapping in the transform structure is used. [TRUE]
FRAMEIN = LITERAL (Read)
If TRTYPE="WCS" then the transformation is a mapping from the frame specified by this parameter to that specified by the FRAMEOUT parameter. The value of this parameter can be one of the following:
A domain name such as SKY, AXIS, PIXEL, etc.
A "Sky Co-ordinate System" (SCS) value such as EQUAT(J2000) (see section "Sky Co-ordinate Systems" in SUN/95).
A domain name is usually the most suitable choice. [PIXEL]
FRAMEOUT = LITERAL (Read)
If TRTYPE="WCS" then the transformation is a mapping from the frame specified by the FRAMEIN parameter to that specified by this parameter. The value of this parameter can be one of the following:
Null (!), indicating the Current frame.
INEXT = _LOGICAL (Read)
If NDFNAMES is TRUE and the transformation is to be specified using a WCS component (TRTYPE="WCS"), then this parameter controls whether or not the WCS component should be located in each of the NDFs. If set FALSE, the WCSFILE parameter will be used.
If NDFNAMES is TRUE and the transformation is to be specified using a TRANSFORM structure (TRTYPE="STRUCT") then this parameter controls whether or not the structure should be located in the CCDPACK extension of each of the images. If set FALSE, the TRANSFORM parameter will be used.
If this option is chosen then the WCS component or transform structure in EACH image will be applied to the associated position list. So for instance if you have a set of registered images and positions these may be transformed all at once to and from the reference coordinate system. [TRUE]
This parameter is used to access the names of the lists which contain the positions and, if NDFNAMES is TRUE, the names of the associated images. If NDFNAMES is TRUE the names of the position lists are assumed to be stored in the extension of the images (in the CCDPACK extension item CURRENT_LIST) and the names of the images themselves should be given in response (and may include wildcards).
If NDFNAMES is FALSE then the actual names of the position lists should be given. These may not use wildcards but may be specified using indirection (other CCDPACK position list processing routines will write the names of their results files into a file suitable for use in this manner); the indirection character is "^".
NAMELIST = _FILENAME
Only used if NDFNAMES is FALSE. This specifies the name of a file to contain a listing of the names of the output lists. This file may then be used to pass the names onto another CCDPACK application using indirection. [TRANLIST.LIS]
NDFNAMES = _LOGICAL (Read)
If TRUE then the routine will assume that the names of the position lists are stored in the NDF CCDPACK extensions under the item "CURRENT_LIST". The names will be present in the extension if the positions were located using a CCDPACK application (such as FINDOBJ). Using this facility allows the transparent propagation of position lists through processing chains.
If a global value for this parameter has been set using CCDSETUP then that value will be used. [TRUE]
OUTLIST = FILENAME (Write)
A list of names specifying the result files. The names of the lists may use modifications of the input names (image names if available otherwise the names of the position lists). So if you want to call the output lists the same name as the input images except to add a type use:
      OUTLIST > *.FIND
If no image names are given (NDFNAMES is FALSE) then if you want to change the extension of the files (from ".CENT" to ".TRAN" in this case) use:
      OUTLIST > *|.CENT|.TRAN|
Or alternatively you can use an explicit list of names. These may use indirection elements as well as names separated by commas.
These parameters supply the values of constants used in the expressions XFOR and YFOR. Using parameters allows the substitution of repeated constants (with extended precisions?) using one reference. It allows easy modification of parameterised expressions (expressions say with an adjustable centre) provided the application has not been used to apply a new transform using expressions. The parameter PI has a default value of 3.14159265359D0. An example of using parameters is:
      FR > SQRT(FX*FX+FY*FY)
      FA > ATAN2D(-FY,FX)
      FX > X-PA
      FY > Y-PB
The form of the transformation which is to be applied to the positions in the input lists. This can take the values COEFF, EXPRES, WCS or STRUCT, or unique abbreviations of these.
COEFF means that a linear transformation of the form:
      X' = A + B*X + C*Y
      Y' = D + E*X + F*Y
is to be applied to the data. In this case a prompt for the values of the coefficients A-F is made.
EXPRES indicates that you want to supply algebraic-like expressions to transform the data. In this case the parameters XFOR and YFOR are used to obtain the expressions. Things like:
      XFOR > 2.5*X+LOG10(Y)
      YFOR > 2.5*Y+EXP(Y)
are allowed. The expression functions must be in terms of X and Y. For a full set of possible functions see SUN/61 (TRANSFORM).
WCS means that the transformation will be taken from the WCS component of an image. In this case the name of the image containing the WCS component should be supplied (this will be picked up automatically through the association of an image and a position list if NDFNAMES and INEXT are both TRUE). The transformation will be that between the frames defined by the FRAMEIN and FRAMEOUT parameters.
STRUCT signifies that a transform structure (probably created by REGISTER or CCDEDIT) is to be applied to the data. In this case the name of the object containing the structure should be supplied (this will be picked up automatically through the association of an image and a position list if NDFNAMES and INEXT are both TRUE) and whether to use the forward or inverse mappings (the FORWARD parameter). [COEFF]
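For the COEFF case, applying the six coefficients is simple arithmetic; a sketch in Python (illustrative only, not part of CCDPACK):

```python
def linear_transform(tr, x, y):
    # tr = [A, B, C, D, E, F] gives X' = A + B*X + C*Y and Y' = D + E*X + F*Y.
    a, b, c, d, e, f = tr
    return (a + b * x + c * y, d + e * x + f * y)
```

The identity coefficients [0,1,0,0,0,1] leave positions unchanged, and [10,1,0,20,0,1] shifts them by +10 in X and +20 in Y, as in the examples below.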
TR( 6 ) = _DOUBLE (Read)
If TRTYPE="COEFF" is chosen then this parameter supplies the values of the six coefficients A-F of the linear transformation:
      X' = A + B*X + C*Y
      Y' = D + E*X + F*Y
The default is the identity transformation. [0,1,0,0,0,1] [PA,PB,PC,PD,PE,PF]
TRANSFORM = TRN (Read)
If TRTYPE="STRUCT" and INEXT=FALSE then this parameter is used to access the HDS object which contains the transform structure. The standard place to store a transform structure (in CCDPACK) is the TRANSFORM item in the NDF's CCDPACK extension (.MORE.CCDPACK.TRANSFORM).
Only one structure can be used at a time.
WCSFILE = NDF (Read)
If TRTYPE="WCS" and INEXT is false, then this parameter gives the name of the image containing the WCS component which is to be used for the transformation.
XFOR = LITERAL (Read)
If TRTYPE="EXPRES" is chosen then this parameter specifies the transformation that maps to the new X coordinate. The expression can contain constants, arithmetic operators (+, -, *, /, **) and the functions described in SUN/61 (SIN, COS, TAN, etc.).
As an inverse mapping is not required in this application there is no need to use the X’=func(X,Y) form only func(X,Y) is required, however, the variables must be given as "X" and "Y".
YFOR = LITERAL (Read)
If TRTYPE="EXPRES" is chosen then this parameter specifies the transformation that maps to the new Y coordinate. The expression can contain constants, arithmetic operators (+, -, *, /, **) and the functions described in SUN/61 (SIN, COS, TAN, etc.).
As an inverse mapping is not required in this application there is no need to use the Y’=func(X,Y) form only func(X,Y) is required, however, the variables must be given as "X" and "Y".
tranlist inlist='*' outlist='*.reg' trtype=wcs framein=pixel
In this example all the images in the current directory are accessed and their associated position lists are opened. The WCS component of each image is used to transform the coordinates in the position lists from pixel coordinates to coordinates in the Current coordinate frame. The output lists are called image-name.reg and are associated with the images.
tranlist inlist="*" outlist="*.tran" trtype=struct forward=false
In this example transform structures in each of the images in the current directory are used to transform their associated position lists. The inverse mappings are used.
tranlist inlist='*_reduced' outlist='*.off' trtype=coeff tr='[10,1,0,20,0,1]'
In this example the position lists associated with the images '*_reduced' are transformed using the linear fit coefficients [10,1,0,20,0,1], resulting in a shift of all the positions in these lists of +10 in X and +20 in Y. The output lists are called image_name.off and are now associated with the images.
tranlist inlist='*_resam' outlist='*.rot' trtype=coeff tr='[0,0.707,-0.707,0,0.707,0.707]'
In this example a linear transformation is used to rotate the positions by 45 degrees about [0,0]. The linear coefficients for a rotation are specified as [0, cos, -sin, 0, sin, cos].
tranlist inlist=here outlist=reflected.dat trtype=express xfor=-x yfor=-y
In this example a transformation expression is used to reflect the positions stored in the list associated with image here about the X and Y axes. A similar effect could be achieved with trtype=coeff and tr=[0,-1,0,0,0,-1].
tranlist inlist=image_with_list outlist='*.tran' trtype=express
   xfor='(fx*(1d0+pa*(fx*fx+fy*fy)))*ps+px'
   yfor='(fy*(1d0+pa*(fx*fx+fy*fy)))*ps+py'
   fx='(x-px)/ps' fy='(y-py)/ps'
   pa=pincushion_distortion_factor px=X-centre-value py=Y-centre-value ps=scale_factor
In this example a general transformation (which is of the type used when applying pin cushion distortions) is applied to the position list associated with the image image_with_list. The transformation is parameterised with an offset and scale (converts pixel coordinates to one projection radius units) applied to the input coordinates and a pincushion distortion parameter pa.
tranlist ndfnames=false inlist=’"list1,list2,list3"’ outlist=’"outlist1,outlist2,outlist3"’ namelist=newfiles
In this example the input position lists are not associated with images (ndfnames=false) and have to be specified by name (no wildcards allowed). The output lists are also specified in this fashion, but the same effect could have been achieved with outlist=out* as the input list names are now used as modifiers for the output list names (the image names are always used when they are available – see previous examples). The names of the output lists are written to the file newfiles, which could be used to specify the names of these files to another application using indirection (e.g. inlist=^newfiles, with ndfnames=false again). The transformation type is not specified in this example and will be obtained by prompting.
Position list formats.
CCDPACK supports data in two formats.
CCDPACK format - the first three columns are interpreted as the following.
Column 1: an integer identifier
Column 2: the X position
Column 3: the Y position
The column one value must be an integer and is used to identify positions which are the same but which have different locations on different images. Values in any other (trailing) columns are usually ignored.
EXTERNAL format - positions are specified using just an X and a Y entry and no other entries.
This format is used by KAPPA applications such as CURSOR.
Comments may be included in a file using the characters "#" and "!". Columns may be separated by the use of commas or spaces.
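A reader for these two formats can be sketched in a few lines of Python (a hypothetical helper, not part of CCDPACK):

```python
def read_position_list(lines, ccdpack=True):
    """Parse position-list lines into tuples.

    CCDPACK format yields (id, x, y); EXTERNAL format yields (x, y).
    '#' and '!' start comments, commas or spaces separate columns,
    and trailing columns beyond those used are ignored.
    """
    positions = []
    for line in lines:
        # Strip comments introduced by '#' or '!'.
        for marker in ("#", "!"):
            line = line.split(marker, 1)[0]
        fields = line.replace(",", " ").split()
        if not fields:
            continue
        if ccdpack:
            positions.append((int(fields[0]), float(fields[1]), float(fields[2])))
        else:
            positions.append((float(fields[0]), float(fields[1])))
    return positions
```

Note how the integer identifier in column one survives parsing unchanged, since it is what ties the same feature together across different images.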
NDF extension items.
If NDFNAMES is TRUE then the item "CURRENT_LIST" of the .MORE.CCDPACK structure of the input NDFs will be located and assumed to contain the names of the lists whose positions are to be transformed. On exit this item will be updated to reference the name of the transformed list of positions.
This application may also access the item "TRANSFORM" from the NDF extensions if NDFNAMES and INEXT are TRUE and TRTYPE="STRUCT".
In this application data following the third column are copied without modification into the results files.
Retaining parameter values has the advantage of allowing you to define the default behaviour of the application but does mean that additional care needs to be taken when using the application on new datasets or after a break of some time. The intrinsic default behaviour of the application may be restored by using the RESET keyword on the command line.
Certain parameters (LOGTO, LOGFILE and NDFNAMES) have global values. These global values will always take precedence, except when an assignment is made on the command line. Global values may be set and reset using the CCDSETUP and CCDCLEAR commands.
|
05.01 What Is Machine Learning
Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush. The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help understand data. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain.
Understanding the problem setting in machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here.
At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data. This is further subdivided into classification tasks and regression tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types of supervised learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself." These models include tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data. We will see examples of both types of unsupervised learning in the following section.
In addition, there are so-called semi-supervised learning methods, which fall somewhere between supervised learning and unsupervised learning. Semi-supervised learning methods are often useful when only incomplete labels are available.
To make these ideas more concrete, let's take a look at a few very simple examples of a machine learning task. These examples are meant to give an intuitive, non-quantitative overview of the types of machine learning tasks we will be looking at in this chapter. In later sections, we will go into more depth regarding the particular models and how they are used. For a preview of these more technical aspects, you can find the Python source that generates the following figures in the Appendix: Figure Code.
We will first take a look at a simple classification task, in which you are given a set of labeled points and want to use these to classify some unlabeled points.
Imagine that we have the data shown in this figure:
Here we have two-dimensional data: that is, we have two features for each point, represented by the (x,y) positions of the points on the plane. In addition, we have one of two class labels for each point, here represented by the colors of the points. From these features and labels, we would like to create a model that will let us decide whether a new point should be labeled "blue" or "red."
There are a number of possible models for such a classification task, but here we will use an extremely simple one. We will make the assumption that the two groups can be separated by drawing a straight line through the plane between them, such that points on each side of the line fall in the same group. Here the model is a quantitative version of the statement "a straight line separates the classes", while the model parameters are the particular numbers describing the location and orientation of that line for our data. The optimal values for these model parameters are learned from the data (this is the "learning" in machine learning), which is often called training the model.
The following figure shows a visual representation of what the trained model looks like for this data:
Now that this model has been trained, it can be generalized to new, unlabeled data. In other words, we can take a new set of data, draw this model line through it, and assign labels to the new points based on this model. This stage is usually called prediction. See the following figure:
This is the basic idea of a classification task in machine learning, where "classification" indicates that the data has discrete class labels. At first glance this may look fairly trivial: it would be relatively easy to simply look at this data and draw such a discriminatory line to accomplish this classification. A benefit of the machine learning approach, however, is that it can generalize to much larger datasets in many more dimensions.
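As a concrete, if toy, illustration of the train/predict cycle, here is a minimal nearest-centroid classifier in pure Python; its decision boundary between two classes is exactly a straight line (the perpendicular bisector of the class centroids). This is only a sketch — the figures in this section were produced with scikit-learn models, not this code:

```python
# Minimal "train then predict" sketch: a nearest-centroid classifier.
# The boundary between two classes is the perpendicular bisector of
# their centroids, i.e. a straight line through the plane.

def train(points, labels):
    """Learn one centroid per class from labeled 2-D points."""
    sums = {}
    for (x, y), label in zip(points, labels):
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest class centroid."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2 +
                               (centroids[lab][1] - py) ** 2)

model = train([(0, 0), (1, 1), (4, 4), (5, 5)],
              ["blue", "blue", "red", "red"])
print(predict(model, (0.5, 0.8)))  # blue
print(predict(model, (4.6, 4.2)))  # red
```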
For example, this is similar to the task of automated spam detection for email; in this case, we might use the following features and labels:
feature 1, feature 2, etc.
\to
normalized counts of important words or phrases ("Viagra", "Nigerian prince", etc.)
label
\to
"spam" or "not spam"
For the training set, these labels might be determined by individual inspection of a small representative sample of emails; for the remaining emails, the label would be determined using the model. For a suitably trained classification algorithm with enough well-constructed features (typically thousands or millions of words or phrases), this type of approach can be very effective. We will see an example of such text-based classification in In Depth: Naive Bayes Classification.
Some important classification algorithms that we will discuss in more detail are Gaussian naive Bayes (see In Depth: Naive Bayes Classification), support vector machines (see In-Depth: Support Vector Machines), and random forest classification (see In-Depth: Decision Trees and Random Forests).
In contrast with the discrete labels of a classification algorithm, we will next look at a simple regression task in which the labels are continuous quantities.
Consider the data shown in the following figure, which consists of a set of points each with a continuous label:
As with the classification example, we have two-dimensional data: that is, there are two features describing each data point. The color of each point represents the continuous label for that point.
There are a number of possible regression models we might use for this type of data, but here we will use a simple linear regression to predict the points. This simple linear regression model assumes that if we treat the label as a third spatial dimension, we can fit a plane to the data. This is a higher-level generalization of the well-known problem of fitting a line to data with two coordinates.
We can visualize this setup as shown in the following figure:
Notice that the feature 1-feature 2 plane here is the same as in the two-dimensional plot from before; in this case, however, we have represented the labels by both color and three-dimensional axis position. From this view, it seems reasonable that fitting a plane through this three-dimensional data would allow us to predict the expected label for any set of input parameters. Returning to the two-dimensional projection, when we fit such a plane we get the result shown in the following figure:
This plane of fit gives us what we need to predict labels for new points. Visually, we find the results shown in the following figure:
As with the classification example, this may seem rather trivial in a low number of dimensions. But the power of these methods is that they can be straightforwardly applied and evaluated in the case of data with many, many features.
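The plane fit above generalizes the familiar one-feature case, whose least-squares solution has a closed form. A small sketch on made-up data:

```python
# Ordinary least squares for a line, the base case that the plane fit
# generalizes.  The data below are fabricated so the fit is exact.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)            # 2.0 1.0
```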
For example, this is similar to the task of computing the distance to galaxies observed through a telescope—in this case, we might use the following features and labels:
feature 1, feature 2, etc.
\to
brightness of each galaxy at one of several wavelengths or colors
label
\to
distance or redshift of the galaxy
The distances for a small number of these galaxies might be determined through an independent set of (typically more expensive) observations. Distances to remaining galaxies could then be estimated using a suitable regression model, without the need to employ the more expensive observation across the entire set. In astronomy circles, this is known as the "photometric redshift" problem.
Some important regression algorithms that we will discuss are linear regression (see In Depth: Linear Regression), support vector machines (see In-Depth: Support Vector Machines), and random forest regression (see In-Depth: Decision Trees and Random Forests).
Clustering: Inferring labels on unlabeled data
The classification and regression illustrations we just looked at are examples of supervised learning algorithms, in which we are trying to build a model that will predict labels for new data. Unsupervised learning involves models that describe data without reference to any known labels.
One common case of unsupervised learning is "clustering," in which data is automatically assigned to some number of discrete groups. For example, we might have some two-dimensional data like that shown in the following figure:
By eye, it is clear that each of these points is part of a distinct group. Given this input, a clustering model will use the intrinsic structure of the data to determine which points are related. Using the very fast and intuitive k-means algorithm (see In Depth: K-Means Clustering), we find the clusters shown in the following figure:
k-means fits a model consisting of k cluster centers; the optimal centers are assumed to be those that minimize the distance of each point from its assigned center. Again, this might seem like a trivial exercise in two dimensions, but as our data becomes larger and more complex, such clustering algorithms can be employed to extract useful information from the dataset.
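The description above translates almost directly into code. This bare-bones loop (not scikit-learn's implementation, and with fixed rather than carefully chosen initial centers) alternates between assigning each point to its nearest center and moving each center to the mean of its assigned points:

```python
# Bare-bones k-means on made-up 2-D data: assignment step, then
# center-update step, repeated for a fixed number of iterations.

def kmeans(points, centers, steps=10):
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        centers = [tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, centers=[(0, 0), (10, 10)])
print(centers)  # roughly [(0.33, 0.33), (10.33, 10.33)]
```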
We will discuss the k-means algorithm in more depth in In Depth: K-Means Clustering. Other important clustering algorithms include Gaussian mixture models (See In Depth: Gaussian Mixture Models) and spectral clustering (See Scikit-Learn's clustering documentation).
Dimensionality reduction: Inferring structure of unlabeled data
Dimensionality reduction is another example of an unsupervised algorithm, in which labels or other information are inferred from the structure of the dataset itself. Dimensionality reduction is a bit more abstract than the examples we looked at before, but generally it seeks to pull out some low-dimensional representation of data that in some way preserves relevant qualities of the full dataset. Different dimensionality reduction routines measure these relevant qualities in different ways, as we will see in In-Depth: Manifold Learning.
As an example of this, consider the data shown in the following figure:
Visually, it is clear that there is some structure in this data: it is drawn from a one-dimensional line that is arranged in a spiral within this two-dimensional space. In a sense, you could say that this data is "intrinsically" only one dimensional, though this one-dimensional data is embedded in higher-dimensional space. A suitable dimensionality reduction model in this case would be sensitive to this nonlinear embedded structure, and be able to pull out this lower-dimensionality representation.
The following figure shows a visualization of the results of the Isomap algorithm, a manifold learning algorithm that does exactly this:
Notice that the colors (which represent the extracted one-dimensional latent variable) change uniformly along the spiral, which indicates that the algorithm did in fact detect the structure we saw by eye. As with the previous examples, the power of dimensionality reduction algorithms becomes clearer in higher-dimensional cases. For example, we might wish to visualize important relationships within a dataset that has 100 or 1,000 features. Visualizing 1,000-dimensional data is a challenge, and one way we can make this more manageable is to use a dimensionality reduction technique to reduce the data to two or three dimensions.
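Isomap itself involves estimating geodesic distances and does not fit in a few lines, but the linear special case of dimensionality reduction (principal component analysis, covered later) can be sketched: find the 2-D direction of maximum variance and read off one coordinate along it. The points below are synthetic:

```python
import math

# 2-D PCA sketch: the leading eigenvector of the 2x2 covariance matrix
# has a closed-form angle, and projecting onto it gives a 1-D
# representation of intrinsically one-dimensional data.

def principal_axis(points):
    """Unit vector along the direction of maximum variance (2-D PCA)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    return math.cos(theta), math.sin(theta)

points = [(t, 2 * t) for t in range(5)]        # intrinsically 1-D data
ux, uy = principal_axis(points)
coords = [x * ux + y * uy for x, y in points]  # recovered 1-D coordinates
```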
Some important dimensionality reduction algorithms that we will discuss are principal component analysis (see In Depth: Principal Component Analysis) and various manifold learning algorithms, including Isomap and locally linear embedding (See In-Depth: Manifold Learning).
Here we have seen a few simple examples of some of the basic types of machine learning approaches. Needless to say, there are a number of important practical details that we have glossed over, but I hope this section was enough to give you a basic idea of what types of problems machine learning approaches can solve.
In short, we saw the following:
Supervised learning: Models that can predict labels based on labeled training data
Classification: Models that predict labels as two or more discrete categories
Regression: Models that predict continuous labels
Unsupervised learning: Models that identify structure in unlabeled data
Clustering: Models that detect and identify distinct groups in the data
Dimensionality reduction: Models that detect and identify lower-dimensional structure in higher-dimensional data
In the following sections we will go into much greater depth within these categories, and see some more interesting examples of where these concepts can be useful.
All of the figures in the preceding discussion are generated based on actual machine learning computations; the code behind them can be found in Appendix: Figure Code.
|
Global Constraint Catalog: Kpattern_sequencing
\mathrm{𝚌𝚞𝚖𝚞𝚕𝚊𝚝𝚒𝚟𝚎}_\mathrm{𝚌𝚘𝚗𝚟𝚎𝚡}
A constraint allowing one to express the pattern sequencing problem as a single global constraint. The pattern sequencing problem [FinkVoss99] can be described as follows: given a 0-1 matrix in which each column
j
\left(1\le j\le p\right)
corresponds to a product required by the customers and each row
i
\left(1\le i\le c\right)
corresponds to the order of a particular customer (The entry
{c}_{ij}
is equal to 1 if and only if customer
i
has ordered some quantity of product
j
.), the objective is to find a permutation of the products such that the maximum number of open orders at any point in the sequence is minimised. Order
i
is open at point
k
in the production sequence if there is a product required in order
i
that appears at or before position
k
in the sequence and also a product that appears at or after position
k
in the sequence.
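The "open orders" count that the problem minimises can be evaluated directly for a given product permutation. A sketch (the matrix below is made up for illustration):

```python
# Evaluate the maximum number of simultaneously open orders for one
# permutation of the products; the minimisation over permutations is
# the (hard) pattern sequencing problem itself.

def max_open_orders(matrix, perm):
    """matrix[i][j] = 1 iff customer i ordered product j; perm is a
    permutation of product indices.  Order i is open at position k if
    one of its products is placed at or before k and one at or after k."""
    positions = [[k for k, j in enumerate(perm) if row[j]] for row in matrix]
    best = 0
    for k in range(len(perm)):
        open_orders = sum(1 for pos in positions
                          if pos and pos[0] <= k <= pos[-1])
        best = max(best, open_orders)
    return best

matrix = [[1, 0, 1],   # customer 0 orders products 0 and 2
          [0, 1, 1],   # customer 1 orders products 1 and 2
          [1, 1, 0]]   # customer 2 orders products 0 and 1
print(max_open_orders(matrix, [0, 2, 1]))  # 3
```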
|
MapleUnique - Maple Help
return unique copy of an object in external code
MapleUnique(kv, s)
The MapleUnique function processes the Maple expression, s, and returns the unique normalized copy of that expression. For example, if you create the number num = one-billion, and then compute the number val = 2*500-million, an address comparison of num and val does not indicate equality. However, after calling num = MapleUnique(kv,num); and val = MapleUnique(kv,val);, both num and val point to the same memory.
Because Maple maintains a table of unique elements, only one copy of most objects exists in memory. The expression
{x}^{2}+x
contains two references to
x
, but both point to the same object. Similarly,
[[1,2],[1,2]]
contains two sublists,
[1,2]
. The surrounding list maintains two pointers to the same sublist. It is usually not safe to directly modify any object that has been processed by Maple. For example, if the sublist were changed from
[1,2]
to
[3,2]
, the parent list also changes. In fact, all references to the list
[1,2]
in Maple would then point to
[3,2]
. This would cause many problems. For example, table lookups of the
[1,2]
element would be changed to look up the
[3,2]
element. It is safe to directly modify only data blocks of mutable objects like rtables and tables. Never change strings returned by MapleToString, or expression sequences like the args object given to the external entry point.
Sub TestConvertToMaple(ByVal kv As Long)
Dim list As Long, args As Long
list = MapleListAlloc(kv, 15)
'MapleGcProtect kv, list
MapleListAssign kv, list, 1, ToMapleBoolean(kv, 1)
MapleListAssign kv, list, 3, ToMapleBoolean(kv, -1)
MapleListAssign kv, list, 5, ToMapleChar(kv, "M")
MapleListAssign kv, list, 6, ToMapleComplex(kv, 1.1, 2.2)
MapleListAssign kv, list, 7, ToMapleComplexFloat(kv, _
ToMapleFloat(kv, 3.3), ToMapleFloat(kv, 4.4))
MapleListAssign kv, list, 8, ToMapleInteger(kv, 98765)
MapleListAssign kv, list, 9, ToMapleFloat(kv, 3.1415)
args = NewMapleExpressionSequence(kv, 2)
MapleExpseqAssign kv, args, 1, ToMapleName(kv, "x", True)
MapleListAssign kv, list, 10, ToMapleFunction(kv, _
ToMapleName(kv, "int", True), args)
MapleListAssign kv, list, 11, ToMapleNULLPointer(kv)
MapleListAssign kv, list, 12, ToMapleRelation(kv, ">", _
ToMapleName(kv, "X", True), ToMapleInteger(kv, 1))
MapleListAssign kv, list, 13, ToMapleString(kv, "A string")
MapleListAssign kv, list, 14, ToMapleUneval(kv, _
ToMapleName(kv, "Z", True))
MapleListAssign kv, list, 15, ToMapleNULL(kv)
'note this will get removed when the list is flattened
MapleALGEB_Printf1 kv, "%a", MapleUnique(kv, list)
End Sub
|
Global Constraint Catalog: Ksweep
\mathrm{𝚌𝚞𝚖𝚞𝚕𝚊𝚝𝚒𝚟𝚎𝚜}
\mathrm{𝚍𝚒𝚏𝚏𝚗}
\mathrm{𝚐𝚎𝚘𝚜𝚝}
\mathrm{𝚐𝚎𝚘𝚜𝚝}_\mathrm{𝚝𝚒𝚖𝚎}
\mathrm{𝚜𝚘𝚏𝚝}_\mathrm{𝚊𝚕𝚕}_\mathrm{𝚎𝚚𝚞𝚊𝚕}_\mathrm{𝚖𝚒𝚗}_\mathrm{𝚟𝚊𝚛}
\mathrm{𝚜𝚙𝚛𝚎𝚊𝚍}
\mathrm{𝚟𝚒𝚜𝚒𝚋𝚕𝚎}
A constraint for which the filtering algorithm may use a sweep algorithm. A sweep algorithm [PreparataShamos85] solves a problem by moving an imaginary object (usually a line, a plane or sometimes a point). The object does not move continuously, but only at particular points where we actually do something. A sweep algorithm uses the following two data structures:
A data structure called the sweep status, which contains information related to the current position of the object that moves,
A data structure named the event point series, which holds the events to process.
The algorithm initialises the sweep status for the initial position of the imaginary object. Then the object jumps from one event to the next event; each event is handled by updating the status of the sweep.
A first typical application reported in [BeldiceanuCarlsson01sweep] of the idea of sweep within the context of constraint programming is to aggregate several constraints that have two variables in common in order to perform more deduction. Let:
𝚇
𝚈
be two distinct variables,
{C}_{1}\left({𝚅}_{11},\cdots ,{𝚅}_{1{n}_{1}}\right),\cdots ,{C}_{m}\left({𝚅}_{m1},\cdots ,{𝚅}_{m{n}_{m}}\right)
m
constraints such that all constraints mention
𝚇
𝚈
The sweep algorithm tries to adjust the minimum value of
𝚇
with respect to the conjunction of the previous constraints by moving a sweep-line from the minimum value of
𝚇
to its maximum value. It accumulates within the sweep-line status the values to be currently removed from the domain of
𝚈
. If, for the current position
\Delta
of the sweep-line, all values of
𝚈
have to be removed, then the algorithm removes value
\Delta
from the domain of
𝚇
. The events to process correspond to the starts and ends of forbidden two-dimensional regions with respect to constraints
{C}_{1},\cdots ,{C}_{m}
𝚇
𝚈
. Forbidden regions are a way to represent constraints
{C}_{1},\cdots ,{C}_{m}
that is suited for this sweep algorithm. A forbidden region of the constraint
{C}_{i}
𝚇
𝚈
is an ordered pair
\left(\left[{F}_{x}^{-},{F}_{x}^{+}\right],\left[{F}_{y}^{-},{F}_{y}^{+}\right]\right)
of intervals such that:
\forall x\in \left[{F}_{x}^{-},{F}_{x}^{+}\right],\forall y\in \left[{F}_{y}^{-},{F}_{y}^{+}\right]:{C}_{i}\left({𝚅}_{i1},\cdots ,{𝚅}_{i{n}_{i}}\right)
has no solution in which
𝚇=x
𝚈=y
Figure 3.7.78 shows five constraints and their respective forbidden regions (in pink) with respect to two given variables
𝚇
𝚈
and their domains. The first constraint requires that
𝚇
𝚈
𝚁
be pairwise distinct. Constraints (B) and (C) are usual arithmetic constraints. Within the context of continuous variables, Chabert et al. [ChabertBeldiceanu10] show how to compute a forbidden region that contains a given unfeasible point for numerical constraints with arbitrary mathematical expressions. Constraint (D) can be interpreted as requiring that two rectangles of respective origins
\left(𝚇,𝚈\right)
\left(𝚃,𝚄\right)
and sizes
\left(2,4\right)
\left(3,2\right)
do not overlap. Finally, constraint (E) is a parity constraint of the sum of
𝚇
𝚈
Figure 3.7.78. Examples of forbidden regions (in pink) according to the two variables
𝚇
𝚈
𝚇\in \left[0,4\right]
𝚈\in \left[0,4\right]
) for five constraints
We illustrate the use of the sweep algorithm on a concrete example. Assume that we want to find out the minimum value of variable
𝚇
with respect to the conjunction of the five constraints that were introduced by Figure 3.7.78, that is versus the following conjunction of constraints:
\left\{\begin{array}{cc}𝚇\in 0..4,𝚈\in 0..4,𝚁\in 0..9,𝚃\in 0..2,𝚄\in 0..3\hfill & \\ \mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(〈𝚇,𝚈,𝚁〉\right)\hfill & \hfill \left(\mathrm{A}\right)\\ |𝚇-𝚈|>2\hfill & \hfill \left(\mathrm{B}\right)\\ 𝚇+2𝚈-1<𝚁\hfill & \hfill \left(\mathrm{C}\right)\\ 𝚇+1<𝚃\vee 𝚃+2<𝚇\vee 𝚈+3<𝚄\vee 𝚄+1<𝚈\hfill & \hfill \left(\mathrm{D}\right)\\ \left(𝚇+𝚈\right)\mathrm{mod}2=0\hfill & \hfill \left(\mathrm{E}\right)\end{array}\right\}
Figure 3.7.79 shows the content of the sweep-line status (i.e., the forbidden values for
𝚈
according to the current position of the sweep-line) for different positions of the sweep-line. More precisely, the sweep-line status can be viewed as an array (see the rightmost part of Figure 3.7.79) which records for each possible value of
𝚈
the number of forbidden regions that currently intersect the sweep-line (see the leftmost part of Figure 3.7.79 where these forbidden regions are coloured in red). The smallest possible value of
𝚇
is 4, since this is the first position of the sweep-line where the sweep-line status contains a value of
𝚈
which is not forbidden (i.e.,
𝚇=4
𝚈=0
is not covered by any forbidden region).
Figure 3.7.79. Sweep-line status while sweeping through the potential values of variable
𝚇
(i.e., values from 0 to 4) until a potentially feasible point
𝚇=4
𝚈=0
wrt the five constraints (A), (B), (C), (D) and (E) is found; the sweep-line status (i.e., the rightmost column) records for a potential value
v
of variable
𝚇
and for each potential value
w
𝚈
how many constraints are violated when both
𝚇=v
𝚈=w
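The filtering idea can be sketched with constraints abstracted as forbidden rectangles. This version enumerates candidate sweep-line positions rather than processing start/end events, but it computes the same adjusted minimum; the rectangles below are made up for illustration and are not the exact regions of constraints (A)-(E):

```python
# Sweep sketch: move the line through the values of X in increasing
# order and return the first position at which at least one value of Y
# is outside every forbidden rectangle ((x1, x2), (y1, y2)).

def sweep_min_x(x_dom, y_dom, forbidden):
    """Smallest x in x_dom with some feasible y in y_dom, else None."""
    for x in x_dom:
        for y in y_dom:
            if not any(x1 <= x <= x2 and y1 <= y <= y2
                       for (x1, x2), (y1, y2) in forbidden):
                return x   # first feasible sweep-line position
    return None

forbidden = [((0, 3), (0, 4)),   # rules out x = 0..3 entirely
             ((4, 4), (1, 4))]   # at x = 4, only y = 0 survives
print(sweep_min_x(range(5), range(5), forbidden))  # 4
```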
A second similar application of the idea of sweep in the context of the cardinality operator [VanHentenryckDeville91], where all constraints have at least two variables in common, is reported in [BeldiceanuCarlssonSoftSweep01]. As before, each constraint
C
of the cardinality operator is defined by its forbidden regions with respect to a pair of variables
\left(𝚇,𝚈\right)
that occur in every constraint. In addition to that, a constraint
C
is also defined by its safe regions, where a safe region is the set of assignments to the pair
\left(𝚇,𝚈\right)
located in a rectangle such that the constraint always holds, no matter which values are taken by the other variables of
C
. Then the extended sweep algorithm filters the pair of variables
\left(𝚇,𝚈\right)
right from the beginning according to the minimum and maximum number of constraints of the cardinality operator that have to hold.
A third typical application reported in [BeldiceanuCarlssonPoderSadekTruchet07] and in [CarlssonBeldiceanuMartin08] of the idea of sweep within the context of multi-dimensional placement problems (see for instance the
\mathrm{𝚍𝚒𝚏𝚏𝚗}
\mathrm{𝚐𝚎𝚘𝚜𝚝}
constraints) for filtering each coordinate of the origin of an object
o
to place is as follows. To adjust the minimum (respectively maximum) value of a coordinate of the origin we perform a recursive traversal of the placement space in increasing (respectively decreasing) lexicographic order and skips infeasible points that are located in a multi-dimensional forbidden set. Each multi-dimensional forbidden set is computed from a constraint where object
o
occurs (for instance a non-overlapping constraint in the context of the
\mathrm{𝚍𝚒𝚏𝚏𝚗}
\mathrm{𝚐𝚎𝚘𝚜𝚝}
constraints).
|
Global Constraint Catalog: Cnclass
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
\mathrm{𝚗𝚌𝚕𝚊𝚜𝚜}\left(\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}\right)
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚕}-\mathrm{𝚒𝚗𝚝}\right)
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}
\mathrm{𝚍𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(𝚙-\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}\right)
|\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}|\ge 1
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\mathrm{𝚟𝚊𝚕}\right)
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂},\mathrm{𝚟𝚊𝚕}\right)
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}\ge 0
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}\le \mathrm{𝚖𝚒𝚗}\left(|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|,|\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}|\right)
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}\le
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝚟𝚊𝚛}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂},𝚙\right)
|\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}|\ge 2
Number of partitions of the collection
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}
such that at least one value is assigned to at least one variable of the collection
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\left(2,〈3,2,7,2,6〉,〈𝚙-〈1,3〉,𝚙-〈4〉,𝚙-〈2,6〉〉\right)
In the example, the values of the collection
〈3,2,7,2,6〉
occur within partitions
𝚙-〈1,3〉
𝚙-〈2,6〉
but not within
𝚙-〈4〉
Consequently, the
\mathrm{𝚗𝚌𝚕𝚊𝚜𝚜}
constraint holds since its first argument
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}
is set to value 2.
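Since NCLASS is functionally determined by the other two arguments, it can be computed directly. A sketch, checked against the example instance:

```python
# NCLASS = number of partition classes hit by at least one variable.

def nclass(variables, partitions):
    """Count partitions containing at least one assigned value."""
    return sum(1 for part in partitions
               if any(v in part for v in variables))

# Example instance: values 3 and 2, 6 hit two of the three classes;
# value 7 belongs to no class and is not counted.
print(nclass([3, 2, 7, 2, 6], [{1, 3}, {4}, {2, 6}]))  # 2
```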
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}>1
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}<|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}<
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}\right)
|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|>|\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}|
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}.𝚙
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
can be replaced by any other value that also belongs to the same partition of
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}
All occurrences of two distinct tuples of values in
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}.𝚙.\mathrm{𝚟𝚊𝚕}
can be swapped; all occurrences of a tuple of values in
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}.𝚙.\mathrm{𝚟𝚊𝚕}
can be renamed to any unused tuple of values.
Functional dependency:
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}
Extensible wrt.
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}=|\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}|
[Beldiceanu01], [BeldiceanuCarlssonThiel02].
\mathrm{𝚗𝚎𝚚𝚞𝚒𝚟𝚊𝚕𝚎𝚗𝚌𝚎}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}\in \mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}\mathrm{mod}\mathrm{𝚌𝚘𝚗𝚜𝚝𝚊𝚗𝚝}
\mathrm{𝚗𝚒𝚗𝚝𝚎𝚛𝚟𝚊𝚕}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}\in \mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}/\mathrm{𝚌𝚘𝚗𝚜𝚝𝚊𝚗𝚝}
\mathrm{𝚗𝚙𝚊𝚒𝚛}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}\in \mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
\mathrm{𝚙𝚊𝚒𝚛}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}
\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}\in \mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}
\mathrm{𝚒𝚗}_\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
characteristic of a constraint: partition.
constraint arguments: pure functional dependency.
constraint type: counting constraint, value partitioning constraint.
modelling: number of distinct equivalence classes, functional dependency.
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}\right)
\mathrm{𝚒𝚗}_\mathrm{𝚜𝚊𝚖𝚎}_\mathrm{𝚙𝚊𝚛𝚝𝚒𝚝𝚒𝚘𝚗}
\left(\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{1}.\mathrm{𝚟𝚊𝚛},\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎𝚜}\mathtt{2}.\mathrm{𝚟𝚊𝚛},\mathrm{𝙿𝙰𝚁𝚃𝙸𝚃𝙸𝙾𝙽𝚂}\right)
\mathrm{𝐍𝐒𝐂𝐂}
=\mathrm{𝙽𝙲𝙻𝙰𝚂𝚂}
Since we use the
\mathrm{𝐍𝐒𝐂𝐂}
graph property, we show the different strongly connected components of the final graph. Each strongly connected component corresponds to a class of values that was assigned to some variables of the
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
collection. We effectively use two classes of values that respectively correspond to values
\left\{3\right\}
\left\{2,6\right\}
. Note that we do not consider value 7 since it does not belong to the different classes of values we gave: all corresponding arc constraints do not hold.
\mathrm{𝚗𝚌𝚕𝚊𝚜𝚜}
|
Global Constraint Catalog: Crelaxed_sliding_sum
\mathrm{𝚛𝚎𝚕𝚊𝚡𝚎𝚍}_\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚜𝚞𝚖}\left(\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃},\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃},\mathrm{𝙻𝙾𝚆},\mathrm{𝚄𝙿},\mathrm{𝚂𝙴𝚀},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\right)
\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝙻𝙾𝚆}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝚄𝙿}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝚂𝙴𝚀}
\mathrm{𝚒𝚗𝚝}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}\ge 0
\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}\ge \mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}
\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|-\mathrm{𝚂𝙴𝚀}+1
\mathrm{𝚄𝙿}\ge \mathrm{𝙻𝙾𝚆}
\mathrm{𝚂𝙴𝚀}>0
\mathrm{𝚂𝙴𝚀}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝚟𝚊𝚛}\right)
There are between
\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}
\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}
sequences of
\mathrm{𝚂𝙴𝚀}
consecutive variables of the collection
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
such that the sum of the variables of the sequence is in
\left[\mathrm{𝙻𝙾𝚆},\mathrm{𝚄𝙿}\right]
\left(3,4,3,7,4,〈2,4,2,0,0,3,4〉\right)
Within the sequence
2420034
we have exactly 3 subsequences of
\mathrm{𝚂𝙴𝚀}=4
consecutive values such that their sum is located within the interval
\left[\mathrm{𝙻𝙾𝚆},\mathrm{𝚄𝙿}\right]=\left[3,7\right]
: subsequences
4200
2003
0034
. Consequently the
\mathrm{𝚛𝚎𝚕𝚊𝚡𝚎𝚍}_\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚜𝚞𝚖}
constraint holds since the number of such subsequences is located within the interval
\left[\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃},\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}\right]=\left[3,4\right]
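The definition above is easy to check directly. A sketch, using the instance from the example:

```python
# Count sliding windows of SEQ consecutive variables whose sum lies in
# [LOW, UP], and require the count to lie in [ATLEAST, ATMOST].

def relaxed_sliding_sum(atleast, atmost, low, up, seq, variables):
    count = sum(1 for i in range(len(variables) - seq + 1)
                if low <= sum(variables[i:i + seq]) <= up)
    return atleast <= count <= atmost

# Windows of length 4 of <2,4,2,0,0,3,4>: sums 8, 6, 5, 7 -> 3 windows
# in [3, 7], and 3 lies in [3, 4], so the constraint holds.
print(relaxed_sliding_sum(3, 4, 3, 7, 4, [2, 4, 2, 0, 0, 3, 4]))  # True
```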
\mathrm{𝚂𝙴𝚀}>1
\mathrm{𝚂𝙴𝚀}<|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}\right)>1
\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}>0\vee \mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}<|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|-\mathrm{𝚂𝙴𝚀}+1
\mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}
\ge 0
\mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}
\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|-\mathrm{𝚂𝙴𝚀}+1
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
[BeldiceanuCarlsson01].
\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚜𝚞𝚖}
\mathrm{𝚜𝚞𝚖}_\mathrm{𝚌𝚝𝚛}
(the sliding constraint).
constraint type: sliding sequence constraint, soft constraint, relaxation.
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝑃𝐴𝑇𝐻}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}
\mathrm{𝚂𝙴𝚀}
•
\mathrm{𝚜𝚞𝚖}_\mathrm{𝚌𝚝𝚛}
\left(\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗},\ge ,\mathrm{𝙻𝙾𝚆}\right)
•
\mathrm{𝚜𝚞𝚖}_\mathrm{𝚌𝚝𝚛}
\left(\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗},\le ,\mathrm{𝚄𝙿}\right)
•
\mathrm{𝐍𝐀𝐑𝐂}
\ge \mathrm{𝙰𝚃𝙻𝙴𝙰𝚂𝚃}
•
\mathrm{𝐍𝐀𝐑𝐂}
\le \mathrm{𝙰𝚃𝙼𝙾𝚂𝚃}
Parts (A) and (B) of Figure 5.331.1 respectively show the initial and final graph associated with the Example slot. For each vertex of the graph we show its corresponding position within the collection of variables. The constraint associated with each arc corresponds to a conjunction of two
\mathrm{𝚜𝚞𝚖}_\mathrm{𝚌𝚝𝚛}
constraints involving 4 consecutive variables. In Part (B), we did not put vertex 1 since the single arc constraint that mentions vertex 1 does not hold (i.e., the sum
2+4+2+0=8
is not located in interval
\left[3,7\right]
). However, the directed hypergraph contains 3 arcs, so the
\mathrm{𝚛𝚎𝚕𝚊𝚡𝚎𝚍}_\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚜𝚞𝚖}
constraint is satisfied since it was requested to have between 3 and 4 arcs.
\mathrm{𝚛𝚎𝚕𝚊𝚡𝚎𝚍}_\mathrm{𝚜𝚕𝚒𝚍𝚒𝚗𝚐}_\mathrm{𝚜𝚞𝚖}
\left(3,4,3,7,\mathbf{4},〈2,\mathbf{4},\mathbf{2},\mathbf{0},\mathbf{0},\mathbf{3},\mathbf{4}〉\right)
\mathrm{𝚂𝙴𝚀}=\mathbf{4}
vertices (e.g., the rightmost ellipse represents the constraint
0+0+3+4\in \left[3,7\right]
|
Key Concepts & Glossary | College Algebra | Course Hero
A matrix is a rectangular array of numbers. Entries are arranged in rows and columns.
The dimensions of a matrix refer to the number of rows and the number of columns. A
3\times 2
matrix has three rows and two columns.
We add and subtract matrices of equal dimensions by adding and subtracting corresponding entries of each matrix.
Scalar multiplication involves multiplying each entry in a matrix by a constant.
Scalar multiplication is often required before addition or subtraction can occur.
Multiplying matrices is possible when inner dimensions are the same—the number of columns in the first matrix must match the number of rows in the second.
The product of two matrices,
A
and
B
, is obtained by multiplying each entry in row 1 of
A
by each entry in column 1 of
B
; then multiply each entry of row 1 of
A
by each entry in column 2 of
B
, and so on.
Many real-world problems can often be solved using matrices.
We can use a calculator to perform matrix operations after saving each matrix as a matrix variable.
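The row-by-column rule can also be stated compactly in code. A small illustrative sketch:

```python
# Matrix product: entry (i, j) is the sum of products of row i of A
# with column j of B; defined only when A's column count equals B's
# row count (the "inner dimensions" condition).

def matmul(A, B):
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```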
column: a set of numbers aligned vertically in a matrix
entry: an element, coefficient, or constant in a matrix
matrix: a rectangular array of numbers
row: a set of numbers aligned horizontally in a matrix
scalar multiple: an entry of a matrix that has been multiplied by a scalar
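The operations summarized above can be sketched with plain Python lists (an illustrative sketch of the definitions; no library is assumed and all names are ours):

```python
def mat_add(A, B):
    # Entrywise addition; both matrices must have equal dimensions.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(c, A):
    # Multiply each entry by the constant c.
    return [[c * a for a in row] for row in A]

def mat_mult(A, B):
    # Inner dimensions must match: columns of A == rows of B.
    n = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
mat_add(A, B)      # [[6, 8], [10, 12]]
scalar_mult(2, A)  # [[2, 4], [6, 8]]
mat_mult(A, B)     # [[19, 22], [43, 50]]
```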
|
Simplifying Expressions | Brilliant Math & Science Wiki
Contributors: Ashley Toh, Ashish Menon, Raj Magesh, and gayu garuda.
To simplify a mathematical expression is to represent it in the least complicated form possible. In general, the simplest form is one that uses the fundamental properties of numbers, exponents, algebraic rules, etc. to remove any duplication or redundancy from the expression. It is essentially the opposite of expanding an expression (e.g., with the distributive property).
Simplified expressions are significantly easier to work with than those that have not been simplified.
"Like terms" refer to terms whose variables are exactly the same, but may have different coefficients. For example, the terms
2xy
5xy
are alike as they have the same variable
xy
. The terms
2xy
2x
are not alike.
Combining like terms refers to adding (or subtracting) like terms together to make just one term.
Simplify
2xy + 5 xy
. Since
2xy
and
5xy
are like terms (with a variable part of
xy
), we can add their coefficients together to get
2xy + 5xy = (2+5) xy = 7 xy.
_\square
Simplify
5xy - 3xy
. Since
5xy
and
3xy
are like terms (with a variable part of
xy
), we can subtract their coefficients to get
5xy - 3xy = (5-3) xy = 2xy.
_\square
When there are multiple like terms, arrange the terms in order of decreasing degree and simplify.
What is
x^2 + 3 + 2x^2 - 4x + 7
? Simplify terms and state the degree of the polynomial.
Since
x^2
and
2x^2
are like terms (with a variable part of
x^2
), we can combine them; the coefficient of the first
x^2
term is
1.
The constants
3
and
7
are also like terms. The remaining terms are not alike. So
x^2 + 3 + 2x^2 - 4x + 7 = (1+2) x^2 - 4x + (3+7) = 3x^2 - 4x + 10.
The highest degree term is
x^2
, so the polynomial has degree
2.
_\square
Quiz: Simplify
a - (a - b - a - c + a) - b + c.
(Answer choices:
2 c
,
1
,
c^2
,
0
.)
Simplify
y^4 + \frac{1}{2}y - 2y^3 - y^4 + 5y^2 + \frac{5}{2}y + 3.
Combining like terms, we get
\begin{aligned} y^4 + \frac{1}{2}y - 2y^3 - y^4 + 5y^2 + \frac{5}{2}y + 3 &= \left(y^4 - y^4\right) - 2y^3 + 5y^2 + 3y + 3 \\ &= -2y^3 + 5y^2 + 3y + 3 . \end{aligned}
The highest degree term is
-2y^3
, so the polynomial has degree
3.
_\square
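One quick way to sanity-check a simplification (our own suggestion, not part of the wiki) is to evaluate both forms at several sample inputs with exact rational arithmetic; if any sample disagrees, the simplification is wrong:

```python
from fractions import Fraction

def agrees(f, g, samples):
    # Two polynomial expressions that agree on many sample inputs are
    # (for enough samples) equivalent.
    return all(f(y) == g(y) for y in samples)

original = lambda y: y**4 + Fraction(1, 2)*y - 2*y**3 - y**4 + 5*y**2 + Fraction(5, 2)*y + 3
simplified = lambda y: -2*y**3 + 5*y**2 + 3*y + 3

agrees(original, simplified, [Fraction(0), Fraction(1), Fraction(-2), Fraction(7, 3)])  # True
```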
Remember that when adding and subtracting polynomials, the order of operations still applies.
\left(2a^3 - 4a^2 + a - 5\right) - \left(2a + 2 - a^3\right)
Distributing the minus sign across the terms in the second set of parentheses, we get
2a^3 - 4a^2 + a - 5 - 2a - 2 + a^3.
Collecting similar terms and simplifying, the simplified polynomial is
\left(2a^3 + a^3\right) - 4a^2 + (a - 2a) - (5 + 2) = 3a^3 - 4a^2 -a -7. \ _\square
When adding and subtracting polynomials that are in fractional form, start by finding the common denominator of each term.
\frac{3a - 1}{2} - \frac{a + 2}{4}.
\begin{aligned} \frac{3a - 1}{2} - \frac{a + 2}{4} &= \left( \frac{3a - 1}{2} \times \frac{2}{2} \right) - \frac{a + 2}{4} \\ &= \frac{(6a - 2) - (a+2)}{4} \\ &= \frac{5a - 4}{4}. \ _\square \end{aligned}
You can multiply constants with constants, and variables with variables, then apply the laws of exponents.
2x^3 \times 5x^7
2x^3 \times 5x^7 = (2 \times 5) \times \left(x^3 \times x^7\right) = 10x^{10}. \ _\square
25xy × 4xy
25xy × 4xy = 100x^2y^2.\ _\square
3ab^2 \times \left(-2a^4b^5\right)
3ab^2 \times (-2a^4b^5) = (3 \times -2) \times \left(a^1 \times a^4\right) \times \left(b^2 \times b^5\right) = -6a^5b^7. \ _\square
When dividing, you can convert division to multiplication with variables, just as you would do with constants. For example:
\begin{aligned} 2 \div 3 &= 2 \times \frac{1}{3} \\ x \div y &= x \times \frac{1}{y} \end{aligned}
\begin{aligned} 2 \div \frac{1}{3} &= 2 \times 3 \\ x \div \frac{1}{y} &= x \times y. \end{aligned}
100xy ÷ 10xy
100xy ÷ 10xy = \dfrac{100xy}{10xy}= 10.\ _\square
8x^3y \div 4xy
8x^3y \div 4xy = 8x^3y \times \frac{1}{4xy} = \frac{8}{4} \times \frac{x^3y}{xy} = 2x^2. \ _\square
\dfrac{2a^5}{b^2} \div \dfrac{7a^3}{b}
\frac{2a^5}{b^2} \div \frac{7a^3}{b} = \frac{2a^5}{b^2} \times \frac{b}{7a^3} = \frac{2}{7} \times \frac{a^5b}{a^3b^2} = \frac{2}{7} \times \frac{a^2}{b} = \frac{2a^2}{7b}. \ _\square
Here are a few examples mixing multiplication and division. When doing these types of problems, use your knowledge of order of operations and solve parentheses and exponents first. Convert division to multiplication just as you did above, and remember to multiply constants with constants and variables with variables.
\displaystyle 2x^2y^3 \times \left(3x^3y\right)^2 \div xy^6
2x^2y^3 \times \left(3x^3y\right)^2 \div xy^6 = 2x^2y^3 \times 9x^6y^2 \times \frac{1}{xy^6} = (2 \times 9) \times \frac{x^8y^5}{xy^{6}} = \frac{18x^7}{y}. \ _\square
\left(-2a^2b^3\right)^2 \div \left(a^3b\right)^2 \times 3a^5b
\left(-2a^2b^3\right)^2 \div \left(a^3b\right)^2 \times 3a^5b = 4a^4b^6 \times \frac{1}{a^6b^2} \times 3a^5b = 12a^3b^5. \ _\square
Main Article: Exponents
To simplify exponents, we follow the rules of exponents to combine all terms that can be merged.
\left(3x^2x^4\right)^2
\begin{aligned} \left(3x^2x^4\right)^2 &= \left(3x^6\right)^2 \\ &= 3^2 x^{6\cdot 2} \\ &= 9 x^{12}. \ _\square \end{aligned}
\dfrac{{(5x^3y^4)}^2 × {(4x^4y)}^3}{{(2x^6y^3)}^6}
\begin{aligned} \dfrac{{(5x^3y^4)}^2 × {(4x^4y)}^3}{{(2x^6y^3)}^6} & = \dfrac{25x^6y^8 × 64x^{12}y^3}{64x^{36}y^{18}}\\ & = \dfrac{1600x^{18}y^{11}}{64x^{36}y^{18}}\\ & = \dfrac{25}{x^{18}y^7}. \ _\square \end{aligned}
36
7
6
42
49
\large\color{#20A900} 6^{\color{#3D99F6} 6} + \color{#20A900} 6^{\color{#3D99F6} 6} + \color{#20A900} 6^{\color{#3D99F6} 6} + \color{#20A900} 6^{\color{#3D99F6} 6} + \color{#20A900} 6^{\color{#3D99F6} 6} + \color{#20A900} 6^{\color{#3D99F6} 6} = \color{#20A900}6^ {\color{#624F41}a}
\color{#624F41} a
satisfies the equation above, what is the value of
\color{#624F41} a
Cite as: Simplifying Expressions. Brilliant.org. Retrieved from https://brilliant.org/wiki/simplifying-expressions/
|
Global Constraint Catalog: Csymmetric_alldifferent
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏}
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\mathrm{𝚜𝚢𝚖𝚖}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚜𝚢𝚖𝚖}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏}
\mathrm{𝚜𝚢𝚖𝚖}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\mathrm{𝚘𝚗𝚎}_\mathrm{𝚏𝚊𝚌𝚝𝚘𝚛}
\mathrm{𝚝𝚠𝚘}_\mathrm{𝚌𝚢𝚌𝚕𝚎}
\mathrm{𝙽𝙾𝙳𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚗𝚍𝚎𝚡}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚜𝚞𝚌𝚌}-\mathrm{𝚍𝚟𝚊𝚛}\right)
|\mathrm{𝙽𝙾𝙳𝙴𝚂}|\mathrm{mod}2=0
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂},\left[\mathrm{𝚒𝚗𝚍𝚎𝚡},\mathrm{𝚜𝚞𝚌𝚌}\right]\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}.\mathrm{𝚒𝚗𝚍𝚎𝚡}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂},\mathrm{𝚒𝚗𝚍𝚎𝚡}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}.\mathrm{𝚜𝚞𝚌𝚌}\ge 1
\mathrm{𝙽𝙾𝙳𝙴𝚂}.\mathrm{𝚜𝚞𝚌𝚌}\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}|
All variables associated with the
\mathrm{𝚜𝚞𝚌𝚌}
attribute of the
\mathrm{𝙽𝙾𝙳𝙴𝚂}
collection should be pairwise distinct. In addition, enforce the following condition: if variable
\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[i\right].\mathrm{𝚜𝚞𝚌𝚌}
takes value
j
with
j\ne i
, then variable
\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[j\right].\mathrm{𝚜𝚞𝚌𝚌}
takes value
i
. This can be interpreted as a graph-covering problem where one has to cover a digraph
G
with circuits of length two in such a way that each vertex of
G
belongs to a single circuit.
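For a ground instance, the condition above can be checked directly; the following Python sketch (ours, not the catalog's or Régin's filtering algorithm) tests that the successor values form a fixed-point-free involution, i.e. a cover by circuits of length two:

```python
from itertools import permutations

def symmetric_alldifferent(succ):
    # succ[i] is the successor of vertex i+1 (values are 1-based).
    # The constraint holds iff succ is a permutation of 1..n that pairs
    # the vertices up in circuits of length two (an involution with no
    # fixed point).
    n = len(succ)
    if sorted(succ) != list(range(1, n + 1)):
        return False
    return all(succ[j] != j + 1 and succ[succ[j] - 1] == j + 1
               for j in range(n))

symmetric_alldifferent([3, 4, 1, 2])  # True: circuits 1<->3 and 2<->4
# Enumerating all permutations reproduces the solution count 3 for n=4:
sum(symmetric_alldifferent(list(p)) for p in permutations(range(1, 5)))  # 3
```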
\left(\begin{array}{c}〈\begin{array}{cc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-3,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-4,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-3\hfill & \mathrm{𝚜𝚞𝚌𝚌}-1,\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-4\hfill & \mathrm{𝚜𝚞𝚌𝚌}-2\hfill \end{array}〉\hfill \end{array}\right)
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[1\right].\mathrm{𝚜𝚞𝚌𝚌}=3⇔\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[3\right].\mathrm{𝚜𝚞𝚌𝚌}=1
\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[2\right].\mathrm{𝚜𝚞𝚌𝚌}=4⇔\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[4\right].\mathrm{𝚜𝚞𝚌𝚌}=2
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
{S}_{1}\in \left[1,4\right]
{S}_{2}\in \left[1,3\right]
{S}_{3}\in \left[1,4\right]
{S}_{4}\in \left[1,3\right]
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(〈1{S}_{1},2{S}_{2},3{S}_{3},4{S}_{4}〉\right)
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚒𝚗𝚍𝚎𝚡}
\mathrm{𝚜𝚞𝚌𝚌}
|\mathrm{𝙽𝙾𝙳𝙴𝚂}|\ge 4
\mathrm{𝙽𝙾𝙳𝙴𝚂}
As reported in [Regin99], this constraint is useful to express matches between persons or between teams. The
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint also appears implicitly in the cycle cover problem and corresponds to the four conditions given in section 1 Modeling the Cycle Cover Problem of [PesantSoriano98].
The constraint is called
\mathrm{𝚘𝚗𝚎}_\mathrm{𝚏𝚊𝚌𝚝𝚘𝚛}
in [HenzMullerThiel02] as well as in [Trick02]. From a modelling point of view this constraint can be expressed with the
\mathrm{𝚌𝚢𝚌𝚕𝚎}
constraint [BeldiceanuContejean94] where one imposes the additional condition that each cycle has only two nodes.
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint was proposed by J.-C. Régin in [Regin99]. It achieves arc-consistency and its running time is dominated by the complexity of finding all edges that do not belong to any maximum cardinality matching in an undirected
n
-vertex,
m
-edge graph, i.e.,
O\left(m·n\right)
For the soft case of the
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint where the cost is the minimum number of variables to assign differently in order to get back to a solution, a filtering algorithm achieving arc-consistency is described in [Cymer13], [CymerPhD13]. It has a complexity of
O\left(p·m\right)
p
is the number of maximal extreme sets in the value graph associated with the constraint and
m
is the number of edges. It iterates over extreme sets and not over vertices as in the algorithm due to J.-C. Régin.
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
constraint can be expressed in terms of a conjunction of
{|\mathrm{𝙽𝙾𝙳𝙴𝚂}|}^{2}
reified constraints of the form
\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[i\right].\mathrm{𝚜𝚞𝚌𝚌}=j⇔\mathrm{𝙽𝙾𝙳𝙴𝚂}\left[j\right].\mathrm{𝚜𝚞𝚌𝚌}=i
\left(1\le i,j\le |\mathrm{𝙽𝙾𝙳𝙴𝚂}|\right)
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint can also be reformulated as an
\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}
constraint as shown below:
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\begin{array}{c}〈\begin{array}{cc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{1},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{2},\hfill \\ ⋮\hfill & ⋮\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-n\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{n}\hfill \end{array}〉\hfill \end{array}\right)
\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}\left(\begin{array}{c}〈\begin{array}{ccc}\mathrm{𝚒𝚗𝚍𝚎𝚡}-1\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{1}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{s}_{1},\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-2\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{2}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{s}_{2},\hfill \\ ⋮\hfill & ⋮\hfill & ⋮\hfill \\ \mathrm{𝚒𝚗𝚍𝚎𝚡}-n\hfill & \mathrm{𝚜𝚞𝚌𝚌}-{s}_{n}\hfill & \mathrm{𝚙𝚛𝚎𝚍}-{s}_{n}\hfill \end{array}〉\hfill \end{array}\right)
Number of solutions for
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
, domains
0..n
:

n          2  3  4  5  6   7  8    9  10
Solutions  1  0  3  0  15  0  105  0  945
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚌𝚢𝚌𝚕𝚎}
\mathrm{𝚒𝚗𝚟𝚎𝚛𝚜𝚎}
(permutation).
\mathrm{𝚍𝚎𝚛𝚊𝚗𝚐𝚎𝚖𝚎𝚗𝚝}
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}_\mathrm{𝚎𝚡𝚌𝚎𝚙𝚝}_\mathtt{0}
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}_\mathrm{𝚕𝚘𝚘𝚙}
𝚔_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚕𝚎𝚡}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚛𝚘𝚘𝚝𝚜}
combinatorial object: permutation, involution, matching.
constraint type: graph constraint, timetabling constraint, graph partitioning constraint.
final graph structure: circuit.
modelling: cycle.
•
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝚋𝚊𝚕𝚊𝚗𝚌𝚎}_\mathrm{𝚌𝚢𝚌𝚕𝚎}
\left(\mathrm{𝙱𝙰𝙻𝙰𝙽𝙲𝙴},\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝙱𝙰𝙻𝙰𝙽𝙲𝙴}=0
•
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝚌𝚢𝚌𝚕𝚎}
\left(\mathrm{𝙽𝙲𝚈𝙲𝙻𝙴},\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
2*\mathrm{𝙽𝙲𝚈𝙲𝙻𝙴}=|\mathrm{𝙽𝙾𝙳𝙴𝚂}|
•
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝚙𝚎𝚛𝚖𝚞𝚝𝚊𝚝𝚒𝚘𝚗}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}:\mathrm{𝙽𝙾𝙳𝙴𝚂}\right)
\mathrm{𝙽𝙾𝙳𝙴𝚂}
\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}
\left(\ne \right)↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{1},\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{2}\right)
•\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{1}.\mathrm{𝚜𝚞𝚌𝚌}=\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{2}.\mathrm{𝚒𝚗𝚍𝚎𝚡}
•\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{2}.\mathrm{𝚜𝚞𝚌𝚌}=\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{1}.\mathrm{𝚒𝚗𝚍𝚎𝚡}
\mathrm{𝐍𝐀𝐑𝐂}
=|\mathrm{𝙽𝙾𝙳𝙴𝚂}|
In order to express the binary constraint that links two vertices one has to make explicit the identifier of the vertices.
\mathrm{𝐍𝐀𝐑𝐂}
\mathrm{𝚜𝚢𝚖𝚖𝚎𝚝𝚛𝚒𝚌}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
Since all the
\mathrm{𝚒𝚗𝚍𝚎𝚡}
attributes of the
\mathrm{𝙽𝙾𝙳𝙴𝚂}
collection are distinct, and because of the first condition
\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{1}.\mathrm{𝚜𝚞𝚌𝚌}=\mathrm{𝚗𝚘𝚍𝚎𝚜}\mathtt{2}.\mathrm{𝚒𝚗𝚍𝚎𝚡}
of the arc constraint, each vertex of the final graph has at most one successor. Therefore the maximum number of arcs of the final graph is equal to the maximum number of vertices
|\mathrm{𝙽𝙾𝙳𝙴𝚂}|
of the final graph. So we can rewrite
\mathrm{𝐍𝐀𝐑𝐂}=|\mathrm{𝙽𝙾𝙳𝙴𝚂}|
to
\mathrm{𝐍𝐀𝐑𝐂}\ge |\mathrm{𝙽𝙾𝙳𝙴𝚂}|
and simplify
\underline{\overline{\mathrm{𝐍𝐀𝐑𝐂}}}
to
\overline{\mathrm{𝐍𝐀𝐑𝐂}}
.
|
Camera sensor model with lens in 3D simulation environment - Simulink - MathWorks 한국
fx = F × sx
fy = F × sy
\begin{array}{l}{x}_{\text{d}}=x\times \frac{1+{k}_{1}{r}^{2}+{k}_{2}{r}^{4}+{k}_{3}{r}^{6}}{1+{k}_{4}{r}^{2}+{k}_{5}{r}^{4}+{k}_{6}{r}^{6}}\\ {y}_{\text{d}}=y\times \frac{1+{k}_{1}{r}^{2}+{k}_{2}{r}^{4}+{k}_{3}{r}^{6}}{1+{k}_{4}{r}^{2}+{k}_{5}{r}^{4}+{k}_{6}{r}^{6}}\end{array}
xd = x + [2p1xy + p2 × (r2 + 2x2)]
yd = y + [p1 × (r2 + 2y2) + 2p2xy]
[3] Heikkila, J., and O. Silven. "A Four-step Camera Calibration Procedure with Implicit Image Correction." IEEE International Conference on Computer Vision and Pattern Recognition. 1997.
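The radial and tangential terms above can be sketched directly in Python (our illustration combining both steps on normalized image coordinates, as in the standard pinhole distortion model; names follow the equations):

```python
def distort(x, y, k=(0, 0, 0, 0, 0, 0), p=(0, 0)):
    # Rational radial model followed by the tangential terms; k = (k1..k6),
    # p = (p1, p2), and r^2 = x^2 + y^2 as in the equations above.
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    r2 = x * x + y * y
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

distort(0.1, 0.2)  # (0.1, 0.2): zero coefficients leave the point unchanged
```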
|
For square matrix A show that |adj(adjA)|= |A|^(n-1)^2 - Maths - Determinants - 11050887 | Meritnation.com
For square matrix A show that |adj(adjA)|= |A|^(n-1)^2
Neha Bhateja answered this
If A is a non-singular square matrix of order n, then we know that
A\left(adjA\right)=\left|A\right|{I}_{n}
⇒A\left(adjA\right)={\left[\begin{array}{cccccc}\left|A\right|& 0& 0& & & 0\\ 0& \left|A\right|& 0& & & 0\\ 0& 0& \left|A\right|& & & 0\\ & & & & & \\ & & & & & \\ 0& 0& 0& & & \left|A\right|\end{array}\right]}_{n*n}
Taking determinants on both sides,
⇒\left|A\left(adjA\right)\right|=\left|\begin{array}{cccccc}\left|A\right|& 0& 0& & & 0\\ 0& \left|A\right|& 0& & & 0\\ 0& 0& \left|A\right|& & & 0\\ & & & & & \\ & & & & & \\ 0& 0& 0& & & \left|A\right|\end{array}\right| ={\left|A\right|}^{n}
Using
\left|AB\right|=\left|A\right|\left|B\right|
, this gives
\left|A\right|\left|adjA\right|={\left|A\right|}^{n}
⇒\left|adjA\right|={\left|A\right|}^{n-1}
Now if we replace A by adjA, we get
\left|adj\left(adjA\right)\right|={\left|adjA\right|}^{n-1}
⇒\left|adj\left(adjA\right)\right|={\left({\left|A\right|}^{n-1}\right)}^{n-1}
⇒\left|adj\left(adjA\right)\right|={\left|A\right|}^{{\left(n-1\right)}^{2}}
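The identity can be spot-checked numerically with a small matrix (our own sketch; pure-Python determinant and adjugate helpers, no library assumed):

```python
def minor(M, i, j):
    # Delete row i and column j.
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    # Cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adj(M):
    # Adjugate = transpose of the cofactor matrix.
    n = len(M)
    return [[(-1)**(i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]  # det(A) = 25, n = 3
det(adj(adj(A))) == det(A) ** ((3 - 1) ** 2)  # True
```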
|
Sublime Rickey
\varphi(\mathbb E\left[ X\right]) \le \mathbb E\left[ \varphi(X)\right].
k-th argument as in LaTeX. In order to reduce headaches, this forcibly introduces a whitespace on the left of whatever is inserted, which usually changes nothing visible (e.g. in a math setting). However, there may be situations where you do not want this to happen and you know that the insertion will not clash with anything else. In that case, you should simply use !#k, which will not introduce that whitespace. It's probably easier to see this in action:
\exp(-i\pi)+1
CC BY-SA 4.0 Rickey. Last modified: February 06, 2022. Website built with Franklin.jl and the Julia programming language.
|
Global Constraint Catalog: Csize_max_starting_seq_alldifferent
Inspired by
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}\left(\mathrm{𝚂𝙸𝚉𝙴},\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}\right)
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡𝚒𝚖𝚊𝚕}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏}
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡𝚒𝚖𝚊𝚕}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡𝚒𝚖𝚊𝚕}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚𝚞𝚎𝚗𝚌𝚎}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚂𝙸𝚉𝙴}
\mathrm{𝚍𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚟𝚊𝚛}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚂𝙸𝚉𝙴}\ge 0
\mathrm{𝚂𝙸𝚉𝙴}\le |\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂},\mathrm{𝚟𝚊𝚛}\right)
\mathrm{𝚂𝙸𝚉𝙴}
is the size of the maximal sequence (among all sequences of consecutive variables of the collection
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
starting at position one) for which the
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint holds.
\left(4,〈9,2,4,5,2,7,4〉\right)
\left(7,〈9,2,4,5,1,7,8〉\right)
\left(6,〈9,2,4,5,1,7,9〉\right)
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint holds since the constraint
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(〈\mathrm{𝚟𝚊𝚛}-9,\mathrm{𝚟𝚊𝚛}-2,\mathrm{𝚟𝚊𝚛}-4,\mathrm{𝚟𝚊𝚛}-5〉\right)
holds and since
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(〈\mathrm{𝚟𝚊𝚛}-9,\mathrm{𝚟𝚊𝚛}-2,\mathrm{𝚟𝚊𝚛}-4,\mathrm{𝚟𝚊𝚛}-5,\mathrm{𝚟𝚊𝚛}-2〉\right)
does not hold.
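The value of SIZE for a ground instance is simply the length of the longest all-distinct prefix, which can be sketched as follows (an illustrative ground checker; the function name is ours):

```python
def size_max_starting_seq_alldifferent(variables):
    # Size of the longest prefix of the collection whose values are all
    # pairwise distinct.
    seen = set()
    for i, v in enumerate(variables):
        if v in seen:
            return i
        seen.add(v)
    return len(variables)

# The three instances of the Example slot:
size_max_starting_seq_alldifferent([9, 2, 4, 5, 2, 7, 4])  # 4
size_max_starting_seq_alldifferent([9, 2, 4, 5, 1, 7, 8])  # 7
size_max_starting_seq_alldifferent([9, 2, 4, 5, 1, 7, 9])  # 6
```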
\mathrm{𝚂𝙸𝚉𝙴}>2
\mathrm{𝚂𝙸𝚉𝙴}<|\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}|
\mathrm{𝚛𝚊𝚗𝚐𝚎}
\left(\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}\right)>1
\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝚂𝙸𝚉𝙴}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
A conditional constraint [MittalFalkenhainer90] with the specific structure that one can relax the constraints on the last variables of the collection
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
Number of solutions for
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
, domains
0..n
:

n          2  3   4    5     6       7        8
Solutions  9  64  625  7776  117649  2097152  43046721
Solution count for
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
per value of
\mathrm{𝚂𝙸𝚉𝙴}
, domains
0..n
:

n        2  3   4    5     6       7        8
Total    9  64  625  7776  117649  2097152  43046721
SIZE=3   -  24  180  2160  30870   516096   9920232
SIZE=4   -  -   120  1440  23520   430080   8817984
SIZE=5   -  -   -    720   12600   268800   6123600
SIZE=6   -  -   -    -     5040    120960   3265920
SIZE=7   -  -   -    -     -       40320    1270080
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\mathrm{𝚊𝚝𝚕𝚎𝚊𝚜𝚝}_\mathrm{𝚗𝚟𝚊𝚕𝚞𝚎}
characteristic of a constraint: all different, disequality, hypergraph.
constraint type: sliding sequence constraint, open constraint, conditional constraint.
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
\mathrm{𝑃𝐴𝑇𝐻}_\mathit{1}
↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}
*
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\right)
\mathrm{𝐍𝐀𝐑𝐂}
=\mathrm{𝚂𝙸𝚉𝙴}
Note that this is an example where the arc constraints do not have the same arity. However they correspond to the same constraint.
Parts (A) and (B) of Figure 5.348.1 respectively show the initial and final graph associated with the first example of the Example slot.
\mathrm{𝚜𝚒𝚣𝚎}_\mathrm{𝚖𝚊𝚡}_\mathrm{𝚜𝚝𝚊𝚛𝚝𝚒𝚗𝚐}_\mathrm{𝚜𝚎𝚚}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
\left(\mathbf{4},〈9,2,4,5,2,7,4〉\right)
constraint of the first example of the Example slot where each ellipse represents a hyperedge corresponding to an
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
constraint (e.g., the fourth ellipse represents the constraint
\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
〈9,2,4,5〉).
|
Root Approximation - Bisection | Brilliant Math & Science Wiki
Contributors: Geoff Pilling, Drex Beckman, Agnishom Chattopadhyay, and others.
Root approximation through bisection is a simple method for determining the root of a function. By testing different
x
-values in a function, the root can be gradually found by simply narrowing down the range of the function's sign change.
Assumption: The function is continuous and continuously differentiable in the given range where we see the sign change. This assumption is key as it will guarantee a root exists in the range by the intermediate value theorem.
Below is a graph of the function
x^{2}+2x-1.
In order to find the root between
[0, 2]
x
-values can be plugged into the function starting with the original domain:
\begin{aligned} f(0)&=0^{2}+2(0)-1\\&= -1\\\\ f(2)&= 2^{2}+2(2)-1\\&= 7. \end{aligned}
Because the function is continuous and there is a sign change, by intermediate value theorem, there must be a root between
(0,2)
. The interval is repeatedly halved at its midpoint until the approximation becomes sufficiently close. In the previous example, the domain can be narrowed down through the midpoint, i.e.
\frac{x_{1}+x_{2}}{2}
\begin{aligned} f(1)&=1^{2}+2(1)-1\\&= 2\\\\ f(0.5)&=0.5^{2}+2(0.5)-1\\&= 0.25\\\\ f(0.25)&=0.25^{2}+2(0.25)-1\\&= -0.4375. \end{aligned}
The root has been established to lie within (0.25, 0.5). Bisection can be further continued:
\begin{aligned} f(0.375)&=0.375^{2}+2(0.375)-1\\&= -0.109\\\\ f(0.4375)&=0.4375^{2}+2(0.4375)-1\\&= 0.06640625. \end{aligned}
Now, the root's domain lies somewhere within (0.375, 0.4375). Bisection can be continued as precision dictates.
Can you find the root of the equation
y = \frac{1}{x}
in the interval between
y(-1) = -1
and
y(1) = 1
?
By the intermediate value theorem you might expect that since the sign changes between
x=-1
and
x=1
, there should be a root between these two
x
values; however, this equation is not continuous on this interval. Specifically, at
x=0
we have
\lim_{x\rightarrow0+}{\frac{1}{x}} = +\infty
and
\lim_{x\rightarrow0-}{\frac{1}{x}} = -\infty.
So the answer is
\boxed{no}:
this method cannot be used to find a root in this interval.
Can bisection be used to find a root of
y = x^2
between
x=-1
and
x = 1
? Although a root clearly exists in this interval, specifically
y(0) = 0
, and the function is continuous and continuously differentiable in this range, the root approximation bisection method can't be used in this case since y never takes on a negative value in this range. So the answer is again
\boxed{no}.
Given that the function
f(x)=2x^{3}+x^{2}-2x+1
has a root between 0 and -2, find the root to one decimal place.
First we evaluate the function at both intervals:
f(0)=1,\quad f(-2)=-7.
Now we bisect and repeat:
\begin{aligned} f(-1)&=2\\ f(-1.5)&=-0.5\\ f(-1.25)&=1.156\\ f(-1.375)&=0.441. \end{aligned}
We know the root lies somewhere between -1.5 and -1.375. Taking the midpoint,
\frac{-1.5+(-1.375)}{2}=-1.4375
. So, to 1 decimal place, the root between 0 and -2 of the equation
2x^{3}+x^{2}-2x+1
is
-1.4.\ _\square
Given that the function
f(x)=\frac{1}{3}-\frac{1}{x}
has a root between
1
and
4
, find the root to one decimal place.
Your first reaction might be to say, "Hey wait, this function isn't continuous or differentiable at
x=0
x=0
isn't in the interval between
1
4
, so this method can be used.
f(1)=-\frac{2}{3},\quad f(4)=\frac{1}{12}.
\begin{aligned} f(2.5)&=-0.067\\ f(3.25)&=0.025\\ f(2.875)&=-0.014\\ f(3.0625)&=0.0068\\ f(2.969)&=-0.0035. \end{aligned}
This has narrowed the search to between
x=2.969
and
3.0625
. Taking the midpoint gives us an answer of
\boxed{x\approx 3.0}
to one decimal place.
For the last example, we could see from the equation that the root would approach 3. However, for this example, it is not so obvious. In fact, it might be quite difficult to find the roots without the use of this bisection method.
f(x)=x^5 -4x^4+3x^2-x+2
3
4
f(3)=-55,\quad f(4)=46.
\begin{aligned} f(3.5)&=-39.78\\ f(3.75)&=-9.00\\ f(3.875)&=14.99\\ f(3.8125)&=2.18\\ f(3.78125)&=-3.61. \end{aligned}
The root lies between
x=3.78125
and
3.8125
. Taking the midpoint gives
\boxed{x=3.80}.
Here is a Python implementation of the algorithm:
def bisection(f, a, b, eps): #take two points where the sign of the result is negative and positive respectively and an error bound
    mid = (a+b)/2
    while abs(f(mid)) > eps:
        if f(mid) < 0:
            a = mid
        else:
            b = mid
        mid = (a+b)/2
    return mid
Now we could run it like this, for example:
>>> bisection(lambda x: x**2-2, 0, 2, 0.001)
1.4140625
Cite as: Root Approximation - Bisection. Brilliant.org. Retrieved from https://brilliant.org/wiki/root-approximation-bisection/
|
PDA_SAACx
Sorts the columns of a two dimensional array into ascending order
This routine returns a list of column-sorted indices to an array (rows and columns span the first and second dimensions, respectively). This means that the data in the first column is sorted; any tied positions are then ordered by the corresponding values in the second column, any remaining ties by the values in the third column, and so on, until the array is completely value-ordered or all columns have been used.
CALL PDA_SAAC[x]( A, NDEC, N, M, IP, LINK, IFAIL )
The matrix to be ranked column by column.
The declared size of the first dimension of A.
The number of rows of A to be used.
The number of columns of A to be used. The declared size of this array should be at least two larger than this value (i.e. A should be at least A(NDEC,M+2)).
IP( M+2 ) = INTEGER (Returned)
LINK( M+2 ) = INTEGER (Given and Returned)
|
Global Constraint Catalog: Ctwo_layer_edge_crossing
Inspired by [HararySchwenk72].
\mathrm{𝚝𝚠𝚘}_\mathrm{𝚕𝚊𝚢𝚎𝚛}_\mathrm{𝚎𝚍𝚐𝚎}_\mathrm{𝚌𝚛𝚘𝚜𝚜𝚒𝚗𝚐}\left(\begin{array}{c}\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂},\hfill \\ \mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\hfill \\ \mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\hfill \\ \mathrm{𝙴𝙳𝙶𝙴𝚂}\hfill \end{array}\right)
\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}
\mathrm{𝚍𝚟𝚊𝚛}
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚍}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚙𝚘𝚜}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚍}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚙𝚘𝚜}-\mathrm{𝚍𝚟𝚊𝚛}\right)
\mathrm{𝙴𝙳𝙶𝙴𝚂}
\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚒𝚍}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}-\mathrm{𝚒𝚗𝚝},\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}-\mathrm{𝚒𝚗𝚝}\right)
\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}\ge 0
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\left[\mathrm{𝚒𝚍},\mathrm{𝚙𝚘𝚜}\right]\right)
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}.\mathrm{𝚒𝚍}\ge 1
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}.\mathrm{𝚒𝚍}\le |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\mathrm{𝚒𝚍}\right)
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\mathrm{𝚙𝚘𝚜}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\left[\mathrm{𝚒𝚍},\mathrm{𝚙𝚘𝚜}\right]\right)
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}.\mathrm{𝚒𝚍}\ge 1
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}.\mathrm{𝚒𝚍}\le |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\mathrm{𝚒𝚍}\right)
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\mathrm{𝚙𝚘𝚜}\right)
\mathrm{𝚛𝚎𝚚𝚞𝚒𝚛𝚎𝚍}
\left(\mathrm{𝙴𝙳𝙶𝙴𝚂},\left[\mathrm{𝚒𝚍},\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1},\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}\right]\right)
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚒𝚍}\ge 1
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚒𝚍}\le |\mathrm{𝙴𝙳𝙶𝙴𝚂}|
\mathrm{𝚍𝚒𝚜𝚝𝚒𝚗𝚌𝚝}
\left(\mathrm{𝙴𝙳𝙶𝙴𝚂},\mathrm{𝚒𝚍}\right)
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}\ge 1
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}\le |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}|
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}\ge 1
\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}\le |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}|
\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}
is the number of line segment intersections.
\left(\begin{array}{c}2,〈\mathrm{𝚒𝚍}-1\mathrm{𝚙𝚘𝚜}-1,\mathrm{𝚒𝚍}-2\mathrm{𝚙𝚘𝚜}-2〉,\hfill \\ 〈\mathrm{𝚒𝚍}-1\mathrm{𝚙𝚘𝚜}-3,\mathrm{𝚒𝚍}-2\mathrm{𝚙𝚘𝚜}-1,\mathrm{𝚒𝚍}-3\mathrm{𝚙𝚘𝚜}-2〉,\hfill \\ 〈\begin{array}{ccc}\mathrm{𝚒𝚍}-1\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}-2\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}-2,\hfill \\ \mathrm{𝚒𝚍}-2\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}-2\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}-3,\hfill \\ \mathrm{𝚒𝚍}-3\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}-1\hfill & \mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}-1\hfill \end{array}〉\hfill \end{array}\right)
Figure 5.407.1 provides a picture of the example, where one can see the two line segments intersections. Each line segment of Figure 5.407.1 is labelled with its identifier and corresponds to an item of the
\mathrm{𝙴𝙳𝙶𝙴𝚂}
collection. The two vertices on top of Figure 5.407.1 correspond to the items of the
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}
collection, while the three other vertices are associated with the items of
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}
Figure 5.407.1. Intersection between line segments joining two layers of the Example slot for the constraint
\mathrm{𝚝𝚠𝚘}_\mathrm{𝚕𝚊𝚢𝚎𝚛}_\mathrm{𝚎𝚍𝚐𝚎}_\mathrm{𝚌𝚛𝚘𝚜𝚜𝚒𝚗𝚐}
\left(\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂},\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\mathrm{𝙴𝙳𝙶𝙴𝚂}\right)
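For ground positions, the crossing count can be computed directly: two edges cross exactly when the order of their endpoints on layer 1 disagrees with the order on layer 2. The sketch below (our illustration; names are ours) reproduces the Example slot:

```python
def two_layer_edge_crossing(pos1, pos2, edges):
    # pos1/pos2 map a vertex id to its position on layer 1 / layer 2;
    # edges is a list of (vertex1, vertex2) pairs. A pair of edges crosses
    # iff their layer-1 and layer-2 orders disagree.
    ends = [(pos1[v1], pos2[v2]) for v1, v2 in edges]
    return sum((a1 - b1) * (a2 - b2) < 0
               for i, (a1, a2) in enumerate(ends)
               for (b1, b2) in ends[i + 1:])

pos1 = {1: 1, 2: 2}
pos2 = {1: 3, 2: 1, 3: 2}
edges = [(2, 2), (2, 3), (1, 1)]
two_layer_edge_crossing(pos1, pos2, edges)  # 2 = NCROSS in the Example slot
```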
|\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}|>1
|\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}|>1
|\mathrm{𝙴𝙳𝙶𝙴𝚂}|\ge |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}|
|\mathrm{𝙴𝙳𝙶𝙴𝚂}|\ge |\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}|
\left(\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}\right)
\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}\right)
\left(\mathrm{𝙴𝙳𝙶𝙴𝚂}\right)
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}
\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1}
\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2}
\mathrm{𝙴𝙳𝙶𝙴𝚂}
The two-layer edge crossing minimisation problem was proved to be NP-hard in [GareyJohnson83].
\mathrm{𝚌𝚛𝚘𝚜𝚜𝚒𝚗𝚐}
\mathrm{𝚐𝚛𝚊𝚙𝚑}_\mathrm{𝚌𝚛𝚘𝚜𝚜𝚒𝚗𝚐}
(line segments intersection).
characteristic of a constraint: derived collection.
geometry: geometrical constraint, line segments intersection.
\mathrm{𝚌𝚘𝚕}\left(\begin{array}{c}\mathrm{𝙴𝙳𝙶𝙴𝚂}_\mathrm{𝙴𝚇𝚃𝚁𝙴𝙼𝙸𝚃𝙸𝙴𝚂}-\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1}-\mathrm{𝚍𝚟𝚊𝚛},\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}-\mathrm{𝚍𝚟𝚊𝚛}\right),\hfill \\ \left[\begin{array}{c}\mathrm{𝚒𝚝𝚎𝚖}\left(\begin{array}{c}\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1}-\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{1}\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{1},\mathrm{𝚙𝚘𝚜},\mathrm{𝚒𝚍}\right),\hfill \\ \mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}-\mathrm{𝙴𝙳𝙶𝙴𝚂}.\mathrm{𝚟𝚎𝚛𝚝𝚎𝚡}\mathtt{2}\left(\mathrm{𝚅𝙴𝚁𝚃𝙸𝙲𝙴𝚂}_\mathrm{𝙻𝙰𝚈𝙴𝚁}\mathtt{2},\mathrm{𝚙𝚘𝚜},\mathrm{𝚒𝚍}\right)\hfill \end{array}\right)\hfill \end{array}\right]\hfill \end{array}\right)
\mathrm{𝙴𝙳𝙶𝙴𝚂}_\mathrm{𝙴𝚇𝚃𝚁𝙴𝙼𝙸𝚃𝙸𝙴𝚂}
\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}
\left(<\right)↦\mathrm{𝚌𝚘𝚕𝚕𝚎𝚌𝚝𝚒𝚘𝚗}\left(\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{1},\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{2}\right)
\bigvee \left(\begin{array}{c}\bigwedge \left(\begin{array}{c}\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{1}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1}<\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{2}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1},\hfill \\ \mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{1}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}>\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{2}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}\hfill \end{array}\right),\hfill \\ \bigwedge \left(\begin{array}{c}\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{1}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1}>\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{2}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{1},\hfill \\ \mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{1}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}<\mathrm{𝚎𝚍𝚐𝚎𝚜}_\mathrm{𝚎𝚡𝚝𝚛𝚎𝚖𝚒𝚝𝚒𝚎𝚜}\mathtt{2}.\mathrm{𝚕𝚊𝚢𝚎𝚛}\mathtt{2}\hfill \end{array}\right)\hfill \end{array}\right)
\mathrm{𝐍𝐀𝐑𝐂}
=\mathrm{𝙽𝙲𝚁𝙾𝚂𝚂}
As usual for the two-layer edge crossing problem [HararySchwenk72], [DiBattistaEadesTamassiaTollis99], positions of the vertices on each layer are represented as a permutation of the vertices. We generate a derived collection that, for each edge, contains the position of its extremities on both layers. In the arc generator we use the restriction
<
in order to generate a single arc for each pair of segments. This is required since otherwise we would count a line segment intersection more than once.
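The crossing condition in the arc constraint can be checked directly: two edges drawn between two layers cross exactly when their order on layer 1 is the opposite of their order on layer 2. Here is a minimal Python sketch of that check (the function and variable names are illustrative, not part of the catalog):

```python
from itertools import combinations

def two_layer_edge_crossings(edges, pos1, pos2):
    """Count pairwise crossings of edges drawn between two layers.

    edges : list of (u, v) pairs, u on layer 1 and v on layer 2.
    pos1, pos2 : dicts mapping each vertex to its position on its layer.
    Two edges cross iff their layer-1 order is the opposite of their
    layer-2 order, which is exactly the disjunction in the arc constraint.
    """
    count = 0
    for (u1, v1), (u2, v2) in combinations(edges, 2):
        a, b = pos1[u1], pos1[u2]
        c, d = pos2[v1], pos2[v2]
        if (a < b and c > d) or (a > b and c < d):
            count += 1
    return count

# Two parallel edges never cross; an "X" of two edges crosses once.
pos1 = {"a": 1, "b": 2}
pos2 = {"x": 1, "y": 2}
print(two_layer_edge_crossings([("a", "x"), ("b", "y")], pos1, pos2))  # 0
print(two_layer_edge_crossings([("a", "y"), ("b", "x")], pos1, pos2))  # 1
```

Iterating over unordered pairs with `combinations` plays the same role as the `CLIQUE(<)` restriction: each pair of edges is examined once, so no intersection is counted twice.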
\mathrm{𝐍𝐀𝐑𝐂}
graph property, the arcs of the final graph are stressed in bold.
\mathrm{𝚝𝚠𝚘}_\mathrm{𝚕𝚊𝚢𝚎𝚛}_\mathrm{𝚎𝚍𝚐𝚎}_\mathrm{𝚌𝚛𝚘𝚜𝚜𝚒𝚗𝚐}
|
Global Constraint Catalog: open_global_cardinality_low_up
open_global_cardinality_low_up(S, VARIABLES, VALUES)

Arguments:
  S : svar
  VARIABLES : collection(var-dvar)
  VALUES : collection(val-int, omin-int, omax-int)

Restrictions:
  S ≥ 1
  S ≤ |VARIABLES|
  required(VARIABLES, var)
  |VALUES| > 0
  required(VALUES, [val, omin, omax])
  distinct(VALUES, val)
  VALUES.omin ≥ 0
  VALUES.omax ≤ |VARIABLES|
  VALUES.omin ≤ VALUES.omax
Each value VALUES[i].val (1 ≤ i ≤ |VALUES|) should be taken by at least VALUES[i].omin and at most VALUES[i].omax variables of the VARIABLES collection for which the corresponding position belongs to the set S.
({2,3,4}, ⟨3,3,8,6⟩, ⟨val-3 omin-1 omax-3, val-5 omin-0 omax-1, val-6 omin-1 omax-2⟩)
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
Values 3, 5 and 6 are respectively used 1 (1 ≤ 1 ≤ 3), 0 (0 ≤ 0 ≤ 1) and 1 (1 ≤ 1 ≤ 2) times within the collection ⟨3,3,8,6⟩ (the first item 3 of ⟨3,3,8,6⟩ is ignored since its position 1 does not belong to the first argument S = {2,3,4} of the open_global_cardinality_low_up constraint). Consequently, the constraint holds. No constraint was specified for value 8.
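The semantics above can be captured in a short Python checker. This is only a sketch of the declarative meaning on ground values (not a propagator); the function name and the `(val, omin, omax)` tuple encoding are my own:

```python
def open_gcc_low_up(S, variables, values):
    """Check open_global_cardinality_low_up on fixed values.

    S         : set of 1-based positions that are active.
    variables : list of integers.
    values    : list of (val, omin, omax) triples.
    Only variables whose position belongs to S are counted.
    """
    active = [v for i, v in enumerate(variables, start=1) if i in S]
    return all(omin <= active.count(val) <= omax
               for val, omin, omax in values)

# The Example slot: S = {2,3,4}, variables <3,3,8,6>.
ok = open_gcc_low_up({2, 3, 4}, [3, 3, 8, 6],
                     [(3, 1, 3), (5, 0, 1), (6, 1, 2)])
print(ok)  # True
```

With S = {2,3,4}, only positions 2, 3 and 4 contribute, so the active values are [3, 8, 6] and each occurrence count falls within its [omin, omax] interval.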
Typical:
  |VARIABLES| > 1
  range(VARIABLES.var) > 1
  |VALUES| > 1
  VALUES.omin ≤ |VARIABLES|
  VALUES.omax > 0
  VALUES.omax ≤ |VARIABLES|
  |VARIABLES| > |VALUES|
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}.\mathrm{𝚟𝚊𝚛}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}.\mathrm{𝚟𝚊𝚕}
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚍𝚒𝚏𝚏𝚗}
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}
constraint [Regin96] is described in [HoeveRegin06].
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}
(assignment,counting constraint).
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}
\mathrm{𝚏𝚒𝚡𝚎𝚍}
\mathrm{𝚒𝚗𝚝𝚎𝚛𝚟𝚊𝚕}
\mathrm{𝚟𝚊𝚛𝚒𝚊𝚋𝚕𝚎}
\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚊𝚕𝚕𝚍𝚒𝚏𝚏𝚎𝚛𝚎𝚗𝚝}
(each active value should occur at most once; an active value corresponds to a value occurring at a position mentioned in the set S).
\mathrm{𝚒𝚗}_\mathrm{𝚜𝚎𝚝}
constraint type: open constraint, value constraint, counting constraint.
For all items of VALUES:

  Arc input(s): VARIABLES
  Arc generator: SELF ↦ collection(variables)
  Arc constraint(s):
    • variables.var = VALUES.val
    • in_set(variables.key, S)
  Graph property(ies):
    • NVERTEX ≥ VALUES.omin
    • NVERTEX ≤ VALUES.omax

The graph model uses the “For all items of VALUES” iterator. The only difference with the graph model of the global_cardinality_low_up constraint is the arc constraint, where we also specify that the position of the considered variable should belong to the first argument S.
Part (A) of Figure 5.299.1 shows the initial graphs associated with each value 3, 5 and 6 of the
\mathrm{𝚅𝙰𝙻𝚄𝙴𝚂}
collection of the Example slot. Part (B) of Figure 5.299.1 shows the two corresponding final graphs respectively associated with values 3 and 6 that are both assigned to the variables of the
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
collection (since value 5 is not assigned to any variable of the
\mathrm{𝚅𝙰𝚁𝙸𝙰𝙱𝙻𝙴𝚂}
collection the final graph associated with value 5 is empty). Since we use the
\mathrm{𝐍𝐕𝐄𝐑𝐓𝐄𝐗}
graph property, the vertices of the final graphs are stressed in bold.
\mathrm{𝚘𝚙𝚎𝚗}_\mathrm{𝚐𝚕𝚘𝚋𝚊𝚕}_\mathrm{𝚌𝚊𝚛𝚍𝚒𝚗𝚊𝚕𝚒𝚝𝚢}_\mathrm{𝚕𝚘𝚠}_\mathrm{𝚞𝚙}
|
select procedures from a table of profiling data
Select(selector, tab)
The Select(selector, tab) command is similar to the select function. The Boolean-valued procedure selector is called on each element in tab. A new table is returned containing only those elements of tab for which selector returns true.
The selector parameter is a procedure that accepts two arguments. The first argument is the encoded name (see EncodeName) of the procedure and the second is the rtable containing the profiling data.
with(CodeTools[Profiling]):
selector := proc(n, t) if 5 < t[1][1] then return true else return false end if end proc
a := proc() return 1 end proc
b := proc() local i; for i to 10 do a() end do end proc
t := Build(procs = [a, b], commands = 'b()')
t := table([_Inert_ASSIGNEDNAME("a", "PROC") = [[10, 0, 0], [10, 0, 0]], _Inert_ASSIGNEDNAME("b", "PROC") = [[1, 0, 0], [1, 0, 0], [10, 0, 0]]])
PrintProfiles(t)
s := Select(selector, t)
s := table([_Inert_ASSIGNEDNAME("a", "PROC") = [[10, 0, 0], [10, 0, 0]]])
PrintProfiles(s)
|
Global Constraint Catalog: orchard
[Jackson1821]
orchard(NROW, TREES)

Arguments:
  NROW : dvar
  TREES : collection(index-int, x-dvar, y-dvar)

Restrictions:
  NROW ≥ 0
  TREES.index ≥ 1
  TREES.index ≤ |TREES|
  required(TREES, [index, x, y])
  distinct(TREES, index)
  TREES.x ≥ 0
  TREES.y ≥ 0
Orchard problem [Jackson1821]:
“Your aid I want, Nine trees to plant, In rows just half a score, And let there be, In each row, three—Solve this: I ask no more!”
(10, ⟨index-1 x-0 y-0, index-2 x-4 y-0, index-3 x-8 y-0, index-4 x-2 y-4, index-5 x-4 y-4, index-6 x-6 y-4, index-7 x-0 y-8, index-8 x-4 y-8, index-9 x-8 y-8⟩)
The 10 alignments of 3 trees correspond to the following triples of trees: (1,2,3), (1,4,8), (1,5,9), (2,4,7), (2,5,8), (2,6,9), (3,5,7), (3,6,8), (4,5,6) and (7,8,9). Figure 5.304.1 shows the 9 trees and the 10 alignments corresponding to the example.
Figure 5.304.1. Nine trees with 10 alignments of 3 trees
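The example can be verified by brute force with the same collinearity test the arc constraint uses. The following sketch (function names are my own) counts aligned triples among the nine tree coordinates from the Example slot:

```python
from itertools import combinations

def alignments(points):
    """Count unordered triples of collinear points, using the expansion
    of the 3x3 determinant |x1 y1 1; x2 y2 1; x3 y3 1| = 0."""
    def collinear(p, q, r):
        (x1, y1), (x2, y2), (x3, y3) = p, q, r
        return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2) == 0
    return sum(collinear(p, q, r) for p, q, r in combinations(points, 3))

# The nine trees of the Example slot, listed in index order.
trees = [(0, 0), (4, 0), (8, 0), (2, 4), (4, 4), (6, 4),
         (0, 8), (4, 8), (8, 8)]
print(alignments(trees))  # 10
```

`combinations` enumerates each set of three trees exactly once, mirroring the `CLIQUE(<)` arc generator with arity three.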
\mathrm{𝙽𝚁𝙾𝚆}>0
|\mathrm{𝚃𝚁𝙴𝙴𝚂}|>3
\mathrm{𝚃𝚁𝙴𝙴𝚂}
Attributes of TREES are permutable w.r.t. permutation (index) (x, y) (permutation applied to all items).
𝚡
\mathrm{𝚃𝚁𝙴𝙴𝚂}
𝚢
\mathrm{𝚃𝚁𝙴𝙴𝚂}
\mathrm{𝙽𝚁𝙾𝚆}
\mathrm{𝚃𝚁𝙴𝙴𝚂}
characteristic of a constraint: hypergraph.
geometry: geometrical constraint, alignment.
Arc input(s): TREES
Arc generator: CLIQUE(<) ↦ collection(trees1, trees2, trees3)
Arc constraint(s): trees1.x·trees2.y − trees1.x·trees3.y + trees1.y·trees3.x − trees1.y·trees2.x + trees2.x·trees3.y − trees2.y·trees3.x = 0
Graph property(ies): NARC = NROW
The arc generator
\mathrm{𝐶𝐿𝐼𝑄𝑈𝐸}\left(<\right)
with an arity of three is used in order to generate all the arcs of the directed hypergraph. Each arc is an ordered triple of trees. We use the restriction
<
in order to generate a single arc for each set of three trees. This is required since otherwise we would count a given alignment of three trees more than once. The formula used within the arc constraint expresses the fact that the three points of respective coordinates
\left({\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{1}.𝚡,{\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{1}.𝚢\right)
\left({\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{2}.𝚡,{\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{2}.𝚢\right)
\left({\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{3}.𝚡,{\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{3}.𝚢\right)
are aligned. It corresponds to the development of the expression:
\left|\begin{array}{ccc}{\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{1}.𝚡& {\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{1}.𝚢& 1\\ {\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{2}.𝚡& {\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{2}.𝚢& 1\\ {\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{3}.𝚡& {\mathrm{𝚝𝚛𝚎𝚎𝚜}}_{3}.𝚢& 1\end{array}\right|=0
|
04.04 Density and Contour Plots
Sometimes it is useful to display three-dimensional data in two dimensions using contours or color-coded regions. There are three Matplotlib functions that can be helpful for this task: plt.contour for contour plots, plt.contourf for filled contour plots, and plt.imshow for showing images. This section looks at several examples of using these. We'll start by setting up the notebook for plotting and importing the functions we will use:
We'll start by demonstrating a contour plot using a function z = f(x, y), using the following particular choice for f (we've seen this before in Computation on Arrays: Broadcasting, when we used it as a motivating example for array broadcasting):
A contour plot can be created with the plt.contour function. It takes three arguments: a grid of x values, a grid of y values, and a grid of z values. The x and y values represent positions on the plot, and the z values will be represented by the contour levels. Perhaps the most straightforward way to prepare such data is to use the np.meshgrid function, which builds two-dimensional grids from one-dimensional arrays:
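As a concrete sketch (assuming the same choice of f used in the broadcasting discussion, sin(x)**10 + cos(10 + y*x)*cos(x)), the data can be prepared like this:

```python
import numpy as np

# The function from the broadcasting example (assumed here):
def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)

# np.meshgrid turns the two 1D coordinate arrays into 2D grids,
# one holding the x coordinate of every grid point, one the y coordinate.
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
print(X.shape, Y.shape, Z.shape)  # (40, 50) three times
```

Note that the grids have shape (len(y), len(x)): rows vary with y, columns with x, which is the orientation `plt.contour` expects.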
Now let's look at this with a standard line-only contour plot:
Notice that by default when a single color is used, negative values are represented by dashed lines, and positive values by solid lines. Alternatively, the lines can be color-coded by specifying a colormap with the cmap argument. Here, we'll also specify that we want more lines to be drawn—20 equally spaced intervals within the data range:
Here we chose the RdGy (short for Red-Gray) colormap, which is a good choice for centered data. Matplotlib has a wide range of colormaps available, which you can easily browse in IPython by doing a tab completion on the plt.cm module:
Our plot is looking nicer, but the spaces between the lines may be a bit distracting. We can change this by switching to a filled contour plot using the plt.contourf() function (notice the f at the end), which uses largely the same syntax as plt.contour().
Additionally, we'll add a plt.colorbar() command, which automatically creates an additional axis with labeled color information for the plot:
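A minimal version of the filled contour plot might look as follows (the off-screen Agg backend is used here only so the example runs without a display; f is again the assumed broadcasting-example function):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this in a notebook
import matplotlib.pyplot as plt

def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

X, Y = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 40))
Z = f(X, Y)

# Filled contours over ~20 levels with a diverging colormap,
# plus an automatically generated colorbar.
cs = plt.contourf(X, Y, Z, 20, cmap="RdGy")
plt.colorbar()
```

Passing an integer as the fourth argument asks Matplotlib to choose that many evenly spaced contour intervals within the data range; the chosen boundaries are available afterwards as `cs.levels`.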
The colorbar makes it clear that the black regions are "peaks," while the red regions are "valleys."
One potential issue with this plot is that it is a bit "splotchy." That is, the color steps are discrete rather than continuous, which is not always what is desired. This could be remedied by setting the number of contours to a very high number, but this results in a rather inefficient plot: Matplotlib must render a new polygon for each step in the level. A better way to handle this is to use the plt.imshow() function, which interprets a two-dimensional grid of data as an image.
There are a few potential gotchas with imshow(), however:
plt.imshow() doesn't accept an x and y grid, so you must manually specify the extent [xmin, xmax, ymin, ymax] of the image on the plot.
plt.imshow() by default follows the standard image array definition where the origin is in the upper left, not in the lower left as in most contour plots. This must be changed when showing gridded data.
plt.imshow() will automatically adjust the axis aspect ratio to match the input data; this can be changed by setting, for example, plt.axis(aspect='image') to make x and y units match.
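Putting the three gotchas together, a smooth image-based version of the plot can be sketched like this (again using the assumed f and the off-screen backend):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen rendering for a script
import matplotlib.pyplot as plt

def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

X, Y = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 40))
Z = f(X, Y)

# extent=[xmin, xmax, ymin, ymax] places the image in data coordinates,
# and origin="lower" flips the default image orientation.
im = plt.imshow(Z, extent=[0, 5, 0, 5], origin="lower", cmap="RdGy")
plt.colorbar()
```

Because `imshow` renders a single interpolated image rather than one polygon per level, the color transitions are continuous and the figure is cheaper to draw than a many-level `contourf`.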
Finally, it can sometimes be useful to combine contour plots and image plots. For example, here we'll use a partially transparent background image (with transparency set via the alpha parameter) and overplot contours with labels on the contours themselves (using the plt.clabel() function):
|
02.07 Fancy Indexing
In the previous sections, we saw how to access and modify portions of arrays using simple indices (e.g., arr[0]), slices (e.g., arr[:5]), and Boolean masks (e.g., arr[arr > 0]). In this section, we'll look at another style of array indexing, known as fancy indexing. Fancy indexing is like the simple indexing we've already seen, but we pass arrays of indices in place of single scalars. This allows us to very quickly access and modify complicated subsets of an array's values.
Fancy indexing is conceptually simple: it means passing an array of indices to access multiple array elements at once. For example, consider the following array:
Suppose we want to access three different elements. We could do it like this:
Alternatively, we can pass a single list or array of indices to obtain the same result:
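A small sketch makes the equivalence concrete (the seed and array contents are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.randint(100, size=10)

# Three separate scalar lookups...
print([x[3], x[7], x[2]])

# ...or a single fancy-indexed lookup with an array of indices:
ind = [3, 7, 2]
print(x[ind])

# The result takes the shape of the index array, not of x:
ind2 = np.array([[3, 7], [4, 5]])
print(x[ind2].shape)  # (2, 2)
```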
When using fancy indexing, the shape of the result reflects the shape of the index arrays rather than the shape of the array being indexed:
Fancy indexing also works in multiple dimensions. Consider the following array:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
Like with standard indexing, the first index refers to the row, and the second to the column:
Notice that the first value in the result is X[0, 2], the second is X[1, 1], and the third is X[2, 3]. The pairing of indices in fancy indexing follows all the broadcasting rules that were mentioned in Computation on Arrays: Broadcasting. So, for example, if we combine a column vector and a row vector within the indices, we get a two-dimensional result:
array([[ 2,  1,  3],
       [ 6,  5,  7],
       [10,  9, 11]])
Here, each row value is matched with each column vector, exactly as we saw in broadcasting of arithmetic operations. For example:
It is always important to remember with fancy indexing that the return value reflects the broadcasted shape of the indices, rather than the shape of the array being indexed.
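The two multidimensional cases discussed above can be reproduced directly:

```python
import numpy as np

X = np.arange(12).reshape((3, 4))

# Paired index arrays: picks X[0, 2], X[1, 1], X[2, 3]
row = np.array([0, 1, 2])
col = np.array([2, 1, 3])
print(X[row, col])  # [ 2  5 11]

# A column vector of rows broadcast against a row vector of columns
# yields a (3, 3) result, one entry per (row, col) combination:
print(X[row[:, np.newaxis], col])
```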
For even more powerful operations, fancy indexing can be combined with the other indexing schemes we've seen:
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
We can combine fancy and simple indices:
array([10, 8, 9])
We can also combine fancy indexing with slicing:
array([[ 6,  4,  5],
       [10,  8,  9]])
And we can combine fancy indexing with masking:
array([[ 0,  2],
       [ 4,  6],
       [ 8, 10]])
All of these indexing options combined lead to a very flexible set of operations for accessing and modifying array values.
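The three combinations shown above, all on the same 3x4 array, can be collected in one place:

```python
import numpy as np

X = np.arange(12).reshape((3, 4))

# Fancy + simple index: row 2, columns reordered -> [10  8  9]
print(X[2, [2, 0, 1]])

# Fancy + slice: rows 1 onward, columns reordered
print(X[1:, [2, 0, 1]])

# Fancy + Boolean mask: the mask keeps columns 0 and 2
mask = np.array([1, 0, 1, 0], dtype=bool)
row = np.array([0, 1, 2])
print(X[row[:, np.newaxis], mask])
```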
One common use of fancy indexing is the selection of subsets of rows from a matrix. For example, we might have an N × D matrix representing N points in D dimensions, such as the following points drawn from a two-dimensional normal distribution:
Using the plotting tools we will discuss in Introduction to Matplotlib, we can visualize these points as a scatter-plot:
Let's use fancy indexing to select 20 random points. We'll do this by first choosing 20 random indices with no repeats, and use these indices to select a portion of the original array:
array([93, 45, 73, 81, 50, 10, 98, 94, 4, 64, 65, 89, 47, 84, 82, 80, 25, 90, 63, 20])
Now to see which points were selected, let's over-plot large circles at the locations of the selected points:
This sort of strategy is often used to quickly partition datasets, as is often needed in train/test splitting for validation of statistical models (see Hyperparameters and Model Validation), and in sampling approaches to answering statistical questions.
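The selection step itself, without the plotting, can be sketched as follows (the seed and covariance are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
mean = [0, 0]
cov = [[1, 2], [2, 5]]
X = rng.multivariate_normal(mean, cov, 100)  # 100 points in 2D

# 20 distinct random row indices, then fancy-index the rows:
indices = rng.choice(X.shape[0], 20, replace=False)
selection = X[indices]
print(selection.shape)  # (20, 2)
```

`replace=False` guarantees the 20 indices are distinct, so `selection` is a true 20-point subsample of the rows of X.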
Just as fancy indexing can be used to access parts of an array, it can also be used to modify parts of an array. For example, imagine we have an array of indices and we'd like to set the corresponding items in an array to some value:
[ 0 99 99 3 99 5 6 7 99 9]
We can use any assignment-type operator for this. For example:
Notice, though, that repeated indices with these operations can cause some potentially unexpected results. Consider the following:
Where did the 4 go? The result of this operation is to first assign x[0] = 4, followed by x[0] = 6. The result, of course, is that x[0] contains the value 6.
Fair enough, but consider this operation:
You might expect that x[3] would contain the value 2, and x[4] would contain the value 3, as this is how many times each index is repeated. Why is this not the case? Conceptually, this is because x[i] += 1 is meant as a shorthand of x[i] = x[i] + 1. x[i] + 1 is evaluated, and then the result is assigned to the indices in x. With this in mind, it is not the augmentation that happens multiple times, but the assignment, which leads to the rather nonintuitive results.
So what if you want the other behavior where the operation is repeated? For this, you can use the at() method of ufuncs (available since NumPy 1.8), and do the following:
The at() method does an in-place application of the given operator at the specified indices (here, i) with the specified value (here, 1). Another method that is similar in spirit is the reduceat() method of ufuncs, which you can read about in the NumPy documentation.
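The three behaviors described above, overwriting on plain fancy assignment, single application under `+=`, and per-occurrence application with `at()`, can be demonstrated side by side:

```python
import numpy as np

# Plain fancy assignment: later writes win, so x[0] ends up 6.
x = np.zeros(10)
x[[0, 0]] = [4, 6]
print(x[0])  # 6.0

# Augmented assignment is NOT repeated for duplicate indices:
i = [2, 3, 3, 4, 4, 4]
x = np.zeros(10)
x[i] += 1
print(x[3], x[4])  # 1.0 1.0, not 2.0 and 3.0

# np.add.at applies the operation once per occurrence of each index:
x = np.zeros(10)
np.add.at(x, i, 1)
print(x[3], x[4])  # 2.0 3.0
```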
You can use these ideas to efficiently bin data to create a histogram by hand. For example, imagine we have 1,000 values and would like to quickly find where they fall within an array of bins. We could compute it using ufunc.at like this:
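A sketch of that hand-rolled binning (with an extra clip added here, as a defensive assumption, to keep values outside the bin range in the end bins):

```python
import numpy as np

rng = np.random.RandomState(42)
data = rng.randn(1000)

# Histogram bin edges covering [-5, 5]
bins = np.linspace(-5, 5, 20)
counts = np.zeros_like(bins)

# Find the bin each value falls into, then add 1 per occurrence.
# The clip guards against values outside [-5, 5] (not in the original idea).
i = np.clip(np.searchsorted(bins, data), 0, len(bins) - 1)
np.add.at(counts, i, 1)
print(int(counts.sum()))  # 1000 -- every point landed in some bin
```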
The counts now reflect the number of points within each bin; in other words, a histogram:
Of course, it would be silly to have to do this each time you want to plot a histogram. This is why Matplotlib provides the plt.hist() routine, which does the same in a single line:
This function will create a nearly identical plot to the one seen here. To compute the binning, matplotlib uses the np.histogram function, which does a very similar computation to what we did before. Let's compare the two here:
NumPy routine:
10000 loops, best of 3: 97.6 µs per loop

Custom routine:
10000 loops, best of 3: 19.5 µs per loop
Our own one-line algorithm is several times faster than the optimized algorithm in NumPy! How can this be? If you dig into the np.histogram source code (you can do this in IPython by typing np.histogram??), you'll see that it's quite a bit more involved than the simple search-and-count that we've done; this is because NumPy's algorithm is more flexible, and particularly is designed for better performance when the number of data points becomes large:
|